| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 5 to 112 |
| repo_url | stringlengths | 34 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 855 |
| labels | stringlengths | 4 to 721 |
| body | stringlengths | 1 to 261k |
| index | stringclasses | 13 values |
| text_combine | stringlengths | 96 to 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 240k |
| binary_label | int64 | 0 to 1 |
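Comparing the `text` column with `text_combine` in the rows below suggests a cleaning step of roughly this shape. This is a speculative sketch, not the dataset's actual preprocessing code (which is not shown); the URL regex and the drop-tokens-containing-digits rule are inferred from examples such as "Error C2065 / C2070 (BB #439)" becoming "error bb".

```python
import re

def normalize(s: str) -> str:
    """Sketch of the cleaning that appears to turn text_combine into text:
    drop URLs wholesale, lowercase, split on non-word characters, and drop
    any token that contains a digit (so 'C2065' and '#439' both vanish)."""
    s = re.sub(r"https?://\S+", " ", s)      # remove URLs before tokenizing
    tokens = re.findall(r"\w+", s.lower())   # lowercase, strip punctuation
    return " ".join(t for t in tokens if not any(c.isdigit() for c in t))

print(normalize("Error C2065 / C2070 (BB #439)"))  # error bb
```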
---
**Row 548,667** · id 16,068,726,891
- **type:** IssuesEvent
- **created_at:** 2021-04-24 01:50:04
- **repo:** KATO-Hiro/AtCoderClans
- **repo_url:** https://api.github.com/repos/KATO-Hiro/AtCoderClans
- **action:** closed
- **title:** Introduced Google site search, but the layout doesn't feel quite right
- **labels:** help wanted priority high
- **body:**
  ## WHY
  - The page is hard to read because the author doesn't understand HTML/CSS very well
  - There seems to be an unnatural amount of whitespace
  - It is not very noticeable on tablets or phones, but on a PC it gives a clunky impression
  - Two thirds of Clans users are on PC, so this is a problem
- **index:** 1.0
- **text_combine:** title + body (verbatim concatenation of the two fields above)
- **label:** priority
- **text:** lowercased copy of text_combine with URLs, digits, and most punctuation stripped
- **binary_label:** 1
---
**Row 77,393** · id 3,506,368,530
- **type:** IssuesEvent
- **created_at:** 2016-01-08 06:10:55
- **repo:** OregonCore/OregonCore
- **repo_url:** https://api.github.com/repos/OregonCore/OregonCore
- **action:** closed
- **title:** Error C2065 / C2070 (BB #439)
- **labels:** duplicate migrated Priority: High Type: Bug
- **body:**
  This issue was migrated from bitbucket.
  **Original Reporter:** sh1fty88
  **Original Date:** 11.03.2013 17:05:55 GMT+0000
  **Original Priority:** blocker
  **Original Type:** bug
  **Original State:** duplicate
  **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/439

  While building/compiling the last 3 cores I get the following error, using Windows Server 2008R2 / Visual Studio 2010 Express / CMake 2.8.10.2:
  ```
  13> Creating library C:/build/src/oregonrealm/Debug/oregon-realm.lib and object C:/build/src/oregonrealm/Debug/oregon-realm.exp
  14> WheatyExceptionReport.cpp
  13> oregon-realm.vcxproj -> C:\build\bin\Debug\oregon-realm.exe
  14>..\..\..\OregonCore\src\oregoncore\CliRunnable.cpp(701): error C2065: 'commandbuf' : undeclared identifier
  14>..\..\..\OregonCore\src\oregoncore\CliRunnable.cpp(701): error C2065: 'commandbuf' : undeclared identifier
  14>..\..\..\OregonCore\src\oregoncore\CliRunnable.cpp(701): error C2070: ''unknown-type'': illegal sizeof operand
  15>------ Build started: Project: ALL_BUILD, Configuration: Debug Win32 ------
  15> Building Custom Rule C:/OregonCore/CMakeLists.txt
  15> CMake does not need to re-run because C:\build\CMakeFiles\generate.stamp is up-to-date.
  15> Build all projects
  16>------ Skipped Build: Project: INSTALL, Configuration: Debug Win32 ------
  16>Project not selected to build for this solution configuration
  ========== Build: 14 succeeded, 1 failed, 0 up-to-date, 1 skipped ==========
  ```
- **index:** 1.0
- **text_combine:** title + body (verbatim concatenation of the two fields above)
- **label:** priority
- **text:** lowercased copy of text_combine with URLs, digits, and most punctuation stripped
- **binary_label:** 1
---
**Row 305,924** · id 9,378,350,433
- **type:** IssuesEvent
- **created_at:** 2019-04-04 12:42:14
- **repo:** AugurProject/augur
- **repo_url:** https://api.github.com/repos/AugurProject/augur
- **action:** closed
- **title:** Total Cost column needs to display unrealized cost
- **labels:** Bug Priority: High
- **body:**
  Steps to reproduce: buy 3 shares of an outcome at one price, then sell 1 share at a higher price to get a realized and an unrealized PnL, then sell another 1 share at a different price. Total Cost still displays the original purchase amount rather than the cost of the remaining open balance.
- **index:** 1.0
- **text_combine:** title + body (verbatim concatenation of the two fields above)
- **label:** priority
- **text:** lowercased copy of text_combine with URLs, digits, and most punctuation stripped
- **binary_label:** 1
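The remaining-cost arithmetic the reporter expects can be sketched as follows. The prices are made up for illustration (the issue gives no numbers), and average-cost accounting is assumed:

```python
def open_cost(lots, sells):
    """Average-cost basis of the still-open position after partial sells.
    `lots` is a list of (qty, price) buys; `sells` is total quantity sold."""
    bought_qty = sum(q for q, _ in lots)
    total_cost = sum(q * p for q, p in lots)
    avg_price = total_cost / bought_qty
    remaining = bought_qty - sells
    return remaining * avg_price

# Buy 3 shares at 40 (cents) each, then sell 2 in total: the Total Cost
# column should show the 1 remaining share's cost (40), not the original
# 120 purchase amount.
print(open_cost([(3, 40)], sells=2))  # 40.0
```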
---
**Row 235,539** · id 7,739,852,894
- **type:** IssuesEvent
- **created_at:** 2018-05-28 17:58:29
- **repo:** gnebehay/schoselwette
- **repo_url:** https://api.github.com/repos/gnebehay/schoselwette
- **action:** closed
- **title:** make baseUrl stuff configurable
- **labels:** high priority
- **body:**
  Currently you have to adapt
  - package.json
  - App.vue
  - router.js

  in order to be able to set all paths correctly. Can this configuration somehow be centralized so that only one file has to be changed (preferably a separate config file)? I need that for automatic deployment.
- **index:** 1.0
- **text_combine:** title + body (verbatim concatenation of the two fields above)
- **label:** priority
- **text:** lowercased copy of text_combine with URLs, digits, and most punctuation stripped
- **binary_label:** 1
---
**Row 95,297** · id 3,941,705,782
- **type:** IssuesEvent
- **created_at:** 2016-04-27 08:55:52
- **repo:** raml-org/raml-js-parser-2
- **repo_url:** https://api.github.com/repos/raml-org/raml-js-parser-2
- **action:** closed
- **title:** map type expression not supported
- **labels:** bug priority:high
- **body:**
  When trying to use a **map** type expression, as stated [here](http://docs.raml.org/specs/1.0/#raml-10-spec-type-expressions), I get *"Syntax error: Expected "|" or end of input but "{" found."*
  Example:
  ```
  #%RAML 1.0
  title: My Api
  /maps:
    get:
      responses:
        200:
          body:
            type: number{}
  ```
- **index:** 1.0
- **text_combine:** title + body (verbatim concatenation of the two fields above)
- **label:** priority
- **text:** lowercased copy of text_combine with URLs, digits, and most punctuation stripped
- **binary_label:** 1
---
**Row 681,425** · id 23,310,733,928
- **type:** IssuesEvent
- **created_at:** 2022-08-08 08:01:49
- **repo:** okTurtles/group-income
- **repo_url:** https://api.github.com/repos/okTurtles/group-income
- **action:** closed
- **title:** Proposals to change voting threshold don't show reason [$25 bounty]
- **labels:** Kind:Bug Note:Up-for-grabs App:Frontend Level:Starter Priority:High Note:UI/UX Note:Bounty Note:Contracts
- **body:**
  ### Problem
  The reason I gave for this proposal isn't appearing, even though reasons do appear on some other proposals (for example, to remove a member):
  <img width="782" alt="Screen Shot 2022-04-28 at 1 51 38 PM" src="https://user-images.githubusercontent.com/138706/165843599-6d6bc8d6-9d64-4d3f-a0f5-dd7c6e70af1c.png">
  ### Solution
  Find out why it's not appearing for changing the voting threshold and fix.
  ### Bounty
  $25 bounty for a clean solution to this (paid in cryptocurrency).
- **index:** 1.0
- **text_combine:** title + body (verbatim concatenation of the two fields above)
- **label:** priority
- **text:** lowercased copy of text_combine with URLs, digits, and most punctuation stripped
- **binary_label:** 1
---
**Row 65,192** · id 3,226,986,489
- **type:** IssuesEvent
- **created_at:** 2015-10-10 19:42:28
- **repo:** chocolatey/chocolatey.org
- **repo_url:** https://api.github.com/repos/chocolatey/chocolatey.org
- **action:** reopened
- **title:** New packages are not able to be accessed on the package cache
- **labels:** 0 - Backlog Bug Priority_HIGH
- **body:**
  https://gitter.im/chocolatey/choco?at=56194f121b0e279854bdbdf2
  > i think something is broken for the newly pushed packages. download the newest nupkgs manually:
  > https://chocolatey.org/packages/chromium
  > https://chocolatey.org/packages/qbittorrent
  > https://chocolatey.org/packages/kvrt
  > i get this:
  > https://packages.chocolatey.org/chromium.48.0.2533.0.nupkg

  I think there is some issue in the S3 bucket resolution for things that are pushed after the policy is created. They are explicitly setting permissions on the package, but even removing those did not appear to work.
  I've found the closest person that has this problem as the last message at https://forums.aws.amazon.com/thread.jspa?messageID=555932
- **index:** 1.0
- **text_combine:** title + body (verbatim concatenation of the two fields above)
- **label:** priority
- **text:** lowercased copy of text_combine with URLs, digits, and most punctuation stripped
- **binary_label:** 1
---
**Row 756,093** · id 26,456,634,927
- **type:** IssuesEvent
- **created_at:** 2023-01-16 14:49:09
- **repo:** woocommerce/woocommerce
- **repo_url:** https://api.github.com/repos/woocommerce/woocommerce
- **action:** closed
- **title:** [COT/HPOS] WooCommerce API calls not behaving as expected for orders with search param `page`
- **labels:** type: bug needs: author feedback priority: high focus: wc rest api focus: custom order tables plugin: woocommerce
- **body:**
### Prerequisites
- [X] I have carried out troubleshooting steps and I believe I have found a bug.
- [X] I have searched for similar bugs in both open and closed issues and cannot find a duplicate.
### Describe the bug
The `page` search param is no longer taking effect so, say we had 10 orders and we called the api to GET orders and return the first page using search params as follows:
```
params: {
per_page: 4,
page: 1
},
```
it is returning the same results as below:
```
params: {
per_page: 4,
page: 2
},
```
i.e. both only returning the first page values
### Expected behavior
The `page` search param should have an effect on the results returned in the GET orders call
i.e. If we had 10 orders and we called the api to GET orders and return the first page using search params as follows:
```
params: {
per_page: 4,
page: 1
},
```
It should return different results from those below:
```
params: {
per_page: 4,
page: 2
},
```
i.e. the first call should return the first 4 orders and the second call should return the next 4 (different) orders
### Actual behavior
The `page` search param is no longer taking effect so, say we had 10 orders and we called the api to GET orders and return the first page using search params as follows:
```
params: {
per_page: 4,
page: 1
},
```
it is returning the same results as below:
```
params: {
per_page: 4,
page: 2
},
```
i.e. both only returning the first page values
### Steps to reproduce
1. Enable COT/HPOS
2. Use the WooCommerce API to create 8 orders
3. Attempt to search on orders and get the first page of results (given 4 results per page) with search param as follows:
```
{
per_page: 4,
page: 1
}
```
4. Attempt to search on orders and get the second page of results (given 4 results per page) with search param as follows:
```
{
per_page: 4,
page: 2
}
```
5. The same results are returned for each
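The offset pagination the reporter expects can be sketched in isolation (plain Python, independent of the WooCommerce REST implementation; the order values are placeholders):

```python
def get_page(items, per_page, page):
    """Expected offset pagination: `page` is 1-based."""
    start = (page - 1) * per_page
    return items[start:start + per_page]

orders = list(range(1, 9))                    # 8 orders, as in the repro steps
print(get_page(orders, per_page=4, page=1))   # [1, 2, 3, 4]
print(get_page(orders, per_page=4, page=2))   # [5, 6, 7, 8]; the bug is that
                                              # page 2 repeats page 1's results
```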
### WordPress Environment
`
### WordPress Environment ###
WordPress address (URL): http://localhost:8086
Site address (URL): http://localhost:8086
WC Version: 7.0.0
REST API Version: ✔ 7.0.0
WC Blocks Version: ✔ 8.5.1
Action Scheduler Version: ✔ 3.4.0
Log Directory Writable: ✔
WP Version: 6.0.2
WP Multisite: –
WP Memory Limit: 256 MB
WP Debug Mode: –
WP Cron: ✔
Language: en_US
External object cache: –
### Server Environment ###
Server Info: Apache/2.4.54 (Debian)
PHP Version: 7.4.32
PHP Post Max Size: 8 MB
PHP Time Limit: 30
PHP Max Input Vars: 1000
cURL Version: 7.74.0
OpenSSL/1.1.1n
SUHOSIN Installed: –
MySQL Version: 5.5.5-10.9.3-MariaDB-1:10.9.3+maria~ubu2204
Max Upload Size: 2 MB
Default Timezone is UTC: ✔
fsockopen/cURL: ✔
SoapClient: ❌ Your server does not have the SoapClient class enabled - some gateway plugins which use SOAP may not work as expected.
DOMDocument: ✔
GZip: ✔
Multibyte String: ✔
Remote Post: ✔
Remote Get: ✔
### Database ###
WC Database Version: 7.0.0
WC Database Prefix: wp_
Total Database Size: 5.19MB
Database Data Size: 3.52MB
Database Index Size: 1.67MB
wp_woocommerce_sessions: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_woocommerce_api_keys: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_woocommerce_attribute_taxonomies: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_woocommerce_downloadable_product_permissions: Data: 0.02MB + Index: 0.06MB + Engine InnoDB
wp_woocommerce_order_items: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_woocommerce_order_itemmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_woocommerce_tax_rates: Data: 0.02MB + Index: 0.06MB + Engine InnoDB
wp_woocommerce_tax_rate_locations: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_woocommerce_shipping_zones: Data: 0.02MB + Index: 0.00MB + Engine InnoDB
wp_woocommerce_shipping_zone_locations: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_woocommerce_shipping_zone_methods: Data: 0.02MB + Index: 0.00MB + Engine InnoDB
wp_woocommerce_payment_tokens: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_woocommerce_payment_tokenmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_woocommerce_log: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_actionscheduler_actions: Data: 0.02MB + Index: 0.11MB + Engine InnoDB
wp_actionscheduler_claims: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_actionscheduler_groups: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_actionscheduler_logs: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_commentmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_comments: Data: 0.02MB + Index: 0.09MB + Engine InnoDB
wp_links: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_options: Data: 2.52MB + Index: 0.03MB + Engine InnoDB
wp_postmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_posts: Data: 0.02MB + Index: 0.06MB + Engine InnoDB
wp_termmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_terms: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_term_relationships: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_term_taxonomy: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_usermeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_users: Data: 0.02MB + Index: 0.05MB + Engine InnoDB
wp_wc_admin_notes: Data: 0.02MB + Index: 0.00MB + Engine InnoDB
wp_wc_admin_note_actions: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_wc_category_lookup: Data: 0.02MB + Index: 0.00MB + Engine InnoDB
wp_wc_customer_lookup: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_wc_download_log: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_wc_orders: Data: 0.02MB + Index: 0.11MB + Engine InnoDB
wp_wc_orders_meta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_wc_order_addresses: Data: 0.02MB + Index: 0.06MB + Engine InnoDB
wp_wc_order_coupon_lookup: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_wc_order_operational_data: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_wc_order_product_lookup: Data: 0.02MB + Index: 0.06MB + Engine InnoDB
wp_wc_order_stats: Data: 0.02MB + Index: 0.05MB + Engine InnoDB
wp_wc_order_tax_lookup: Data: 0.02MB + Index: 0.03MB + Engine InnoDB
wp_wc_product_attributes_lookup: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_wc_product_download_directories: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_wc_product_meta_lookup: Data: 0.02MB + Index: 0.09MB + Engine InnoDB
wp_wc_rate_limits: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_wc_reserved_stock: Data: 0.02MB + Index: 0.00MB + Engine InnoDB
wp_wc_tax_rate_classes: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_wc_webhooks: Data: 0.02MB + Index: 0.02MB + Engine InnoDB
wp_wpml_mails: Data: 0.02MB + Index: 0.00MB + Engine InnoDB
### Post Type Counts ###
attachment: 1
page: 7
post: 2
### Security ###
Secure connection (HTTPS): ❌
Your store is not using HTTPS. Learn more about HTTPS and SSL Certificates.
Hide errors from visitors: ✔
### Active Plugins (5) ###
WooCommerce Enable COT: by – 0.0.2
JSON Basic Authentication: by WordPress API Team – 0.1
WooCommerce Reset: by WooCommerce – 0.1.0
WooCommerce: by Automattic – 7.0.0-dev
WP Mail Logging: by Wysija – 1.10.4
### Inactive Plugins (2) ###
Akismet Anti-Spam: by Automattic – 5.0
Hello Dolly: by Matt Mullenweg – 1.7.2
### Settings ###
API Enabled: –
Force SSL: –
Currency: USD ($)
Currency Position: left
Thousand Separator: ,
Decimal Separator: .
Number of Decimals: 2
Taxonomies: Product Types: external (external)
grouped (grouped)
simple (simple)
variable (variable)
Taxonomies: Product Visibility: exclude-from-catalog (exclude-from-catalog)
exclude-from-search (exclude-from-search)
featured (featured)
outofstock (outofstock)
rated-1 (rated-1)
rated-2 (rated-2)
rated-3 (rated-3)
rated-4 (rated-4)
rated-5 (rated-5)
Connected to WooCommerce.com: –
Enforce Approved Product Download Directories: ✔
### WC Pages ###
Shop base: #5 - /shop/
Cart: #6 - /cart/
Checkout: #7 - /checkout/
My account: #8 - /my-account/
Terms and conditions: ❌ Page not set
### Theme ###
Name: Twenty Nineteen
Version: 2.3
Author URL: https://wordpress.org/
Child Theme: ❌ – If you are modifying WooCommerce on a parent theme that you did not build personally we recommend using a child theme. See: How to create a child theme
WooCommerce Support: ✔
### Templates ###
Overrides: –
### Admin ###
Enabled Features: activity-panels
analytics
coupons
customer-effort-score-tracks
experimental-products-task
experimental-import-products-task
experimental-fashion-sample-products
experimental-product-tour
shipping-smart-defaults
shipping-setting-tour
homescreen
marketing
multichannel-marketing
mobile-app-banner
navigation
onboarding
onboarding-tasks
remote-inbox-notifications
remote-free-extensions
payment-gateway-suggestions
shipping-label-banner
subscriptions
store-alerts
transient-notices
woo-mobile-welcome
wc-pay-promotion
wc-pay-welcome-page
Disabled Features: minified-js
new-product-management-experience
settings
Daily Cron: ✔ Next scheduled: 2022-10-04 15:10:05 +00:00
Options: ✔
Notes: 2
Onboarding: -
### Action Scheduler ###
Pending: 2
Oldest: 2022-10-04 15:11:10 +0000
Newest: 2022-10-04 15:11:11 +0000
### Status report information ###
Generated at: 2022-10-04 15:11:26 +00:00
`
### Isolating the problem
- [X] I have deactivated other plugins and confirmed this bug occurs when only WooCommerce plugin is active.
- [X] This bug happens with a default WordPress theme active, or [Storefront](https://woocommerce.com/storefront/).
- [X] I can reproduce this bug consistently using the steps above.
- **index:** 1.0
- **text_combine:** title + body (verbatim concatenation of the two fields above)
- **label:** priority
- **text:** lowercased copy of text_combine with URLs, digits, and most punctuation stripped
wp wc admin note actions data index engine innodb wp wc category lookup data index engine innodb wp wc customer lookup data index engine innodb wp wc download log data index engine innodb wp wc orders data index engine innodb wp wc orders meta data index engine innodb wp wc order addresses data index engine innodb wp wc order coupon lookup data index engine innodb wp wc order operational data data index engine innodb wp wc order product lookup data index engine innodb wp wc order stats data index engine innodb wp wc order tax lookup data index engine innodb wp wc product attributes lookup data index engine innodb wp wc product download directories data index engine innodb wp wc product meta lookup data index engine innodb wp wc rate limits data index engine innodb wp wc reserved stock data index engine innodb wp wc tax rate classes data index engine innodb wp wc webhooks data index engine innodb wp wpml mails data index engine innodb post type counts attachment page post security secure connection https ❌ your store is not using https learn more about https and ssl certificates hide errors from visitors ✔ active plugins woocommerce enable cot by – json basic authentication by wordpress api team – woocommerce reset by woocommerce – woocommerce by automattic – dev wp mail logging by wysija – inactive plugins akismet anti spam by automattic – hello dolly by matt mullenweg – settings api enabled – force ssl – currency usd currency position left thousand separator decimal separator number of decimals taxonomies product types external external grouped grouped simple simple variable variable taxonomies product visibility exclude from catalog exclude from catalog exclude from search exclude from search featured featured outofstock outofstock rated rated rated rated rated rated rated rated rated rated connected to woocommerce com – enforce approved product download directories ✔ wc pages shop base shop cart cart checkout checkout my account my account terms and conditions ❌ 
page not set theme name twenty nineteen version author url child theme ❌ – if you are modifying woocommerce on a parent theme that you did not build personally we recommend using a child theme see how to create a child theme woocommerce support ✔ templates overrides – admin enabled features activity panels analytics coupons customer effort score tracks experimental products task experimental import products task experimental fashion sample products experimental product tour shipping smart defaults shipping setting tour homescreen marketing multichannel marketing mobile app banner navigation onboarding onboarding tasks remote inbox notifications remote free extensions payment gateway suggestions shipping label banner subscriptions store alerts transient notices woo mobile welcome wc pay promotion wc pay welcome page disabled features minified js new product management experience settings daily cron ✔ next scheduled options ✔ notes onboarding action scheduler pending oldest newest status report information generated at isolating the problem i have deactivated other plugins and confirmed this bug occurs when only woocommerce plugin is active this bug happens with a default wordpress theme active or i can reproduce this bug consistently using the steps above
| 1
|
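The paging contract described in the record above can be illustrated with a tiny sketch (hypothetical order IDs only; no real WooCommerce call is made):

```python
def get_page(items, per_page, page):
    """Return the 1-indexed `page` of `items`, mimicking the REST API's
    per_page/page query parameters."""
    start = per_page * (page - 1)
    return items[start:start + per_page]

orders = list(range(1, 251))        # 250 hypothetical order IDs
page_1 = get_page(orders, 100, 1)   # IDs 1..100
page_2 = get_page(orders, 100, 2)   # IDs 101..200
```

With the bug described above, both calls behave as if page=1 and return identical results; the expected behavior is `page_1 != page_2` with no overlap.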
772,299
| 27,115,371,433
|
IssuesEvent
|
2023-02-15 18:08:38
|
NIAEFEUP/tts-revamp-fe
|
https://api.github.com/repos/NIAEFEUP/tts-revamp-fe
|
closed
|
Add courses outside of major
|
low priority high effort
|
- [x] Add extra courses button in modal
- [x] Extra courses Combobox with all available courses
- [ ] Selecting a course from the Combobox adds it to the selected courses
|
1.0
|
Add courses outside of major - - [x] Add extra courses button in modal
- [x] Extra courses Combobox with all available courses
- [ ] Selecting a course from the Combobox adds it to the selected courses
|
priority
|
add courses outside of major add extra courses button in modal extra courses combobox with all available courses selecting a course from the combobox adds it to the selected courses
| 1
|
267,858
| 8,393,845,751
|
IssuesEvent
|
2018-10-09 21:50:50
|
prettier/prettier
|
https://api.github.com/repos/prettier/prettier
|
opened
|
YAML: Incorrect quotes are used when string contains mixed quotes
|
lang:yaml priority:high type:bug
|
**Prettier 1.14.2**
[Playground link](https://prettier.io/playground/#N4Igxg9gdgLgprEAuEAdEByAhq3IBG6qUIANCBAA4wCW0AzsqFgE4sQDuACqwoylgBuEGgBMyBFljABrODADKlaTSgBzZDBYBXOOVX04LGFylqAtlmQAzLABtD5AFb0AHgCEps+Qqzm4ADKqcDb2jiDKLIYsyCAAnn52EpQsqjAA6mIwABbIABwADOQpEIbpUpSxKXDRgiHkLHAAjto0jaZYFlZItg56IIbmNJo6-fSqanZwAIraEPChfeQwWPiZojnIAEzLUjR2EwDCEOaWsVDQ9SDahgAqq-y9hgC+z0A)
```sh
--parser yaml
```
**Input:**
```yaml
"'a\"b"
```
**Output:**
```yaml
''a"b'
```
**Second Output:**
```yaml
SyntaxError: Document is not valid YAML (bad indentation?) (1:3)
> 1 | ''a"b'
| ^^^
> 2 |
| ^
```
**Expected behavior:**
|
1.0
|
YAML: Incorrect quotes are used when string contains mixed quotes - **Prettier 1.14.2**
[Playground link](https://prettier.io/playground/#N4Igxg9gdgLgprEAuEAdEByAhq3IBG6qUIANCBAA4wCW0AzsqFgE4sQDuACqwoylgBuEGgBMyBFljABrODADKlaTSgBzZDBYBXOOVX04LGFylqAtlmQAzLABtD5AFb0AHgCEps+Qqzm4ADKqcDb2jiDKLIYsyCAAnn52EpQsqjAA6mIwABbIABwADOQpEIbpUpSxKXDRgiHkLHAAjto0jaZYFlZItg56IIbmNJo6-fSqanZwAIraEPChfeQwWPiZojnIAEzLUjR2EwDCEOaWsVDQ9SDahgAqq-y9hgC+z0A)
```sh
--parser yaml
```
**Input:**
```yaml
"'a\"b"
```
**Output:**
```yaml
''a"b'
```
**Second Output:**
```yaml
SyntaxError: Document is not valid YAML (bad indentation?) (1:3)
> 1 | ''a"b'
| ^^^
> 2 |
| ^
```
**Expected behavior:**
|
priority
|
yaml incorrect quotes are used when string contains mixed quotes prettier sh parser yaml input yaml a b output yaml a b second output yaml syntaxerror document is not valid yaml bad indentation a b expected behavior
| 1
|
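In YAML's single-quoted style the only escape is a doubled single quote, which is exactly what the output above is missing. A minimal sketch of a correct escaper (not prettier's actual implementation):

```python
def yaml_single_quote(scalar):
    """Quote a scalar in YAML single-quoted style: each ' is doubled,
    everything else (including ") is taken literally."""
    return "'" + scalar.replace("'", "''") + "'"

yaml_single_quote('\'a"b')  # -> '''a"b'  (valid), unlike the ''a"b' emitted above
```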
795,361
| 28,070,704,005
|
IssuesEvent
|
2023-03-29 18:50:28
|
ucb-rit/coldfront
|
https://api.github.com/repos/ucb-rit/coldfront
|
closed
|
Gracefully handle case when CILogon does not provide user first/last name fields
|
bug high priority lrc-only
|
In general, when a user logs in with CILogon, it provides information about the user, including email, first name, last name, etc.
However, when CILogon fails to provide some of these fields (e.g., first name), the [custom application logic](https://github.com/ucb-rit/coldfront/blob/master/coldfront/core/socialaccount/adapter.py#L54) for populating a user from the provided info fails:
```
ERROR middleware.process_exception: AnonymousUser encountered an uncaught exception at /accounts/cilogon/login/callback/. Details:
ERROR middleware.process_exception: null value in column "first_name" violates not-null constraint
```
Update the code to default to an empty string instead of `None`, like the [base class](https://github.com/pennersr/django-allauth/blob/master/allauth/socialaccount/adapter.py#L110) in `django-allauth` does.
|
1.0
|
Gracefully handle case when CILogon does not provide user first/last name fields - In general, when a user logs in with CILogon, it provides information about the user, including email, first name, last name, etc.
However, when CILogon fails to provide some of these fields (e.g., first name), the [custom application logic](https://github.com/ucb-rit/coldfront/blob/master/coldfront/core/socialaccount/adapter.py#L54) for populating a user from the provided info fails:
```
ERROR middleware.process_exception: AnonymousUser encountered an uncaught exception at /accounts/cilogon/login/callback/. Details:
ERROR middleware.process_exception: null value in column "first_name" violates not-null constraint
```
Update the code to default to an empty string instead of `None`, like the [base class](https://github.com/pennersr/django-allauth/blob/master/allauth/socialaccount/adapter.py#L110) in `django-allauth` does.
|
priority
|
gracefully handle case when cilogon does not provide user first last name fields in general when a user logs in with cilogon it provides information about the user including email first name last name etc however when cilogon fails to provide some of these fields e g first name the for populating a user from the provided info fails error middleware process exception anonymoususer encountered an uncaught exception at accounts cilogon login callback details error middleware process exception null value in column first name violates not null constraint update the code to default to an empty string instead of none like the in django allauth does
| 1
|
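The fix amounts to coalescing missing claims to an empty string before saving the user. A sketch of that idea (the claim names given_name/family_name are assumptions here, not necessarily what CILogon sends):

```python
def populate_user_fields(extra_data):
    """Map identity-provider claims to user fields, defaulting absent
    values to '' so NOT NULL columns are never handed None."""
    return {
        "first_name": extra_data.get("given_name") or "",
        "last_name": extra_data.get("family_name") or "",
        "email": extra_data.get("email") or "",
    }
```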
667,226
| 22,422,739,984
|
IssuesEvent
|
2022-06-20 06:06:12
|
thewca/worldcubeassociation.org
|
https://api.github.com/repos/thewca/worldcubeassociation.org
|
closed
|
Competitors now able to delete their registration when accepted
|
bug registration good second issue high-priority
|
When you enable competitors being able to edit their events, it seems they are also able to delete their registrations after they have been accepted. This is undesirable for now, since there are further actions that need to be taken for an accepted registration (e.g. processing a refund if it is due, and accepting the next competitor on the waiting list). While competitors being able to edit their events is great, it would be good if the ability to change their status was not an option for them. Obviously this can be re-enabled when there is automated waiting list handling!
When you enable competitors being able to edit their events, it seems they are also able to delete their registrations after they have been accepted. This is undesirable for now, since there are further actions that need to be taken for an accepted registration (e.g. processing a refund if it is due, and accepting the next competitor on the waiting list). While competitors being able to edit their events is great, it would be good if the ability to change their status was not an option for them. Obviously this can be re-enabled when there is automated waiting list handling!
|
1.0
|
Competitors now able to delete their registration when accepted - When you enable competitors being able to edit their events, it seems they are also able to delete their registrations after they have been accepted. This is undesirable for now, since there are further actions that need to be taken for an accepted registration (e.g. processing a refund if it is due, and accepting the next competitor on the waiting list). While competitors being able to edit their events is great, it would be good if the ability to change their status was not an option for them. Obviously this can be re-enabled when there is automated waiting list handling!
|
priority
|
competitors now able to delete their registration when accepted when you enable competitors being able to edit their events it seems they are also able to delete their registrations after they have been accepted this is undesirable for now since there are further actions that need to be taken for an accepted registration e g processing a refund if it is due and accepting the next competitor on the waiting list while competitors being able to edit their events is great it would be good if the ability to change their status was not an option for them obviously this can be re enabled when there is automated waiting list handling
| 1
|
670,961
| 22,715,244,250
|
IssuesEvent
|
2022-07-06 00:58:31
|
apache/incubator-devlake
|
https://api.github.com/repos/apache/incubator-devlake
|
closed
|
Apache compliance
|
type/docs priority/high epic
|
## Documentation Scope
ALL
## Describe the Change
1. #1910
2. #1911
3. #1912
4. #1897
|
1.0
|
Apache compliance - ## Documentation Scope
ALL
## Describe the Change
1. #1910
2. #1911
3. #1912
4. #1897
|
priority
|
apache compliance documentation scope all describe the change
| 1
|
243,970
| 7,868,826,650
|
IssuesEvent
|
2018-06-24 05:04:54
|
PaddlePaddle/Paddle
|
https://api.github.com/repos/PaddlePaddle/Paddle
|
closed
|
Error when transpiling a program with piecewise_decay to a distributed program
|
Bug high priority
|
When `piecewise_decay` is defined the pserver side program will have a wrong `conditional_block` refer to a block id that doesn't exist on the pserver program.
This error was first reported by @kolinwei
```python
optimizer = fluid.optimizer.Momentum(
learning_rate=fluid.layers.piecewise_decay(
boundaries=bd, values=lr),
momentum=0.9,
regularization=fluid.regularizer.L2Decay(1e-4))
```
|
1.0
|
Error when transpiling a program with piecewise_decay to a distributed program - When `piecewise_decay` is defined the pserver side program will have a wrong `conditional_block` refer to a block id that doesn't exist on the pserver program.
This error was first reported by @kolinwei
```python
optimizer = fluid.optimizer.Momentum(
learning_rate=fluid.layers.piecewise_decay(
boundaries=bd, values=lr),
momentum=0.9,
regularization=fluid.regularizer.L2Decay(1e-4))
```
|
priority
|
error when transpile program with piecewise decay to distributed program when piecewise decay is defined the pserver side program will have a wrong conditional block refer to a block id that doesn t exist on the pserver program this error was first met by kolinwei python optimizer fluid optimizer momentum learning rate fluid layers piecewise decay boundaries bd values lr momentum regularization fluid regularizer
| 1
|
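Independent of the transpiler bug, the schedule itself is simple: the boundaries split training steps into intervals, each with its own learning rate. A plain-Python sketch of those semantics (not Paddle's implementation):

```python
def piecewise_decay(step, boundaries, values):
    """len(values) == len(boundaries) + 1; return the rate for `step`."""
    for boundary, value in zip(boundaries, values):
        if step < boundary:
            return value
    return values[-1]
```

For example, `piecewise_decay(15, [10, 20], [0.1, 0.01, 0.001])` selects the middle interval and yields 0.01.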
692,202
| 23,726,031,578
|
IssuesEvent
|
2022-08-30 19:42:12
|
archesproject/arches
|
https://api.github.com/repos/archesproject/arches
|
closed
|
ElasticSearch throws an error on any page where it's used
|
Priority: High bug
|
**Describe the bug**
<!--- By fully explaining what you are encountering, you can help us understand and reproduce the issue. -->
<!--- Often times, a screenshot or animated GIF can help show what you are encountering. -->
```
elastic_transport.TlsError: TLS error caused by: TlsError(TLS error caused by: SSLError([SSL: WRONG_VERSION_NUMBER] wrong version number...)
```
**To Reproduce**
Steps to reproduce the behavior:
1. Load dev/7.0.x in core and arches-her
2. visit the search page
|
1.0
|
ElasticSearch throws an error on any page where it's used - **Describe the bug**
<!--- By fully explaining what you are encountering, you can help us understand and reproduce the issue. -->
<!--- Often times, a screenshot or animated GIF can help show what you are encountering. -->
```
elastic_transport.TlsError: TLS error caused by: TlsError(TLS error caused by: SSLError([SSL: WRONG_VERSION_NUMBER] wrong version number...)
```
**To Reproduce**
Steps to reproduce the behavior:
1. Load dev/7.0.x in core and arches-her
2. visit the search page
|
priority
|
elasticsearch throws an error on any page where it s used describe the bug elastic transport tlserror tls error caused by tlserror tls error caused by sslerror wrong version number to reproduce steps to reproduce the behavior load dev x in core and arches her visit the search page
| 1
|
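`WRONG_VERSION_NUMBER` almost always means a TLS handshake hit a plain-HTTP port, i.e. the client URL says https:// while the Elasticsearch node is not serving TLS. A sketch of one remedy, rewriting the scheme to match the server (whether to downgrade the client or instead enable TLS on the node is a deployment decision):

```python
from urllib.parse import urlparse, urlunparse

def match_scheme_to_server(url, server_has_tls):
    """Point the client at http:// when the server is not serving TLS,
    avoiding 'SSL: WRONG_VERSION_NUMBER' handshake failures."""
    parts = urlparse(url)
    if parts.scheme == "https" and not server_has_tls:
        return urlunparse(parts._replace(scheme="http"))
    return url
```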
200,874
| 7,017,824,519
|
IssuesEvent
|
2017-12-21 11:07:18
|
bleenco/abstruse
|
https://api.github.com/repos/bleenco/abstruse
|
closed
|
[bug]: echo $ENV_VARIABLE
|
Priority: High Status: In Progress Type: Question
|
Check what happens if someone puts an `echo $ENV_VARIABLE` command in `.abstruse.yml`.
Encrypted data should not be shown at any cost
|
1.0
|
[bug]: echo $ENV_VARIABLE - Check what happens if someone puts an `echo $ENV_VARIABLE` command in `.abstruse.yml`.
Encrypted data should not be shown at any cost
|
priority
|
echo env variable check what happens if someone put echo env variable command in abstruse yml encrypted data should not show at any cost
| 1
|
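The standard CI safeguard is to redact decrypted values from any build output before it is stored or displayed. A minimal sketch of that idea (Abstruse's real log pipeline may differ):

```python
def mask_secrets(line, secret_values):
    """Replace every occurrence of a decrypted secret with a placeholder
    so `echo $ENV_VARIABLE` cannot leak it into build logs."""
    for value in secret_values:
        if value:
            line = line.replace(value, "[secure]")
    return line
```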
551,008
| 16,136,149,228
|
IssuesEvent
|
2021-04-29 12:07:11
|
Neural-Systems-at-UIO/VisuAlign
|
https://api.github.com/repos/Neural-Systems-at-UIO/VisuAlign
|
closed
|
MeshView v0.8 - user defined sections and hidden outline
|
High Priority enhancement
|
We would like to request the following changes to MeshView v0.8 (https://www.nesys.uio.no/MeshView/meshview.html?atlas=ABA_mouse_v3_2017_full ):
1. An option to cut imported point clouds in user-defined sections with a user-defined thickness (in the newest version of MeshView). This option is available in a previous version of MeshView (https://www.nesys.uio.no/MeshView/meshviewx.html?atlas=ABA_mouse_v3_2017_full ). Here it is done by cutting in "Cloud only" and adjusting the thickness of the slice by a slider. See attached image for explanation of what function we want to implement from previous version of MeshView.

2. View imported point clouds without structural outlines, or the option to make it "invisible" for the eye (turn off structural outline) while still keeping the bounding box.
|
1.0
|
MeshView v0.8 - user defined sections and hidden outline - We would like to request the following changes to MeshView v0.8 (https://www.nesys.uio.no/MeshView/meshview.html?atlas=ABA_mouse_v3_2017_full ):
1. An option to cut imported point clouds in user-defined sections with a user-defined thickness (in the newest version of MeshView). This option is available in a previous version of MeshView (https://www.nesys.uio.no/MeshView/meshviewx.html?atlas=ABA_mouse_v3_2017_full ). Here it is done by cutting in "Cloud only" and adjusting the thickness of the slice by a slider. See attached image for explanation of what function we want to implement from previous version of MeshView.

2. View imported point clouds without structural outlines, or the option to make it "invisible" for the eye (turn off structural outline) while still keeping the bounding box.
|
priority
|
meshview user defined sections and hidden outline we like to request the following changes to meshview an option to cut imported point clouds in user defined sections with a user defined thickness in the newest version of meshview this option is available in a previous version of meshview here it is done by cutting in cloud only and adjusting the thickness of the slice by a slider see attached image for explanation of what function we want to implement from previous version of meshview view imported point clouds without structural outlines or the option to make it invisible for the eye turn off structural outline while still keeping the bounding box
| 1
|
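Request 1 is essentially a slab filter over the point cloud: keep points whose coordinate along the cutting axis falls within a user-defined thickness around the cut position. A geometric sketch (MeshView's own data model is not assumed):

```python
def points_in_slab(points, axis, center, thickness):
    """Return the points whose `axis` coordinate lies within
    [center - thickness/2, center + thickness/2]."""
    half = thickness / 2.0
    return [p for p in points if abs(p[axis] - center) <= half]
```

A slider adjusting the thickness would simply re-run this filter with a new `thickness` value.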
357,460
| 10,606,644,544
|
IssuesEvent
|
2019-10-11 00:14:47
|
carbon-design-system/carbon
|
https://api.github.com/repos/carbon-design-system/carbon
|
closed
|
Tag (tag) component has insufficient contrast on Gray 100 theme
|
Severity 1 🚨 priority: high type: a11y ♿
|
<!-- Feel free to remove sections that aren't relevant.
## Title line template: [Title]: Brief description
-->
## Environment
Windows 10
Chrome Version 76.0.3809.132 (Official Build) (64-bit)
## Detailed description
Carbon v10 - React
Component: Tag (filter), http://themes.carbondesignsystem.com/?nav=tag
The component is using,
`background-color: interactive-02, color: inverse-01`
When switching to the Gray 100 theme, the contrast ratio has dropped to 3.57 (WCAG AA: Fail).
I used WebAIM (https://webaim.org/resources/contrastchecker/) to check the colour contrast.
## Additional information

|
1.0
|
Tag (tag) component has insufficient contrast on Gray 100 theme - <!-- Feel free to remove sections that aren't relevant.
## Title line template: [Title]: Brief description
-->
## Environment
Windows 10
Chrome Version 76.0.3809.132 (Official Build) (64-bit)
## Detailed description
Carbon v10 - React
Component: Tag (filter), http://themes.carbondesignsystem.com/?nav=tag
The component is using,
`background-color: interactive-02, color: inverse-01`
When switching to the Gray 100 theme, the contrast ratio has dropped to 3.57 (WCAG AA: Fail).
I used WebAIM (https://webaim.org/resources/contrastchecker/) to check the colour contrast.
## Additional information

|
priority
|
tag tag component has insufficient contrast on gray theme feel free to remove sections that aren t relevant title line template brief description environment windows chrome version official build bit detailed description carbon react component tag filter the component is using background color interactive color inverse when switching to the gray theme the contrast ratio has dropped to wcag aa fail i used webaim to check the colour contrast additional information
| 1
|
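The 3.57 figure comes from the WCAG 2.x formula: compute the relative luminance of each color, then take (L1 + 0.05) / (L2 + 0.05) with the lighter luminance on top. A self-contained sketch of that computation:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an (r, g, b) color with channels in 0..255."""
    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """Contrast ratio between two colors; >= 4.5 passes WCAG AA for normal text."""
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```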
168,501
| 6,376,697,955
|
IssuesEvent
|
2017-08-02 08:12:39
|
morpho-os/framework
|
https://api.github.com/repos/morpho-os/framework
|
closed
|
Use integers for priority of events in ModuleManager, accept the `-` sign.
|
high priority (4)
|
Fix DB column type if necessary.
Fix range - `0..100` and the default priority.
|
1.0
|
Use integers for priority of events in ModuleManager, accept the `-` sign. - Fix DB column type if necessary.
Fix range - `0..100` and the default priority.
|
priority
|
use integers for priority of events in modulemanager accept the sign fix db column type if necessary fix range and the default priority
| 1
|
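A sketch of the requested parsing: accept a leading `-` while still keeping stored priorities in the 0..100 range (whether out-of-range values should clamp or be rejected is a judgment call the issue does not settle):

```python
def normalize_priority(raw, default=50):
    """Parse an event priority as a signed integer and clamp it to 0..100."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return default
    return max(0, min(100, value))
```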
386,941
| 11,453,492,579
|
IssuesEvent
|
2020-02-06 15:29:36
|
woocommerce/woocommerce-gateway-paypal-express-checkout
|
https://api.github.com/repos/woocommerce/woocommerce-gateway-paypal-express-checkout
|
closed
|
Double stock reduction when IPN and PDT notifications are enabled
|
Priority: High [Type] Cannot reproduce
|
When IPN and PDT notifications are enabled, the stock is reduced twice.
Stock levels reduced: High-waist wide leg trousers – S (1904LAG60240154B) 0→-1, Fringes short sleeve t-shirt – S (1904SIX58760129B) 1→0
July 28, 2019 at 9:28 am Delete note
Stock levels reduced: High-waist wide leg trousers – S (1904LAG60240154B) 1→0, Fringes short sleeve t-shirt – S (1904SIX58760129B) 2→1
July 28, 2019 at 9:28 am Delete note
PDT payment completed
July 28, 2019 at 9:27 am Delete note
IPN payment completed
July 28, 2019 at 9:27 am Delete note
|
1.0
|
Double stock reduction when IPN and PDT notifications are enabled - When IPN and PDT notifications are enabled, the stock is reduced twice.
Stock levels reduced: High-waist wide leg trousers – S (1904LAG60240154B) 0→-1, Fringes short sleeve t-shirt – S (1904SIX58760129B) 1→0
July 28, 2019 at 9:28 am Delete note
Stock levels reduced: High-waist wide leg trousers – S (1904LAG60240154B) 1→0, Fringes short sleeve t-shirt – S (1904SIX58760129B) 2→1
July 28, 2019 at 9:28 am Delete note
PDT payment completed
July 28, 2019 at 9:27 am Delete note
IPN payment completed
July 28, 2019 at 9:27 am Delete note
|
priority
|
double stock reduce when ipn and pdt notifications are enable when ipn and pdt notifications are enabled the stock is reduced twice stock levels reduced high waist wide leg trousers – s → fringes short sleeve t shirt – s → july at am delete note stock levels reduced high waist wide leg trousers – s → fringes short sleeve t shirt – s → july at am delete note pdt payment completed july at am delete note ipn payment completed july at am delete note
| 1
|
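Since IPN and PDT can both confirm the same payment, stock reduction has to be idempotent per order; WooCommerce guards this with a per-order flag. A sketch of that guard (hypothetical order structure):

```python
def maybe_reduce_stock(order, reduce_fn):
    """Run `reduce_fn` at most once per order, no matter how many
    payment notifications (IPN, PDT, ...) arrive for it."""
    if order.get("stock_reduced"):
        return False
    reduce_fn(order)
    order["stock_reduced"] = True
    return True
```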
277,488
| 8,629,027,702
|
IssuesEvent
|
2018-11-21 19:17:11
|
Polymer/lit-html
|
https://api.github.com/repos/Polymer/lit-html
|
closed
|
Update when directive to handle any number of cases.
|
Area: API Priority: High Status: Accepted Type: Enhancement
|
We have two feature requests related to conditional rendering:
- #511 - the false value should be optional
- A `switch` directive.
We can resolve both of these requests by generalizing `when` to introspect it's arguments to operate in two modes:
### `if` mode
When the second argument is a function or TemplateResult, we treat the condition as a truthy value and render either the second or third argument.
```ts
html`${when(condition,
() => html`condition is true`,
() => html`condition is false`
)}`
```
### `switch` mode
When the second argument is an object, we treat the condition as simple value, and the second argument as a map of cases:
```ts
html`${when(value, {
'one': () => html`value is one`,
'two': () => html`value is two`,
default: () => html`value is neither`,
})}`
```
In switch mode, cases can only be keyed by strings, number and symbols. If in the future we want to lift this restriction, we can accept arrays:
```ts
html`${when(value,
['one', () => html`value is one`],
['two': () => html`value is two`]
)}`
```
But this seems better to leave off for now.
One of the benefits of handling both the `if` and `switch` use cases in one directive is that we don't have to choose two new names, since we can't name a variable `if`, `switch` or `case`. It should also simplify choices.
### Caching
Currently `when` caches the DOM created by the true and false cases. This should increase performance on fast/frequently changing values, but this might not always be the case, and it increases complexity, code size and overhead. To make complexity pay-for-play we should remove caching from the `when` directive and add a separate `cachingWhen` directive. `cachingWhen` can locally be renamed to `when` when imported:
```ts
import {cachingWhen as when} from 'lit-html/directives/caching-when.js';
```
|
1.0
|
Update when directive to handle any number of cases. - We have two feature requests related to conditional rendering:
- #511 - the false value should be optional
- A `switch` directive.
We can resolve both of these requests by generalizing `when` to introspect it's arguments to operate in two modes:
### `if` mode
When the second argument is a function or TemplateResult, we treat the condition as a truthy value and render either the second or third argument.
```ts
html`${when(condition,
() => html`condition is true`,
() => html`condition is false`
)}`
```
### `switch` mode
When the second argument is an object, we treat the condition as simple value, and the second argument as a map of cases:
```ts
html`${when(value, {
'one': () => html`value is one`,
'two': () => html`value is two`,
default: () => html`value is neither`,
})}`
```
In switch mode, cases can only be keyed by strings, number and symbols. If in the future we want to lift this restriction, we can accept arrays:
```ts
html`${when(value,
['one', () => html`value is one`],
['two': () => html`value is two`]
)}`
```
But this seems better to leave off for now.
One of the benefits of handling both the `if` and `switch` use cases in one directive is that we don't have to choose two new names, since we can't name a variable `if`, `switch` or `case`. It should also simplify choices.
### Caching
Currently `when` caches the DOM created by the true and false cases. This should increase performance on fast/frequently changing values, but this might not always be the case, and it increases complexity, code size and overhead. To make complexity pay-for-play we should remove caching from the `when` directive and add a separate `cachingWhen` directive. `cachingWhen` can locally be renamed to `when` when imported:
```ts
import {cachingWhen as when} from 'lit-html/directives/caching-when.js';
```
|
priority
|
update when directive to handle any number of cases we have two feature requests related to conditional rendering the false value should be optional a switch directive we can resolve both of these requests by generalizing when to introspect it s arguments to operate in two modes if mode when the second argument is a function or templateresult we treat the condition as a truthy value and render either the second or third argument ts html when condition html condition is true html condition is false switch mode when the second argument is an object we treat the condition as simple value and the second argument as a map of cases ts html when value one html value is one two html value is two default html value is neither in switch mode cases can only be keyed by strings number and symbols if in the future we want to lift this restriction we can accept arrays ts html when value but this seems better to leave off for now one of the benefits of handling both the if and switch use cases in one directive is that we don t have to choose two new names since we can t name a variable if switch or case it should also simplify choices caching currently when caches the dom created by the true and false cases this should increase performance on fast frequently changing values but this might not always be the case and it increases complexity code size and overhead to make complexity pay for play we should remove caching from the when directive and add a separate cachingwhen directive cachingwhen can locally be renamed to when when imported ts import cachingwhen as when from lit html directives caching when js
| 1
|
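The proposed introspection can be mocked up outside lit-html. Here is a Python sketch where a callable second argument selects if/else mode and a mapping selects switch mode (a plain `'default'` key stands in for the issue's `default:` case, which in JS could be a symbol to avoid colliding with string cases):

```python
def when(condition_or_value, second, third=None):
    """Dual-mode conditional: if/else when `second` is callable,
    switch over `second`'s keys when it is a mapping."""
    if callable(second):
        if condition_or_value:
            return second()
        return third() if third is not None else None
    handler = second.get(condition_or_value, second.get("default"))
    return handler() if handler is not None else None
```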
157,991
| 6,019,910,749
|
IssuesEvent
|
2017-06-07 15:23:53
|
cytoscape/cytoscape.js
|
https://api.github.com/repos/cytoscape/cytoscape.js
|
closed
|
Bottom/top roundrect shapes (`shape: top-roundrectangle | bottom-roundrectangle`)
|
priority-1-high
|

`shape: top-roundrectangle | bottom-roundrectangle`
|
1.0
|
Bottom/top roundrect shapes (`shape: top-roundrectangle | bottom-roundrectangle`) - 
`shape: top-roundrectangle | bottom-roundrectangle`
|
priority
|
bottom top roundrect shapes shape top roundrectangle bottom roundrectangle shape top roundrectangle bottom roundrectangle
| 1
|
327,725
| 9,979,616,508
|
IssuesEvent
|
2019-07-09 23:39:08
|
MolSnoo/Alter-Ego
|
https://api.github.com/repos/MolSnoo/Alter-Ego
|
opened
|
Bot crashes when trying to load a player with no status effects
|
bug high priority
|
If a player with no status effects is loaded, the bot will crash when trying to generate the player's statusString.
|
1.0
|
Bot crashes when trying to load a player with no status effects - If a player with no status effects is loaded, the bot will crash when trying to generate the player's statusString.
|
priority
|
bot crashes when trying to load a player with no status effects if a player with no status effects is loaded the bot will crash when trying to generate the player s statusstring
| 1
|
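The crash is the classic empty-collection edge case: generating the string should degrade gracefully instead of assuming at least one effect. A sketch of the guard (the real statusString format in Alter-Ego is not assumed):

```python
def status_string(status_effects):
    """Join a player's status effects into a readable string without
    crashing when the list is empty."""
    if not status_effects:
        return "no status effects"
    return ", ".join(effect["name"] for effect in status_effects)
```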
295,217
| 9,083,230,576
|
IssuesEvent
|
2019-02-17 18:49:50
|
sul-dlss/preservation_catalog
|
https://api.github.com/repos/sul-dlss/preservation_catalog
|
reopened
|
zip segment names are being incorrectly generated(?)
|
bug high priority
|
Consider druid vb008fc5700
AWS S3 bucket contains the following files for this druid:
```
vb/008/fc/5700/vb008fc5700.v0001.z01
vb/008/fc/5700/vb008fc5700.v0001.z05
vb/008/fc/5700/vb008fc5700.v0001.z06
vb/008/fc/5700/vb008fc5700.v0001.z07
vb/008/fc/5700/vb008fc5700.v0001.z08
vb/008/fc/5700/vb008fc5700.v0001.z09
vb/008/fc/5700/vb008fc5700.v0001.z10
vb/008/fc/5700/vb008fc5700.v0001.z11
vb/008/fc/5700/vb008fc5700.v0001.z12
vb/008/fc/5700/vb008fc5700.v0001.z13
vb/008/fc/5700/vb008fc5700.v0001.z14
vb/008/fc/5700/vb008fc5700.v0001.z15
vb/008/fc/5700/vb008fc5700.v0001.z16
vb/008/fc/5700/vb008fc5700.v0001.z17
vb/008/fc/5700/vb008fc5700.v0001.z18
vb/008/fc/5700/vb008fc5700.v0001.z19
vb/008/fc/5700/vb008fc5700.v0001.z20
vb/008/fc/5700/vb008fc5700.v0001.z21
vb/008/fc/5700/vb008fc5700.v0001.z22
vb/008/fc/5700/vb008fc5700.v0001.z23
vb/008/fc/5700/vb008fc5700.v0001.z26
vb/008/fc/5700/vb008fc5700.v0001.z27
vb/008/fc/5700/vb008fc5700.v0001.z28
vb/008/fc/5700/vb008fc5700.v0001.z29
vb/008/fc/5700/vb008fc5700.v0001.zip
vb/008/fc/5700/vb008fc5700.v0002.zip
```
Note: missing order (z02, z03 and z04) and a total of 27 uploaded segments for version 1.
Metadata for the upload says:
```
{
"AcceptRanges": "bytes",
"ContentType": "",
"LastModified": "Mon, 17 Sep 2018 10:08:05 GMT",
"ContentLength": 7405818770,
"ETag": "\"b024db8b693879360086f845c0dbcb8d-1413\"",
"StorageClass": "GLACIER",
"Metadata": {
"size": "7405818770",
"parts_count": "27",
"zip_version": "Zip 3.0 (July 5th 2008)",
"checksum_md5": "e2d64ef108bfd463c8cf71478063c7f7",
"zip_cmd": "zip -r0X -s 10g /sdr-transfers/vb/008/fc/5700/vb008fc5700.v0001.zip vb008fc5700/v0001"
}
}
```
Note parts_count: 27, which implies the final segment should be z26, *not* z29.
There are 3 unexpected segments present: z27, z28 and z29.
There are 3 expected segments missing: z02, z03, z04.
This *suggests* that the unexpected segments are, in fact, the missing z02, z03 and z04. But even if that is the case, this is broken.
|
1.0
|
zip segment names are being incorrectly generated(?) - Consider druid vb008fc5700
AWS S3 bucket contains the following files for this druid:
```
vb/008/fc/5700/vb008fc5700.v0001.z01
vb/008/fc/5700/vb008fc5700.v0001.z05
vb/008/fc/5700/vb008fc5700.v0001.z06
vb/008/fc/5700/vb008fc5700.v0001.z07
vb/008/fc/5700/vb008fc5700.v0001.z08
vb/008/fc/5700/vb008fc5700.v0001.z09
vb/008/fc/5700/vb008fc5700.v0001.z10
vb/008/fc/5700/vb008fc5700.v0001.z11
vb/008/fc/5700/vb008fc5700.v0001.z12
vb/008/fc/5700/vb008fc5700.v0001.z13
vb/008/fc/5700/vb008fc5700.v0001.z14
vb/008/fc/5700/vb008fc5700.v0001.z15
vb/008/fc/5700/vb008fc5700.v0001.z16
vb/008/fc/5700/vb008fc5700.v0001.z17
vb/008/fc/5700/vb008fc5700.v0001.z18
vb/008/fc/5700/vb008fc5700.v0001.z19
vb/008/fc/5700/vb008fc5700.v0001.z20
vb/008/fc/5700/vb008fc5700.v0001.z21
vb/008/fc/5700/vb008fc5700.v0001.z22
vb/008/fc/5700/vb008fc5700.v0001.z23
vb/008/fc/5700/vb008fc5700.v0001.z26
vb/008/fc/5700/vb008fc5700.v0001.z27
vb/008/fc/5700/vb008fc5700.v0001.z28
vb/008/fc/5700/vb008fc5700.v0001.z29
vb/008/fc/5700/vb008fc5700.v0001.zip
vb/008/fc/5700/vb008fc5700.v0002.zip
```
Note: missing order (z02, z03 and z04) and a total of 27 uploaded segments for version 1.
Metadata for the upload says:
```
{
"AcceptRanges": "bytes",
"ContentType": "",
"LastModified": "Mon, 17 Sep 2018 10:08:05 GMT",
"ContentLength": 7405818770,
"ETag": "\"b024db8b693879360086f845c0dbcb8d-1413\"",
"StorageClass": "GLACIER",
"Metadata": {
"size": "7405818770",
"parts_count": "27",
"zip_version": "Zip 3.0 (July 5th 2008)",
"checksum_md5": "e2d64ef108bfd463c8cf71478063c7f7",
"zip_cmd": "zip -r0X -s 10g /sdr-transfers/vb/008/fc/5700/vb008fc5700.v0001.zip vb008fc5700/v0001"
}
}
```
Note parts_count: 27, which implies the final segment should be z26, *not* z29.
There are 3 unexpected segments present: z27, z28 and z29.
There are 3 expected segments missing: z02, z03, z04.
This *suggests* that the unexpected segments are, in fact, the missing z02, z03 and z04. But even if that is the case, this is broken.
|
priority
|
zip segment names are being incorrectly generated consider druid aws bucket contains the following files for this druid vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc vb fc zip vb fc zip note missing order and and a total of uploaded segments for version metadata for the upload says acceptranges bytes contenttype lastmodified mon sep gmt contentlength etag storageclass glacier metadata size parts count zip version zip july checksum zip cmd zip s sdr transfers vb fc zip note parts count which implies the final segment should be not there are unexpected segments present and there are expected segments missing this suggests that the unexpected segments are in fact the missing and but even if that is the case this is broken
| 1
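The mismatch in the record above (metadata says `parts_count: 27`, yet the listing runs up to `z29` with `z02`–`z04` absent) can be checked mechanically. A sketch, assuming Info-ZIP's split naming convention (`base.z01` .. `base.z(N-1)` plus a final `base.zip` for N parts):

```python
def expected_segments(base, parts_count):
    """Segment names `zip -s` produces for a split archive of parts_count parts:
    base.z01 .. base.z(N-1), with base.zip as the final part."""
    names = [f"{base}.z{i:02d}" for i in range(1, parts_count)]
    names.append(f"{base}.zip")
    return names

def diff_listing(base, parts_count, actual_names):
    """Return (missing, unexpected) segment names versus the metadata."""
    expected = set(expected_segments(base, parts_count))
    actual = set(actual_names)
    return sorted(expected - actual), sorted(actual - expected)
```

For `parts_count: 27` the final segment should indeed be `z26`; running `diff_listing` over the bucket listing would report `z02`–`z04` missing and `z27`–`z29` unexpected, matching the analysis in the record.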
|
372,397
| 11,013,731,626
|
IssuesEvent
|
2019-12-04 21:07:12
|
alect47/Playlist
|
https://api.github.com/repos/alect47/Playlist
|
closed
|
DELETE api/v1/favorites/:id
|
High Priority
|
As a user, I can delete a song from my list of favorites.
```
DELETE /api/v1/favorites/:id
```
- [x] Delete the favorite with the id passed in
- [x] return 204 status code
- [x] If favorite not found return 404
|
1.0
|
DELETE api/v1/favorites/:id - As a user, I can delete a song from my list of favorites.
```
DELETE /api/v1/favorites/:id
```
- [x] Delete the favorite with the id passed in
- [x] return 204 status code
- [x] If favorite not found return 404
|
priority
|
delete api favorites id as a user i can delete a song from my list of favorites delete api favorites id delete the favorite with the id passed in return status code if favorite not found return
| 1
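The acceptance criteria in the record above (delete by id, 204 on success, 404 when the favorite is missing) can be sketched framework-free; the dict here is a hypothetical stand-in for the app's real data store, not its actual model layer:

```python
def delete_favorite(favorites, favorite_id):
    """Delete a favorite by id; return the HTTP status the API should send."""
    if favorite_id not in favorites:
        return 404          # favorite not found
    del favorites[favorite_id]
    return 204              # deleted, no content returned

store = {1: {"song": "a"}, 2: {"song": "b"}}
assert delete_favorite(store, 1) == 204
assert delete_favorite(store, 1) == 404  # already gone
```

The second call demonstrates idempotence of the error path: repeating the delete yields 404 rather than an exception.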
|
539,591
| 15,791,382,711
|
IssuesEvent
|
2021-04-02 04:21:28
|
davidfstr/Crystal-Web-Archiver
|
https://api.github.com/repos/davidfstr/Crystal-Web-Archiver
|
closed
|
Eliminate prompt to "access your photos" on macOS
|
os-mac priority-high type-bug
|
Priority: High
* Bad first-run experience for end-users.
Description:
* When Crystal is built as a macOS app binary and run for the first time (on macOS 10.14 Mojave), it displays a security prompt (see screenshot below) immediately when it is first run, just before showing the Open/Save dialog initially.
* This security prompt is confusing because it implies that Crystal is trying to access user data (i.e. the Photos directory) that Crystal in fact does not care about and does not try to specially access.
Tasks:
* [ ] Determine why this prompt is being displayed.
* [ ] Alter the behavior of Crystal to not trigger the prompt.
Related:
* StackOverflow question trying to determine why this prompt is happening in the first place
* https://stackoverflow.com/questions/66735138/eliminate-app-name-would-like-to-access-your-photos-prompt-when-building-na
* The one commentator on this question has not been helpful. Probably want to open a new question that asks about this kind of security prompt in general, perhaps on: https://apple.stackexchange.com/

|
1.0
|
Eliminate prompt to "access your photos" on macOS - Priority: High
* Bad first-run experience for end-users.
Description:
* When Crystal is built as a macOS app binary and run for the first time (on macOS 10.14 Mojave), it displays a security prompt (see screenshot below) immediately when it is first run, just before showing the Open/Save dialog initially.
* This security prompt is confusing because it implies that Crystal is trying to access user data (i.e. the Photos directory) that Crystal in fact does not care about and does not try to specially access.
Tasks:
* [ ] Determine why this prompt is being displayed.
* [ ] Alter the behavior of Crystal to not trigger the prompt.
Related:
* StackOverflow question trying to determine why this prompt is happening in the first place
* https://stackoverflow.com/questions/66735138/eliminate-app-name-would-like-to-access-your-photos-prompt-when-building-na
* The one commentator on this question has not been helpful. Probably want to open a new question that asks about this kind of security prompt in general, perhaps on: https://apple.stackexchange.com/

|
priority
|
eliminate prompt to access your photos on macos priority high bad first run experience for end users description when crystal is built as a macos app binary and run for the first time on macos mojave it displays a security prompt see screenshot below immediately when it is first run just before showing the open save dialog initially this security prompt is confusing because it implies that crystal is trying to access user data i e the photos directory that crystal in fact does not care about and does not try to specially access tasks determine why this prompt is being displayed alter the behavior of crystal to not trigger the prompt related stackoverflow question trying to determine why this prompt is happening in the first place the one commentator on this question has not been helpful probably want to open a new question that asks about this kind of security prompt in general perhaps on
| 1
|
554,472
| 16,421,260,084
|
IssuesEvent
|
2021-05-19 12:49:56
|
joseywoermann/navnlos
|
https://api.github.com/repos/joseywoermann/navnlos
|
closed
|
Filter does not work properly
|
Priority: Medium to high bug
|
If for example "test" is not allowed, the bot also deletes messages containing the word "testing", because "test" is a part of "**test**ing".
Possible solution:
Put the banned words into a JSON file and read the words into a list.
|
1.0
|
Filter does not work properly - If for example "test" is not allowed, the bot also deletes messages containing the word "testing", because "test" is a part of "**test**ing".
Possible solution:
Put the banned words into a JSON file and read the words into a list.
|
priority
|
filter does not work properly if for example test is not allowed the bot also deletes messages containing the word testing because test is a part of test ing possible solution put the banned words into a json file and read the words into a list
| 1
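The substring bug in the record above ("test" also matching "testing") is fixed by matching on word boundaries rather than raw containment. A regex sketch of that check (the real bot's filter code may differ):

```python
import re

def contains_banned_word(message, banned_words):
    """True only when a banned word appears as a whole word in the message."""
    for word in banned_words:
        # \b anchors the match at word boundaries, so "test" never
        # matches inside "testing"; re.escape guards special characters.
        if re.search(rf"\b{re.escape(word)}\b", message, re.IGNORECASE):
            return True
    return False
```

This combines cleanly with the proposed fix of keeping the banned words in a JSON file: load the list once, then pass it in as `banned_words`.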
|
771,154
| 27,072,270,215
|
IssuesEvent
|
2023-02-14 08:01:53
|
vignetteapp/MediaPipe.NET
|
https://api.github.com/repos/vignetteapp/MediaPipe.NET
|
closed
|
Epic: libmuxr to replace current runtime
|
enhancement help wanted priority:high area:pinvoke
|
Due to the myriad of issues we have encountered with our current wrapper, it's sufficient to say that we cannot trust the wrapper we derived from homuler due to testability, reproducibility of issues, and everything is just a black hole to us. Therefore, the next major task is re-architecting the wrapper to a new implementation.
## Introduction to MUXR: The MediaPipe Universal eXtension Runtime
libmuxr, or simply MUXR, is our answer to a lot of issues we've encountered during the creation of MediaPipe.NET and porting MediaPipeUnity as Akihabara. MUXR aims to do the following:
- MUXR should handle every pointer ownership by MediaPipe. Any implementing API on top of it should not be able to touch the MediaPipe pointers as much as possible.
- Produce a Facade approach to the API, i.e. make the library as C-compatible and convenient as possible: this opens MUXR to integration from other languages that only support C ABIs, should we wish to.
- Invoking MUXR should be as easy as instantiating a new MediaPipe context in the Browser.
Of course a lot of the APIs we use like custom resources should still be supported, but we will have to re-architect everything including the wrapper to accommodate this new architecture.
|
1.0
|
Epic: libmuxr to replace current runtime - Due to the myriad of issues we have encountered with our current wrapper, it's sufficient to say that we cannot trust the wrapper we derived from homuler due to testability, reproducibility of issues, and everything is just a black hole to us. Therefore, the next major task is re-architecting the wrapper to a new implementation.
## Introduction to MUXR: The MediaPipe Universal eXtension Runtime
libmuxr, or simply MUXR, is our answer to a lot of issues we've encountered during the creation of MediaPipe.NET and porting MediaPipeUnity as Akihabara. MUXR aims to do the following:
- MUXR should handle every pointer ownership by MediaPipe. Any implementing API on top of it should not be able to touch the MediaPipe pointers as much as possible.
- Produce a Facade approach to the API, i.e. make the library as C-compatible and convenient as possible: this opens MUXR to integration from other languages that only support C ABIs, should we wish to.
- Invoking MUXR should be as easy as instantiating a new MediaPipe context in the Browser.
Of course a lot of the APIs we use like custom resources should still be supported, but we will have to re-architect everything including the wrapper to accommodate this new architecture.
|
priority
|
epic libmuxr to replace current runtime due to the myriad of issues we have encountered with our current wrapper it s sufficient to say that we cannot trust the wrapper we derived from homuler due to testability reproducibility of issues and everything is just a black hole to us therefore the next major task is re architecting the wrapper to a new implementation introduction to muxr the mediapipe universal extension runtime libmuxr or simply muxr is our answer to a lot of issues we ve encountered during the creation of mediapipe net and porting mediapipeunity as akihabara muxr aims to do the following muxr should handle every pointer ownership by mediapipe any implementing api on top of it should not be able to touch the mediapipe pointers as much as possible produce a facade approach to the api which is make a c compatible and convenient library as much as possible which opens muxr to be integrated by other languages if we wish to which only supports c abis invoking muxr should be as easy as instantiating a new mediapipe context in the browser of course a lot of the apis we use like custom resources should still be supported but we will have to re architect everything including the wrapper to accommodate this new architecture
| 1
|
501,051
| 14,520,050,299
|
IssuesEvent
|
2020-12-14 04:30:27
|
mintproject/mint-ui-lit
|
https://api.github.com/repos/mintproject/mint-ui-lit
|
closed
|
HAND setups are not showing in the UI (custom query problem)
|
bug high priority
|
When I select the flood contour indicator, none of the HAND setups show up.
There seems to be a problem with the API.
|
1.0
|
HAND setups are not showing in the UI (custom query problem) - When I select the flood contour indicator, none of the HAND setups show up.
There seems to be a problem with the API.
|
priority
|
hand setups are not showing in the ui custom query problem when i select the flood contour indicator none of the hand setups show up there seems to be a problem with the api
| 1
|
705,682
| 24,244,293,693
|
IssuesEvent
|
2022-09-27 09:17:56
|
mantidproject/mantid
|
https://api.github.com/repos/mantidproject/mantid
|
closed
|
Live data in refl GUI still not working on IDAaaS
|
High Priority Bug ISIS Team: LSS
|
Max has reported that live data is still not working on IDAaaS. I haven't been able to get hold of him to get more details yet; it is working for me. There are a number of things that it could be. See the latest comment on [this issue](https://github.com/ISISNeutronMuon/mantid-isis-support/issues/44) for some of my thoughts. Also see the earlier comment on that issue for other ideas.
@martyngigg @thomashampson I'm away on leave now so could someone contact Max to find out more? If Max is not available it would also be good to contact Becky or another reflectometry scientist to see if it works for them. The first thing to check is that CaChannel is working for them. If it's more a problem with the GUI/algorithm then maybe @rbauststfc could take a look.
Live data is very high priority for reflectometry so we need to make sure this is working.
|
1.0
|
Live data in refl GUI still not working on IDAaaS - Max has reported that live data is still not working on IDAaaS. I haven't been able to get hold of him to get more details yet; it is working for me. There are a number of things that it could be. See the latest comment on [this issue](https://github.com/ISISNeutronMuon/mantid-isis-support/issues/44) for some of my thoughts. Also see the earlier comment on that issue for other ideas.
@martyngigg @thomashampson I'm away on leave now so could someone contact Max to find out more? If Max is not available it would also be good to contact Becky or another reflectometry scientist to see if it works for them. The first thing to check is that CaChannel is working for them. If it's more a problem with the GUI/algorithm then maybe @rbauststfc could take a look.
Live data is very high priority for reflectometry so we need to make sure this is working.
|
priority
|
live data in refl gui still not working on idaaas max has reported that live data is still not working on idaaas i haven t been able to get hold of him to get more details yet it is working for me there are a number of things that it could be see the latest comment on for some of my thoughts also see the earlier comment on that issue for other ideas martyngigg thomashampson i m away on leave now so could someone contact max to find out more if max is not available it would also be good to contact becky or another reflectometry scientist to see if it works for them the first thing to check is that cachannel is working for them if it s more a problem with the gui algorithm then maybe rbauststfc could take a look live data is very high priority for reflectometry so we need to make sure this is working
| 1
|
747,989
| 26,103,112,378
|
IssuesEvent
|
2022-12-27 09:45:31
|
bounswe/bounswe2022group6
|
https://api.github.com/repos/bounswe/bounswe2022group6
|
closed
|
Creating labels and doctors
|
Priority: High State: In Progress Mobile
|
Enough number of labels should be created where each will have a distinct and readable color with a distinct usage.
A certain number of users should be approved as doctors, and these users' posts and comments should be heavily moderated.
Deadline 27.12.22 12.45
|
1.0
|
Creating labels and doctors - Enough number of labels should be created where each will have a distinct and readable color with a distinct usage.
A certain number of users should be approved as doctors, and these users' posts and comments should be heavily moderated.
Deadline 27.12.22 12.45
|
priority
|
creating labels and doctors enough number of labels should be created where each will have a distinct and readable color with a distinct usage a certain number of users should be approved as doctors and these users posts and comments should be heavily moderated deadline
| 1
|
558,384
| 16,532,167,536
|
IssuesEvent
|
2021-05-27 07:35:10
|
TheKye/Eco-WorldEdit
|
https://api.github.com/repos/TheKye/Eco-WorldEdit
|
closed
|
[Development] Import not working when copying from a different server or after a world wipe.
|
Priority: High Status: Fixed/Done Type: Bug
|

Both server running development version of worldedit.
Verified that after I wipe the world same error comes up when trying to import a schematic that was generated by same server.
|
1.0
|
[Development] Import not working when copying from a different server or after a world wipe. - 
Both server running development version of worldedit.
Verified that after I wipe the world same error comes up when trying to import a schematic that was generated by same server.
|
priority
|
import not working when copying from a different server or after a world wipe both server running development version of worldedit verified that after i wipe the world same error comes up when trying to import a schematic that was generated by same server
| 1
|
548,737
| 16,074,820,325
|
IssuesEvent
|
2021-04-25 06:20:14
|
ruuvi/com.ruuvi.station
|
https://api.github.com/repos/ruuvi/com.ruuvi.station
|
closed
|
Location permission is not forced to be enabled on all phones
|
bug high priority
|
We receive a lot of feedback about not being able to find nearby sensors, and 90% of the time the issue has been that no location permission was granted, so Android cannot see anything because of that.
On my Android 11 phone I do get the "Enable Bluetooth Scanning" note if location permission is denied but apparently on all phones users are not seeing this note.
There is also a bug related to this on my phone. Reproduce:
1) Disable location permission
2) Try to add a sensor
3) The "Enable Bluetooth Scanning" dialog pops up
4) When pressing OK, the dialog just blinks and appears again and the app gets unresponsive and has to be killed.
EDIT: A user commented that on his Huawei phone he sees no dialogs about this.
"Device: HUAWEI VOG-L29
Android version: 10
App: Ruuvi Station 1.4.18"
|
1.0
|
Location permission is not forced to be enabled on all phones - We receive a lot of feedback about not being able to find nearby sensors, and 90% of the time the issue has been that no location permission was granted, so Android cannot see anything because of that.
On my Android 11 phone I do get the "Enable Bluetooth Scanning" note if location permission is denied but apparently on all phones users are not seeing this note.
There is also a bug related to this on my phone. Reproduce:
1) Disable location permission
2) Try to add a sensor
3) The "Enable Bluetooth Scanning" dialog pops up
4) When pressing OK, the dialog just blinks and appears again and the app gets unresponsive and has to be killed.
EDIT: A user commented that on his Huawei phone he sees no dialogs about this.
"Device: HUAWEI VOG-L29
Android version: 10
App: Ruuvi Station 1.4.18"
|
priority
|
location permission is not forced to be enabled on all phones we receive a lot of feedback about not being able to find nearby sensors and time the issue has been that no location permission granted and android cannot see anything because of that on my android phone i do get the enable bluetooth scanning note if location permission is denied but apparently on all phones users are not seeing this note there is also a bug related to this on my phone reproduce disable location permission try to add a sensor the enable bluetooth scanning dialog pops up when pressing ok the dialog just blinks and appears again and the app gets unresponsive and has to be killed edit a user commented that on his huawei phone he sees no dialogs about this device huawei vog android version app ruuvi station
| 1
|
364,159
| 10,759,823,611
|
IssuesEvent
|
2019-10-31 17:21:54
|
Sage-Bionetworks/dccvalidator
|
https://api.github.com/repos/Sage-Bionetworks/dccvalidator
|
closed
|
Add support for new metadata templates
|
blocked high priority
|
WGS, WES, nanostring, single cell RNA seq, and possibly others are expected for November.
Need to add support for these once they have been shared in https://www.synapse.org/#!Synapse:syn18512044
|
1.0
|
Add support for new metadata templates - WGS, WES, nanostring, single cell RNA seq, and possibly others are expected for November.
Need to add support for these once they have been shared in https://www.synapse.org/#!Synapse:syn18512044
|
priority
|
add support for new metadata templates wgs wes nanostring single cell rna seq and possibly others are expected for november need to add support for these once they have been shared in
| 1
|
142,058
| 5,452,295,788
|
IssuesEvent
|
2017-03-08 02:23:46
|
CS2103JAN2017-W09-B3/main
|
https://api.github.com/repos/CS2103JAN2017-W09-B3/main
|
closed
|
As a user I want to add task with due date
|
priority.high type.story
|
so that i can record task that has to be done by given date.
|
1.0
|
As a user I want to add task with due date - so that i can record task that has to be done by given date.
|
priority
|
as a user i want to add task with due date so that i can record task that has to be done by given date
| 1
|
463,662
| 13,285,650,455
|
IssuesEvent
|
2020-08-24 08:28:27
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Cannot filter table using lambda function. - Bad Sad
|
Area/Language Component/Compiler Priority/High Resolution/Won’t Fix Type/Bug
|
**Description:**
$subject . No way to filter table.
**Steps to reproduce:**
```ballerina
import ballerina/http;
import ballerina/io;
table<Review> tbReviews = table {
{ id, review },
[ { "B1", "Review of book1" },
{ "B2", "Review of book2" },
{ "B3", "Review of book3" }
]
};
endpoint http:Listener bookReviewEP {
port: 7070
};
@http:ServiceConfig {
basePath: "/review"
}
service<http:Service> reviewService bind bookReviewEP {
@http:ResourceConfig {
methods: ["GET"],
path: "/{bookId}"
}
getReview (endpoint caller, http:Request request, string bookId) {
http:Response reviewResponse = new;
function (Review r) returns (boolean) getReview =
(r) => r.id == bookId;
table<Review> bookReview = tbReviews.filter(getReview);
if (bookReview.count() == 1) {
json reviewJson = check <json>bookReview;
io:println(reviewJson[0]);
reviewResponse.setTextPayload(reviewJson[0].review.toString());
} else {
reviewResponse.setTextPayload("(no reviews found)");
}
_ = caller -> respond(reviewResponse);
}
}
```
Invoke as `curl http://0.0.0.0:7070/review/B1`.
Error log:
```
[2018-11-06 23:22:37,533] ERROR {org.ballerinalang.launcher.Main} - Index: 0, Size: 0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at org.wso2.ballerinalang.compiler.desugar.ASTBuilderUtil.generateArgExprs(ASTBuilderUtil.java:377)
at org.wso2.ballerinalang.compiler.desugar.ASTBuilderUtil.createInvocationExpr(ASTBuilderUtil.java:399)
at org.wso2.ballerinalang.compiler.desugar.ASTBuilderUtil.createInvocationExpr(ASTBuilderUtil.java:390)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.generateFilter(IterableCodeDesugar.java:770)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.generateOperationCode(IterableCodeDesugar.java:727)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.lambda$generateStreamingIteratorBlock$1(IterableCodeDesugar.java:318)
at java.lang.Iterable.forEach(Iterable.java:75)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.generateStreamingIteratorBlock(IterableCodeDesugar.java:318)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.generateIteratorFunction(IterableCodeDesugar.java:221)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.desugar(IterableCodeDesugar.java:126)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visitIterableOperationInvocation(Desugar.java:1898)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:1167)
at org.wso2.ballerinalang.compiler.tree.expressions.BLangInvocation.accept(BLangInvocation.java:108)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewriteExpr(Desugar.java:1949)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:560)
at org.wso2.ballerinalang.compiler.tree.BLangVariable.accept(BLangVariable.java:90)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:576)
at org.wso2.ballerinalang.compiler.tree.statements.BLangVariableDef.accept(BLangVariableDef.java:42)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1964)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewriteStmt(Desugar.java:1974)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:570)
at org.wso2.ballerinalang.compiler.tree.statements.BLangBlockStmt.accept(BLangBlockStmt.java:54)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1964)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:512)
at org.wso2.ballerinalang.compiler.tree.BLangResource.accept(BLangResource.java:30)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1981)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:470)
at org.wso2.ballerinalang.compiler.tree.BLangService.accept(BLangService.java:209)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1981)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:302)
at org.wso2.ballerinalang.compiler.tree.BLangPackage.accept(BLangPackage.java:140)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.perform(Desugar.java:264)
at org.wso2.ballerinalang.compiler.CompilerDriver.desugar(CompilerDriver.java:211)
at org.wso2.ballerinalang.compiler.CompilerDriver.compile(CompilerDriver.java:178)
at org.wso2.ballerinalang.compiler.CompilerDriver.compilePackageSymbol(CompilerDriver.java:145)
at org.wso2.ballerinalang.compiler.CompilerDriver.compilePackage(CompilerDriver.java:112)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at org.wso2.ballerinalang.compiler.Compiler.compilePackages(Compiler.java:155)
at org.wso2.ballerinalang.compiler.Compiler.compilePackage(Compiler.java:173)
at org.wso2.ballerinalang.compiler.Compiler.compile(Compiler.java:85)
at org.wso2.ballerinalang.compiler.Compiler.compile(Compiler.java:76)
at org.ballerinalang.launcher.LauncherUtils.compile(LauncherUtils.java:299)
at org.ballerinalang.launcher.LauncherUtils.runProgram(LauncherUtils.java:118)
at org.ballerinalang.launcher.Main$RunCmd.execute(Main.java:277)
at java.util.Optional.ifPresent(Optional.java:159)
at org.ballerinalang.launcher.Main.main(Main.java:65)
```
**Affected Versions:**
0.982.0
0.983.0
**OS, DB, other environment details and versions:**
Mac
|
1.0
|
Cannot filter table using lambda function. - Bad Sad - **Description:**
$subject . No way to filter table.
**Steps to reproduce:**
```ballerina
import ballerina/http;
import ballerina/io;
table<Review> tbReviews = table {
{ id, review },
[ { "B1", "Review of book1" },
{ "B2", "Review of book2" },
{ "B3", "Review of book3" }
]
};
endpoint http:Listener bookReviewEP {
port: 7070
};
@http:ServiceConfig {
basePath: "/review"
}
service<http:Service> reviewService bind bookReviewEP {
@http:ResourceConfig {
methods: ["GET"],
path: "/{bookId}"
}
getReview (endpoint caller, http:Request request, string bookId) {
http:Response reviewResponse = new;
function (Review r) returns (boolean) getReview =
(r) => r.id == bookId;
table<Review> bookReview = tbReviews.filter(getReview);
if (bookReview.count() == 1) {
json reviewJson = check <json>bookReview;
io:println(reviewJson[0]);
reviewResponse.setTextPayload(reviewJson[0].review.toString());
} else {
reviewResponse.setTextPayload("(no reviews found)");
}
_ = caller -> respond(reviewResponse);
}
}
```
Invoke as `curl http://0.0.0.0:7070/review/B1`.
Error log:
```
[2018-11-06 23:22:37,533] ERROR {org.ballerinalang.launcher.Main} - Index: 0, Size: 0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at org.wso2.ballerinalang.compiler.desugar.ASTBuilderUtil.generateArgExprs(ASTBuilderUtil.java:377)
at org.wso2.ballerinalang.compiler.desugar.ASTBuilderUtil.createInvocationExpr(ASTBuilderUtil.java:399)
at org.wso2.ballerinalang.compiler.desugar.ASTBuilderUtil.createInvocationExpr(ASTBuilderUtil.java:390)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.generateFilter(IterableCodeDesugar.java:770)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.generateOperationCode(IterableCodeDesugar.java:727)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.lambda$generateStreamingIteratorBlock$1(IterableCodeDesugar.java:318)
at java.lang.Iterable.forEach(Iterable.java:75)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.generateStreamingIteratorBlock(IterableCodeDesugar.java:318)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.generateIteratorFunction(IterableCodeDesugar.java:221)
at org.wso2.ballerinalang.compiler.desugar.IterableCodeDesugar.desugar(IterableCodeDesugar.java:126)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visitIterableOperationInvocation(Desugar.java:1898)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:1167)
at org.wso2.ballerinalang.compiler.tree.expressions.BLangInvocation.accept(BLangInvocation.java:108)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewriteExpr(Desugar.java:1949)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:560)
at org.wso2.ballerinalang.compiler.tree.BLangVariable.accept(BLangVariable.java:90)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:576)
at org.wso2.ballerinalang.compiler.tree.statements.BLangVariableDef.accept(BLangVariableDef.java:42)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1964)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewriteStmt(Desugar.java:1974)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:570)
at org.wso2.ballerinalang.compiler.tree.statements.BLangBlockStmt.accept(BLangBlockStmt.java:54)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1964)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:512)
at org.wso2.ballerinalang.compiler.tree.BLangResource.accept(BLangResource.java:30)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1981)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:470)
at org.wso2.ballerinalang.compiler.tree.BLangService.accept(BLangService.java:209)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1981)
at org.wso2.ballerinalang.compiler.desugar.Desugar.visit(Desugar.java:302)
at org.wso2.ballerinalang.compiler.tree.BLangPackage.accept(BLangPackage.java:140)
at org.wso2.ballerinalang.compiler.desugar.Desugar.rewrite(Desugar.java:1924)
at org.wso2.ballerinalang.compiler.desugar.Desugar.perform(Desugar.java:264)
at org.wso2.ballerinalang.compiler.CompilerDriver.desugar(CompilerDriver.java:211)
at org.wso2.ballerinalang.compiler.CompilerDriver.compile(CompilerDriver.java:178)
at org.wso2.ballerinalang.compiler.CompilerDriver.compilePackageSymbol(CompilerDriver.java:145)
at org.wso2.ballerinalang.compiler.CompilerDriver.compilePackage(CompilerDriver.java:112)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at org.wso2.ballerinalang.compiler.Compiler.compilePackages(Compiler.java:155)
at org.wso2.ballerinalang.compiler.Compiler.compilePackage(Compiler.java:173)
at org.wso2.ballerinalang.compiler.Compiler.compile(Compiler.java:85)
at org.wso2.ballerinalang.compiler.Compiler.compile(Compiler.java:76)
at org.ballerinalang.launcher.LauncherUtils.compile(LauncherUtils.java:299)
at org.ballerinalang.launcher.LauncherUtils.runProgram(LauncherUtils.java:118)
at org.ballerinalang.launcher.Main$RunCmd.execute(Main.java:277)
at java.util.Optional.ifPresent(Optional.java:159)
at org.ballerinalang.launcher.Main.main(Main.java:65)
```
**Affected Versions:**
0.982.0
0.983.0
**OS, DB, other environment details and versions:**
Mac
|
priority
|
cannot filter table using lamba function bad sad description subject no way to filter table steps to reproduce ballerina import ballerina http import ballerina io table tbreviews table id review review of review of review of endpoint http listener bookreviewep port http serviceconfig basepath review service reviewservice bind bookreviewep http resourceconfig methods path bookid getreview endpoint caller http request request string bookid http response reviewresponse new function review r returns boolean getreview r r id bookid table bookreview tbreviews filter getreview if bookreview count json reviewjson check bookreview io println reviewjson reviewresponse settextpayload reviewjson review tostring else reviewresponse settextpayload no reviews found caller respond reviewresponse invoke as curl error log error org ballerinalang launcher main index size java lang indexoutofboundsexception index size at java util arraylist rangecheck arraylist java at java util arraylist get arraylist java at org ballerinalang compiler desugar astbuilderutil generateargexprs astbuilderutil java at org ballerinalang compiler desugar astbuilderutil createinvocationexpr astbuilderutil java at org ballerinalang compiler desugar astbuilderutil createinvocationexpr astbuilderutil java at org ballerinalang compiler desugar iterablecodedesugar generatefilter iterablecodedesugar java at org ballerinalang compiler desugar iterablecodedesugar generateoperationcode iterablecodedesugar java at org ballerinalang compiler desugar iterablecodedesugar lambda generatestreamingiteratorblock iterablecodedesugar java at java lang iterable foreach iterable java at org ballerinalang compiler desugar iterablecodedesugar generatestreamingiteratorblock iterablecodedesugar java at org ballerinalang compiler desugar iterablecodedesugar generateiteratorfunction iterablecodedesugar java at org ballerinalang compiler desugar iterablecodedesugar desugar iterablecodedesugar java at org ballerinalang compiler desugar 
desugar visititerableoperationinvocation desugar java at org ballerinalang compiler desugar desugar visit desugar java at org ballerinalang compiler tree expressions blanginvocation accept blanginvocation java at org ballerinalang compiler desugar desugar rewriteexpr desugar java at org ballerinalang compiler desugar desugar visit desugar java at org ballerinalang compiler tree blangvariable accept blangvariable java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar visit desugar java at org ballerinalang compiler tree statements blangvariabledef accept blangvariabledef java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar rewritestmt desugar java at org ballerinalang compiler desugar desugar visit desugar java at org ballerinalang compiler tree statements blangblockstmt accept blangblockstmt java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar visit desugar java at org ballerinalang compiler tree blangresource accept blangresource java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar visit desugar java at org ballerinalang compiler tree blangservice accept blangservice java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar visit desugar java at org ballerinalang compiler tree blangpackage accept blangpackage java at org ballerinalang compiler desugar desugar rewrite desugar java at org ballerinalang compiler desugar desugar perform desugar java at org ballerinalang compiler compilerdriver 
desugar compilerdriver java at org ballerinalang compiler compilerdriver compile compilerdriver java at org ballerinalang compiler compilerdriver compilepackagesymbol compilerdriver java at org ballerinalang compiler compilerdriver compilepackage compilerdriver java at java util stream foreachops foreachop ofref accept foreachops java at java util stream referencepipeline accept referencepipeline java at java util arraylist arraylistspliterator foreachremaining arraylist java at java util stream abstractpipeline copyinto abstractpipeline java at java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java util stream foreachops foreachop evaluatesequential foreachops java at java util stream foreachops foreachop ofref evaluatesequential foreachops java at java util stream abstractpipeline evaluate abstractpipeline java at java util stream referencepipeline foreach referencepipeline java at org ballerinalang compiler compiler compilepackages compiler java at org ballerinalang compiler compiler compilepackage compiler java at org ballerinalang compiler compiler compile compiler java at org ballerinalang compiler compiler compile compiler java at org ballerinalang launcher launcherutils compile launcherutils java at org ballerinalang launcher launcherutils runprogram launcherutils java at org ballerinalang launcher main runcmd execute main java at java util optional ifpresent optional java at org ballerinalang launcher main main main java affected versions os db other environment details and versions mac
| 1
|
544,427
| 15,893,517,708
|
IssuesEvent
|
2021-04-11 06:15:00
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
closed
|
Custom Properties are not displayed
|
API-M 4.0.0 Priority/High REST APIs React-UI T1 Type/Bug
|
### Description:
Custom Properties added in the publisher cannot be seen.
I added two properties, one ticked to be visible in the dev portal, but I was unable to see the property in the dev portal.
<img width="1456" alt="Screen Shot 2021-03-03 at 11 11 26 PM" src="https://user-images.githubusercontent.com/32265029/109848919-ce0ee580-7c76-11eb-8711-73f36015f720.png">
<img width="1456" alt="Screen Shot 2021-03-03 at 11 12 47 PM" src="https://user-images.githubusercontent.com/32265029/109848943-d2d39980-7c76-11eb-8cf3-7f3fc5e55a45.png">
### Steps to reproduce:
1. Create an API
2. Add a custom property
3. Go to Dev portal
4. Go to API's overview page.
### Affected Product Version:
v4.0.0-alpha
### Environment details (with versions):
- OS: Windows server 2019
- Browser : Chrome 88.0.4324.190 (Official Build) (64-bit)
|
1.0
|
Custom Properties are not displayed - ### Description:
Custom Properties added in the publisher cannot be seen.
I added two properties, one ticked to be visible in the dev portal, but I was unable to see the property in the dev portal.
<img width="1456" alt="Screen Shot 2021-03-03 at 11 11 26 PM" src="https://user-images.githubusercontent.com/32265029/109848919-ce0ee580-7c76-11eb-8711-73f36015f720.png">
<img width="1456" alt="Screen Shot 2021-03-03 at 11 12 47 PM" src="https://user-images.githubusercontent.com/32265029/109848943-d2d39980-7c76-11eb-8cf3-7f3fc5e55a45.png">
### Steps to reproduce:
1. Create an API
2. Add a custom property
3. Go to Dev portal
4. Go to API's overview page.
### Affected Product Version:
v4.0.0-alpha
### Environment details (with versions):
- OS: Windows server 2019
- Browser : Chrome 88.0.4324.190 (Official Build) (64-bit)
|
priority
|
custom properties are not displayed description custom properties added in the publisher can not be seen i added two properties one ticked to be visible in dev portal i was unable to see the property in dev portal img width alt screen shot at pm src img width alt screen shot at pm src steps to reproduce create an api add a custom property go to dev portal go to api s overview page affected product version alpha environment details with versions os windows server browser chrome official build bit
| 1
|
499,811
| 14,479,673,528
|
IssuesEvent
|
2020-12-10 10:07:21
|
AGROFIMS/hagrofims
|
https://api.github.com/repos/AGROFIMS/hagrofims
|
closed
|
Export to Excel file issues
|
bug excel file high priority
|
- [ ] Some timing values are missing, as well as TraitAlias, TraitDataType, and TraitValidation


|
1.0
|
Export to Excel file issues - - [ ] Some timing values are missing, as well as TraitAlias, TraitDataType, and TraitValidation


|
priority
|
export to excel file issues some timing values are missing as well as traitalias traitdatatype traitvalidation
| 1
|
620,101
| 19,552,657,637
|
IssuesEvent
|
2022-01-03 01:27:34
|
XyrisOS/xyris
|
https://api.github.com/repos/XyrisOS/xyris
|
closed
|
Use ext2 For Bootable Image
|
high-priority
|
Required packages for macOS (via `brew`):
- `e2fsprogs`
- `e2tools`
Related to #211 since Limine is having problems with `echFS` when compiled (even with Docker) on macOS hosts. No clue why, but moving away from `echFS` is probably for the best.
|
1.0
|
Use ext2 For Bootable Image - Required packages for macOS (via `brew`):
- `e2fsprogs`
- `e2tools`
Related to #211 since Limine is having problems with `echFS` when compiled (even with Docker) on macOS hosts. No clue why, but moving away from `echFS` is probably for the best.
|
priority
|
use for bootable image required packages for macos via brew related to since limine is having problems with echfs when compiled even with docker on macos hosts no clue why but moving away from echfs is probably for the best
| 1
|
279,392
| 8,664,784,780
|
IssuesEvent
|
2018-11-28 21:14:03
|
CzolgIT/servertanksgame
|
https://api.github.com/repos/CzolgIT/servertanksgame
|
closed
|
UDP connection enablement
|
enhancement priority: highest
|
Implement UDP connection when receiving any packet (except JoinRequest and QuitPacket)
|
1.0
|
UDP connection enablement - Implement UDP connection when receiving any packet (except JoinRequest and QuitPacket)
|
priority
|
udp connection enablement implement udp connection when receiving any packet except joinrequest and quitpacket
| 1
|
577,789
| 17,119,837,723
|
IssuesEvent
|
2021-07-12 02:33:01
|
DanXi-Dev/DanXi
|
https://api.github.com/repos/DanXi-Dev/DanXi
|
closed
|
[BUG] UI flashing impacts visual quality
|
bug high priority
|
There are several occasions currently where the application's UI will flash, severely impacting the visual experience of the app.
- On all platforms, when opening the details page, the content is always loaded after the transition begins, which leads to a brief period of displaying only the `Loading Indicator` before all contents suddenly appear, causing discomfort. Consider revamping the loading process or at least adding a `Fade` transition between the loading view and the content view.
- On iOS, when the keyboard is shown, the framework will repeatedly call the `build()` method of the current page. Currently the app is not ready for this behavior, and doing so will cause the app to reload its content multiple times, causing UI inconsistency.
- To be continued...
|
1.0
|
[BUG] UI flashing impacts visual quality - There are several occasions currently where the application's UI will flash, severely impacting the visual experience of the app.
- On all platforms, when opening the details page, the content is always loaded after the transition begins, which leads to a brief period of displaying only the `Loading Indicator` before all contents suddenly appear, causing discomfort. Consider revamping the loading process or at least adding a `Fade` transition between the loading view and the content view.
- On iOS, when the keyboard is shown, the framework will repeatedly call the `build()` method of the current page. Currently the app is not ready for this behavior, and doing so will cause the app to reload its content multiple times, causing UI inconsistency.
- To be continued...
|
priority
|
ui flashing impacts visual quality there are several occasions currently where the application s ui will flash severely impacting the visual experience of the app on all platforms when opening the details page the content is always loaded after the transition begin which will lead to a brief period of displaying only the loading indicator and then suddenly all contents appear causing discomfort consider revamping the loading process or adding at least a fade transition between the loading view and the content view on ios when the keyboard is shown the framework will repeatedly call build method of the current page currently the app is not ready for this behavior and doing so will cause the app to reload its content multiple times causing the ui inconsistency to be continued
| 1
|
375,079
| 11,099,515,629
|
IssuesEvent
|
2019-12-16 17:09:13
|
BendroCorp/bendrocorp-app
|
https://api.github.com/repos/BendroCorp/bendrocorp-app
|
opened
|
Add technical controls through form/reports for generating guest Discord invites
|
api support required effort: moderate enhancement needs definition priority:high
|
Add technical controls through form/reports for generating guest Discord invites.
|
1.0
|
Add technical controls through form/reports for generating guest Discord invites - Add technical controls through form/reports for generating guest Discord invites.
|
priority
|
add technical controls through form reports for generating guest discord invites add technical controls through form reports for generating guest discord invites
| 1
|
410,385
| 11,987,012,370
|
IssuesEvent
|
2020-04-07 20:23:37
|
AugurProject/augur
|
https://api.github.com/repos/AugurProject/augur
|
closed
|
Update any URL's to link to the help centre
|
Needed for V2 launch Priority: High
|
@petervecchiarelli
Currently there are 'learn more' links in the trading app that are not pointing to the right URL. When the help centre is ready to go, dev will need to update these URLs in the trading app to point to the right pages.
|
1.0
|
Update any URL's to link to the help centre - @petervecchiarelli
Currently there are 'learn more' links in the trading app that are not pointing to the right URL. When the help centre is ready to go, dev will need to update these URLs in the trading app to point to the right pages.
|
priority
|
update any url s to link to the help centre petervecchiarelli currently there are learn more links in the trading app that are not pointing to the right url when the help centre is ready to go dev will need to update these urls in the trading app to point to the right pages
| 1
|
831,232
| 32,041,861,076
|
IssuesEvent
|
2023-09-22 20:06:46
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
Visual Studio can't open Godot 4 C# solution on Mac OSX 12.5
|
bug platform:macos topic:editor confirmed topic:dotnet high priority
|
___
***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.*
___
### Godot version
Godot 4 beta 2
### System information
MacBook Pro, OSX 12.5
### Issue description
Hi,
Visual Studio on Mac can't open the solution and throws this error:
<img width="353" alt="Screen Shot 2022-10-07 at 5 19 58 PM" src="https://user-images.githubusercontent.com/104303630/194576382-8299898f-b485-4a14-a259-43294e971134.png">
I have the latest version of visual studio and .net 6 framework installed
### Steps to reproduce
Create a C# file in Godot and open it either from Godot or opening the solution through visual studio.
### Minimal reproduction project
_No response_
|
1.0
|
Visual Studio can't open Godot 4 C# solution on Mac OSX 12.5 - ___
***Bugsquad note:** This issue has been confirmed several times already. No need to confirm it further.*
___
### Godot version
Godot 4 beta 2
### System information
MacBook Pro, OSX 12.5
### Issue description
Hi,
Visual Studio on Mac can't open the solution and throws this error:
<img width="353" alt="Screen Shot 2022-10-07 at 5 19 58 PM" src="https://user-images.githubusercontent.com/104303630/194576382-8299898f-b485-4a14-a259-43294e971134.png">
I have the latest version of visual studio and .net 6 framework installed
### Steps to reproduce
Create a C# file in Godot and open it either from Godot or opening the solution through visual studio.
### Minimal reproduction project
_No response_
|
priority
|
visual studio can t open godot c solution on mac osx bugsquad note this issue has been confirmed several times already no need to confirm it further godot version godot beta system information macbook pro osx issue description hi visual studio on mac can t open the solution an throws this error img width alt screen shot at pm src i have the latest version of visual studio and net framework installed steps to reproduce create a c file in godot and open it either from godot or opening the solution through visual studio minimal reproduction project no response
| 1
|
202,047
| 7,043,560,264
|
IssuesEvent
|
2017-12-31 08:34:38
|
bitshares/bitshares-ui
|
https://api.github.com/repos/bitshares/bitshares-ui
|
closed
|
[2] Cannot download bin file from bitshares.org/wallet
|
bug high priority
|
## Problem Statement
User navigates to https://bitshares.org/wallet. The site correctly indicates that a .bin file is available to download. User clicks download, but no action occurs. User was competent and I trust that this was the case.
## Request
If you still have a local wallet at bitshares.org/wallet, please attempt the same download procedure and post a licecap if successful. If you verify the issue, please resolve the download problem.
|
1.0
|
[2] Cannot download bin file from bitshares.org/wallet - ## Problem Statement
User navigates to https://bitshares.org/wallet. The site correctly indicates that a .bin file is available to download. User clicks download, but no action occurs. User was competent and I trust that this was the case.
## Request
If you still have a local wallet at bitshares.org/wallet, please attempt the same download procedure and post a licecap if successful. If you verify the issue, please resolve the download problem.
|
priority
|
cannot download bin file from bitshares org wallet problem statement user navigates to site correctly indicates that a bin file is available to download user clicks download but no action occurred user was competent and i trust that this was the case request if you still have a local wallet at bitshares org wallet please attempt the same download procedure and post a licecap if successful if you verify the issue please resolve the download problem
| 1
|
633,413
| 20,254,125,639
|
IssuesEvent
|
2022-02-14 21:05:00
|
COSC481W-2022Winter/capstone-wicrosoft
|
https://api.github.com/repos/COSC481W-2022Winter/capstone-wicrosoft
|
opened
|
Log In Screen to Profile
|
Priority: High Additions: Feature Sprint: One
|
Acceptance criteria:
If the user fills in the correct username and password and then clicks the sign-in button, the user should be logged in and see the profile page, which shows the user's full name and role, and a log-out link.
If the user fills in an incorrect username or password, the user should not be logged in and should see a warning.
If the username or the password is left blank when the sign-in button is clicked, the user should see a warning.
The sign-in process and the password storage follow a standard to protect users' private information.
When the user clicks log out on the profile page, the user is redirected to the sign-in page.
After the user logs out, the user cannot access the profile page using the URL.
Tasks:
1. fill the database with the test data (Austin Ahlijian)
2. Login: (Keegan/Elijah)
2.1 html
2.2 django view
2.3 form
2.3.1 pass all the information to backend
2.3.1 show warnings when the login is not successful or the username/password is blank
2.4. backend: query the database to check the credentials
3. Profile: (Keegan/Elijah)
3.1 html
3.2 django view
3.3 query and get the full name and the role
3.4. backend: query the database to retrieve full name and the role
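The credential check behind the sign-in form can be sketched framework-agnostically as follows. The seeded user, password, and helper names are hypothetical, and a real Django app would delegate to `django.contrib.auth` rather than roll its own; the sketch only illustrates the acceptance criteria (blank-field warning, wrong-credentials warning, salted-and-hashed password storage):

```python
import hashlib
import hmac
import os

def hash_password(password, salt):
    # Store passwords salted and hashed (PBKDF2), never in plain text.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical seeded test user (stand-in for the real test data).
_salt = os.urandom(16)
USERS = {"alice": {"hash": hash_password("s3cret", _salt), "salt": _salt,
                   "full_name": "Alice Example", "role": "editor"}}

def sign_in(username, password):
    """Return (profile, warning); exactly one of the two is None."""
    if not username or not password:
        return None, "Username and password are required."
    user = USERS.get(username)
    if user is None or not hmac.compare_digest(
            user["hash"], hash_password(password, user["salt"])):
        return None, "Incorrect username or password."
    # Profile page shows the full name and the role.
    return {"full_name": user["full_name"], "role": user["role"]}, None
```

The warning string, not an exception, drives the UI message, so the same function covers both the blank-field and wrong-credentials cases.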
|
1.0
|
Log In Screen to Profile - Acceptance criteria:
If the user fills in the correct username and password and then clicks the sign-in button, the user should be logged in and see the profile page, which shows the user's full name and role, and a log-out link.
If the user fills in an incorrect username or password, the user should not be logged in and should see a warning.
If the username or the password is left blank when the sign-in button is clicked, the user should see a warning.
The sign-in process and the password storage follow a standard to protect users' private information.
When the user clicks log out on the profile page, the user is redirected to the sign-in page.
After the user logs out, the user cannot access the profile page using the URL.
Tasks:
1. fill the database with the test data (Austin Ahlijian)
2. Login: (Keegan/Elijah)
2.1 html
2.2 django view
2.3 form
2.3.1 pass all the information to backend
2.3.1 show warnings when the login is not successful or the username/password is blank
2.4. backend: query the database to check the credentials
3. Profile: (Keegan/Elijah)
3.1 html
3.2 django view
3.3 query and get the full name and the role
3.4. backend: query the database to retrieve full name and the role
|
priority
|
log in screen to profile acceptance criteria if the user fills in the correct username and password then click the sign in button the user should be logged in and view the profile page which has the full name and the role of the user and a log out if the user fills in the incorrect username and password the user should not be logged in and the user should see a warning if the username or the password is left blank when the sign in button is clicked the user should see a warning the sign in process and the password storage follows a standard to protect users private information when the user click log out on the profile page the user will be redirected to the sign in page after the user log out the user cannot access the profile page using the url tasks fill the database with the test data austin ahlijian login keegan elijah html django view form pass all the information to backend show warnings when the login is not successful or the username password is blank backend query the database to check the credentials profile keegan elijah html django view query and get the full name and the role backend query the database to retrieve full name and the role
| 1
|
291,490
| 8,925,769,204
|
IssuesEvent
|
2019-01-22 00:36:45
|
TACC/abaco
|
https://api.github.com/repos/TACC/abaco
|
closed
|
Extend management of actor mailboxes
|
deployed.Dev priority.high proposal
|
At present, we can either `POST` a new message to an actor's mailbox or do a `GET` to discover how many messages are in that mailbox. Additional management actions are desirable:
1. `DELETE` would immediately clear out the mailbox. This would be helpful in case of an unexpected backlog that we don't actually want to process.
2. Adding `?count=N` to `GET` would retrieve the contents of the N most recent messages. This would be useful for debugging message sending behavior from other agents.
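A minimal in-memory sketch of the proposed semantics (the class and method names are illustrative, not Abaco's actual API; `clear` stands for the `DELETE`, `peek` for `GET ?count=N`):

```python
from collections import deque

class Mailbox:
    """Toy model of an actor mailbox and its proposed management actions."""

    def __init__(self):
        self._messages = deque()

    def post(self, message):
        # POST: enqueue a new message.
        self._messages.append(message)

    def count(self):
        # GET: report how many messages are queued.
        return len(self._messages)

    def clear(self):
        # Proposed DELETE: immediately drop the whole backlog.
        self._messages.clear()

    def peek(self, n):
        # Proposed GET ?count=N: contents of the N most recent messages.
        return list(self._messages)[-n:]
```

The point of the model is that `clear` discards without processing, while `peek` is read-only and leaves the queue intact for the workers.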
|
1.0
|
Extend management of actor mailboxes - At present, we can either `POST` a new message to an actor's mailbox or do a `GET` to discover how many messages are in that mailbox. Additional management actions are desirable:
1. `DELETE` would immediately clear out the mailbox. This would be helpful in case of an unexpected backlog that we don't actually want to process.
2. Adding `?count=N` to `GET` would retrieve the contents of the N most recent messages. This would be useful for debugging message sending behavior from other agents.
|
priority
|
extend management of actor mailboxes at present we can either post a new message to an actor s mailbox or do a get to discover how many messages are in that mailbox additional management actions are desirable delete would immediately clear out the mailbox this would be helpful in case of an unexpected backlog that we don t actually want to process adding count n to get would retrieve the contents of the n most recent messages this would be useful for debugging message sending behavior from other agents
| 1
|
377,011
| 11,161,723,419
|
IssuesEvent
|
2019-12-26 14:56:06
|
bounswe/bounswe2019group10
|
https://api.github.com/repos/bounswe/bounswe2019group10
|
closed
|
Member profile page component
|
Priority: High Relation: Frontend Type: New Feature
|
A react component should be implemented to show a member's profile preview
|
1.0
|
Member profile page component - A react component should be implemented to show a member's profile preview
|
priority
|
member profile page component a react component should be implemented to show a member s profile preview
| 1
|
194,025
| 6,890,759,775
|
IssuesEvent
|
2017-11-22 15:01:02
|
arquillian/smart-testing
|
https://api.github.com/repos/arquillian/smart-testing
|
closed
|
Move configuration lookup logic to configuration loader
|
Component: Core Priority: High train/ginger Type: Chore
|
##### Issue Overview
Move the configuration lookup logic from the Maven extension to the `ConfigurationLoader` class located in core, to make it also usable for other integration cases (e.g. Che) that use the Smart Testing API.
|
1.0
|
Move configuration lookup logic to configuration loader - ##### Issue Overview
Move the configuration lookup logic from the Maven extension to the `ConfigurationLoader` class located in core, to make it also usable for other integration cases (e.g. Che) that use the Smart Testing API.
|
priority
|
move configuration lookup logic to configuration loader issue overview move the configuration lookup logic from maven extension to the configurationloader class located in core to make it usable also for other integration cases eg che that use the smart testing api
| 1
|
770,129
| 27,029,735,309
|
IssuesEvent
|
2023-02-12 02:46:48
|
OpenRefine/OpenRefine
|
https://api.github.com/repos/OpenRefine/OpenRefine
|
opened
|
Can no longer set REFINE_MEMORY using command line
|
bug priority: High good first issue help wanted windows
|
Regression. Using `refine.bat` on Windows, users cannot override the default 1400M setting.
This is likely because, after parsing the command-line options, we then read the settings from refine.ini and override the command-line options.
### To Reproduce
Steps to reproduce the behavior:
1. Start OpenRefine from the command line, passing the max memory option using `refine /m 2g` or `refine /m 4096M`, etc.
### Current Results
The REFINE_MEMORY value given on the command line does not take effect. Instead, `refine.ini` is read last and overrides the command-line options asked for by the user.
### Expected Behavior
Options from `refine.ini` can be read first.
However, command-line options should always be applied last (so they set the final state of the options), giving a user at the command line full control, as expected, without surprises.
### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
### Versions<!-- (please complete the following information)-->
- Operating System: <!-- e.g. iOS, Windows 10, Linux, Ubuntu 18.04 -->
- Browser Version: <!-- e.g. Chrome 19, Firefox 61, Safari, NOTE: OpenRefine does not support IE but works OK in most cases -->
- JRE or JDK Version: <!-- output of "java -version" e.g. JRE 1.8.0_181 -->
- OpenRefine: <!-- e.g. OpenRefine 3.0 Beta] -->
### Datasets
<!-- If you are allowed and are OK with making your data public, it would be awesome if you can include or attach the data causing the issue or a URL pointing to where the data is.
If you are concerned about keeping your data private, you can share it selectively by email to developers who work on the issue -->
### Additional context
We should also double check how things work on Linux with `refine` shell script.
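The expected precedence (built-in default, then `refine.ini`, then command-line flags applied last so they always win) can be sketched like this; the option and section names are illustrative, not OpenRefine's actual parser:

```python
import configparser

def resolve_memory(ini_text, cli_memory=None):
    # 1. Built-in default.
    memory = "1400M"
    # 2. Config file overrides the default.
    ini = configparser.ConfigParser()
    ini.read_string(ini_text)
    if ini.has_option("refine", "memory"):
        memory = ini.get("refine", "memory")
    # 3. Command line is applied last, so it always wins.
    if cli_memory is not None:
        memory = cli_memory
    return memory
```

The current `refine.bat` effectively swaps steps 2 and 3, which is why `refine /m 4096M` is silently ignored.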
|
1.0
|
Can no longer set REFINE_MEMORY using command line - Regression. Using `refine.bat` on Windows, users cannot override the default 1400M setting.
This is likely due to the fact that after reading the parameter options for the command line, we also then read the settings from refine.ini and override the command line options.
### To Reproduce
Steps to reproduce the behavior:
1. Start OpenRefine from command line by passing max memory option using `refine /m 2g` or `refine /m 4096M` etc.
### Current Results
REFINE_MEMORY variable is not set finally. Instead `refine.ini` is read lastly and overrides the command line options asked for by the user.
### Expected Behavior
Options read from `refine.ini` can be read first.
However, command line options should always be applied last (so they set the final state of options) and a user at the command line has full control as expected without surprises.
### Screenshots
<!-- If applicable, add screenshots to help explain your problem. -->
### Versions<!-- (please complete the following information)-->
- Operating System: <!-- e.g. iOS, Windows 10, Linux, Ubuntu 18.04 -->
- Browser Version: <!-- e.g. Chrome 19, Firefox 61, Safari, NOTE: OpenRefine does not support IE but works OK in most cases -->
- JRE or JDK Version: <!-- output of "java -version" e.g. JRE 1.8.0_181 -->
- OpenRefine: <!-- e.g. OpenRefine 3.0 Beta] -->
### Datasets
<!-- If you are allowed and are OK with making your data public, it would be awesome if you can include or attach the data causing the issue or a URL pointing to where the data is.
If you are concerned about keeping your data private, you can share it selectively by email to developers who work on the issue -->
### Additional context
We should also double check how things work on Linux with `refine` shell script.
|
priority
|
can no longer set refine memory using command line regression using refine bat on windows users cannot override the default setting this is likely due to the fact that after reading the parameter options for the command line we also then read the settings from refine ini and override the command line options to reproduce steps to reproduce the behavior start openrefine from command line by passing max memory option using refine m or refine m etc current results refine memory variable is not set finally instead refine ini is read lastly and overrides the command line options asked for by the user expected behavior options read from refine ini can be read first however command line options should always be applied last so they set the final state of options and a user at the command line has full control as expected without surprises screenshots versions operating system browser version jre or jdk version openrefine datasets if you are allowed and are ok with making your data public it would be awesome if you can include or attach the data causing the issue or a url pointing to where the data is if you are concerned about keeping your data private you can share it selectively by email to developers who work on the issue additional context we should also double check how things work on linux with refine shell script
| 1
|
423,073
| 12,290,234,830
|
IssuesEvent
|
2020-05-10 02:31:33
|
mesg-foundation/aragon
|
https://api.github.com/repos/mesg-foundation/aragon
|
closed
|
Update webhook service form
|
high priority
|
- [x] rename `webhook url` by `url`
- [x] Add the following text after the field URL but before the button "create new connection":
```
A POST request will be sent to this URL with the content of the event ([example](https://pastebin.com/whxdT0JE)).
```
|
1.0
|
Update webhook service form - - [x] rename `webhook url` by `url`
- [x] Add the following text after the field URL but before the button "create new connection":
```
A POST request will be sent to this URL with the content of the event ([example](https://pastebin.com/whxdT0JE)).
```
|
priority
|
update webhook service form rename webhook url by url add the following text after the field url but before the button create new connection a post request will be sent to this url with the content of the event
| 1
|
768,229
| 26,958,608,539
|
IssuesEvent
|
2023-02-08 16:31:15
|
Ore-Design/Ore-3D-Reports-Changelog
|
https://api.github.com/repos/Ore-Design/Ore-3D-Reports-Changelog
|
closed
|
Bug: Fuse Very Volatile - Crashes Often [1.7.0]
|
bug in progress high priority
|
Crashed twice trying to create single, linear fuse edge. Fails trying to send error report as well.
+Add Fuse Edge to Parent - CRASH (the default child was still in the parent, I was trying to add a second child)
|
1.0
|
Bug: Fuse Very Volatile - Crashes Often [1.7.0] - Crashed twice trying to create single, linear fuse edge. Fails trying to send error report as well.
+Add Fuse Edge to Parent - CRASH (the default child was still in the parent, I was trying to add a second child)
|
priority
|
bug fuse very volatile crashes often crashed twice trying to create single linear fuse edge fails trying to send error report as well add fuse edge to parent crash the default child was still in the parent i was trying to add a second child
| 1
|
228,000
| 7,545,000,152
|
IssuesEvent
|
2018-04-17 20:13:09
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Need threadsafe Array2d
|
High Priority
|
There is no threadsafe version of Array2d, and these two locations in the code use it when it could be changed on another thread during serialization:
Graph.Keys
Districts.DistrictMap
And possibly WorldLayer
|
1.0
|
Need threadsafe Array2d - There is no threadsafe version of Array2d, and these two locations in the code use it when it could be changed on another thread during serialization:
Graph.Keys
Districts.DistrictMap
And possibly WorldLayer
|
priority
|
need threadsafe there is no threadsafe version of and these two locations in the code use it when it could be changed on another thread during serialization graph keys districts districtmap and possibly worldlayer
| 1
|
200,068
| 6,997,778,154
|
IssuesEvent
|
2017-12-16 18:40:49
|
roboticslab-uc3m/openrave-yarp-plugins
|
https://api.github.com/repos/roboticslab-uc3m/openrave-yarp-plugins
|
closed
|
YarpOpenraveControlboard always does rad/deg conversions (even if prismatic!)
|
blocking dev:YarpOpenraveControlboard priority: high status: in progress
|
As of current `develop` and identified since 274cbc297c0bf28163d20cdbb153ba9cceb1356b as part of #29 : YarpOpenraveControlboard supposes revolute joints, and therefore there are many radToDeg (in fact, hard-coded `*180/M_PI`) and degToRad conversions without checking if revolute or prismatic.
Must correct this!
|
1.0
|
YarpOpenraveControlboard always does rad/deg conversions (even if prismatic!) - As of current `develop` and identified since 274cbc297c0bf28163d20cdbb153ba9cceb1356b as part of #29 : YarpOpenraveControlboard supposes revolute joints, and therefore there are many radToDeg (in fact, hard-coded `*180/M_PI`) and degToRad conversions without checking if revolute or prismatic.
Must correct this!
|
priority
|
yarpopenravecontrolboard always does rad deg conversions even if prismatic as of current develop and identified since as part of yarpopenravecontrolboard supposes revolute joints and therefore there are many radtodeg in fact hard coded m pi and degtorad conversions without checking if revolute or prismatic must correct this
| 1
|
236,302
| 7,748,374,216
|
IssuesEvent
|
2018-05-30 08:07:45
|
Gloirin/m2gTest
|
https://api.github.com/repos/Gloirin/m2gTest
|
closed
|
0002474:
drag and drop not working with >50 messages
|
Felamimail bug high priority
|
**Reported by pschuele on 23 Mar 2010 17:00**
drag and drop not working with >50 messages
-> it seems that the grid selection model is not used correctly
-> see Tine.widgets.container.TreePanel::onBeforeNodeDrop
|
1.0
|
0002474:
drag and drop not working with >50 messages - **Reported by pschuele on 23 Mar 2010 17:00**
drag and drop not working with >50 messages
-> it seems that the grid selection model is not used correctly
-> see Tine.widgets.container.TreePanel::onBeforeNodeDrop
|
priority
|
drag and drop not working with messages reported by pschuele on mar drag and drop not working with gt messages gt it seems that the grid selection model is not used correctly gt see tine widgets container treepanel onbeforenodedrop
| 1
|
203,675
| 7,072,439,583
|
IssuesEvent
|
2018-01-09 00:33:29
|
spring-projects/spring-boot
|
https://api.github.com/repos/spring-projects/spring-boot
|
closed
|
Don't cause early initialization from WebMvcMetricsFilter
|
priority: high type: enhancement
|
The `WebMvcMetricsFilter` causes many beans to be pulled in much earlier than in 1.5. Profiling an application shows much of the work now happens in a Tomcat thread rather than the main thread. We should probably make the filter lazy like we did with Spring Security.
|
1.0
|
Don't cause early initialization from WebMvcMetricsFilter - The `WebMvcMetricsFilter` causes many beans to be pulled in much earlier than in 1.5. Profiling an application shows much of the work now happens in a Tomcat thread rather than the main thread. We should probably make the filter lazy like we did with Spring Security.
|
priority
|
don t cause early initialization from webmvcmetricsfilter the webmvcmetricsfilter causes many beans to be pulled in much earlier than in profiling an application shows much of the work now happens in a tomcat thread rather than the main thread we should probably make the filter lazy like we did with spring security
| 1
|
24,646
| 2,671,304,427
|
IssuesEvent
|
2015-03-24 04:43:20
|
nickpaventi/culligan-diy
|
https://api.github.com/repos/nickpaventi/culligan-diy
|
closed
|
Home [Mobile]: Product feature CTA wrong color
|
bug High Priority
|
Button is the wrong color and needs padding on each side

|
1.0
|
Home [Mobile]: Product feature CTA wrong color - Button is the wrong color and needs padding on each side

|
priority
|
home product feature cta wrong color button is the wrong color and needs padding on each side
| 1
|
382,442
| 11,306,374,360
|
IssuesEvent
|
2020-01-18 13:40:12
|
localstack/localstack
|
https://api.github.com/repos/localstack/localstack
|
closed
|
Error Invoking Lambda with Docker
|
PRO priority-high
|
<!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
Latest version of the Docker image (also failed with previous couple of versions).
Lambda loaded. `awslocal lambda list-functions` returns
```
{
"Functions": [
{
"TracingConfig": {
"Mode": "PassThrough"
},
"Version": "$LATEST",
"CodeSha256": "FpWGJCFmeoDOa4hgE9nBdoFeMM6pYwwNa4MFFSyxQt4=",
"FunctionName": "Hello",
"MemorySize": 128,
"CodeSize": 746,
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:Hello",
"Handler": "index.handler",
"Role": "arn:aws:iam::000000000000:role/dp-garage-dev-data-platform-api-user-role-to-assume",
"Timeout": 5,
"LastModified": "2020-01-17T19:06:43.534+0000",
"Runtime": "nodejs10.x",
"Description": ""
}
]
}
```
Lambda code (`index.js`)
```js
exports.handler = (event, context, cb) => cb('hello world');
```
Calling with
```
awslocal lambda invoke --function-name Hello lambda.out
```
Error log (repeating message)
```
2020-01-17T19:07:46.326516000Z
2020-01-17T19:07:47:WARNING:localstack.services.awslambda.lambda_executors: Empty event body specified for invocation of Lambda "arn:aws:lambda:us-east-1:000000000000:function:Hello"
2020-01-17T19:07:49:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: CONTAINER_ID="$(docker create --user=root --entrypoint=/tmp/939fa309.sh -v /tmp/localstack/python27.bin:/usr/bin/python -v /tmp/localstack/gosu.bin:/usr/bin/gosu --dns 127.0.0.1 -i -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e HOSTNAME="$HOSTNAME" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" --rm "lambci/lambda:nodejs10.x" "index.handler")";docker cp "/tmp/localstack/zipfile.89592b7a/." "$CONTAINER_ID:/var/task"; docker cp /tmp/939fa309.sh $CONTAINER_ID:/tmp/939fa309.sh; docker start -ai "$CONTAINER_ID";
2020-01-17T19:07:54:WARNING:localstack.services.awslambda.lambda_api: Error executing Lambda function arn:aws:lambda:us-east-1:000000000000:function:Hello: Lambda process returned error status code: 127. Result: nohup: failed to run command ‘python’: Permission denied. Output:
/tmp/939fa309.sh: line 229: gosu: command not found
Starting daemons... Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 384, in run_lambda
event, context=context, version=version, asynchronous=asynchronous)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 90, in execute
return do_execute()
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 75, in do_execute
raise e
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 71, in do_execute
result, log_output = self._execute(func_arn, func_details, event, context, version)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 209, in _execute
result, log_output = self.run_lambda_executor(cmd, stdin, environment)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 127, in run_lambda_executor
(return_code, result, log_output))
Exception: Lambda process returned error status code: 127. Result: nohup: failed to run command ‘python’: Permission denied. Output:
/tmp/939fa309.sh: line 229: gosu: command not found
Starting daemons...
2020-01-17T19:07:54.332802200Z
```
|
1.0
|
Error Invoking Lambda with Docker - <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
Latest version of the Docker image (also failed with previous couple of versions).
Lambda loaded. `awslocal lambda list-functions` returns
```
{
"Functions": [
{
"TracingConfig": {
"Mode": "PassThrough"
},
"Version": "$LATEST",
"CodeSha256": "FpWGJCFmeoDOa4hgE9nBdoFeMM6pYwwNa4MFFSyxQt4=",
"FunctionName": "Hello",
"MemorySize": 128,
"CodeSize": 746,
"FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:Hello",
"Handler": "index.handler",
"Role": "arn:aws:iam::000000000000:role/dp-garage-dev-data-platform-api-user-role-to-assume",
"Timeout": 5,
"LastModified": "2020-01-17T19:06:43.534+0000",
"Runtime": "nodejs10.x",
"Description": ""
}
]
}
```
Lambda code (`index.js`)
```js
exports.handler = (event, context, cb) => cb('hello world');
```
Calling with
```
awslocal lambda invoke --function-name Hello lambda.out
```
Error log (repeating message)
```
2020-01-17T19:07:46.326516000Z
2020-01-17T19:07:47:WARNING:localstack.services.awslambda.lambda_executors: Empty event body specified for invocation of Lambda "arn:aws:lambda:us-east-1:000000000000:function:Hello"
2020-01-17T19:07:49:INFO:localstack.services.awslambda.lambda_executors: Running lambda cmd: CONTAINER_ID="$(docker create --user=root --entrypoint=/tmp/939fa309.sh -v /tmp/localstack/python27.bin:/usr/bin/python -v /tmp/localstack/gosu.bin:/usr/bin/gosu --dns 127.0.0.1 -i -e DOCKER_LAMBDA_USE_STDIN="$DOCKER_LAMBDA_USE_STDIN" -e HOSTNAME="$HOSTNAME" -e LOCALSTACK_HOSTNAME="$LOCALSTACK_HOSTNAME" -e AWS_LAMBDA_FUNCTION_TIMEOUT="$AWS_LAMBDA_FUNCTION_TIMEOUT" -e AWS_LAMBDA_FUNCTION_NAME="$AWS_LAMBDA_FUNCTION_NAME" -e AWS_LAMBDA_FUNCTION_VERSION="$AWS_LAMBDA_FUNCTION_VERSION" -e AWS_LAMBDA_FUNCTION_INVOKED_ARN="$AWS_LAMBDA_FUNCTION_INVOKED_ARN" --rm "lambci/lambda:nodejs10.x" "index.handler")";docker cp "/tmp/localstack/zipfile.89592b7a/." "$CONTAINER_ID:/var/task"; docker cp /tmp/939fa309.sh $CONTAINER_ID:/tmp/939fa309.sh; docker start -ai "$CONTAINER_ID";
2020-01-17T19:07:54:WARNING:localstack.services.awslambda.lambda_api: Error executing Lambda function arn:aws:lambda:us-east-1:000000000000:function:Hello: Lambda process returned error status code: 127. Result: nohup: failed to run command ‘python’: Permission denied. Output:
/tmp/939fa309.sh: line 229: gosu: command not found
Starting daemons... Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/awslambda/lambda_api.py", line 384, in run_lambda
event, context=context, version=version, asynchronous=asynchronous)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 90, in execute
return do_execute()
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 75, in do_execute
raise e
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 71, in do_execute
result, log_output = self._execute(func_arn, func_details, event, context, version)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 209, in _execute
result, log_output = self.run_lambda_executor(cmd, stdin, environment)
File "/opt/code/localstack/localstack/services/awslambda/lambda_executors.py", line 127, in run_lambda_executor
(return_code, result, log_output))
Exception: Lambda process returned error status code: 127. Result: nohup: failed to run command ‘python’: Permission denied. Output:
/tmp/939fa309.sh: line 229: gosu: command not found
Starting daemons...
2020-01-17T19:07:54.332802200Z
```
|
priority
|
error invoking lambda with docker love localstack please consider supporting our collective 👉 latest version of the docker image also failed with previous couple of versions lambda loaded awslocal lambda list functions returns functions tracingconfig mode passthrough version latest functionname hello memorysize codesize functionarn arn aws lambda us east function hello handler index handler role arn aws iam role dp garage dev data platform api user role to assume timeout lastmodified runtime x description lambda code index js js exports handler event context cb cb hello world calling with awslocal lambda invoke function name hello lambda out error log repeating message warning localstack services awslambda lambda executors empty event body specified for invocation of lambda arn aws lambda us east function hello info localstack services awslambda lambda executors running lambda cmd container id docker create user root entrypoint tmp sh v tmp localstack bin usr bin python v tmp localstack gosu bin usr bin gosu dns i e docker lambda use stdin docker lambda use stdin e hostname hostname e localstack hostname localstack hostname e aws lambda function timeout aws lambda function timeout e aws lambda function name aws lambda function name e aws lambda function version aws lambda function version e aws lambda function invoked arn aws lambda function invoked arn rm lambci lambda x index handler docker cp tmp localstack zipfile container id var task docker cp tmp sh container id tmp sh docker start ai container id warning localstack services awslambda lambda api error executing lambda function arn aws lambda us east function hello lambda process returned error status code result nohup failed to run command ‘python’ permission denied output tmp sh line gosu command not found starting daemons traceback most recent call last file opt code localstack localstack services awslambda lambda api py line in run lambda event context context version version asynchronous asynchronous 
file opt code localstack localstack services awslambda lambda executors py line in execute return do execute file opt code localstack localstack services awslambda lambda executors py line in do execute raise e file opt code localstack localstack services awslambda lambda executors py line in do execute result log output self execute func arn func details event context version file opt code localstack localstack services awslambda lambda executors py line in execute result log output self run lambda executor cmd stdin environment file opt code localstack localstack services awslambda lambda executors py line in run lambda executor return code result log output exception lambda process returned error status code result nohup failed to run command ‘python’ permission denied output tmp sh line gosu command not found starting daemons
| 1
|
215,464
| 7,294,381,542
|
IssuesEvent
|
2018-02-25 23:04:20
|
python/mypy
|
https://api.github.com/repos/python/mypy
|
closed
|
"None" has no attribute "foo"
|
bug priority-0-high
|
Given
```
x = None
if x is not None:
x.foo()
```
running `mypy file.py` reports:
```
file.py:3: error: "None" has no attribute "foo"
```
mypy version: `mypy 0.570-dev-8ec7046787f92bf7b00d0c70f570fcb5d75e4b53`
Passing `--strict-optional` makes the problem go away.
|
1.0
|
"None" has no attribute "foo" - Given
```
x = None
if x is not None:
x.foo()
```
running `mypy file.py` reports:
```
file.py:3: error: "None" has no attribute "foo"
```
mypy version: `mypy 0.570-dev-8ec7046787f92bf7b00d0c70f570fcb5d75e4b53`
Passing `--strict-optional` makes the problem go away.
|
priority
|
none has no attribute foo given x none if x is not none x foo running mypy file py reports file py error none has no attribute foo mypy version mypy dev passing strict optional makes the problem go away
| 1
|
170,916
| 6,474,569,310
|
IssuesEvent
|
2017-08-17 18:23:53
|
srtucker22/chatty
|
https://api.github.com/repos/srtucker22/chatty
|
opened
|
Network error handling
|
bug high priority
|
Need to address this somehow somewhere that makes sense in the tutorial....
|
1.0
|
Network error handling - Need to address this somehow somewhere that makes sense in the tutorial....
|
priority
|
network error handling need to address this somehow somewhere that makes sense in the tutorial
| 1
|
274,872
| 8,568,973,521
|
IssuesEvent
|
2018-11-11 04:28:52
|
CS2113-AY1819S1-W12-1/main
|
https://api.github.com/repos/CS2113-AY1819S1-W12-1/main
|
closed
|
Feedback on week 12 project progress
|
priority.high
|
Subject: feedback on week 12 project progress
See [v1.4 progress guide](https://nuscs2113-ay1819s1.github.io/website/admin/project-w12-mid-v14.html) for more details of the activities mentioned below.
## Team progress
### Recommended progress for mid-v1.4
* Milestone v1.4 _managed_ systematically
- [x] A suitable deadline set (:heavy_check_mark: well done!)
- [x] Issues allocated to it e.g., [link](https://github.com/CS2113-AY1819S1-W12-1/main/issues/13) (:heavy_check_mark: well done!)
## Individual progress of @linnnruoo
### Recommended progress for mid-v1.4
- [x] Has issues/PRs assigned for the milestone (:heavy_check_mark: well done!)
- [ ] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/linnnruoo.html` (:exclamation: try to do by next milestone)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
## Individual progress of @driedmelon
### Recommended progress for mid-v1.4
- [ ] Has issues/PRs assigned for the milestone (:exclamation: try to do by next milestone)
- [ ] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/driedmelon.html` (:exclamation: try to do by next milestone)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
## Individual progress of @elstonayx
### Recommended progress for mid-v1.4
- [x] Has issues/PRs assigned for the milestone (:heavy_check_mark: well done!)
- [x] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/elstonayx.html` (:heavy_check_mark: well done!)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
## Individual progress of @jitwei98
### Recommended progress for mid-v1.4
- [x] Has issues/PRs assigned for the milestone (:heavy_check_mark: well done!)
- [x] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/jitwei98.html` (:heavy_check_mark: well done!)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
## Individual progress of @junweiljw
### Recommended progress for mid-v1.4
- [x] Has issues/PRs assigned for the milestone (:heavy_check_mark: well done!)
- [ ] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/junweiljw.html` (:exclamation: try to do by next milestone)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
Tutor: @okkhoy
Note: the above observation was done by the CS2113-feedback-bot and covers changes up to 2018-11-07 02:00:00 only. If you think the above observation is incorrect, please let us know by replying in this thread. Please include links to relevant PRs/comments in your response.
|
1.0
|
Feedback on week 12 project progress - Subject: feedback on week 12 project progress
See [v1.4 progress guide](https://nuscs2113-ay1819s1.github.io/website/admin/project-w12-mid-v14.html) for more details of the activities mentioned below.
## Team progress
### Recommended progress for mid-v1.4
* Milestone v1.4 _managed_ systematically
- [x] A suitable deadline set (:heavy_check_mark: well done!)
- [x] Issues allocated to it e.g., [link](https://github.com/CS2113-AY1819S1-W12-1/main/issues/13) (:heavy_check_mark: well done!)
## Individual progress of @linnnruoo
### Recommended progress for mid-v1.4
- [x] Has issues/PRs assigned for the milestone (:heavy_check_mark: well done!)
- [ ] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/linnnruoo.html` (:exclamation: try to do by next milestone)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
## Individual progress of @driedmelon
### Recommended progress for mid-v1.4
- [ ] Has issues/PRs assigned for the milestone (:exclamation: try to do by next milestone)
- [ ] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/driedmelon.html` (:exclamation: try to do by next milestone)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
## Individual progress of @elstonayx
### Recommended progress for mid-v1.4
- [x] Has issues/PRs assigned for the milestone (:heavy_check_mark: well done!)
- [x] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/elstonayx.html` (:heavy_check_mark: well done!)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
## Individual progress of @jitwei98
### Recommended progress for mid-v1.4
- [x] Has issues/PRs assigned for the milestone (:heavy_check_mark: well done!)
- [x] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/jitwei98.html` (:heavy_check_mark: well done!)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
## Individual progress of @junweiljw
### Recommended progress for mid-v1.4
- [x] Has issues/PRs assigned for the milestone (:heavy_check_mark: well done!)
- [ ] PPP available at `https://cs2113-ay1819s1-w12-1.github.io/main/team/junweiljw.html` (:exclamation: try to do by next milestone)
- [ ] PPP contains a link to your code on RepoSense (:exclamation: try to do by next milestone)
Tutor: @okkhoy
Note: the above observation was done by the CS2113-feedback-bot and covers changes up to 2018-11-07 02:00:00 only. If you think the above observation is incorrect, please let us know by replying in this thread. Please include links to relevant PRs/comments in your response.
|
priority
|
feedback on week project progress subject feedback on week project progress see for more details of the activities mentioned below team progress recommended progress for mid milestone managed systematically a suitable deadline set heavy check mark well done issues allocated to it e g heavy check mark well done individual progress of linnnruoo recommended progress for mid has issues prs assigned for the milestone heavy check mark well done ppp available at exclamation try to do by next milestone ppp contains a link to your code on reposense exclamation try to do by next milestone individual progress of driedmelon recommended progress for mid has issues prs assigned for the milestone exclamation try to do by next milestone ppp available at exclamation try to do by next milestone ppp contains a link to your code on reposense exclamation try to do by next milestone individual progress of elstonayx recommended progress for mid has issues prs assigned for the milestone heavy check mark well done ppp available at heavy check mark well done ppp contains a link to your code on reposense exclamation try to do by next milestone individual progress of recommended progress for mid has issues prs assigned for the milestone heavy check mark well done ppp available at heavy check mark well done ppp contains a link to your code on reposense exclamation try to do by next milestone individual progress of junweiljw recommended progress for mid has issues prs assigned for the milestone heavy check mark well done ppp available at exclamation try to do by next milestone ppp contains a link to your code on reposense exclamation try to do by next milestone tutor okkhoy note the above observation was done by the feedback bot and covers changes up to only if you think the above observation is incorrect please let us know by replying in this thread please include links to relevant prs comments in your response
| 1
|
677,559
| 23,165,903,331
|
IssuesEvent
|
2022-07-30 01:06:28
|
Unity-Technologies/com.unity.netcode.gameobjects
|
https://api.github.com/repos/Unity-Technologies/com.unity.netcode.gameobjects
|
closed
|
NetworkShow/NetworkHide doesn't work for scene network objects
|
type:bug stat:commited priority:high stat:imported
|
### Description
When a scene object is a network object, showing, hiding, then reshowing that scene object throws an exception.
### Reproduce Steps
1. Create a scene with a network object.
2. On server, set CheckObjectVisibility to false (don't believe this is even necessary)
3. Boot a server into the scene
4. Boot a client into the scene
5. Show the scene object to the client (they should already be able to see it, but won't receive messages for it)
6. Hide the scene object to the client - client should see the object despawned
7. Show the scene object to the client - exception
### Actual Outcome
Image shown below of the exception that is thrown
### Expected Outcome
The scene object should appear to the client
### Screenshots

### Environment
- OS: Windows 10
- Unity Version: 2020.3.24f1
- Netcode Version: 1.0.0-pre.7
### Additional Context
[Player.log](https://github.com/Unity-Technologies/com.unity.netcode.gameobjects/files/8656229/Player.log)
|
1.0
|
NetworkShow/NetworkHide doesn't work for scene network objects - ### Description
When a scene object is a network object, showing, hiding, then reshowing that scene object throws an exception.
### Reproduce Steps
1. Create a scene with a network object.
2. On server, set CheckObjectVisibility to false (don't believe this is even necessary)
3. Boot a server into the scene
4. Boot a client into the scene
5. Show the scene object to the client (they should already be able to see it, but won't receive messages for it)
6. Hide the scene object to the client - client should see the object despawned
7. Show the scene object to the client - exception
### Actual Outcome
Image shown below of the exception that is thrown
### Expected Outcome
The scene object should appear to the client
### Screenshots

### Environment
- OS: Windows 10
- Unity Version: 2020.3.24f1
- Netcode Version: 1.0.0-pre.7
### Additional Context
[Player.log](https://github.com/Unity-Technologies/com.unity.netcode.gameobjects/files/8656229/Player.log)
|
priority
|
networkshow networkhide doesn t work for scene network objects description when a scene object is a network object showing hiding then reshowing that scene object throws an exception reproduce steps create a scene with a network object on server set checkobjectvisibility to false don t believe this is even necessary boot a server into the scene boot a client into the scene show the scene object to the client they should already be able to see it but won t receive messages for it hide the scene object to the client client should see the object despawned show the scene object to the client exception actual outcome image shown below of the exception that is thrown expected outcome the scene object should appear to the client screenshots environment os windows unity version netcode version pre additional context
| 1
|
824,879
| 31,234,042,525
|
IssuesEvent
|
2023-08-20 03:27:26
|
erxes/erxes
|
https://api.github.com/repos/erxes/erxes
|
closed
|
[DEV-40] Upgrade "styled-components v5"
|
🛠 Enhancement priority: High Backlog
|
I'm wondering why you still rely on styled-components < v4 and carry the burden of using styled-components-ts, which was last updated 5 years ago. With v5, many improvements arrived, and it's a pain to keep using the old v3 to build new features.
<sub>From [SyncLinear.com](https://synclinear.com) | [DEV-40](https://linear.app/erxes/issue/DEV-40/upgrade-styled-components-v5)</sub>
|
1.0
|
[DEV-40] Upgrade "styled-components v5" - I'm wondering why you still rely on styled-components < v4 and carry the burden of using styled-components-ts, which was last updated 5 years ago. With v5, many improvements arrived, and it's a pain to keep using the old v3 to build new features.
<sub>From [SyncLinear.com](https://synclinear.com) | [DEV-40](https://linear.app/erxes/issue/DEV-40/upgrade-styled-components-v5)</sub>
|
priority
|
upgrade styled components i m wondering why you still rely on styled components and have the burden of using styled components ts which has been last updated yrs ago with many improvements came and it s a pain to keep using old to enhance stuff from
| 1
|
263,494
| 8,290,288,227
|
IssuesEvent
|
2018-09-19 16:54:27
|
nprapps/elections18-graphics
|
https://api.github.com/repos/nprapps/elections18-graphics
|
closed
|
Show a check-mark in the BoP chart when a _chamber_ has been called
|
effort:light priority:high
|
If we're using bar charts, add a check-mark once a party has control of a chamber
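The "control of a chamber" condition above can be sketched as a small predicate; a strict-majority threshold is an assumption here (the real rule may also depend on called races):

```python
def has_control(seats_won, total_seats):
    """A party controls a chamber once it holds a strict majority of seats."""
    return seats_won > total_seats / 2
```

The check-mark would then be rendered whenever `has_control` flips to true for a party.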
|
1.0
|
Show a check-mark in the BoP chart when a _chamber_ has been called - If we're using bar charts, add a check-mark once a party has control of a chamber
|
priority
|
show a check mark in the bop chart when a chamber has been called if we re using bar charts add a check mark once a party has control of a chamber
| 1
|
543,097
| 15,878,012,092
|
IssuesEvent
|
2021-04-09 10:24:10
|
red-hat-storage/ocs-ci
|
https://api.github.com/repos/red-hat-storage/ocs-ci
|
closed
|
Sequential Creation of PVC in AWS setup fails to Bound 2 out of 5 times.
|
High Priority bug
|
This problem is observed consistently in the Scale test ocs-ci run for every build.
PVCs are created sequentially without waiting for each to bind; the later validation of whether each PVC is Bound reported failures due to timeout. Note that this problem is not observed on the vmware cluster.
The problem could be due to execution speed: the vmware setup runs somewhat more slowly, which gives PVCs in the csi queue ample time to reach Bound.
So how should this be resolved? Is it better to change the code to create a PVC, wait for it to bind, and only then create the next one, or is there some other solution?
Recent aws test run: https://ocs4-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/qe-deploy-ocs-cluster/7484/testReport/junit/tests.e2e.scale.test_pvc_creation_deletion_scale/TestPVCCreationDeletionScale/test_multiple_pvc_creation_deletion_scale_ReadWriteOnce_CephBlockPool_/
Logs: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jnk-ai3cl33-s/jnk-ai3cl33-s_20200513T203517/logs/failed_testcase_ocs_logs_1589405541/test_multiple_pvc_creation_deletion_scale%5bReadWriteMany-CephBlockPool%5d_ocs_logs/
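A minimal sketch of the create-then-wait approach suggested above. The `create_pvc` and `get_pvc_status` callables are hypothetical placeholders, not the ocs-ci API:

```python
import time

def wait_for_bound(get_pvc_status, pvc_name, timeout=120, interval=5):
    """Poll a PVC until it reports Bound or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_pvc_status(pvc_name) == "Bound":
            return True
        time.sleep(interval)
    return False

def create_pvcs_sequentially(create_pvc, get_pvc_status, names):
    """Create each PVC and wait for it to bind before creating the next."""
    bound = []
    for name in names:
        create_pvc(name)
        if not wait_for_bound(get_pvc_status, name):
            raise TimeoutError(f"PVC {name} did not reach Bound in time")
        bound.append(name)
    return bound
```

This trades throughput for determinism: each create only starts once the previous claim is Bound, so the csi queue never backs up.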
|
1.0
|
Sequential Creation of PVC in AWS setup fails to Bound 2 out of 5 times. - This problem is observed consistently in the Scale test ocs-ci run for every build.
PVCs are created sequentially without waiting for each to bind; the later validation of whether each PVC is Bound reported failures due to timeout. Note that this problem is not observed on the vmware cluster.
The problem could be due to execution speed: the vmware setup runs somewhat more slowly, which gives PVCs in the csi queue ample time to reach Bound.
So how should this be resolved? Is it better to change the code to create a PVC, wait for it to bind, and only then create the next one, or is there some other solution?
Recent aws test run: https://ocs4-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/qe-deploy-ocs-cluster/7484/testReport/junit/tests.e2e.scale.test_pvc_creation_deletion_scale/TestPVCCreationDeletionScale/test_multiple_pvc_creation_deletion_scale_ReadWriteOnce_CephBlockPool_/
Logs: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jnk-ai3cl33-s/jnk-ai3cl33-s_20200513T203517/logs/failed_testcase_ocs_logs_1589405541/test_multiple_pvc_creation_deletion_scale%5bReadWriteMany-CephBlockPool%5d_ocs_logs/
|
priority
|
sequential creation of pvc in aws setup failes to bound out of times observing this problem consistently during scale test ocs ci run for every build pvcs are created sequentially without waiting for it to bound later validating either pvc bound or not and there were failures reported as pvc not bound due to timeout please note not observing this problem in vmware cluster the problem could be due to execution speed with respect to vmware setup observing bit slowness in execution this leads to proving ample time for pvcs in csi queue and it s getting bound so how to resolve this problem is it good to change the code like create a pvc wait for it bound and create next or is there some other solution recent aws test run logs
| 1
|
397,230
| 11,725,507,725
|
IssuesEvent
|
2020-03-10 13:04:58
|
mantidproject/mantid
|
https://api.github.com/repos/mantidproject/mantid
|
closed
|
Workbench is a bit slow
|
High Priority ISIS Team: CoreTeam ISIS Team: Spectroscopy Workbench
|
This is initially just an investigation.
Indirect has been slow.
James (moun) has also said workbench is slow:
```
Workbench is really slow with anything using the ADS. For example the Muon Analysis interface when loading more than one run at a time. (ALC is less affected) Also as an example my script “Create2DALCMap” (from the script muon/Create2DALCMap.py in the Repository). Run it with parameters:
FirstFile=//hifi/data/hifi00166860.nxs
NumberOfRuns=121
StartTime=0.1
EndTime=10
NBunch=20
LogSelection=Field_Main_Target
OutputWorkspace=ChooseAWorkspaceName
On Plot it takes 37.42 seconds (my desktop PC, mostly dominated by loading the files). On Workbench it takes 4m 17s: nearly 7 times slower! Workbench with the Progress bar hidden: 4m 21.3s (the same within error).
```
|
1.0
|
Workbench is a bit slow - This is initially just an investigation.
Indirect has been slow.
James (moun) has also said workbench is slow:
```
Workbench is really slow with anything using the ADS. For example the Muon Analysis interface when loading more than one run at a time. (ALC is less affected) Also as an example my script “Create2DALCMap” (from the script muon/Create2DALCMap.py in the Repository). Run it with parameters:
FirstFile=//hifi/data/hifi00166860.nxs
NumberOfRuns=121
StartTime=0.1
EndTime=10
NBunch=20
LogSelection=Field_Main_Target
OutputWorkspace=ChooseAWorkspaceName
On Plot it takes 37.42 seconds (my desktop PC, mostly dominated by loading the files). On Workbench it takes 4m 17s: nearly 7 times slower! Workbench with the Progress bar hidden: 4m 21.3s (the same within error).
```
|
priority
|
workbench is a bit slow this is initially just an investigation indirect has been slow james moun has also said workbench is slow workbench is really slow with anything using the ads for example the muon analysis interface when loading more than one run at a time alc is less affected also as an example my script “ ” from the script muon py in the repository run it with parameters firstfile hifi data nxs numberofruns starttime endtime nbunch logselection field main target outputworkspace chooseaworkspacename on plot it takes seconds my desktop pc mostly dominated by loading the files on workbench it takes nearly times slower workbench with the progress bar hidden the same within error
| 1
|
28,975
| 2,712,760,660
|
IssuesEvent
|
2015-04-09 15:29:54
|
nexusformat/definitions
|
https://api.github.com/repos/nexusformat/definitions
|
closed
|
Merge CIF coordinates into NeXus
|
bug high priority
|
**Original reporter**: *[prjemian](https://github.com/prjemian)*
Merge the method used by CIF to define coordinates into the NeXus coordinate system.
|
1.0
|
Merge CIF coordinates into NeXus - **Original reporter**: *[prjemian](https://github.com/prjemian)*
Merge the method used by CIF to define coordinates into the NeXus coordinate system.
|
priority
|
merge cif coordinates into nexus original reporter merge the method used by cif to define coordinates into the nexus coordinate system
| 1
|
209,691
| 7,178,688,683
|
IssuesEvent
|
2018-01-31 17:12:29
|
vmware/vic
|
https://api.github.com/repos/vmware/vic
|
closed
|
Tests in 1-18-Docker-Network-RM fail due to already-existing network
|
kind/bug priority/high status/needs-triage team/container
|
**VIC version:**
`v1.3.0-rc1-15605-89bc7c9` (not actually RC1)
**Deployment details:**
https://ci.vcna.io/vmware/vic/15605
**Actual behavior:**
1. `Basic network remove` reported "Error response from daemon: network test-network already exists"
2. `Remove already removed network` failed with "'Error response from daemon: test-network has active endpoints' does not contain 'Error response from daemon: network test-network not found'"
3. `Remove network with running container` reported "Error response from daemon: network test-network already exists"
**Expected behavior:**
* The tests pass.
* The tests do not depend on another in a way that causes cascading failures like this.
**Logs:**
[Test-Cases.Group1-Docker-Commands.1-18-Docker-Network-RM-VCH-15605-9279-container-logs.zip](https://github.com/vmware/vic/files/1619960/Test-Cases.Group1-Docker-Commands.1-18-Docker-Network-RM-VCH-15605-9279-container-logs.zip)
**Additional details as necessary:**

|
1.0
|
Tests in 1-18-Docker-Network-RM fail due to already-existing network - **VIC version:**
`v1.3.0-rc1-15605-89bc7c9` (not actually RC1)
**Deployment details:**
https://ci.vcna.io/vmware/vic/15605
**Actual behavior:**
1. `Basic network remove` reported "Error response from daemon: network test-network already exists"
2. `Remove already removed network` failed with "'Error response from daemon: test-network has active endpoints' does not contain 'Error response from daemon: network test-network not found'"
3. `Remove network with running container` reported "Error response from daemon: network test-network already exists"
**Expected behavior:**
* The tests pass.
* The tests do not depend on another in a way that causes cascading failures like this.
**Logs:**
[Test-Cases.Group1-Docker-Commands.1-18-Docker-Network-RM-VCH-15605-9279-container-logs.zip](https://github.com/vmware/vic/files/1619960/Test-Cases.Group1-Docker-Commands.1-18-Docker-Network-RM-VCH-15605-9279-container-logs.zip)
**Additional details as necessary:**

|
priority
|
tests in docker network rm fail due to already existing network vic version not actually deployment details actual behavior basic network remove reported error response from daemon network test network already exists remove already removed network failed with error response from daemon test network has active endpoints does not contain error response from daemon network test network not found remove network with running container reported error response from daemon network test network already exists expected behavior the tests pass the tests do not depend on another in a way that causes cascading failures like this logs additional details as necessary
| 1
|
142,387
| 5,474,680,165
|
IssuesEvent
|
2017-03-11 02:29:53
|
fossasia/open-event-orga-server
|
https://api.github.com/repos/fossasia/open-event-orga-server
|
closed
|
Ticketing: More than Maximum Number of Tickets set can be bought
|
bug Priority: High Priority: URGENT
|
The system lets users buy more than the maximum number of tickets set by the organizer.
Tickets set in wizard step 1:

Tickets sold:

Tickets still available on site. It should display: "Sold out".

|
2.0
|
Ticketing: More than Maximum Number of Tickets set can be bought - The system lets users buy more than the maximum number of tickets set by the organizer.
Tickets set in wizard step 1:

Tickets sold:

Tickets still available on site. It should display: "Sold out".

|
priority
|
ticketing more than maximum number of tickets set can be bought the system let s users buy more than the maximum number of tickets set by the organizer tickets set in wizard step tickets sold tickets still available on site it should display sold out
| 1
|
595,092
| 18,059,704,295
|
IssuesEvent
|
2021-09-20 12:44:25
|
AY2122S1-CS2103-W14-4/tp
|
https://api.github.com/repos/AY2122S1-CS2103-W14-4/tp
|
opened
|
Add description
|
type.Story priority.High
|
As a user, I can add a description to a task so that I can see the extra details pertaining to the task.
|
1.0
|
Add description - As a user, I can add a description to a task so that I can see the extra details pertaining to the task.
|
priority
|
add description as a user i can add a description to a task so that i can see the extra details pertaining to the task
| 1
|
121,030
| 4,804,024,286
|
IssuesEvent
|
2016-11-02 12:10:46
|
CS2103AUG2016-W15-C3/main
|
https://api.github.com/repos/CS2103AUG2016-W15-C3/main
|
closed
|
Code quality feedback for A0142130A
|
priority.high
|
# main
- many methods don't have header comments
# test
- your tests are pretty neatly written :+1:
- the only issue is that, test names don't follow the convention
# docs
- while you have some contribution to the documents, it doesn't seem significant; please note that we require all team members to make significant contributions to all aspects of the project
Some more details [here](https://github.com/nus-cs2103-AY1617S1/addressbook-level4/pull/94#pullrequestreview-6361376)
|
1.0
|
Code quality feedback for A0142130A - # main
- many methods don't have header comments
# test
- your tests are pretty neatly written :+1:
- the only issue is that, test names don't follow the convention
# docs
- while you have some contribution to the documents, it doesn't seem significant; please note that we require all team members to make significant contributions to all aspects of the project
Some more details [here](https://github.com/nus-cs2103-AY1617S1/addressbook-level4/pull/94#pullrequestreview-6361376)
|
priority
|
code quality feedback for main many methods don t have header comments test your tests are pretty neatly written the only issue is that test names don t follow the convention docs while you have some contribution to the documents it doesn t seem significant please note that we require all team members to make significant contributions to all aspects of the project some more details
| 1
|
675,302
| 23,089,104,217
|
IssuesEvent
|
2022-07-26 13:53:51
|
PyPSA/pypsa-eur-sec
|
https://api.github.com/repos/PyPSA/pypsa-eur-sec
|
closed
|
Generalised way to promote config settings to `{sector_opts}` wildcard entries
|
high-priority
|
We often have the problem that it's not easy to run parameter sweeps on all config settings. Can we come up with a generalised way to promote config settings to `{sector_opts}` wildcard entries?
Like:
`...-CF:sector:dac:false-...`
To control:
```yaml
sector:
dac: true
```
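A rough sketch of how such wildcard entries could be parsed into nested config overrides. The `CF:` prefix and colon syntax follow the example above; the parser itself is an assumption, not the pypsa-eur-sec implementation:

```python
def parse_config_opts(sector_opts):
    """Turn CF:-prefixed entries of a {sector_opts} wildcard into nested
    config overrides, e.g. 'CF:sector:dac:false' -> {'sector': {'dac': False}}."""
    overrides = {}
    for opt in sector_opts.split("-"):
        if not opt.startswith("CF:"):
            continue  # not a config-override entry
        *path, raw = opt.split(":")[1:]
        # coerce simple boolean scalars; everything else stays a string
        value = {"true": True, "false": False}.get(raw.lower(), raw)
        node = overrides
        for key in path[:-1]:
            node = node.setdefault(key, {})
        node[path[-1]] = value
    return overrides
```

The resulting dict could then be deep-merged over the loaded `config.yaml` before the rule runs.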
|
1.0
|
Generalised way to promote config settings to `{sector_opts}` wildcard entries - We often have the problem that it's not easy to run parameter sweeps on all config settings. Can we come up with a generalised way to promote config settings to `{sector_opts}` wildcard entries?
Like:
`...-CF:sector:dac:false-...`
To control:
```yaml
sector:
dac: true
```
|
priority
|
generalised way to promote config settings to sector opts wildcard entries we often have the problem that it s not easy to run parameter sweeps on all config settings can we come up with a generalised way to promote config settings to sector opts wildcard entries like cf sector dac false to control yaml sector dac true
| 1
|
301,420
| 9,220,249,169
|
IssuesEvent
|
2019-03-11 17:04:39
|
strapi/strapi
|
https://api.github.com/repos/strapi/strapi
|
reopened
|
Password Reset redirects authenticated users to admin
|
Good for new contributors priority: high type: enhancement 💅
|
**Informations**
- **Node.js version**: 10.4.1
- **npm version**: 6.4.1
- **Strapi version**: Beta 14.2
- **Database**: MongoDB
- **Operating system**: macOS
**What is the current behavior?**
When using the password reset link below, the user is redirected into the admin area, which they should not have access to. After that, clicking any link produces an error and nothing works.
**Steps to reproduce the problem**
1. Send a POST request to `http://localhost:1337/auth/forgot-password` with the following body:
```json
{
"email": "john@doe.com",
"url": "https://localhost:1337/admin/plugins/users-permissions/auth/reset-password"
}
```
**What is the expected behavior?**
After the password reset, redirect the user either to a specified URL or to the main page. But in any case, do not redirect the user to the /admin area.
**Suggested solutions**
Either add a redirect URL or redirect users based on their roles.
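The request from step 1 can be reproduced with a short stdlib script; the URL and payload come from the report above, and the helper function is only an illustration:

```python
import json
from urllib import request

def build_forgot_password_request(base_url, email, redirect_url):
    """Build (but do not send) the forgot-password POST from step 1."""
    payload = json.dumps({"email": email, "url": redirect_url}).encode()
    return request.Request(
        base_url + "/auth/forgot-password",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_forgot_password_request(
    "http://localhost:1337",
    "john@doe.com",
    "https://localhost:1337/admin/plugins/users-permissions/auth/reset-password",
)
# req can then be sent with urllib.request.urlopen(req)
```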
|
1.0
|
Password Reset redirects authenticated users to admin - **Informations**
- **Node.js version**: 10.4.1
- **npm version**: 6.4.1
- **Strapi version**: Beta 14.2
- **Database**: MongoDB
- **Operating system**: macOS
**What is the current behavior?**
When using the password reset link below, the user is redirected into the admin area, which they should not have access to. After that, clicking any link produces an error and nothing works.
**Steps to reproduce the problem**
1. Send a POST request to `http://localhost:1337/auth/forgot-password` with the following body:
```json
{
"email": "john@doe.com",
"url": "https://localhost:1337/admin/plugins/users-permissions/auth/reset-password"
}
```
**What is the expected behavior?**
After the password reset, redirect the user either to a specified URL or to the main page. But in any case, do not redirect the user to the /admin area.
**Suggested solutions**
Either add a redirect URL or redirect users based on their roles.
|
priority
|
password reset redirects authenticated users to admin informations node js version npm version strapi version beta database mongodb operating system macos what is the current behavior when using the password reset link below the user is redirected into the admin area which he she should not have access to after trying to click any link then an error occurs and nothing works steps to reproduce the problem send a post request to with the following body json email john doe com url what is the expected behavior after the password reset redirect the user either to a specified url or to the main page but in any case do not redirect the user to the admin area suggested solutions either add a redirect url or redirect users based on their roles
| 1
|
553,882
| 16,384,475,680
|
IssuesEvent
|
2021-05-17 08:40:23
|
technologiestiftung/flusshygiene
|
https://api.github.com/repos/technologiestiftung/flusshygiene
|
opened
|
Cran packages not found
|
High Priority Packages: FHPredict Base bug
|
We have an issue while building the opencpu/base image.
Some pinned package versions can't be found.
```plain
Step 6/9 : RUN add-apt-repository ppa:c2d4u.team/c2d4u4.0+ && apt-get update && apt-get install -y r-cran-rstanarm=2.21.1-1cran1.2004.0 r-cran-units=0.6-7-1cran1.2004.0 r-cran-raster=3.4-5-1cran1.2004.0 r-cran-rcurl=1.98-1.2-1cran1.2004.0 r-cran-sf=0.9-7-1cran1.2004.0 r-cran-dplyr=1.0.3-1cran1.2004.0 r-cran-modelmetrics=1.2.2.2-1cran1.2004.0 r-cran-caret=6.0-86-1cran1.2004.0 r-cran-fs=1.5.0-1cran1.2004.0 r-cran-httr=1.4.2-1cran1.2004.0 r-cran-aws.signature=0.6.0-1cran1.2004.0 r-cran-xml2=1.3.2-1cran1.2004.0 r-cran-lmtest=0.9.37-2.1~ubuntu20.04.1~ppa1 && apt-get clean
---> Running in eabf3c97e143
A PPA for R packages from CRAN's Task Views built against R 4.0 (and subsequent releases). Only building packages for LTS releases.
More info: launchpad.net/~c2d4u.team/+archive/ubuntu/c2d4u4.0+
Get:1 http://security.ubuntu.com/ubuntu focal-security InRelease [109 kB]
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Get:3 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu focal InRelease [18.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:5 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [267 kB]
Get:6 http://ppa.launchpad.net/opencpu/opencpu-2.2/ubuntu focal InRelease [18.1 kB]
Get:7 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [817 kB]
Get:8 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [21.7 kB]
Get:9 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [700 kB]
Get:10 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Get:11 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu focal/main amd64 Packages [879 kB]
Get:12 http://archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [29.7 kB]
Get:13 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [1238 kB]
Get:14 http://archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [299 kB]
Get:15 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [969 kB]
Get:16 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [4305 B]
Get:17 http://ppa.launchpad.net/opencpu/opencpu-2.2/ubuntu focal/main amd64 Packages [8339 B]
Fetched 5594 kB in 1s (4335 kB/s)
Reading package lists...
Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:3 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu focal InRelease
Hit:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:5 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:6 http://ppa.launchpad.net/opencpu/opencpu-2.2/ubuntu focal InRelease
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
E: Version '0.6-7-1cran1.2004.0' for 'r-cran-units' was not found
E: Version '3.4-5-1cran1.2004.0' for 'r-cran-raster' was not found
E: Version '1.98-1.2-1cran1.2004.0' for 'r-cran-rcurl' was not found
E: Version '0.9-7-1cran1.2004.0' for 'r-cran-sf' was not found
E: Version '1.0.3-1cran1.2004.0' for 'r-cran-dplyr' was not found
E: Version '6.0-86-1cran1.2004.0' for 'r-cran-caret' was not found
```
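One way to avoid pinning versions the PPA no longer ships is to check what apt actually offers first. A sketch of parsing `apt-cache madison <pkg>` output (the helper is illustrative, not part of this repo; the raw text would come from e.g. `subprocess.run(["apt-cache", "madison", pkg], capture_output=True)`):

```python
def parse_madison(output):
    """Parse pipe-separated `apt-cache madison` output into {package: [versions]}."""
    versions = {}
    for line in output.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) >= 2 and parts[0]:
            versions.setdefault(parts[0], []).append(parts[1])
    return versions
```

The Dockerfile pins could then be generated from the versions actually available, instead of hard-coding ones that 404 after a PPA rebuild.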
|
1.0
|
Cran packages not found - We have an issue while building the opencpu/base image.
Some pinned package versions can't be found.
```plain
Step 6/9 : RUN add-apt-repository ppa:c2d4u.team/c2d4u4.0+ && apt-get update && apt-get install -y r-cran-rstanarm=2.21.1-1cran1.2004.0 r-cran-units=0.6-7-1cran1.2004.0 r-cran-raster=3.4-5-1cran1.2004.0 r-cran-rcurl=1.98-1.2-1cran1.2004.0 r-cran-sf=0.9-7-1cran1.2004.0 r-cran-dplyr=1.0.3-1cran1.2004.0 r-cran-modelmetrics=1.2.2.2-1cran1.2004.0 r-cran-caret=6.0-86-1cran1.2004.0 r-cran-fs=1.5.0-1cran1.2004.0 r-cran-httr=1.4.2-1cran1.2004.0 r-cran-aws.signature=0.6.0-1cran1.2004.0 r-cran-xml2=1.3.2-1cran1.2004.0 r-cran-lmtest=0.9.37-2.1~ubuntu20.04.1~ppa1 && apt-get clean
---> Running in eabf3c97e143
A PPA for R packages from CRAN's Task Views built against R 4.0 (and subsequent releases). Only building packages for LTS releases.
More info: launchpad.net/~c2d4u.team/+archive/ubuntu/c2d4u4.0+
Get:1 http://security.ubuntu.com/ubuntu focal-security InRelease [109 kB]
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Get:3 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu focal InRelease [18.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:5 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [267 kB]
Get:6 http://ppa.launchpad.net/opencpu/opencpu-2.2/ubuntu focal InRelease [18.1 kB]
Get:7 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [817 kB]
Get:8 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [21.7 kB]
Get:9 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [700 kB]
Get:10 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Get:11 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu focal/main amd64 Packages [879 kB]
Get:12 http://archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [29.7 kB]
Get:13 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [1238 kB]
Get:14 http://archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [299 kB]
Get:15 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [969 kB]
Get:16 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [4305 B]
Get:17 http://ppa.launchpad.net/opencpu/opencpu-2.2/ubuntu focal/main amd64 Packages [8339 B]
Fetched 5594 kB in 1s (4335 kB/s)
Reading package lists...
Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:3 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu focal InRelease
Hit:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:5 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:6 http://ppa.launchpad.net/opencpu/opencpu-2.2/ubuntu focal InRelease
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
E: Version '0.6-7-1cran1.2004.0' for 'r-cran-units' was not found
E: Version '3.4-5-1cran1.2004.0' for 'r-cran-raster' was not found
E: Version '1.98-1.2-1cran1.2004.0' for 'r-cran-rcurl' was not found
E: Version '0.9-7-1cran1.2004.0' for 'r-cran-sf' was not found
E: Version '1.0.3-1cran1.2004.0' for 'r-cran-dplyr' was not found
E: Version '6.0-86-1cran1.2004.0' for 'r-cran-caret' was not found
```
|
priority
|
cran packages not found we have an issue while building the opencpu base image some images can t be found plain step run add apt repository ppa team apt get update apt get install y r cran rstanarm r cran units r cran raster r cran rcurl r cran sf r cran dplyr r cran modelmetrics r cran caret r cran fs r cran httr r cran aws signature r cran r cran lmtest apt get clean running in a ppa for r packages from cran s task views built against r and subsequent releases only building packages for lts releases more info launchpad net team archive ubuntu get focal security inrelease hit focal inrelease get focal inrelease get focal updates inrelease get focal security restricted packages get focal inrelease get focal security main packages get focal security multiverse packages get focal security universe packages get focal backports inrelease get focal main packages get focal updates multiverse packages get focal updates main packages get focal updates restricted packages get focal updates universe packages get focal backports universe packages get focal main packages fetched kb in kb s reading package lists hit focal security inrelease hit focal inrelease hit focal inrelease hit focal updates inrelease hit focal backports inrelease hit focal inrelease reading package lists reading package lists building dependency tree reading state information e version for r cran units was not found e version for r cran raster was not found e version for r cran rcurl was not found e version for r cran sf was not found e version for r cran dplyr was not found e version for r cran caret was not found
| 1
|
134,938
| 5,240,453,399
|
IssuesEvent
|
2017-01-31 13:11:48
|
duckduckgo/zeroclickinfo-fathead
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-fathead
|
closed
|
Python: numpy fathead - backslash in some examples need escaping
|
Bug Mission: Programming Priority: High Status: Work In Progress Topic: Python Topic: Reference
|
### Description
There are some escape sequences in the generated examples that need escaping. See this [issue comment](https://github.com/duckduckgo/zeroclickinfo-fathead/pull/688#issuecomment-275255792) in #688
## Get Started
- [x] 1) Claim this issue by commenting below
- [x] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-fathead/blob/master/CONTRIBUTING.md)
- [x] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [x] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/numpy
<!-- FILL THIS IN: ^^^^ -->
|
1.0
|
Python: numpy fathead - backslash in some examples need escaping - ### Description
There are some escape sequences in the generated examples that need escaping. See this [issue comment](https://github.com/duckduckgo/zeroclickinfo-fathead/pull/688#issuecomment-275255792) in #688
## Get Started
- [x] 1) Claim this issue by commenting below
- [x] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-fathead/blob/master/CONTRIBUTING.md)
- [x] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [x] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/numpy
<!-- FILL THIS IN: ^^^^ -->
|
priority
|
python numpy fathead backslash in some examples need escaping description there are some escape sequences in the generated examples that need escaping see this in get started claim this issue by commenting below review our and fork this repository create a pull request resources join to ask questions join the to discuss project planning and instant answer metrics read the for technical help instant answer page
| 1
|
419,110
| 12,217,736,784
|
IssuesEvent
|
2020-05-01 17:49:43
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[Platform] GCP shared VPC create universe fails
|
area/platform priority/high
|
When the user has a shared VPC, we should look carefully at the VPC network we use for the node that we bring up; we noticed that the network and subnetwork metadata on the create-instance call were incorrect. The shared VPC project information needs to be plumbed through while creating the nodes.
Try going through the create in google console and check the REST api equivalent to see the exact payload which is being sent when creating a shared vpc instance.
|
1.0
|
[Platform] GCP shared VPC create universe fails - When the user has a shared VPC, we should look carefully at the VPC network we use for the node that we bring up; we noticed that the network and subnetwork metadata on the create-instance call were incorrect. The shared VPC project information needs to be plumbed through while creating the nodes.
Try going through the create in google console and check the REST api equivalent to see the exact payload which is being sent when creating a shared vpc instance.
|
priority
|
gcp shared vpc create universe fails when the user has a shared vpc we should carefully look into the vpc network we use for the node that we bring up noticed that our network and subnetwork metadata on the create instance were incorrect it needs to plumb the shared vpc project information while creating the nodes try going through the create in google console and check the rest api equivalent to see the exact payload which is being sent when creating a shared vpc instance
| 1
|
392,283
| 11,589,455,780
|
IssuesEvent
|
2020-02-24 02:18:47
|
TannerDisney/DisneyCafe-Portfolio
|
https://api.github.com/repos/TannerDisney/DisneyCafe-Portfolio
|
closed
|
Create ApplicationUser to inherit from AspNetUsers table
|
Back-End Database High Priority
|
To access AspNetUser's table is to create a new table that inherits from AspNetUsers.
|
1.0
|
Create ApplicationUser to inherit from AspNetUsers table - To access AspNetUser's table is to create a new table that inherits from AspNetUsers.
|
priority
|
create applicationuser to inherit from aspnetusers table to access aspnetuser s table is to create a new table that inherits from aspnetusers
| 1
|
712,403
| 24,494,326,067
|
IssuesEvent
|
2022-10-10 07:13:43
|
icon-project/icon-bridge
|
https://api.github.com/repos/icon-project/icon-bridge
|
opened
|
story(Harmony Local deployment): deploy smart contract and relays to all the chains that have similarity to BSC)
|
team: ibriz priority: high
|
## Overview
A clear and concise description of the user and their need
## Story
As a user, go through and deploy the contracts and relays to all the chains that have similarity to BSC like Harmony Ethereum, polygon, avalanche
## Test Scenarios
Link given/when/then test scenarios here
## Acceptance Criteria
- Functional acceptance criteria to be met
- Non-functional acceptance criteria to be met
|
1.0
|
story(Harmony Local deployment): deploy smart contract and relays to all the chains that have similarity to BSC) - ## Overview
A clear and concise description of the user and their need
## Story
As a user, go through and deploy the contracts and relays to all the chains that have similarity to BSC like Harmony Ethereum, polygon, avalanche
## Test Scenarios
Link given/when/then test scenarios here
## Acceptance Criteria
- Functional acceptance criteria to be met
- Non-functional acceptance criteria to be met
|
priority
|
story harmony local deployment deploy smart contract and relays to all the chains that have similarity to bsc overview a clear and concise description of the user and their need story as a user go through and deploy the contracts and relays to all the chains that have similarity to bsc like harmony ethereum polygon avalanche test scenarios link given when then test scenarios here acceptance criteria functional acceptance criteria to be met non functional acceptance criteria to be met
| 1
|
343,424
| 10,329,983,595
|
IssuesEvent
|
2019-09-02 13:35:10
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
opened
|
Error drawing the Rectangle ROI in Query Builder
|
Priority: High bug
|
### Description
The Query Builder throws some exceptions while drawing the rectangle ROI (spatial filter)
mapstore2.js?edc02a99850a9b9dd1e6:13 Uncaught TypeError: Cannot read property 'length' of null
at t.setLayout (mapstore2.js?edc02a99850a9b9dd1e6:13)
at t.setCoordinates (mapstore2.js?edc02a99850a9b9dd1e6:9)
at new t (mapstore2.js?edc02a99850a9b9dd1e6:9)
at t.s.geometryFunction [as geometryFunction_] (mapstore2.js?edc02a99850a9b9dd1e6:133)
at t.startDrawing_ (mapstore2.js?edc02a99850a9b9dd1e6:33)
at t.handleUpEvent (mapstore2.js?edc02a99850a9b9dd1e6:32)
at t.handleEvent (mapstore2.js?edc02a99850a9b9dd1e6:11)
at t.handleEvent (mapstore2.js?edc02a99850a9b9dd1e6:32)
at t.handleMapBrowserEvent (mapstore2.js?edc02a99850a9b9dd1e6:39)
at t (mapstore2.js?edc02a99850a9b9dd1e6:6)
at t.dispatchEvent (mapstore2.js?edc02a99850a9b9dd1e6:17)
at t.handlePointerUp_ (mapstore2.js?edc02a99850a9b9dd1e6:84)
at t (mapstore2.js?edc02a99850a9b9dd1e6:6)
at t.dispatchEvent (mapstore2.js?edc02a99850a9b9dd1e6:17)
at t.fireNativeEvent (mapstore2.js?edc02a99850a9b9dd1e6:81)
at t.i (mapstore2.js?edc02a99850a9b9dd1e6:126)
t.setLayout @ mapstore2.js?edc02a99850a9b9dd1e6:13
t.setCoordinates @ mapstore2.js?edc02a99850a9b9dd1e6:9
t @ mapstore2.js?edc02a99850a9b9dd1e6:9
s.geometryFunction @ mapstore2.js?edc02a99850a9b9dd1e6:133
t.startDrawing_ @ mapstore2.js?edc02a99850a9b9dd1e6:33
t.handleUpEvent @ mapstore2.js?edc02a99850a9b9dd1e6:32
t.handleEvent @ mapstore2.js?edc02a99850a9b9dd1e6:11
t.handleEvent @ mapstore2.js?edc02a99850a9b9dd1e6:32
t.handleMapBrowserEvent @ mapstore2.js?edc02a99850a9b9dd1e6:39
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
t.dispatchEvent @ mapstore2.js?edc02a99850a9b9dd1e6:17
t.handlePointerUp_ @ mapstore2.js?edc02a99850a9b9dd1e6:84
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
t.dispatchEvent @ mapstore2.js?edc02a99850a9b9dd1e6:17
t.fireNativeEvent @ mapstore2.js?edc02a99850a9b9dd1e6:81
i @ mapstore2.js?edc02a99850a9b9dd1e6:126
t.eventHandler_ @ mapstore2.js?edc02a99850a9b9dd1e6:81
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
68mapstore2.js?edc02a99850a9b9dd1e6:33 Uncaught TypeError: Cannot read property 'getGeometry' of null
at t.modifyDrawing_ (mapstore2.js?edc02a99850a9b9dd1e6:33)
at t.handlePointerMove_ (mapstore2.js?edc02a99850a9b9dd1e6:32)
at t.handleEvent (mapstore2.js?edc02a99850a9b9dd1e6:32)
at t.handleMapBrowserEvent (mapstore2.js?edc02a99850a9b9dd1e6:39)
at t (mapstore2.js?edc02a99850a9b9dd1e6:6)
at t.dispatchEvent (mapstore2.js?edc02a99850a9b9dd1e6:17)
at t.relayEvent_ (mapstore2.js?edc02a99850a9b9dd1e6:84)
at t (mapstore2.js?edc02a99850a9b9dd1e6:6)
at t.dispatchEvent (mapstore2.js?edc02a99850a9b9dd1e6:17)
at t.fireNativeEvent (mapstore2.js?edc02a99850a9b9dd1e6:81)
at t.o (mapstore2.js?edc02a99850a9b9dd1e6:126)
at t.eventHandler_ (mapstore2.js?edc02a99850a9b9dd1e6:81)
at HTMLDivElement.t (mapstore2.js?edc02a99850a9b9dd1e6:6)
t.modifyDrawing_ @ mapstore2.js?edc02a99850a9b9dd1e6:33
t.handlePointerMove_ @ mapstore2.js?edc02a99850a9b9dd1e6:32
t.handleEvent @ mapstore2.js?edc02a99850a9b9dd1e6:32
t.handleMapBrowserEvent @ mapstore2.js?edc02a99850a9b9dd1e6:39
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
t.dispatchEvent @ mapstore2.js?edc02a99850a9b9dd1e6:17
t.relayEvent_ @ mapstore2.js?edc02a99850a9b9dd1e6:84
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
t.dispatchEvent @ mapstore2.js?edc02a99850a9b9dd1e6:17
t.fireNativeEvent @ mapstore2.js?edc02a99850a9b9dd1e6:81
o @ mapstore2.js?edc02a99850a9b9dd1e6:126
t.eventHandler_ @ mapstore2.js?edc02a99850a9b9dd1e6:81
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [ ] Chrome
- [ ] Firefox
- [ ] Safari
*Steps to reproduce*
- Open a map
- Import a vector layer
- Select the layer in TOC and open the Filter Layer tool
- Select Rectangle in ROI section and try to draw it
*Expected Result*
- You can draw the rectangle ROI on the map
*Current Result*
- You cannot draw the rectangle ROI on the map and the exceptions above are thrown
### Other useful information (optional):
The recent OL update to v5 could be involved.
|
1.0
|
Error drawing the Rectangle ROI in Query Builder - ### Description
The Query Builder throws some exceptions while drawing the rectangle ROI (spatial filter)
mapstore2.js?edc02a99850a9b9dd1e6:13 Uncaught TypeError: Cannot read property 'length' of null
at t.setLayout (mapstore2.js?edc02a99850a9b9dd1e6:13)
at t.setCoordinates (mapstore2.js?edc02a99850a9b9dd1e6:9)
at new t (mapstore2.js?edc02a99850a9b9dd1e6:9)
at t.s.geometryFunction [as geometryFunction_] (mapstore2.js?edc02a99850a9b9dd1e6:133)
at t.startDrawing_ (mapstore2.js?edc02a99850a9b9dd1e6:33)
at t.handleUpEvent (mapstore2.js?edc02a99850a9b9dd1e6:32)
at t.handleEvent (mapstore2.js?edc02a99850a9b9dd1e6:11)
at t.handleEvent (mapstore2.js?edc02a99850a9b9dd1e6:32)
at t.handleMapBrowserEvent (mapstore2.js?edc02a99850a9b9dd1e6:39)
at t (mapstore2.js?edc02a99850a9b9dd1e6:6)
at t.dispatchEvent (mapstore2.js?edc02a99850a9b9dd1e6:17)
at t.handlePointerUp_ (mapstore2.js?edc02a99850a9b9dd1e6:84)
at t (mapstore2.js?edc02a99850a9b9dd1e6:6)
at t.dispatchEvent (mapstore2.js?edc02a99850a9b9dd1e6:17)
at t.fireNativeEvent (mapstore2.js?edc02a99850a9b9dd1e6:81)
at t.i (mapstore2.js?edc02a99850a9b9dd1e6:126)
t.setLayout @ mapstore2.js?edc02a99850a9b9dd1e6:13
t.setCoordinates @ mapstore2.js?edc02a99850a9b9dd1e6:9
t @ mapstore2.js?edc02a99850a9b9dd1e6:9
s.geometryFunction @ mapstore2.js?edc02a99850a9b9dd1e6:133
t.startDrawing_ @ mapstore2.js?edc02a99850a9b9dd1e6:33
t.handleUpEvent @ mapstore2.js?edc02a99850a9b9dd1e6:32
t.handleEvent @ mapstore2.js?edc02a99850a9b9dd1e6:11
t.handleEvent @ mapstore2.js?edc02a99850a9b9dd1e6:32
t.handleMapBrowserEvent @ mapstore2.js?edc02a99850a9b9dd1e6:39
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
t.dispatchEvent @ mapstore2.js?edc02a99850a9b9dd1e6:17
t.handlePointerUp_ @ mapstore2.js?edc02a99850a9b9dd1e6:84
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
t.dispatchEvent @ mapstore2.js?edc02a99850a9b9dd1e6:17
t.fireNativeEvent @ mapstore2.js?edc02a99850a9b9dd1e6:81
i @ mapstore2.js?edc02a99850a9b9dd1e6:126
t.eventHandler_ @ mapstore2.js?edc02a99850a9b9dd1e6:81
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
68mapstore2.js?edc02a99850a9b9dd1e6:33 Uncaught TypeError: Cannot read property 'getGeometry' of null
at t.modifyDrawing_ (mapstore2.js?edc02a99850a9b9dd1e6:33)
at t.handlePointerMove_ (mapstore2.js?edc02a99850a9b9dd1e6:32)
at t.handleEvent (mapstore2.js?edc02a99850a9b9dd1e6:32)
at t.handleMapBrowserEvent (mapstore2.js?edc02a99850a9b9dd1e6:39)
at t (mapstore2.js?edc02a99850a9b9dd1e6:6)
at t.dispatchEvent (mapstore2.js?edc02a99850a9b9dd1e6:17)
at t.relayEvent_ (mapstore2.js?edc02a99850a9b9dd1e6:84)
at t (mapstore2.js?edc02a99850a9b9dd1e6:6)
at t.dispatchEvent (mapstore2.js?edc02a99850a9b9dd1e6:17)
at t.fireNativeEvent (mapstore2.js?edc02a99850a9b9dd1e6:81)
at t.o (mapstore2.js?edc02a99850a9b9dd1e6:126)
at t.eventHandler_ (mapstore2.js?edc02a99850a9b9dd1e6:81)
at HTMLDivElement.t (mapstore2.js?edc02a99850a9b9dd1e6:6)
t.modifyDrawing_ @ mapstore2.js?edc02a99850a9b9dd1e6:33
t.handlePointerMove_ @ mapstore2.js?edc02a99850a9b9dd1e6:32
t.handleEvent @ mapstore2.js?edc02a99850a9b9dd1e6:32
t.handleMapBrowserEvent @ mapstore2.js?edc02a99850a9b9dd1e6:39
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
t.dispatchEvent @ mapstore2.js?edc02a99850a9b9dd1e6:17
t.relayEvent_ @ mapstore2.js?edc02a99850a9b9dd1e6:84
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
t.dispatchEvent @ mapstore2.js?edc02a99850a9b9dd1e6:17
t.fireNativeEvent @ mapstore2.js?edc02a99850a9b9dd1e6:81
o @ mapstore2.js?edc02a99850a9b9dd1e6:126
t.eventHandler_ @ mapstore2.js?edc02a99850a9b9dd1e6:81
t @ mapstore2.js?edc02a99850a9b9dd1e6:6
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [ ] Chrome
- [ ] Firefox
- [ ] Safari
*Steps to reproduce*
- Open a map
- Import a vector layer
- Select the layer in TOC and open the Filter Layer tool
- Select Rectangle in ROI section and try to draw it
*Expected Result*
- You can draw the rectangle ROI on the map
*Current Result*
- You cannot draw the rectangle ROI on the map and the exceptions above are thrown
### Other useful information (optional):
The recent OL update to v5 could be involved.
|
priority
|
error drawing the rectangle roi in query builder description the query builder throws some exceptions while drawing the rectangle roi spatial filter js uncaught typeerror cannot read property length of null at t setlayout js at t setcoordinates js at new t js at t s geometryfunction js at t startdrawing js at t handleupevent js at t handleevent js at t handleevent js at t handlemapbrowserevent js at t js at t dispatchevent js at t handlepointerup js at t js at t dispatchevent js at t firenativeevent js at t i js t setlayout js t setcoordinates js t js s geometryfunction js t startdrawing js t handleupevent js t handleevent js t handleevent js t handlemapbrowserevent js t js t dispatchevent js t handlepointerup js t js t dispatchevent js t firenativeevent js i js t eventhandler js t js js uncaught typeerror cannot read property getgeometry of null at t modifydrawing js at t handlepointermove js at t handleevent js at t handlemapbrowserevent js at t js at t dispatchevent js at t relayevent js at t js at t dispatchevent js at t firenativeevent js at t o js at t eventhandler js at htmldivelement t js t modifydrawing js t handlepointermove js t handleevent js t handlemapbrowserevent js t js t dispatchevent js t relayevent js t js t dispatchevent js t firenativeevent js o js t eventhandler js t js in case of bug otherwise remove this paragraph browser affected use this site for non expert users internet explorer chrome firefox safari steps to reproduce open a map import a vector layer select the layer in toc and open the filter layer tool select rectangle in roi section and try to draw it expected result you can draw the rectangle roi on the map current result you cannot draw the rectangle roi on the map and the exceptions above are thrown other useful information optional the recent ol update to could be involved
| 1
|
466,410
| 13,401,410,061
|
IssuesEvent
|
2020-09-03 17:16:09
|
alibaba/nacos
|
https://api.github.com/repos/alibaba/nacos
|
closed
|
【BUG】naming raft remove the wrong file
|
area/Naming kind/bug priority/high
|
<!-- Here is for bug reports and feature requests ONLY!
If you're looking for help, please check our mail list、WeChat group and the Gitter room.
Please try to use English to describe your issue, or at least provide a snippet of English translation.
我们鼓励使用英文,如果不能直接使用,可以使用翻译软件,您仍旧可以保留中文原文。
-->
**Describe the bug**
All persistent service information could not be saved to a file because the wrong file was removed
**Expected behavior**
Files in old formats should be removed
**Acutally behavior**
```java
// remove old format file:
if (StringUtils.isNoneBlank(namespaceId)) {
if (datum.key.contains(Constants.DEFAULT_GROUP + Constants.SERVICE_INFO_SPLITER)) {
String oldDatumKey = datum.key
.replace(Constants.DEFAULT_GROUP + Constants.SERVICE_INFO_SPLITER, StringUtils.EMPTY);
cacheFile = cacheFile(cacheFileName(namespaceId, datum.key));
if (cacheFile.exists() && !cacheFile.delete()) {
Loggers.RAFT.error("[RAFT-DELETE] failed to delete old format datum: {}, value: {}", datum.key,
datum.value);
throw new IllegalStateException("failed to delete old format datum: " + datum.key);
}
}
}
```
**How to Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Desktop (please complete the following information):**
- OS: [Mac os]
- Version [nacos-server 1.3.1+]
- Module [naming]
**Additional context**
Add any other context about the problem here.
|
1.0
|
【BUG】naming raft remove the wrong file - <!-- Here is for bug reports and feature requests ONLY!
If you're looking for help, please check our mail list、WeChat group and the Gitter room.
Please try to use English to describe your issue, or at least provide a snippet of English translation.
我们鼓励使用英文,如果不能直接使用,可以使用翻译软件,您仍旧可以保留中文原文。
-->
**Describe the bug**
All persistent service information could not be saved to a file because the wrong file was removed
**Expected behavior**
Files in old formats should be removed
**Acutally behavior**
```java
// remove old format file:
if (StringUtils.isNoneBlank(namespaceId)) {
if (datum.key.contains(Constants.DEFAULT_GROUP + Constants.SERVICE_INFO_SPLITER)) {
String oldDatumKey = datum.key
.replace(Constants.DEFAULT_GROUP + Constants.SERVICE_INFO_SPLITER, StringUtils.EMPTY);
cacheFile = cacheFile(cacheFileName(namespaceId, datum.key));
if (cacheFile.exists() && !cacheFile.delete()) {
Loggers.RAFT.error("[RAFT-DELETE] failed to delete old format datum: {}, value: {}", datum.key,
datum.value);
throw new IllegalStateException("failed to delete old format datum: " + datum.key);
}
}
}
```
**How to Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Desktop (please complete the following information):**
- OS: [Mac os]
- Version [nacos-server 1.3.1+]
- Module [naming]
**Additional context**
Add any other context about the problem here.
|
priority
|
【bug】naming raft remove the wrong file here is for bug reports and feature requests only if you re looking for help please check our mail list、wechat group and the gitter room please try to use english to describe your issue or at least provide a snippet of english translation 我们鼓励使用英文,如果不能直接使用,可以使用翻译软件,您仍旧可以保留中文原文。 describe the bug all persistent service information could not be saved to a file because the wrong file was removed expected behavior files in old formats should be removed acutally behavior java remove old format file if stringutils isnoneblank namespaceid if datum key contains constants default group constants service info spliter string olddatumkey datum key replace constants default group constants service info spliter stringutils empty cachefile cachefile cachefilename namespaceid datum key if cachefile exists cachefile delete loggers raft error failed to delete old format datum value datum key datum value throw new illegalstateexception failed to delete old format datum datum key how to reproduce steps to reproduce the behavior go to click on scroll down to see error desktop please complete the following information os version module additional context add any other context about the problem here
| 1
|
259,222
| 8,195,264,060
|
IssuesEvent
|
2018-08-31 04:54:42
|
magda-io/magda
|
https://api.github.com/repos/magda-io/magda
|
closed
|
CSW data source `australian-institute-of-marine-science` Crawling URL needs update
|
priority: high
|
### Problem description
@maxious found this dataset:
https://dev.magda.io/dataset/ds-aims-6292acd0-7616-11dc-885e-00008a07204e/details
This dataset has `NASA` as publisher name.
But the source XML doesn't seem to have the any `NASA` data.
The database shows this dataset was last updated on 2018-01-19 01:26:45.465435+00
Source aspect shows:
```
{"id": "aims", "url": "http://data.aims.gov.au/geonetwork/srv/eng/csw?service=CSW&version=2.0.2&request=GetRecordById&elementsetname=full&outputschema=http%3A%2F%2Fwww.isotc211.org%2F2005%2Fgmd&typeNames=gmd%3AMD_Metadata&id=6292acd0-7616-11dc-885e-00008a07204e", "name": "Australian Institute of Marine Science", "type": "csw-dataset"}
```
XML fetched at that time does include `NASA` data.
Noticed the data fetch URL `http://data.aims.gov.au/geonetwork/srv/eng/csw?service=CSW&version=2.0.2&request=GetRecordById&elementsetname=full&outputschema=http%3A%2F%2Fwww.isotc211.org%2F2005%2Fgmd&typeNames=gmd%3AMD_Metadata&id=6292acd0-7616-11dc-885e-00008a07204e` doesn't work anymore.
Suspect the correct one should be:
`http://data.aims.gov.au/geonetwork/srv/eng/csw?service=CSW&version=2.0.2&request=GetRecordById&elementsetname=full&outputschema=http%3A%2F%2Fwww.isotc211.org%2F2005%2Fgmd&typeNames=gmd%3AMD_Metadata&uuid=6292acd0-7616-11dc-885e-00008a07204e`
If so, we might have a connector config needs to be fixed
### Problem reproduction steps
### Screenshot / Design / File reference
|
1.0
|
CSW data source `australian-institute-of-marine-science` Crawling URL needs update - ### Problem description
@maxious found this dataset:
https://dev.magda.io/dataset/ds-aims-6292acd0-7616-11dc-885e-00008a07204e/details
This dataset has `NASA` as publisher name.
But the source XML doesn't seem to have the any `NASA` data.
The database shows this dataset was last updated on 2018-01-19 01:26:45.465435+00
Source aspect shows:
```
{"id": "aims", "url": "http://data.aims.gov.au/geonetwork/srv/eng/csw?service=CSW&version=2.0.2&request=GetRecordById&elementsetname=full&outputschema=http%3A%2F%2Fwww.isotc211.org%2F2005%2Fgmd&typeNames=gmd%3AMD_Metadata&id=6292acd0-7616-11dc-885e-00008a07204e", "name": "Australian Institute of Marine Science", "type": "csw-dataset"}
```
XML fetched at that time does include `NASA` data.
Noticed the data fetch URL `http://data.aims.gov.au/geonetwork/srv/eng/csw?service=CSW&version=2.0.2&request=GetRecordById&elementsetname=full&outputschema=http%3A%2F%2Fwww.isotc211.org%2F2005%2Fgmd&typeNames=gmd%3AMD_Metadata&id=6292acd0-7616-11dc-885e-00008a07204e` doesn't work anymore.
Suspect the correct one should be:
`http://data.aims.gov.au/geonetwork/srv/eng/csw?service=CSW&version=2.0.2&request=GetRecordById&elementsetname=full&outputschema=http%3A%2F%2Fwww.isotc211.org%2F2005%2Fgmd&typeNames=gmd%3AMD_Metadata&uuid=6292acd0-7616-11dc-885e-00008a07204e`
If so, we might have a connector config needs to be fixed
### Problem reproduction steps
### Screenshot / Design / File reference
|
priority
|
csw data source australian institute of marine science crawling url needs update problem description maxious found this dataset this dataset has nasa as publisher name but the source xml doesn t seem to have the any nasa data the database shows this dataset was last updated on source aspect shows id aims url name australian institute of marine science type csw dataset xml fetched at that time does include nasa data noticed the data fetch url doesn t work anymore suspect the correct one should be if so we might have a connector config needs to be fixed problem reproduction steps screenshot design file reference
| 1
|
392,495
| 11,592,161,908
|
IssuesEvent
|
2020-02-24 10:53:42
|
luna/ide
|
https://api.github.com/repos/luna/ide
|
closed
|
Text Controller
|
Category: IDE Change: Non-Breaking Difficulty: Core Contributor Priority: Highest Type: Enhancement
|
### Summary
We need an implementation of Text Controller using File Manager Client to save and load source files. The Text Editor should have option to call save/load methods of Text Controller.
### Value
A fully functional TextController ready to be integrated with TextEditor.
### Specification
- The text controller should use File Manager Client
### Acceptance Criteria & Test Cases
a unit test showing how TextController uses FileManagerClient
|
1.0
|
Text Controller - ### Summary
We need an implementation of Text Controller using File Manager Client to save and load source files. The Text Editor should have option to call save/load methods of Text Controller.
### Value
A fully functional TextController ready to be integrated with TextEditor.
### Specification
- The text controller should use File Manager Client
### Acceptance Criteria & Test Cases
a unit test showing how TextController uses FileManagerClient
|
priority
|
text controller summary we need an implementation of text controller using file manager client to save and load source files the text editor should have option to call save load methods of text controller value a fully functional textcontroller ready to be integrated with texteditor specification the text controller should use file manager client acceptance criteria test cases a unit test showing how textcontroller uses filemanagerclient
| 1
|
217,650
| 7,326,787,313
|
IssuesEvent
|
2018-03-04 00:34:03
|
angrykoala/wendigo
|
https://api.github.com/repos/angrykoala/wendigo
|
closed
|
Regex support for expectations
|
enhancement high priority
|
The following assertions may use regex:
* text
* title
* ~~class~~
* ~~value~~
|
1.0
|
Regex support for expectations - The following assertions may use regex:
* text
* title
* ~~class~~
* ~~value~~
|
priority
|
regex support for expectations the following assertions may use regex text title class value
| 1
|
307,378
| 9,416,389,922
|
IssuesEvent
|
2019-04-10 14:35:22
|
CredentialEngine/CredentialRegistry
|
https://api.github.com/repos/CredentialEngine/CredentialRegistry
|
closed
|
Enabling skip validation when publishing to the endpoint where registry manages the envelope
|
High Priority
|
@rsaksida @science
As a followup to the Email from Oct. 13, 2018, I added this issue to track this task/request.
Team
As you know there are two approaches for publishing to the registry.
Using self-signed envelope:
https://credentialengineregistry.org/ce-registry/envelopes?update_if_exists=true
Using URL where registry signs envelopes:
https://credentialengineregistry.org/resources/organizations/{0}/documents
Previous to the current update, we had the registry use a Json validation schema to validate the document being published. We are not using the Json validation schema at this time for the new format. Using the first method we have been appending skip_vaidation=true so the Json validation schema is not used.
We need to also skip the Json validation for the second method of publishing.
We need to have the JSON validation turned off, or alternatively enable the equivalent of skip_validation for the second method?
|
1.0
|
Enabling skip validation when publishing to the endpoint where registry manages the envelope - @rsaksida @science
As a followup to the Email from Oct. 13, 2018, I added this issue to track this task/request.
Team
As you know there are two approaches for publishing to the registry.
Using self-signed envelope:
https://credentialengineregistry.org/ce-registry/envelopes?update_if_exists=true
Using URL where registry signs envelopes:
https://credentialengineregistry.org/resources/organizations/{0}/documents
Previous to the current update, we had the registry use a Json validation schema to validate the document being published. We are not using the Json validation schema at this time for the new format. Using the first method we have been appending skip_vaidation=true so the Json validation schema is not used.
We need to also skip the Json validation for the second method of publishing.
We need to have the JSON validation turned off, or alternatively enable the equivalent of skip_validation for the second method?
|
priority
|
enabling skip validation when publishing to the endpoint where registry manages the envelope rsaksida science as a followup to the email from oct i added this issue to track this task request team as you know there are two approaches for publishing to the registry using self signed envelope using url where registry signs envelopes previous to the current update we had the registry use a json validation schema to validate the document being published we are not using the json validation schema at this time for the new format using the first method we have been appending skip vaidation true so the json validation schema is not used we need to also skip the json validation for the second method of publishing we need to have the json validation turned off or alternatively enable the equivalent of skip validation for the second method
| 1
|
695,591
| 23,865,174,471
|
IssuesEvent
|
2022-09-07 10:22:50
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
_count doesn't update in many to many relation
|
bug/2-confirmed kind/bug team/client topic: database-provider/planetscale topic: _count topic: referentialIntegrity priority/high size/s
|
### Bug description
I have many-to-many relations in my schema like brand and category. I am using _.count to see how many brands are there in a category. It seems to work but when I delete one of the brand it does not update _.count in the category.
This is the model :
```prisma
model Category {
id String @id @default(cuid())
name String @unique
createdAt DateTime @default(now())
brands Brand[]
}
model Brand {
id String @id @default(cuid())
name String @unique
createdAt DateTime @default(now())
categories Category[]
}
```
I am showing how many brands in a category by pulling the data like this
```ts
const Data = await prisma.category.findMany({
include: {
_count: {
select: { brands: true }
}
}
});
```
It works fine. It shows how many brands are in a category.
When I try to delete a brand like this;
```ts
prisma.brand
.delete({
where: {
id: ID
}
})
```
The brand gets deleted but the category list doesn't update. It still includes that brand in _count.
I have deleted the same way in one-to-many relation where _count updated. How do I do it in many to many?
### How to reproduce
The project is complicated so If you want you can use just two models to follow the bugs
```prisma
model Category {
id String @id @default(cuid())
name String @unique
createdAt DateTime @default(now())
brands Brand[]
}
model Brand {
id String @id @default(cuid())
name String @unique
createdAt DateTime @default(now())
categories Category[]
}
```
### Expected behavior
When I delete a brand, the Category list (._count) also should be updated. like when I use this code to see.
```ts
const Data = await prisma.category.findMany({
include: {
_count: {
select: { brands: true }
}
}
});
```
### Environment & setup
- OS: Windows 10
- Database: MySQL
- Node.js version: v16.14.1
### Prisma Version
```
prisma : 3.11.0
@prisma/client : 3.11.0
Current platform : windows
Query Engine (Node-API) : libquery-engine b371888aaf8f51357c7457d836b86d12da91658b (at node_modules\@prisma\engines\query_engine-windows.dll.node)
Migration Engine : migration-engine-cli b371888aaf8f51357c7457d836b86d12da91658b (at node_modules\@prisma\engines\migration-engine-windows.exe)
Introspection Engine : introspection-core b371888aaf8f51357c7457d836b86d12da91658b (at node_modules\@prisma\engines\introspection-engine-windows.exe)
Format Binary : prisma-fmt b371888aaf8f51357c7457d836b86d12da91658b (at node_modules\@prisma\engines\prisma-fmt-windows.exe)
Default Engines Hash : b371888aaf8f51357c7457d836b86d12da91658b
Studio : 0.458.0
Preview Features : referentialIntegrity
```
|
1.0
|
_count doesn't update in many to many relation - ### Bug description
I have many-to-many relations in my schema like brand and category. I am using _.count to see how many brands are there in a category. It seems to work but when I delete one of the brand it does not update _.count in the category.
This is the model :
```prisma
model Category {
id String @id @default(cuid())
name String @unique
createdAt DateTime @default(now())
brands Brand[]
}
model Brand {
id String @id @default(cuid())
name String @unique
createdAt DateTime @default(now())
categories Category[]
}
```
I am showing how many brands in a category by pulling the data like this
```ts
const Data = await prisma.category.findMany({
include: {
_count: {
select: { brands: true }
}
}
});
```
It works fine. It shows how many brands are in a category.
When I try to delete a brand like this;
```ts
prisma.brand
.delete({
where: {
id: ID
}
})
```
The brand gets deleted but the category list doesn't update. It still includes that brand in _count.
I have deleted the same way in one-to-many relation where _count updated. How do I do it in many to many?
### How to reproduce
The project is complicated so If you want you can use just two models to follow the bugs
```prisma
model Category {
id String @id @default(cuid())
name String @unique
createdAt DateTime @default(now())
brands Brand[]
}
model Brand {
id String @id @default(cuid())
name String @unique
createdAt DateTime @default(now())
categories Category[]
}
```
### Expected behavior
When I delete a brand, the Category list (._count) also should be updated. like when I use this code to see.
```ts
const Data = await prisma.category.findMany({
include: {
_count: {
select: { brands: true }
}
}
});
```
### Environment & setup
- OS: Windows 10
- Database: MySQL
- Node.js version: v16.14.1
### Prisma Version
```
prisma : 3.11.0
@prisma/client : 3.11.0
Current platform : windows
Query Engine (Node-API) : libquery-engine b371888aaf8f51357c7457d836b86d12da91658b (at node_modules\@prisma\engines\query_engine-windows.dll.node)
Migration Engine : migration-engine-cli b371888aaf8f51357c7457d836b86d12da91658b (at node_modules\@prisma\engines\migration-engine-windows.exe)
Introspection Engine : introspection-core b371888aaf8f51357c7457d836b86d12da91658b (at node_modules\@prisma\engines\introspection-engine-windows.exe)
Format Binary : prisma-fmt b371888aaf8f51357c7457d836b86d12da91658b (at node_modules\@prisma\engines\prisma-fmt-windows.exe)
Default Engines Hash : b371888aaf8f51357c7457d836b86d12da91658b
Studio : 0.458.0
Preview Features : referentialIntegrity
```
|
priority
|
count doesn t update in many to many relation bug description i have many to many relations in my schema like brand and category i am using count to see how many brands are there in a category it seems to work but when i delete one of the brand it does not update count in the category this is the model prisma model category id string id default cuid name string unique createdat datetime default now brands brand model brand id string id default cuid name string unique createdat datetime default now categories category i am showing how many brands in a category by pulling the data like this ts const data await prisma category findmany include count select brands true it works fine it shows how many brands are in a category when i try to delete a brand like this ts prisma brand delete where id id the brand gets deleted but the category list doesn t update it still includes that brand in count i have deleted the same way in one to many relation where count updated how do i do it in many to many how to reproduce the project is complicated so if you want you can use just two models to follow the bugs prisma model category id string id default cuid name string unique createdat datetime default now brands brand model brand id string id default cuid name string unique createdat datetime default now categories category expected behavior when i delete a brand the category list count also should be updated like when i use this code to see ts const data await prisma category findmany include count select brands true environment setup os windows database mysql node js version prisma version prisma prisma client current platform windows query engine node api libquery engine at node modules prisma engines query engine windows dll node migration engine migration engine cli at node modules prisma engines migration engine windows exe introspection engine introspection core at node modules prisma engines introspection engine windows exe format binary prisma fmt at node modules prisma engines prisma fmt windows exe default engines hash studio preview features referentialintegrity
| 1
|
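The row above describes Prisma's `_count` on an implicit many-to-many relation appearing stale after a `delete`. The mechanics can be sketched in memory: the join table must lose its rows for the deleted brand, otherwise the count keeps including it. This is an illustration only — the model names come from the issue, the join-array representation is an assumption, not Prisma's internals.

```typescript
// In-memory sketch of the category <-> brand m-n relation from the issue.
// The implicit join table is modeled as [categoryId, brandId] pairs.
type Category = { id: string; name: string };
type Brand = { id: string; name: string };

const categories: Category[] = [{ id: "c1", name: "Shoes" }];
const brands: Brand[] = [
  { id: "b1", name: "Acme" },
  { id: "b2", name: "Globex" },
];
let joins: Array<[string, string]> = [["c1", "b1"], ["c1", "b2"]];

// Equivalent of `_count: { select: { brands: true } }` for one category.
function brandCount(categoryId: string): number {
  return joins.filter(([c]) => c === categoryId).length;
}

// Deleting a brand must also drop its join rows; if only the brand row
// goes away, the count keeps the stale entry -- the symptom reported.
function deleteBrand(brandId: string): void {
  joins = joins.filter(([, b]) => b !== brandId);
  const i = brands.findIndex((b) => b.id === brandId);
  if (i >= 0) brands.splice(i, 1);
}
```

In real Prisma a `delete` does maintain the implicit join table, so a stale `_count` usually means the client is re-rendering a previously fetched result rather than re-running the `findMany` query.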
460,753
| 13,217,765,804
|
IssuesEvent
|
2020-08-17 07:27:52
|
vacuumlabs/adalite
|
https://api.github.com/repos/vacuumlabs/adalite
|
closed
|
Set up WebUSB support
|
high priority needs work
|
We would like to start using WebUSB for Mac and Linux Ledger users (Windows users should still be served through U2F)
https://github.com/vacuumlabs/adalite/issues/469
- [x] Setup WebUSB Ledger support on one of our staging environments
- [x] Quickly test with some of our users
- [x] If works, deploy to production
|
1.0
|
Set up WebUSB support - We would like to start using WebUSB for Mac and Linux Ledger users (Windows users should still be served through U2F)
https://github.com/vacuumlabs/adalite/issues/469
- [x] Setup WebUSB Ledger support on one of our staging environments
- [x] Quicky test with some of our users
- [x] If works, deploy to production
|
priority
|
set up webusb support we would like to start using webusb for mac and linux ledger users windows users should be still served though setup webusb ledger support on one of our staging environments quicky test with some of our users if works deploy to production
| 1
|
339,609
| 10,256,834,735
|
IssuesEvent
|
2019-08-21 18:37:23
|
onaio/reveal-frontend
|
https://api.github.com/repos/onaio/reveal-frontend
|
opened
|
Open IRS Planning Jurisdiction Selection with no Jurisdictions selected
|
Priority: High enhancement
|
Currently a new IRS Plan will load in the Jurisdiction Selection page with all Jurisdictions selected. We need to flip this to have no Jurisdictions selected on initial new Plan page load to make it an additive process of selecting Jurisdictions.
|
1.0
|
Open IRS Planning Jurisdiction Selection with no Jurisdictions selected - Currently a new IRS Plan will load in the Jurisdiction Selection page with all Jurisdictions selected. We need to flip this to have no Jurisdictions selected on initial new Plan page load to make it an additive process of selecting Jurisdictions.
|
priority
|
open irs planning jurisdiction selection with no jurisdictions selected currently a new irs plan will load in the jurisdiction selection page with all jurisdictions selected we need to flip this to have no jurisdictions selected on initial new plan page load to make it an additive process of selecting jurisdictions
| 1
|
565,726
| 16,768,229,883
|
IssuesEvent
|
2021-06-14 11:43:55
|
getkirby/kirby
|
https://api.github.com/repos/getkirby/kirby
|
closed
|
[3.6.0] Refactor `Panel\Panel` class
|
priority: high 🔥 type: refactoring :recycle:
|
As discussed in https://github.com/getkirby/kirby/pull/3327#pullrequestreview-676808165:
- Move `page`, `file` and `user` methods to a new class (maybe `Cms\Finder`?)
- Convert `Panel` class to a singleton that takes the `$kirby` object
Should be done in Kirby 3.6.0 as the new foundation can then be built upon in future releases. If we only do this later, it would be a breaking change.
|
1.0
|
[3.6.0] Refactor `Panel\Panel` class - As discussed in https://github.com/getkirby/kirby/pull/3327#pullrequestreview-676808165:
- Move `page`, `file` and `user` methods to a new class (maybe `Cms\Finder`?)
- Convert `Panel` class to a singleton that takes the `$kirby` object
Should be done in Kirby 3.6.0 as the new foundation can then be built upon in future releases. If we only do this later, it would be a breaking change.
|
priority
|
refactor panel panel class as discussed in move page file and user methods to a new class maybe cms finder convert panel class to a singleton that takes the kirby object should be done in kirby as the new foundation can then be built upon in future releases if we only do this later it would be a breaking change
| 1
|
669,114
| 22,612,615,526
|
IssuesEvent
|
2022-06-29 18:35:31
|
tnc-ca-geo/animl-frontend
|
https://api.github.com/repos/tnc-ca-geo/animl-frontend
|
closed
|
If an image has no objects, label it as "needs review"
|
high priority
|
High priority b/c right now we're not getting any objects back from megadetector, so it looks like none of them need to be reviewed.
|
1.0
|
If an image has no objects, label it as "needs review" - High priority b/c right now we're not getting any objects back from megadetector, so it looks like none of them need to be reviewed.
|
priority
|
if an image has no objects label it as needs review high priority b c right now we re not getting any objects back from megadetector so it looks like none of them need to be reviewed
| 1
|
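The animl-frontend row above asks that images with zero detected objects be labeled "needs review" instead of appearing done. A minimal sketch of that rule — the `Image` shape and function name are hypothetical, not the project's actual API:

```typescript
type Image = { id: string; objects: unknown[]; label?: string };

// If the detector returned no objects, flag the image for human review
// rather than letting it look finished.
function applyReviewLabel(img: Image): Image {
  return img.objects.length === 0 ? { ...img, label: "needs review" } : img;
}
```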
155,873
| 5,962,330,051
|
IssuesEvent
|
2017-05-29 21:41:04
|
GluuFederation/oxAuth
|
https://api.github.com/repos/GluuFederation/oxAuth
|
closed
|
CORS filter doesn't seem to process pre-flight requests in CE 3.0.x
|
bug High priority
|
Environment:
CentOS6.7, Gluu CE 3.0.1
Steps to reproduce:
1. Edit file `WEB-INF/web.xml` **inside of** `oxauth.war` in any way which would result in behaviour that differs from defaults. For example, let's add "Authorization" header to the list of allowed headers, which will make it to look like this:
```
<!-- Cors -->
<filter>
<filter-name>CorsFilter</filter-name>
<filter-class>org.gluu.oxserver.filters.CorsFilter</filter-class>
<init-param>
<param-name>cors.allowed.origins</param-name>
<param-value>*</param-value>
</init-param>
<init-param>
<param-name>cors.allowed.headers</param-name>
<param-value>Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization</param-value>
</init-param>
<init-param>
<param-name>cors.support.credentials</param-name>
<param-value>true</param-value>
</init-param>
</filter>
```
2. Restart oxAuth service `# service oxauth restart`
3. Send next request to `userinfo` OIDC endpoint (it's an actual CORS pre-flight request used by some on-page OIDC javascript clients employing implicit flow):
```
OPTIONS /oxauth/seam/resource/restv1/oxauth/userinfo HTTP/1.1
Host: idp.gsu.edu
Connection: close
Access-Control-Request-Method: GET
Origin: http://oidc-js.site:5000
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36
Access-Control-Request-Headers: authorization
Accept: */*
Referer: http://oidc-js.site:5000/user-manager-sample.html
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8
```
Result:
Response to the proposed request looks like this:
```
HTTP/1.1 200 OK
Date: Mon, 15 May 2017 23:17:04 GMT
Server: Jetty(9.3.15.v20161220)
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubDomains
Allow: HEAD, POST, GET, OPTIONS
Content-Type: text/plain
Content-Length: 24
Access-Control-Allow-Origin: *
Connection: close
```
This seems to be a plain usual response to OPTIONS request instead of response to CORS pre-flight request (the latter should contain several "Access-Control-Allow-*" headers; "Access-Control-Allow-Origin" which can be seen there actually has nothing to do with the filter as it's set outside of Jetty, by Apache web server). Filter seems to maintain some basic functionality and even reacts to certain changes to its configurations (I was able to modify "SupportCredentials" settings), and even return some CORS headers in response sent to certain urls, but it at least doesn't produce a correct pre-flight response from `userinfo` endpoint what makes it inaccessible for on-page clients.
Expected result:
A correct response to such kind of request should look like this (additional CORS headers are possible):
```
HTTP/1.1 200 OK
Date: Mon, 15 May 2017 23:17:04 GMT
Server: Jetty(9.3.15.v20161220)
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubDomains
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization
Access-Control-Max-Age: 86400
Content-Type: text/plain
Content-Length: 24
Access-Control-Allow-Origin: *
Connection: close
```
Also please note that in case when "Access-Control-Allow-Credentials" headers is used in response and is set to "true", it's prohibited to use wildcard "*" for "Access-Control-Allow-Origin" and explicitly set origin must be used instead. As in requests to `userinfo` endpoint we must send `access token` in "Authorization" header, we must backup response to such request with "Access-Control-Allow-Credentials: true", or on-page javascript client won't be able to receive it.
For further details those pages may be useful:
- [link1](https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS#Preflighted_requests)
- [link2](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Credentials)
- [link3](https://fetch.spec.whatwg.org/#http-cors-protocol)
|
1.0
|
CORS filter doesn't seem to process pre-flight requests in CE 3.0.x - Environment:
CentOS6.7, Gluu CE 3.0.1
Steps to reproduce:
1. Edit file `WEB-INF/web.xml` **inside of** `oxauth.war` in any way which would result in behaviour that differs from defaults. For example, let's add "Authorization" header to the list of allowed headers, which will make it to look like this:
```
<!-- Cors -->
<filter>
<filter-name>CorsFilter</filter-name>
<filter-class>org.gluu.oxserver.filters.CorsFilter</filter-class>
<init-param>
<param-name>cors.allowed.origins</param-name>
<param-value>*</param-value>
</init-param>
<init-param>
<param-name>cors.allowed.headers</param-name>
<param-value>Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization</param-value>
</init-param>
<init-param>
<param-name>cors.support.credentials</param-name>
<param-value>true</param-value>
</init-param>
</filter>
```
2. Restart oxAuth service `# service oxauth restart`
3. Send next request to `userinfo` OIDC endpoint (it's an actual CORS pre-flight request used by some on-page OIDC javascript clients employing implicit flow):
```
OPTIONS /oxauth/seam/resource/restv1/oxauth/userinfo HTTP/1.1
Host: idp.gsu.edu
Connection: close
Access-Control-Request-Method: GET
Origin: http://oidc-js.site:5000
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36
Access-Control-Request-Headers: authorization
Accept: */*
Referer: http://oidc-js.site:5000/user-manager-sample.html
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8
```
Result:
Response to the proposed request looks like this:
```
HTTP/1.1 200 OK
Date: Mon, 15 May 2017 23:17:04 GMT
Server: Jetty(9.3.15.v20161220)
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubDomains
Allow: HEAD, POST, GET, OPTIONS
Content-Type: text/plain
Content-Length: 24
Access-Control-Allow-Origin: *
Connection: close
```
This seems to be a plain usual response to OPTIONS request instead of response to CORS pre-flight request (the latter should contain several "Access-Control-Allow-*" headers; "Access-Control-Allow-Origin" which can be seen there actually has nothing to do with the filter as it's set outside of Jetty, by Apache web server). Filter seems to maintain some basic functionality and even reacts to certain changes to its configurations (I was able to modify "SupportCredentials" settings), and even return some CORS headers in response sent to certain urls, but it at least doesn't produce a correct pre-flight response from `userinfo` endpoint what makes it inaccessible for on-page clients.
Expected result:
A correct response to such kind of request should look like this (additional CORS headers are possible):
```
HTTP/1.1 200 OK
Date: Mon, 15 May 2017 23:17:04 GMT
Server: Jetty(9.3.15.v20161220)
X-Xss-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000; includeSubDomains
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization
Access-Control-Max-Age: 86400
Content-Type: text/plain
Content-Length: 24
Access-Control-Allow-Origin: *
Connection: close
```
Also please note that in case when "Access-Control-Allow-Credentials" headers is used in response and is set to "true", it's prohibited to use wildcard "*" for "Access-Control-Allow-Origin" and explicitly set origin must be used instead. As in requests to `userinfo` endpoint we must send `access token` in "Authorization" header, we must backup response to such request with "Access-Control-Allow-Credentials: true", or on-page javascript client won't be able to receive it.
For further details those pages may be useful:
- [link1](https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS#Preflighted_requests)
- [link2](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Credentials)
- [link3](https://fetch.spec.whatwg.org/#http-cors-protocol)
|
priority
|
cors filter doesn t seem to process pre flight requests in ce x environment gluu ce steps to reproduce edit file web inf web xml inside of oxauth war in any way which would result in behaviour that differs from defaults for example let s add authorization header to the list of allowed headers which will make it to look like this corsfilter org gluu oxserver filters corsfilter cors allowed origins cors allowed headers origin accept x requested with content type access control request method access control request headers authorization cors support credentials true restart oxauth service service oxauth restart send next request to userinfo oidc endpoint it s an actual cors pre flight request used by some on page oidc javascript clients employing implicit flow options oxauth seam resource oxauth userinfo http host idp gsu edu connection close access control request method get origin user agent mozilla linux applewebkit khtml like gecko chrome safari access control request headers authorization accept referer accept encoding gzip deflate sdch br accept language en us en q result response to the proposed request looks like this http ok date mon may gmt server jetty x xss protection mode block x content type options nosniff strict transport security max age includesubdomains allow head post get options content type text plain content length access control allow origin connection close this seems to be a plain usual response to options request instead of response to cors pre flight request the latter should contain several access control allow headers access control allow origin which can be seen there actually has nothing to do with the filter as it s set outside of jetty by apache web server filter seems to maintain some basic functionality and even reacts to certain changes to its configurations i was able to modify supportcredentials settings and even return some cors headers in response sent to certain urls but it at least doesn t produce a correct pre flight 
response from userinfo endpoint what makes it inaccessible for on page clients expected result a correct response to such kind of request should look like this additional cors headers are possible http ok date mon may gmt server jetty x xss protection mode block x content type options nosniff strict transport security max age includesubdomains access control allow methods get post options access control allow headers origin accept x requested with content type access control request method access control request headers authorization access control max age content type text plain content length access control allow origin connection close also please note that in case when access control allow credentials headers is used in response and is set to true it s prohibited to use wildcard for access control allow origin and explicitly set origin must be used instead as in requests to userinfo endpoint we must send access token in authorization header we must backup response to such request with access control allow credentials true or on page javascript client won t be able to receive it for further details those pages may be useful
| 1
|
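The Gluu row above contrasts a plain `OPTIONS` response with a correct CORS preflight response, including the constraint that `Access-Control-Allow-Credentials: true` forbids the `*` wildcard origin. A sketch of building those headers — the allowed methods/headers mirror the `web.xml` init-params quoted in the issue; the function itself is illustrative, not the oxAuth filter:

```typescript
// Build the preflight headers the issue's "expected result" shows.
const ALLOWED_METHODS = ["GET", "POST", "OPTIONS"];
const ALLOWED_HEADERS = [
  "Origin", "Accept", "X-Requested-With", "Content-Type",
  "Access-Control-Request-Method", "Access-Control-Request-Headers",
  "Authorization",
];

function preflightHeaders(
  origin: string,
  supportsCredentials: boolean
): Record<string, string> {
  const h: Record<string, string> = {
    "Access-Control-Allow-Methods": ALLOWED_METHODS.join(", "),
    "Access-Control-Allow-Headers": ALLOWED_HEADERS.join(","),
    "Access-Control-Max-Age": "86400",
  };
  if (supportsCredentials) {
    // With credentials, "*" is forbidden: echo the concrete Origin,
    // exactly the constraint the issue points out for `userinfo`.
    h["Access-Control-Allow-Origin"] = origin;
    h["Access-Control-Allow-Credentials"] = "true";
  } else {
    h["Access-Control-Allow-Origin"] = "*";
  }
  return h;
}
```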
254,397
| 8,073,635,600
|
IssuesEvent
|
2018-08-06 19:58:22
|
phetsims/scenery
|
https://api.github.com/repos/phetsims/scenery
|
closed
|
Should NodeIO have special phetio options to control its methods?
|
dev:phet-io priority:2-high status:ready-for-review type:question
|
A lot of the phet-io customization that we want to have is directly from interfacing with NodeIO. Setting something visible/pickable is a large percentage of the conversation in design meetings.
Twice now it has come up that we don't want to be able to toggle visibility/pickablity, with the phetMenu, and the phetButton, and we have uninstrumented many things to achieve this goal as well. What if NodeIO had a way to say, "PhetMenu, you can still be a subType of mine, and you can elect to not be able to toggle visibility."
That would solve the 4th checkbox in https://github.com/phetsims/joist/issues/445#issuecomment-341248255. It also would help PhetButton, since we are trying to override the pickable functionality for a custom one that Sim.js controls (https://github.com/phetsims/joist/issues/453).
I don't know exactly how this would work, but `phetioNotVisibleToggleable: true` could keep the phet-io api from changing visibility on this type of Node.
It would be a challenge to try to document this in a client facing way. @samreid what do you think?
|
1.0
|
Should NodeIO have special phetio options to control its methods? - A lot of the phet-io customization that we want to have is directly from interfacing with NodeIO. Setting something visible/pickable is a large percentage of the conversation in design meetings.
Twice now it has come up that we don't want to be able to toggle visibility/pickablity, with the phetMenu, and the phetButton, and we have uninstrumented many things to achieve this goal as well. What if NodeIO had a way to say, "PhetMenu, you can still be a subType of mine, and you can elect to not be able to toggle visibility."
That would solve the 4th checkbox in https://github.com/phetsims/joist/issues/445#issuecomment-341248255. It also would help PhetButton, since we are trying to override the pickable functionality for a custom one that Sim.js controls (https://github.com/phetsims/joist/issues/453).
I don't know exactly how this would work, but `phetioNotVisibleToggleable: true` could keep the phet-io api from changing visibility on this type of Node.
It would be a challenge to try to document this in a client facing way. @samreid what do you think?
|
priority
|
should nodeio have special phetio options to control its methods a lot of the phet io customization that we want to have is directly from interfacing with nodeio setting something visible pickable is a large percentage of the conversation in design meetings twice now it has come up that we don t want to be able to toggle visibility pickablity with the phetmenu and the phetbutton and we have uninstrumented many things to achieve this goal as well what if nodeio had a way to say phetmenu you can still be a subtype of mine and you can elect to not be able to toggle visibility that would solve the checkbox in it also would help phetbutton since we are trying to override the pickable functionality for a custom one that sim js controls i don t know exactly how this would work but phetionotvisibletoggleable true could keep the phet io api from changing visibility on this type of node it would be a challenge to try to document this in a client facing way samreid what do you think
| 1
|
543,053
| 15,877,090,443
|
IssuesEvent
|
2021-04-09 09:12:03
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Rename the existing sub heading “Filter Users” as “List Users” in IS 5.6.0 doc.
|
Complexity/Low Component/SCIM Priority/Highest bug docs
|
In IS 5.6.0 SCIM2 documentation, under filter users, instead of describing filters, we discussed how to list the users using different attributes. [https://docs.wso2.com/display/IS560/apidocs/SCIM2-endpoints/#!/operations#UsersEndpoint#getUsersByPost](url)
|
1.0
|
Rename the existing sub heading “Filter Users” as “List Users” in IS 5.6.0 doc. - In IS 5.6.0 SCIM2 documentation, under filter users, instead of describing filters, we discussed how to list the users using different attributes. [https://docs.wso2.com/display/IS560/apidocs/SCIM2-endpoints/#!/operations#UsersEndpoint#getUsersByPost](url)
|
priority
|
rename the existing sub heading “filter users” as “list users” in is doc in is documentation under filter users instead of describing filters we discussed how to list the users using different attributes url
| 1
|
500,364
| 14,497,180,383
|
IssuesEvent
|
2020-12-11 13:53:29
|
ansible/awx
|
https://api.github.com/repos/ansible/awx
|
closed
|
[ui_next] Token modal does not display when creating token/application
|
component:ui_next priority:high state:in_progress type:bug
|
### ISSUE TYPE
- Bug Report
##### SUMMARY
Token modal does not display when creating token/application
##### ENVIRONMENT
* AWX version: a50034be3c9d8257e02a430ff5fc8bd6901b2b58
##### STEPS TO REPRODUCE
<!-- Please describe exactly how to reproduce the problem. -->
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!-- What actually happened? -->
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
|
1.0
|
[ui_next] Token modal does not display when creating token/application - ### ISSUE TYPE
- Bug Report
##### SUMMARY
Token modal does not display when creating token/application
##### ENVIRONMENT
* AWX version: a50034be3c9d8257e02a430ff5fc8bd6901b2b58
##### STEPS TO REPRODUCE
<!-- Please describe exactly how to reproduce the problem. -->
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!-- What actually happened? -->
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
|
priority
|
token modal does not display when creating token application issue type bug report summary token modal does not display when creating token application environment awx version steps to reproduce expected results actual results additional information include any links to sosreport database dumps screenshots or other information
| 1
|
479,093
| 13,791,261,344
|
IssuesEvent
|
2020-10-09 11:51:04
|
onaio/reveal-frontend
|
https://api.github.com/repos/onaio/reveal-frontend
|
closed
|
Duplicates when Downloading Jurisdiction Metadata on Namibia Production
|
Priority: High
|
- [ ] We previously had an issue where we got duplicates when downloading jurisdiction meta data on targets on Namibia Production. This was resolved and had been working OK. However, the issue appears to have recurred as shown on the screenshot below of a document that I downloaded.

|
1.0
|
Duplicates when Downloading Jurisdiction Metadata on Namibia Production - - [ ] We previously had an issue where we got duplicates when downloading jurisdiction meta data on targets on Namibia Production. This was resolved and had been working OK. However, the issue appears to have recurred as shown on the screenshot below of a document that I downloaded.

|
priority
|
duplicates when downloading jurisdiction metadata on namibia production we previously had an issue where we got duplicates when downloading jurisdiction meta data on targets on namibia production this was resolved and had been working ok however the issue appears to have recurred as shown on the screenshot below of a document that i downloaded
| 1
|
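The reveal-frontend row above reports duplicate rows recurring in downloaded jurisdiction metadata. A common fix is to deduplicate by a stable key before writing the file; here is a hedged sketch keeping the first occurrence per jurisdiction id — the `MetadataRow` shape and field names are assumptions, not the project's schema:

```typescript
type MetadataRow = { jurisdictionId: string; value: string };

// Keep the first occurrence of each jurisdiction id; later duplicates
// (the symptom in the downloaded document) are dropped.
function dedupeRows(rows: MetadataRow[]): MetadataRow[] {
  const seen = new Set<string>();
  return rows.filter((r) => {
    if (seen.has(r.jurisdictionId)) return false;
    seen.add(r.jurisdictionId);
    return true;
  });
}
```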
602,420
| 18,468,748,671
|
IssuesEvent
|
2021-10-17 11:12:45
|
AY2122S1-CS2113T-W11-1/tp
|
https://api.github.com/repos/AY2122S1-CS2113T-W11-1/tp
|
closed
|
Implement modification command to timetable
|
priority.High
|
Change the lessons in the timetable allowing users to update timetable whenever they want to.
|
1.0
|
Implement modification command to timetable - Change the lessons in the timetable allowing users to update timetable whenever they want to.
|
priority
|
implement modification command to timetable change the lessons in the timetable allowing users to update timetable whenever they want to
| 1
|
260,714
| 8,213,914,284
|
IssuesEvent
|
2018-09-04 21:06:52
|
inaturalist/iNaturalistAndroid
|
https://api.github.com/repos/inaturalist/iNaturalistAndroid
|
opened
|
Stop using the photo ID as the observation photo ID
|
High Priority bug
|
https://github.com/inaturalist/iNaturalistAndroid/blob/master/iNaturalist/src/main/java/org/inaturalist/android/INaturalistService.java#L4478
By doing this we make it impossible to update or delete an obs photo. Since we're using the photo ID instead, the user either gets a 404 when they try to PUT or DELETE to /observation_photos:id if that obs photo doesn't exist, or they get a 403 if it does but it doesn't belong to them, which seems to hold up sync.
I would also advise parsing this data out of the `observation_photos` part of the observation response, not the `photos`.
This will only be possible after a change to the API: https://github.com/inaturalist/iNaturalistAPI/issues/151
|
1.0
|
Stop using the photo ID as the observation photo ID - https://github.com/inaturalist/iNaturalistAndroid/blob/master/iNaturalist/src/main/java/org/inaturalist/android/INaturalistService.java#L4478
By doing this we make it impossible to update or delete an obs photo. Since we're using the photo ID instead, the user either gets a 404 when they try to PUT or DELETE to /observation_photos:id if that obs photo doesn't exist, or they get a 403 if it does but it doesn't belong to them, which seems to hold up sync.
I would also advise parsing this data out of the `observation_photos` part of the observation response, not the `photos`.
This will only be possible after a change to the API: https://github.com/inaturalist/iNaturalistAPI/issues/151
|
priority
|
stop using the photo id as the observation photo id by doing this we make it impossible to update or delete an obs photo since we re using the photo id instead the user either gets a when they try to put or delete to observation photos id if that obs photo doesn t exist or they get a if it does but it doesn t belong to them which seems to hold up sync i would also advice parsing this data out of the observation photos part of the observation response not the photos this will only be possible after a change to the api
| 1
|
451,634
| 13,039,561,668
|
IssuesEvent
|
2020-07-28 16:57:52
|
canonn-science/CAPIv2-Strapi
|
https://api.github.com/repos/canonn-science/CAPIv2-Strapi
|
opened
|
[TO-DO] Add some additional metadata to TS Sites
|
priority: high status: WIP type: feature request
|
Per LCU in discord:
- Add Type (Need a list of types)
- Add Leviathan Count (Need a min/max of count)
- Image Data (relation to media)
|
1.0
|
[TO-DO] Add some additional metadata to TS Sites - Per LCU in discord:
- Add Type (Need a list of types)
- Add Leviathan Count (Need a min/max of count)
- Image Data (relation to media)
|
priority
|
add some additional metadata to ts sites per lcu in discord add type need a list of types add leviathan count need a min max of count image data relation to media
| 1
|
73,960
| 3,422,835,006
|
IssuesEvent
|
2015-12-09 01:23:44
|
evennia/ainneve
|
https://api.github.com/repos/evennia/ainneve
|
closed
|
Create Archetype Starter Gear
|
easy help wanted high priority
|
Write and create Starter Gear for each Archetype. This can be a joint effort between a builder and a coder.
On the builder side, what is needed:
Item short and long descriptions (i.e., "This breastplate is made of a fine..." and "a fine, leather breastplate")
What slots the item will take up
Any effects of the item (is it cursed? does it boost a stat? does it give health regen?)
On the coder side:
Create a Typeclass for each abstracted type of item (a Breastplate typeclass for example, from Armor, that is coded to only take up a 'torso' slot)--then all things that only take up the Torso can be Breastplates.
Add to core typeclasses if necessary for decided item effects
Write Spawner/Prototype code to generate these items
|
1.0
|
Create Archetype Starter Gear - Write and create Starter Gear for each Archetype. This can be a joint effort between a builder and a coder.
On the builder side, what is needed:
Item short and long descriptions (i.e., "This breastplate is made of a fine..." and "a fine, leather breastplate")
What slots the item will take up
Any effects of the item (is it cursed? does it boost a stat? does it give health regen?)
On the coder side:
Create a Typeclass for each abstracted type of item (a Breastplate typeclass for example, from Armor, that is coded to only take up a 'torso' slot)--then all things that only take up the Torso can be Breastplates.
Add to core typeclasses if necessary for decided item effects
Write Spawner/Prototype code to generate these items
|
priority
|
create archetype starter gear write and create starter gear for each archetype this can be a joint effort between a builder and a coder on the builder side what is needed item short and long descriptions i e this breastplate is made of a fine and a fine leather breastplate what slots the item will take up any effects of the item is it cursed does it boost a stat does it give health regen on the coder side create a typeclass for each abstracted type of item a breastplate typeclass for example from armor that is coded to only take up a torso slot then all things that only take up the torso can be breastplates add to core typeclasses if necessary for decided item effects write spawner prototype code to generate these items
| 1
|
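The Ainneve row above proposes typeclasses that restrict items to equipment slots (a Breastplate that can only occupy the torso). The idea can be sketched outside Evennia like this — class and slot names are illustrative, not the Evennia typeclass API:

```typescript
// Slot-restricted item sketch: a breastplate only fits the "torso" slot.
type Slot = "torso" | "head" | "hands";

class Item {
  constructor(public name: string, public allowedSlots: Slot[]) {}
  fits(slot: Slot): boolean {
    return this.allowedSlots.includes(slot);
  }
}

const breastplate = new Item("a fine, leather breastplate", ["torso"]);
```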
355,407
| 10,580,173,170
|
IssuesEvent
|
2019-10-08 05:50:07
|
AY1920S1-CS2103T-T12-3/main
|
https://api.github.com/repos/AY1920S1-CS2103T-T12-3/main
|
closed
|
As a coach I can sort players according to average performance across all weeks
|
priority.High type.Story
|
So I can plan who to send for competitions
|
1.0
|
As a coach I can sort players according to average performance across all weeks - So I can plan who to send for competitions
|
priority
|
as a coach i can sort players according to average performance across all weeks so i can plan who to send for competitions
| 1
|
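The user story above asks for sorting players by average performance across all weeks. A minimal sketch of that sort — the `Player` shape is assumed for illustration:

```typescript
type Player = { name: string; weeklyScores: number[] };

const avg = (xs: number[]): number =>
  xs.length === 0 ? 0 : xs.reduce((a, b) => a + b, 0) / xs.length;

// Highest average first, so competition candidates appear at the top.
// Copies the array so the caller's list is not mutated.
function sortByAverage(players: Player[]): Player[] {
  return [...players].sort(
    (a, b) => avg(b.weeklyScores) - avg(a.weeklyScores)
  );
}
```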
202,288
| 7,046,273,996
|
IssuesEvent
|
2018-01-02 06:31:00
|
wso2-incubator/testgrid
|
https://api.github.com/repos/wso2-incubator/testgrid
|
opened
|
Design the configuration file required for each testplan
|
Priority/Highest Severity/Critical Type/New Feature
|
**Description:**
$subject. A sample of the initial draft config we came up with can be found here. [1]
But, product teams does not know about channels. And, operating system and jdk should not be configured by the product teams.
Hence, we should come up with a base test plan (testgrid-base-testplan-config-1.0.0.yaml). That is configured and maintained by TestGrid folks. This will contain all the major infrastructure our products will support. Product teams will write configuration file based on testgrid-base-testplan-config-1.0.0.yaml. In there, they can include/exclude any infrastructure as necessary.
[1] initial draft: https://gist.github.com/kasunbg/48b2ee7e9dbfda95a6e710acff9493d7
|
1.0
|
Design the configuration file required for each testplan - **Description:**
$subject. A sample of the initial draft config we came up with can be found here. [1]
But, product teams does not know about channels. And, operating system and jdk should not be configured by the product teams.
Hence, we should come up with a base test plan (testgrid-base-testplan-config-1.0.0.yaml). That is configured and maintained by TestGrid folks. This will contain all the major infrastructure our products will support. Product teams will write configuration file based on testgrid-base-testplan-config-1.0.0.yaml. In there, they can include/exclude any infrastructure as necessary.
[1] initial draft: https://gist.github.com/kasunbg/48b2ee7e9dbfda95a6e710acff9493d7
|
priority
|
design the configuration file required for each testplan description subject a sample of the initial draft config we came up with can be found here but product teams does not know about channels and operating system and jdk should not be configured by the product teams hence we should come up with a base test plan testgrid base testplan config yaml that is configured and maintained by testgrid folks this will contain all the major infrastructure our products will support product teams will write configuration file based on testgrid base testplan config yaml in there they can include exclude any infrastructure as necessary initial draft
| 1
|