Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
439,922 | 12,690,512,198 | IssuesEvent | 2020-06-21 12:32:54 | uksf/website-issues | https://api.github.com/repos/uksf/website-issues | closed | Add RAMC to specialization preferences during applications | area/both priority/high type/enhancement | Add an option to select RAMC or Medic to the select a specialization preference part of the application. | 1.0 | Add RAMC to specialization preferences during applications - Add an option to select RAMC or Medic to the select a specialization preference part of the application. | priority | add ramc to specialization preferences during applications add an option to select ramc or medic to the select a specialization preference part of the application | 1 |
244,561 | 7,876,604,764 | IssuesEvent | 2018-06-26 02:06:01 | bahmutov/rolling-task | https://api.github.com/repos/bahmutov/rolling-task | closed | Clean app iframe before adding rolled bundle | bug high priority | Currently before each test, we bundle and include the script. This leads to multiple script tags (and style elements)
<img width="841" alt="screen shot 2018-06-20 at 10 08 56 am" src="https://user-images.githubusercontent.com/2212006/41663760-2fb77b4a-7472-11e8-8f76-1e6d3472ca7a.png">
```js
beforeEach(() => {
// filename from the root of the repo
cy.task('roll', 'app.js').then(({ code }) => {
const doc = cy.state('document')
const script_tag = doc.createElement('script')
script_tag.type = 'text/javascript'
script_tag.text = code
doc.body.appendChild(script_tag)
})
})
```
How does Cypress clear the iframe between the tests? Probably during `cy.visit` | 1.0 | Clean app iframe before adding rolled bundle - Currently before each test, we bundle and include the script. This leads to multiple script tags (and style elements)
<img width="841" alt="screen shot 2018-06-20 at 10 08 56 am" src="https://user-images.githubusercontent.com/2212006/41663760-2fb77b4a-7472-11e8-8f76-1e6d3472ca7a.png">
```js
beforeEach(() => {
// filename from the root of the repo
cy.task('roll', 'app.js').then(({ code }) => {
const doc = cy.state('document')
const script_tag = doc.createElement('script')
script_tag.type = 'text/javascript'
script_tag.text = code
doc.body.appendChild(script_tag)
})
})
```
How does Cypress clear the iframe between the tests? Probably during `cy.visit` | priority | clean app iframe before adding rolled bundle currently before each test we bundle and include the script this leads to multiple script tags and style elements img width alt screen shot at am src js beforeeach filename from the root of the repo cy task roll app js then code const doc cy state document const script tag doc createelement script script tag type text javascript script tag text code doc body appendchild script tag how does cypress clear the iframe between the tests probably during cy visit | 1 |
580,001 | 17,202,820,177 | IssuesEvent | 2021-07-17 16:04:59 | ooni/probe | https://api.github.com/repos/ooni/probe | opened | Mobile release 3.1.0 | effort/M ooni/probe-mobile priority/high | Release notes:
- Added a reminder encouraging users to enable automated testing
- Added a badge to indicate that a proxy is being used
- Added support for minimizing a running test
- Added a progress bar in the dashboard when a test is running
* fix the play store warning
| 1.0 | Mobile release 3.1.0 - Release notes:
- Added a reminder encouraging users to enable automated testing
- Added a badge to indicate that a proxy is being used
- Added support for minimizing a running test
- Added a progress bar in the dashboard when a test is running
* fix the play store warning
| priority | mobile release release notes added a reminder encouraging users to enable automated testing added a badge to indicate that a proxy is being used added support for minimizing a running test added a progress bar in the dashboard when a test is running fix the play store warning | 1 |
140,147 | 5,397,833,887 | IssuesEvent | 2017-02-27 15:36:01 | swkoubou/molt | https://api.github.com/repos/swkoubou/molt | closed | Docker操作のためのイメージ作成 | enhancement high priority | molt では Docker の操作を行う必要があるのでそれ用のイメージを作成する必要がある。
現状 [dind](https://hub.docker.com/_/docker/) が存在するけど、これをベースにすると Python のバージョン固定ができないのでどうしようかなと | 1.0 | Docker操作のためのイメージ作成 - molt では Docker の操作を行う必要があるのでそれ用のイメージを作成する必要がある。
現状 [dind](https://hub.docker.com/_/docker/) が存在するけど、これをベースにすると Python のバージョン固定ができないのでどうしようかなと | priority | docker操作のためのイメージ作成 molt では docker の操作を行う必要があるのでそれ用のイメージを作成する必要がある。 現状 が存在するけど、これをベースにすると python のバージョン固定ができないのでどうしようかなと | 1 |
527,184 | 15,325,151,638 | IssuesEvent | 2021-02-26 00:39:16 | jcsnorlax97/rentr | https://api.github.com/repos/jcsnorlax97/rentr | closed | [TASK] Cleaning up naming conventions & Update Listing entity table | High Priority backend database dev-task | ### Associated User Story:
N/A
### Task Description:
- [X] Rename `getUser()` to be `getUserViaId()` in UserController & UserDao & all associated test files.
- [X] Rename `getListing()` to be `getListingViaId()`
- [X] Rename `unAuthenticated()` to be `unauthenticated()`
- [X] Replace `res.status(401).json(...)` in `authenticateUser()` in `UserController` with `ApiError.unauthenticated(...)`
- [X] Ensure the following attributes are in the "Listing" entity in `init.sql`:
- [X] listing id (int) — `id`
- [X] Title (String; < 100 chars) — `title`
- [X] Price (String) — `price`
- [X] Number of bedrooms (String) — `num_bedroom`
- [X] Number of washrooms (String) — `num_bathroom`
- [X] Laundry Room? (Boolean) — `is_laundry_available`
- [X] Pet Allowed? (Boolean) — `is_pet_allowed`
- [X] Parking Available? (Boolean) — `is_parking_available`
- [X] Image (Url/Base64) (Array of String) — `images`
- [X] Description (String; < 5000 chars) — `description`
- [X] Update all places that uses "Listing" entity and is affected:
- [X] Update `services/listing.js`
- [X] Update `services/listing.test.js`
- [X] Update `dto/listing.js`
- [X] Update `dao/listing.js`
- [X] Ensure all API endpoints are working via Postman (See Screenshots Below):
- [X] GET `/api/v1/ping`
- [X] GET `/api/v1/user/:id`
- [X] POST `/api/v1/user/registration`
- [X] POST `/api/v1/user/login`
- [X] GET `/api/v1/listing`
- [X] GET `/api/v1/listing/:id`
- [X] POST `/api/v1/listing`
- [X] Environment variable, JWT_KEY, for authentication
- [X] Decide shared secret key & initial user password
- [X] Re-generate the hashed password used in `init.sql` & Update them
- [X] Update `.env.template` as documentations
### Dependencies:
All Sprint 2 Dev Tasks
### Acceptance Criteria:
N/A | 1.0 | [TASK] Cleaning up naming conventions & Update Listing entity table - ### Associated User Story:
N/A
### Task Description:
- [X] Rename `getUser()` to be `getUserViaId()` in UserController & UserDao & all associated test files.
- [X] Rename `getListing()` to be `getListingViaId()`
- [X] Rename `unAuthenticated()` to be `unauthenticated()`
- [X] Replace `res.status(401).json(...)` in `authenticateUser()` in `UserController` with `ApiError.unauthenticated(...)`
- [X] Ensure the following attributes are in the "Listing" entity in `init.sql`:
- [X] listing id (int) — `id`
- [X] Title (String; < 100 chars) — `title`
- [X] Price (String) — `price`
- [X] Number of bedrooms (String) — `num_bedroom`
- [X] Number of washrooms (String) — `num_bathroom`
- [X] Laundry Room? (Boolean) — `is_laundry_available`
- [X] Pet Allowed? (Boolean) — `is_pet_allowed`
- [X] Parking Available? (Boolean) — `is_parking_available`
- [X] Image (Url/Base64) (Array of String) — `images`
- [X] Description (String; < 5000 chars) — `description`
- [X] Update all places that uses "Listing" entity and is affected:
- [X] Update `services/listing.js`
- [X] Update `services/listing.test.js`
- [X] Update `dto/listing.js`
- [X] Update `dao/listing.js`
- [X] Ensure all API endpoints are working via Postman (See Screenshots Below):
- [X] GET `/api/v1/ping`
- [X] GET `/api/v1/user/:id`
- [X] POST `/api/v1/user/registration`
- [X] POST `/api/v1/user/login`
- [X] GET `/api/v1/listing`
- [X] GET `/api/v1/listing/:id`
- [X] POST `/api/v1/listing`
- [X] Environment variable, JWT_KEY, for authentication
- [X] Decide shared secret key & initial user password
- [X] Re-generate the hashed password used in `init.sql` & Update them
- [X] Update `.env.template` as documentations
### Dependencies:
All Sprint 2 Dev Tasks
### Acceptance Criteria:
N/A | priority | cleaning up naming conventions update listing entity table associated user story n a task description rename getuser to be getuserviaid in usercontroller userdao all associated test files rename getlisting to be getlistingviaid rename unauthenticated to be unauthenticated replace res status json in authenticateuser in usercontroller with apierror unauthenticated ensure the following attributes are in the listing entity in init sql listing id int — id title string chars — title price string — price number of bedrooms string — num bedroom number of washrooms string — num bathroom laundry room boolean — is laundry available pet allowed boolean — is pet allowed parking available boolean — is parking available image url array of string — images description string chars — description update all places that uses listing entity and is affected update services listing js update services listing test js update dto listing js update dao listing js ensure all api endpoints are working via postman see screenshots below get api ping get api user id post api user registration post api user login get api listing get api listing id post api listing environment variable jwt key for authentication decide shared secret key initial user password re generate the hashed password used in init sql update them update env template as documentations dependencies all sprint dev tasks acceptance criteria n a | 1 |
420,372 | 12,237,182,559 | IssuesEvent | 2020-05-04 17:35:19 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [docdb] Make ConflictResolver code path async | area/docdb priority/high | UPDATE:
Seems like we can get all the RPC threads stuck during conflict resolution. The main thread does a `latch.Wait`, but it is expecting a new resolution RPC thread to actually wake them up. If *all* threads wait at the same time though, there's no thread to do the work that would wake them up.
Relevant stack:
```
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f97c8511d1e in yb::ConditionVariable::Wait (this=<optimized out>) at ../../src/yb/util/condition_variable.cc:84
#2 0x00007f97c8511f48 in yb::CountDownLatch::Wait (this=this@entry=0x7f97a08ca020) at ../../src/yb/util/countdown_latch.cc:54
#3 0x00007f97cf96e88e in yb::docdb::(anonymous namespace)::ConflictResolver::FetchTransactionStatuses (this=0x7f97a08ca200) at ../../src/yb/docdb/conflict_resolution.cc:330
#4 yb::docdb::(anonymous namespace)::ConflictResolver::DoResolveConflicts (this=this@entry=0x7f97a08ca200) at ../../src/yb/docdb/conflict_resolution.cc:232
#5 0x00007f97cf97365f in yb::docdb::(anonymous namespace)::ConflictResolver::ResolveConflicts (this=0x7f97a08ca200) at ../../src/yb/docdb/conflict_resolution.cc:222
#6 yb::docdb::(anonymous namespace)::ConflictResolver::Resolve (this=0x7f97a08ca200) at ../../src/yb/docdb/conflict_resolution.cc:132
#7 yb::docdb::ResolveTransactionConflicts (doc_ops=..., write_batch=..., hybrid_time=..., read_time=..., doc_db=..., partial_range_key_intents=..., status_manager=0x34cbe190, conflicts_metric=0x33a4e800) at ../../src/yb/docdb/conflict_resolution.cc:750
#8 0x00007f97d04e0833 in yb::tablet::Tablet::StartDocWriteOperation (this=this@entry=0x28e94410, operation=0xec355560) at ../../src/yb/tablet/tablet.cc:2388
#9 0x00007f97d04e2ca1 in yb::tablet::Tablet::KeyValueBatchFromQLWriteBatch (this=this@entry=0x28e94410, operation=...) at ../../src/yb/tablet/tablet.cc:1267
#10 0x00007f97d04e3d1a in yb::tablet::Tablet::AcquireLocksAndPerformDocOperations (this=0x28e94410, operation=...) at ../../src/yb/tablet/tablet.cc:1581
#11 0x00007f97d05099f1 in yb::tablet::TabletPeer::WriteAsync (this=this@entry=0x255db80, state=..., term=term@entry=19, deadline=...) at ../../src/yb/tablet/tablet_peer.cc:590
#12 0x00007f97d0dc7f09 in yb::tserver::TabletServiceImpl::Write (this=this@entry=0x1055040, req=req@entry=0xb33db970, resp=resp@entry=0xd18b1820, context=...) at ../../src/yb/tserver/tablet_service.cc:1276
#13 0x00007f97ce069aba in yb::tserver::TabletServerServiceIf::Handle (this=0x1055040, call=...) at src/yb/tserver/tserver_service.service.cc:148
#14 0x00007f97c9ddecf9 in yb::rpc::ServicePoolImpl::Handle (this=0x176cb40, incoming=...) at ../../src/yb/rpc/service_pool.cc:262
#15 0x00007f97c9d82e04 in yb::rpc::InboundCall::InboundCallTask::Run (this=<optimized out>) at ../../src/yb/rpc/inbound_call.cc:212
#16 0x00007f97c9dea948 in yb::rpc::(anonymous namespace)::Worker::Execute (this=<optimized out>) at ../../src/yb/rpc/thread_pool.cc:99
#17 0x00007f97c86234af in std::function<void ()>::operator()() const (this=0x24e3258) at /usr/scratch/yugabyte/yugabyte-2.1.3.0-linux/linuxbrew-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/Cellar/gcc/5.5.0_4/include/c++/5.5.0/functional:2267
#18 yb::Thread::SuperviseThread (arg=0x24e3200) at ../../src/yb/util/thread.cc:744
#19 0x00007f97c2e5d694 in start_thread (arg=0x7f97a08cc700) at pthread_create.c:333
#20 0x00007f97c259a41d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
```
PREVIOUS ASSUMPTIONS:
In /threadz, seeing 146 stacks of:
```
@ 0x7fccde2e711e (unknown)
@ 0x7fccde396066 syscall
@ 0x7fccdf446746 std::__atomic_futex_unsigned_base::_M_futex_wait_until()
@ 0x7fcceda5f38e yb::ql::ExecContext::PrepareChildTransaction()
@ 0x7fcceda68755 yb::ql::Executor::UpdateIndexes()
@ 0x7fcceda667e7 yb::ql::Executor::AddOperation()
@ 0x7fcceda66b3a yb::ql::Executor::ExecPTNode()
@ 0x7fcceda6d53f yb::ql::Executor::ExecTreeNode()
@ 0x7fcceda6d6e9 yb::ql::Executor::Execute()
@ 0x7fcceda6db8d yb::ql::Executor::ExecuteAsync()
@ 0x7fccedec1918 yb::ql::QLProcessor::ExecuteAsync()
@ 0x7fcceebfc44f yb::cqlserver::CQLProcessor::ProcessRequest()
@ 0x7fcceebfc983 yb::cqlserver::CQLProcessor::ProcessRequest()
@ 0x7fcceebfcd44 yb::cqlserver::CQLProcessor::ProcessCall()
@ 0x7fcceec1590c yb::cqlserver::CQLServiceImpl::Handle()
@ 0x7fcce5bdf5e7 yb::rpc::ServicePoolImpl::Handle()
```
Background was 9 node, 3 region, RF3. Stopped a region, brought it back after 15m (so followers would be marked failed). On recovery, multiple RBSs would start and at some point, the nodes in this region get stuck, many on these stacks.
This could be caused by #4035 or a completely different problem in the same space!
cc @kmuthukk @ameyb | 1.0 | [docdb] Make ConflictResolver code path async - UPDATE:
Seems like we can get all the RPC threads stuck during conflict resolution. The main thread does a `latch.Wait`, but it is expecting a new resolution RPC thread to actually wake them up. If *all* threads wait at the same time though, there's no thread to do the work that would wake them up.
Relevant stack:
```
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f97c8511d1e in yb::ConditionVariable::Wait (this=<optimized out>) at ../../src/yb/util/condition_variable.cc:84
#2 0x00007f97c8511f48 in yb::CountDownLatch::Wait (this=this@entry=0x7f97a08ca020) at ../../src/yb/util/countdown_latch.cc:54
#3 0x00007f97cf96e88e in yb::docdb::(anonymous namespace)::ConflictResolver::FetchTransactionStatuses (this=0x7f97a08ca200) at ../../src/yb/docdb/conflict_resolution.cc:330
#4 yb::docdb::(anonymous namespace)::ConflictResolver::DoResolveConflicts (this=this@entry=0x7f97a08ca200) at ../../src/yb/docdb/conflict_resolution.cc:232
#5 0x00007f97cf97365f in yb::docdb::(anonymous namespace)::ConflictResolver::ResolveConflicts (this=0x7f97a08ca200) at ../../src/yb/docdb/conflict_resolution.cc:222
#6 yb::docdb::(anonymous namespace)::ConflictResolver::Resolve (this=0x7f97a08ca200) at ../../src/yb/docdb/conflict_resolution.cc:132
#7 yb::docdb::ResolveTransactionConflicts (doc_ops=..., write_batch=..., hybrid_time=..., read_time=..., doc_db=..., partial_range_key_intents=..., status_manager=0x34cbe190, conflicts_metric=0x33a4e800) at ../../src/yb/docdb/conflict_resolution.cc:750
#8 0x00007f97d04e0833 in yb::tablet::Tablet::StartDocWriteOperation (this=this@entry=0x28e94410, operation=0xec355560) at ../../src/yb/tablet/tablet.cc:2388
#9 0x00007f97d04e2ca1 in yb::tablet::Tablet::KeyValueBatchFromQLWriteBatch (this=this@entry=0x28e94410, operation=...) at ../../src/yb/tablet/tablet.cc:1267
#10 0x00007f97d04e3d1a in yb::tablet::Tablet::AcquireLocksAndPerformDocOperations (this=0x28e94410, operation=...) at ../../src/yb/tablet/tablet.cc:1581
#11 0x00007f97d05099f1 in yb::tablet::TabletPeer::WriteAsync (this=this@entry=0x255db80, state=..., term=term@entry=19, deadline=...) at ../../src/yb/tablet/tablet_peer.cc:590
#12 0x00007f97d0dc7f09 in yb::tserver::TabletServiceImpl::Write (this=this@entry=0x1055040, req=req@entry=0xb33db970, resp=resp@entry=0xd18b1820, context=...) at ../../src/yb/tserver/tablet_service.cc:1276
#13 0x00007f97ce069aba in yb::tserver::TabletServerServiceIf::Handle (this=0x1055040, call=...) at src/yb/tserver/tserver_service.service.cc:148
#14 0x00007f97c9ddecf9 in yb::rpc::ServicePoolImpl::Handle (this=0x176cb40, incoming=...) at ../../src/yb/rpc/service_pool.cc:262
#15 0x00007f97c9d82e04 in yb::rpc::InboundCall::InboundCallTask::Run (this=<optimized out>) at ../../src/yb/rpc/inbound_call.cc:212
#16 0x00007f97c9dea948 in yb::rpc::(anonymous namespace)::Worker::Execute (this=<optimized out>) at ../../src/yb/rpc/thread_pool.cc:99
#17 0x00007f97c86234af in std::function<void ()>::operator()() const (this=0x24e3258) at /usr/scratch/yugabyte/yugabyte-2.1.3.0-linux/linuxbrew-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/Cellar/gcc/5.5.0_4/include/c++/5.5.0/functional:2267
#18 yb::Thread::SuperviseThread (arg=0x24e3200) at ../../src/yb/util/thread.cc:744
#19 0x00007f97c2e5d694 in start_thread (arg=0x7f97a08cc700) at pthread_create.c:333
#20 0x00007f97c259a41d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
```
PREVIOUS ASSUMPTIONS:
In /threadz, seeing 146 stacks of:
```
@ 0x7fccde2e711e (unknown)
@ 0x7fccde396066 syscall
@ 0x7fccdf446746 std::__atomic_futex_unsigned_base::_M_futex_wait_until()
@ 0x7fcceda5f38e yb::ql::ExecContext::PrepareChildTransaction()
@ 0x7fcceda68755 yb::ql::Executor::UpdateIndexes()
@ 0x7fcceda667e7 yb::ql::Executor::AddOperation()
@ 0x7fcceda66b3a yb::ql::Executor::ExecPTNode()
@ 0x7fcceda6d53f yb::ql::Executor::ExecTreeNode()
@ 0x7fcceda6d6e9 yb::ql::Executor::Execute()
@ 0x7fcceda6db8d yb::ql::Executor::ExecuteAsync()
@ 0x7fccedec1918 yb::ql::QLProcessor::ExecuteAsync()
@ 0x7fcceebfc44f yb::cqlserver::CQLProcessor::ProcessRequest()
@ 0x7fcceebfc983 yb::cqlserver::CQLProcessor::ProcessRequest()
@ 0x7fcceebfcd44 yb::cqlserver::CQLProcessor::ProcessCall()
@ 0x7fcceec1590c yb::cqlserver::CQLServiceImpl::Handle()
@ 0x7fcce5bdf5e7 yb::rpc::ServicePoolImpl::Handle()
```
Background was 9 node, 3 region, RF3. Stopped a region, brought it back after 15m (so followers would be marked failed). On recovery, multiple RBSs would start and at some point, the nodes in this region get stuck, many on these stacks.
This could be caused by #4035 or a completely different problem in the same space!
cc @kmuthukk @ameyb | priority | make conflictresolver code path async update seems like we can get all the rpc threads stuck during conflict resolution the main thread does a latch wait but it is expecting a new resolution rpc thread to actually wake them up if all threads wait at the same time though there s no thread to do the work that would wake them up relevant stack pthread cond wait glibc at sysdeps unix sysv linux pthread cond wait s in yb conditionvariable wait this at src yb util condition variable cc in yb countdownlatch wait this this entry at src yb util countdown latch cc in yb docdb anonymous namespace conflictresolver fetchtransactionstatuses this at src yb docdb conflict resolution cc yb docdb anonymous namespace conflictresolver doresolveconflicts this this entry at src yb docdb conflict resolution cc in yb docdb anonymous namespace conflictresolver resolveconflicts this at src yb docdb conflict resolution cc yb docdb anonymous namespace conflictresolver resolve this at src yb docdb conflict resolution cc yb docdb resolvetransactionconflicts doc ops write batch hybrid time read time doc db partial range key intents status manager conflicts metric at src yb docdb conflict resolution cc in yb tablet tablet startdocwriteoperation this this entry operation at src yb tablet tablet cc in yb tablet tablet keyvaluebatchfromqlwritebatch this this entry operation at src yb tablet tablet cc in yb tablet tablet acquirelocksandperformdocoperations this operation at src yb tablet tablet cc in yb tablet tabletpeer writeasync this this entry state term term entry deadline at src yb tablet tablet peer cc in yb tserver tabletserviceimpl write this this entry req req entry resp resp entry context at src yb tserver tablet service cc in yb tserver tabletserverserviceif handle this call at src yb tserver tserver service service cc in yb rpc servicepoolimpl handle this incoming at src yb rpc service pool cc in yb rpc inboundcall inboundcalltask run this at src yb rpc inbound call cc in yb rpc anonymous namespace worker execute this at src yb rpc thread pool cc in std function operator const this at usr scratch yugabyte yugabyte linux linuxbrew xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx cellar gcc include c functional yb thread supervisethread arg at src yb util thread cc in start thread arg at pthread create c in clone at sysdeps unix sysv linux clone s previous assumptions in threadz seeing stacks of unknown syscall std atomic futex unsigned base m futex wait until yb ql execcontext preparechildtransaction yb ql executor updateindexes yb ql executor addoperation yb ql executor execptnode yb ql executor exectreenode yb ql executor execute yb ql executor executeasync yb ql qlprocessor executeasync yb cqlserver cqlprocessor processrequest yb cqlserver cqlprocessor processrequest yb cqlserver cqlprocessor processcall yb cqlserver cqlserviceimpl handle yb rpc servicepoolimpl handle background was node region stopped a region brought it back after so followers would be marked failed on recovery multiple rbss would start and at some point the nodes in this region get stuck many on these stacks this could be caused by or a completely different problem in the same space cc kmuthukk ameyb | 1 |
285,640 | 8,767,413,047 | IssuesEvent | 2018-12-17 19:40:54 | torchgan/torchgan | https://api.github.com/repos/torchgan/torchgan | closed | Improve Documentation | enhancement high priority v0.0.2 work in progress | Since we are pushing for the release and putting more focus on feature completeness, I am listing all the current modules. All of these need to be documented before the next release can be tagged.
- [x] Loss Functions: New functions lack the equations. We also don't have a dedicated documentation for the functional forms of the losses. However, the `train_ops` need to have proper documentation as it is necessary for customizability.
- [x] Models: Models are fairly well documented. They just need some minor clean ups.
- [x] Metrics: The metrics API is a bit difficult to use. If we can't find a cleaner way to deal with metrics the only way is to have well-documented examples.
- [x] Logger: Lacks Documentation
- [x] Layers: Lacks Documentation
- [x] Trainer: Documentation is fine. However, needs a few demonstrative examples.
| 1.0 | Improve Documentation - Since we are pushing for the release and putting more focus on feature completeness, I am listing all the current modules. All of these need to be documented before the next release can be tagged.
- [x] Loss Functions: New functions lack the equations. We also don't have a dedicated documentation for the functional forms of the losses. However, the `train_ops` need to have proper documentation as it is necessary for customizability.
- [x] Models: Models are fairly well documented. They just need some minor clean ups.
- [x] Metrics: The metrics API is a bit difficult to use. If we can't find a cleaner way to deal with metrics the only way is to have well-documented examples.
- [x] Logger: Lacks Documentation
- [x] Layers: Lacks Documentation
- [x] Trainer: Documentation is fine. However, needs a few demonstrative examples.
| priority | improve documentation since we are pushing for the release and putting more focus on feature completeness i am listing all the current modules all of these need to be documented before the next release can be tagged loss functions new functions lack the equations we also don t have a dedicated documentation for the functional forms of the losses however the train ops need to have proper documentation as it is necessary for customizability models models are fairly well documented they just need some minor clean ups metrics the metrics api is a bit difficult to use if we can t find a cleaner way to deal with metrics the only way is to have well documented examples logger lacks documentation layers lacks documentation trainer documentation is fine however needs a few demonstrative examples | 1 |
204,951 | 7,092,986,088 | IssuesEvent | 2018-01-12 18:38:14 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | funky sorting on loan list | Display/Interface Function-Transactions Priority-High | If you review loan items in Arctos, the list is different if you have it on 100 or 250 per page. I was double checking the list of specimens loaned, and looking at it with 100 per page, it looked like lots of specimens were missing. I had forgotten there were greater than 100 specimens. I sorted the list more than once. Somehow it does not sort the entire list by cat number when on 100 per page, just those 100 that happen to make it onto the page under some other criteria. See attached images. One is for when set for 100 per page and sorted by MVZ number, the other for the 250 per page, again sorted by MVZ number. Can this be fixed?
<img width="328" alt="screen shot 2017-02-02 at 10 18 58 am 1" src="https://user-images.githubusercontent.com/5749672/33914502-51d466f2-df53-11e7-8238-c8b78d85117c.png">
<img width="328" alt="screen shot 2017-02-02 at 10 18 58 am" src="https://user-images.githubusercontent.com/5749672/33914504-51e84186-df53-11e7-9620-8ae2aced3d9c.png">
| 1.0 | funky sorting on loan list - If you review loan items in Arctos, the list is different if you have it on 100 or 250 per page. I was double checking the list of specimens loaned, and looking at it with 100 per page, it looked like lots of specimens were missing. I had forgotten there were greater than 100 specimens. I sorted the list more than once. Somehow it does not sort the entire list by cat number when on 100 per page, just those 100 that happen to make it onto the page under some other criteria. See attached images. One is for when set for 100 per page and sorted by MVZ number, the other for the 250 per page, again sorted by MVZ number. Can this be fixed?
<img width="328" alt="screen shot 2017-02-02 at 10 18 58 am 1" src="https://user-images.githubusercontent.com/5749672/33914502-51d466f2-df53-11e7-8238-c8b78d85117c.png">
<img width="328" alt="screen shot 2017-02-02 at 10 18 58 am" src="https://user-images.githubusercontent.com/5749672/33914504-51e84186-df53-11e7-9620-8ae2aced3d9c.png">
| priority | funky sorting on loan list if you review loan items in arctos the list is different if you have it on or per page i was double checking the list of specimens loaned and looking at it with per page it looked like lots of specimens were missing i had forgotten there were greater than specimens i sorted the list more than once somehow it does not sort the entire list by cat number when on per page just those that happen to make it onto the page under some other criteria see attached images one is for when set for per page and sorted by mvz number the other for the per page again sorted by mvz number can this be fixed img width alt screen shot at am src img width alt screen shot at am src | 1 |
133,457 | 5,203,788,465 | IssuesEvent | 2017-01-24 13:56:18 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | closed | Replace Management Liaison typo for A4NH | Priority - High Type -Task | These fields are necessary, instead we change the labels for A4NH. Replace Management Liaison = Flagship and Management Liaison contact person = Flagship Leader | 1.0 | Replace Management Liaison typo for A4NH - These fields are necessary, instead we change the labels for A4NH. Replace Management Liaison = Flagship and Management Liaison contact person = Flagship Leader | priority | replace management liaison typo for these fields are necessary instead we change the labels for replace management liaison flagship and management liaison contact person flagship leader | 1 |
793,568 | 28,001,798,423 | IssuesEvent | 2023-03-27 13:04:08 | AY2223S2-CS2113-W15-1/tp | https://api.github.com/repos/AY2223S2-CS2113-W15-1/tp | closed | Track Calories Intake | priority.High type.Story | As a busy Office Worker, I want to track my calories intake over the course of a working day so that I do not have to remember everything that I have eaten. | 1.0 | Track Calories Intake - As a busy Office Worker, I want to track my calories intake over the course of a working day so that I do not have to remember everything that I have eaten. | priority | track calories intake as a busy office worker i want to track my calories intake over the course of a working day so that i do not have to remember everything that i have eaten | 1 |
557,821 | 16,520,217,302 | IssuesEvent | 2021-05-26 13:52:00 | django-cms/django-cms | https://api.github.com/repos/django-cms/django-cms | opened | [BUG] add DJANGO_3_2 to certain conditions | priority: high | ## Description
https://github.com/django-cms/django-cms/blob/develop/cms/management/commands/subcommands/base.py#L40
https://github.com/django-cms/django-cms/blob/34a26bd1b543ee14eebc0b95ce550bf91eb7d236/cms/views.py#L57
There are also some places used inside the tests.
``DJANGO_3_2`` should be added to those conditions
This only affects django CMS 3.9.x
| 1.0 | [BUG] add DJANGO_3_2 to certain conditions - ## Description
https://github.com/django-cms/django-cms/blob/develop/cms/management/commands/subcommands/base.py#L40
https://github.com/django-cms/django-cms/blob/34a26bd1b543ee14eebc0b95ce550bf91eb7d236/cms/views.py#L57
There are also some places used inside the tests.
``DJANGO_3_2`` should be added to those conditions
This only affects django CMS 3.9.x
| priority | add django to certain conditions description there are also some places used inside the tests django should be added to those conditions this only affects django cms x | 1 |
780,789 | 27,408,316,445 | IssuesEvent | 2023-03-01 08:47:51 | giantswarm/roadmap | https://api.github.com/repos/giantswarm/roadmap | closed | [Spike] Improving Crossplane severe performance issues | priority/high team/honeybadger impact/high | We are experiencing a lot of client side throttling on clusters with Crossplane installed, which seemingly became worse with the switch to official providers (they have a magnitude more CRDs, getting over 900 / 1000 on some clusters).
Also experiencing more frequent CRD install job OOM kills for some apps (flux, external-secrets) on clusters with Crossplane.
All these combined causes Flux helm releases to fail often / get stuck in upgrade e.g. helm-controller sees slowness, or crossplane takes too long to come up / get healthy and the CRD jobs getting OOM killed fails the pre install hooks and we end up with a timed out / failed release.
Having these on live clusters (e.g. banana did trigger alert on Friday) will end up with more alerts for team on-call. Current clusters where we have these are mostly test clusters which are ignored and helm releases are normally reconciled when chart version changes or the flux helm releases is suspended and then resumed. These are the reasons why we don't have the alerts triggering at the moment.
```[tasklist]
### Tasks
- [ ] https://github.com/giantswarm/giantswarm/issues/25892
```
| 1.0 | [Spike] Improving Crossplane severe performance issues - We are experiencing a lot of client side throttling on clusters with Crossplane installed, which seemingly became worse with the switch to official providers (they have a magnitude more CRDs, getting over 900 / 1000 on some clusters).
Also experiencing more frequent CRD install job OOM kills for some apps (flux, external-secrets) on clusters with Crossplane.
All these combined causes Flux helm releases to fail often / get stuck in upgrade e.g. helm-controller sees slowness, or crossplane takes too long to come up / get healthy and the CRD jobs getting OOM killed fails the pre install hooks and we end up with a timed out / failed release.
Having these on live clusters (e.g. banana did trigger alert on Friday) will end up with more alerts for team on-call. Current clusters where we have these are mostly test clusters which are ignored and helm releases are normally reconciled when chart version changes or the flux helm releases is suspended and then resumed. These are the reasons why we don't have the alerts triggering at the moment.
```[tasklist]
### Tasks
- [ ] https://github.com/giantswarm/giantswarm/issues/25892
```
| priority | improving crossplane severe performance issues we are experiencing a lot of client side throttling on clusters with crossplane installed which seemingly became worse with the switch to official providers they have a magnitude more crds getting over on some clusters also experiencing more frequent crd install job oom kills for some apps flux external secrets on clusters with crossplane all these combined causes flux helm releases to fail often get stuck in upgrade e g helm controller sees slowness or crossplane takes too long to come up get healthy and the crd jobs getting oom killed fails the pre install hooks and we end up with a timed out failed release having these on live clusters e g banana did trigger alert on friday will end up with more alerts for team on call current clusters where we have these are mostly test clusters which are ignored and helm releases are normally reconciled when chart version changes or the flux helm releases is suspended and then resumed these are the reasons why we don t have the alerts triggering at the moment tasks | 1 |
563,103 | 16,676,094,980 | IssuesEvent | 2021-06-07 16:21:43 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | binary_cross_entropy misses derivative for target even if the variable is unused for backward | high priority module: autograd triaged | I don't understand why it breaks in the forward pass? The backprop through the loss variable may not even be required (e.g. when it's used as matching affinity for non-differentiated algorithm).
I would porpose:
- it should have a derivative formula for target (it can be useful for bringing two probability distributions together in contrastive formulations)
- if it doesn't have it, it should fail at backprop time when the variable is known to be backpropped through in the graph
```python
import torch
import torch.nn.functional as F
a = torch.ones(1, 2) / 2
a.requires_grad_()
b = torch.ones(1, 2) / 2
matching_matrix_unused_for_backprop = F.binary_cross_entropy(a, b.requires_grad_(), reduce = 'none')
# File ".../vadim/prefix/miniconda/lib/python3.8/site-packages/torch/nn/functional.py", line 2525, in binary_cross_entropy
# return torch._C._nn.binary_cross_entropy(
# RuntimeError: the derivative for 'target' is not implemented
```
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @albanD @gqchen @pearu @nikitaved @soulitzer | 1.0 | binary_cross_entropy misses derivative for target even if the variable is unused for backward - I don't understand why it breaks in the forward pass? The backprop through the loss variable may not even be required (e.g. when it's used as matching affinity for non-differentiated algorithm).
I would porpose:
- it should have a derivative formula for target (it can be useful for bringing two probability distributions together in contrastive formulations)
- if it doesn't have it, it should fail at backprop time when the variable is known to be backpropped through in the graph
```python
import torch
import torch.nn.functional as F
a = torch.ones(1, 2) / 2
a.requires_grad_()
b = torch.ones(1, 2) / 2
matching_matrix_unused_for_backprop = F.binary_cross_entropy(a, b.requires_grad_(), reduce = 'none')
# File ".../vadim/prefix/miniconda/lib/python3.8/site-packages/torch/nn/functional.py", line 2525, in binary_cross_entropy
# return torch._C._nn.binary_cross_entropy(
# RuntimeError: the derivative for 'target' is not implemented
```
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @anjali411 @albanD @gqchen @pearu @nikitaved @soulitzer | priority | binary cross entropy misses derivative for target even if the variable is unused for backward i don t understand why it breaks in the forward pass the backprop through the loss variable may not even be required e g when it s used as matching affinity for non differentiated algorithm i would porpose it should have a derivative formula for target it can be useful for bringing two probability distributions together in contrastive formulations if it doesn t have it it should fail at backprop time when the variable is known to be backpropped through in the graph python import torch import torch nn functional as f a torch ones a requires grad b torch ones matching matrix unused for backprop f binary cross entropy a b requires grad reduce none file vadim prefix miniconda lib site packages torch nn functional py line in binary cross entropy return torch c nn binary cross entropy runtimeerror the derivative for target is not implemented cc ezyang gchanan bdhirsh jbschlosser alband gqchen pearu nikitaved soulitzer | 1 |
239,134 | 7,787,024,625 | IssuesEvent | 2018-06-06 20:54:58 | EQuimper/react-native-google-autocomplete | https://api.github.com/repos/EQuimper/react-native-google-autocomplete | closed | Clear the locationResults when all text is erased | Improvement Priority:High | Great library ! Love the simplicity.
Any way to have the `locationResults` array cleared up when all text is erased ? For example using the `clear` method on the `Textinput` | 1.0 | Clear the locationResults when all text is erased - Great library ! Love the simplicity.
Any way to have the `locationResults` array cleared up when all text is erased ? For example using the `clear` method on the `Textinput` | priority | clear the locationresults when all text is erased great library love the simplicity any way to have the locationresults array cleared up when all text is erased for example using the clear method on the textinput | 1 |
430,814 | 12,466,381,452 | IssuesEvent | 2020-05-28 15:23:46 | windchime-yk/novel-support.js | https://api.github.com/repos/windchime-yk/novel-support.js | opened | v2.0.0として更新 | Priority: High Type: Release | Vueでも使えるように、テキストのみを変換する機能を追加する。
それに伴って名前付きエクスポートに戻し、関数名を変更する。
そのため、SemVerの理念に則って、メジャーバージョンを更新する。 | 1.0 | v2.0.0として更新 - Vueでも使えるように、テキストのみを変換する機能を追加する。
それに伴って名前付きエクスポートに戻し、関数名を変更する。
そのため、SemVerの理念に則って、メジャーバージョンを更新する。 | priority | vueでも使えるように、テキストのみを変換する機能を追加する。 それに伴って名前付きエクスポートに戻し、関数名を変更する。 そのため、semverの理念に則って、メジャーバージョンを更新する。 | 1 |
634,495 | 20,363,167,846 | IssuesEvent | 2022-02-21 00:01:17 | flix/flix | https://api.github.com/repos/flix/flix | closed | Add `fix` modifier | high-priority | - [x] Add `fix` keyword to frontend.
- [x] Add check for `fix` and `not`.
- [x] Add `fix` keyword support to Stratifier/Ullman.
- [x] Add `fix` keyword support to Datalog runtime.
- [x] Add check that rejects illegal use of lattice variables. (Suggest to use `fix`).
- [ ] Add positive test cases that use fix explicitly.
- [ ] Add negative test cases that use fix explicitly and wrongly.
- [x] Add test cases that uses lattices.
- [ ] Cleanup `Stratifier.run` to avoid too many intermediate collections.
- [x] Refactor query to use `fix` and get rid of `SelectFragment`.
- [x] Refactor documentation and error messages
- [x] https://github.com/flix/flix/issues/3179
- [x] VSCode keyword
| 1.0 | Add `fix` modifier - - [x] Add `fix` keyword to frontend.
- [x] Add check for `fix` and `not`.
- [x] Add `fix` keyword support to Stratifier/Ullman.
- [x] Add `fix` keyword support to Datalog runtime.
- [x] Add check that rejects illegal use of lattice variables. (Suggest to use `fix`).
- [ ] Add positive test cases that use fix explicitly.
- [ ] Add negative test cases that use fix explicitly and wrongly.
- [x] Add test cases that uses lattices.
- [ ] Cleanup `Stratifier.run` to avoid too many intermediate collections.
- [x] Refactor query to use `fix` and get rid of `SelectFragment`.
- [x] Refactor documentation and error messages
- [x] https://github.com/flix/flix/issues/3179
- [x] VSCode keyword
| priority | add fix modifier add fix keyword to frontend add check for fix and not add fix keyword support to stratifier ullman add fix keyword support to datalog runtime add check that rejects illegal use of lattice variables suggest to use fix add positive test cases that use fix explicitly add negative test cases that use fix explicitly and wrongly add test cases that uses lattices cleanup stratifier run to avoid too many intermediate collections refactor query to use fix and get rid of selectfragment refactor documentation and error messages vscode keyword | 1 |
94,732 | 3,931,602,942 | IssuesEvent | 2016-04-25 13:09:01 | 0mp/io-touchpad | https://api.github.com/repos/0mp/io-touchpad | opened | The final check list for the second iteration | priority: high status: in progress type: task | This is what we need to to before submitting the second iteration:
- [ ] Close related issues and pull requests.
- [ ] Help @RjiukYagami with the test suite. The check list for this issues is here: #22
- [ ] Sum up the second iteration in this document. Every one should add some information about how they contributed to the project during this iteration.
- [ ] @RjiukYagami
- [ ] @piotrekp1
- [ ] @michal7352
- [ ] me
- [ ] Clean up your code (@piotrekp1 especially :wink: ) :
- [ ] @RjiukYagami
- [ ] @piotrekp1
- [ ] @michal7352
- [ ] me
- [ ] Document your code
- [ ] @RjiukYagami
- [ ] @piotrekp1
- [ ] @michal7352
- [ ] me
Comment below that you've read this issue please. | 1.0 | The final check list for the second iteration - This is what we need to to before submitting the second iteration:
- [ ] Close related issues and pull requests.
- [ ] Help @RjiukYagami with the test suite. The check list for this issues is here: #22
- [ ] Sum up the second iteration in this document. Every one should add some information about how they contributed to the project during this iteration.
- [ ] @RjiukYagami
- [ ] @piotrekp1
- [ ] @michal7352
- [ ] me
- [ ] Clean up your code (@piotrekp1 especially :wink: ) :
- [ ] @RjiukYagami
- [ ] @piotrekp1
- [ ] @michal7352
- [ ] me
- [ ] Document your code
- [ ] @RjiukYagami
- [ ] @piotrekp1
- [ ] @michal7352
- [ ] me
Comment below that you've read this issue please. | priority | the final check list for the second iteration this is what we need to to before submitting the second iteration close related issues and pull requests help rjiukyagami with the test suite the check list for this issues is here sum up the second iteration in this document every one should add some information about how they contributed to the project during this iteration rjiukyagami me clean up your code especially wink rjiukyagami me document your code rjiukyagami me comment below that you ve read this issue please | 1 |
487,120 | 14,019,597,899 | IssuesEvent | 2020-10-29 18:24:43 | woocommerce/woocommerce-gateway-stripe | https://api.github.com/repos/woocommerce/woocommerce-gateway-stripe | opened | Update for PHP8 | Priority: High | PHP8 will be released on Thursday, November 26, 2020 (Thanksgiving in the US, and the start of the holiday shopping season). Ensuring this extension works well with PHP8 is a high priority.
Task list in progress:
- [ ] Test with PHP8 to identify any problems
- [ ] Fix PHP8 related problems
- [ ] Update Travis to include PHP 8 in the versions of PHP to be tested with | 1.0 | Update for PHP8 - PHP8 will be released on Thursday, November 26, 2020 (Thanksgiving in the US, and the start of the holiday shopping season). Ensuring this extension works well with PHP8 is a high priority.
Task list in progress:
- [ ] Test with PHP8 to identify any problems
- [ ] Fix PHP8 related problems
- [ ] Update Travis to include PHP 8 in the versions of PHP to be tested with | priority | update for will be released on thursday november thanksgiving in the us and the start of the holiday shopping season ensuring this extension works well with is a high priority task list in progress test with to identify any problems fix related problems update travis to include php in the versions of php to be tested with | 1 |
535,061 | 15,681,428,589 | IssuesEvent | 2021-03-25 05:20:11 | DeepakVelmurugan/virtualdata | https://api.github.com/repos/DeepakVelmurugan/virtualdata | opened | Add the score card | enhancement high-priority | Score card for displaying scores of each model and if score is bad it's background color should change try it with javascript | 1.0 | Add the score card - Score card for displaying scores of each model and if score is bad it's background color should change try it with javascript | priority | add the score card score card for displaying scores of each model and if score is bad it s background color should change try it with javascript | 1 |
296,845 | 9,126,570,461 | IssuesEvent | 2019-02-24 22:33:52 | Game-technology-group-2/project-teleport | https://api.github.com/repos/Game-technology-group-2/project-teleport | closed | Implement the Assimp library | high priority | The project should use the Assimp library to import external 3D assets. | 1.0 | Implement the Assimp library - The project should use the Assimp library to import external 3D assets. | priority | implement the assimp library the project should use the assimp library to import external assets | 1 |
140,562 | 5,412,267,763 | IssuesEvent | 2017-03-01 14:08:52 | k0shk0sh/FastHub | https://api.github.com/repos/k0shk0sh/FastHub | closed | bug links in readme | bug Future release High Priority | clicking on a link [non GitHub] in a readme file fails and tries to open it as a repo instead of opening it inside a webview | 1.0 | bug links in readme - clicking on a link [non GitHub] in a readme file fails and tries to open it as a repo instead of opening it inside a webview | priority | bug links in readme clicking on a link in a readme file fails and tries to open it as a repo instead of opening it inside a webview | 1 |
399,283 | 11,746,001,930 | IssuesEvent | 2020-03-12 10:51:05 | JuliaDynamics/DrWatson.jl | https://api.github.com/repos/JuliaDynamics/DrWatson.jl | closed | Add new travis key | high priority | Hi @sebastianpech , @tamasgal , @asinghvi17 @JonasIsensee or whoever else uses Linux.
Can someone please add a new documenter key to our travis builds, as is instructed by here: https://juliadocs.github.io/Documenter.jl/stable/man/hosting/#travis-ssh-1 ?
our old key got erased somehow... I can't do it because this key generator doesn't work on windows.
thanks! | 1.0 | Add new travis key - Hi @sebastianpech , @tamasgal , @asinghvi17 @JonasIsensee or whoever else uses Linux.
Can someone please add a new documenter key to our travis builds, as is instructed by here: https://juliadocs.github.io/Documenter.jl/stable/man/hosting/#travis-ssh-1 ?
our old key got erased somehow... I can't do it because this key generator doesn't work on windows.
thanks! | priority | add new travis key hi sebastianpech tamasgal jonasisensee or whoever else uses linux can someone please add a new documenter key to our travis builds as is instructed by here our old key got erased somehow i can t do it because this key generator doesn t work on windows thanks | 1 |
208,693 | 7,157,455,341 | IssuesEvent | 2018-01-26 19:55:28 | broadinstitute/gatk | https://api.github.com/repos/broadinstitute/gatk | closed | GenomicsDBImport: add support for samples with spaces in the name | GenomicsDB PRIORITY_HIGH bug | We are unable to run joint genotyping on the cloud when it includes a sample with a space in the name; it fails when calling GATK's GenomicsDBImport functionality (see error log below).
This practice (samples with spaces in the name) is unfortunately not uncommon, so we've had to modify our various workflows one by one to add support for this case. Please update GenomicsDBImport to handle samples with a space in the name.
Example error log:
gsutil cat gs://broad-jg-dev-cromwell-execution/JointGenotyping/6918095f-ca06-4883-bcb5-f5c2e343bb6d/call-ImportGVCFs/shard-0/ImportGVCFs-0-stderr.log
Using GATK jar /usr/gitc/gatk-package-4.beta.6-local.jar
Running:
java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=1 -Xmx4g -Xms4g -jar /usr/gitc/gatk-package-4.beta.6-local.jar GenomicsDBImport --genomicsDBWorkspace genomicsdb --batchSize 50 -L chr1:1-391754 --sampleNameMap /cromwell_root/broad-jg-dev-storage/freimer_dutch_fin_wgs_v1/v1/sample_map --readerThreads 5 -ip 500
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/cromwell_root/tmp.H9t5pC
[December 14, 2017 7:41:30 PM UTC] GenomicsDBImport --genomicsDBWorkspace genomicsdb --batchSize 50 --sampleNameMap /cromwell_root/broad-jg-dev-storage/freimer_dutch_fin_wgs_v1/v1/sample_map --readerThreads 5 --intervals chr1:1-391754 --interval_padding 500 --genomicsDBSegmentSize 1048576 --genomicsDBVCFBufferSize 16384 --overwriteExistingGenomicsDBWorkspace false --consolidate false --validateSampleNameMap false --interval_set_rule UNION --interval_exclusion_padding 0 --interval_merging_rule ALL --readValidationStringency SILENT --secondsBetweenProgressUpdates 10.0 --disableSequenceDictionaryValidation false --createOutputBamIndex true --createOutputBamMD5 false --createOutputVariantIndex true --createOutputVariantMD5 false --lenient false --addOutputSAMProgramRecord true --addOutputVCFCommandLine true --cloudPrefetchBuffer 0 --cloudIndexPrefetchBuffer 0 --disableBamIndexCaching false --help false --version false --showHidden false --verbosity INFO --QUIET false --use_jdk_deflater false --use_jdk_inflater false --gcs_max_retries 20 --disableToolDefaultReadFilters false
[December 14, 2017 7:41:30 PM UTC] Executing as root@7ca892f01ff3 on Linux 4.9.0-0.bpo.3-amd64 amd64; OpenJDK 64-Bit Server VM 1.8.0_111-8u111-b14-2~bpo8+1-b14; Version: 4.beta.6
[December 14, 2017 7:41:30 PM UTC] org.broadinstitute.hellbender.tools.genomicsdb.GenomicsDBImport done. Elapsed time: 0.01 minutes.
Runtime.totalMemory()=4116185088
***********************************************************************
A USER ERROR has occurred: Bad input: Expected a file of format
Sample File
but found line: I-PAL_FR02_000639 001 gs://broad-gotc-prod-storage/pipeline/G87944/gvcfs/I-PAL_FR02_000639_001.023ca2f7-4fba-4617-9f65-cb989818c858.g.vcf.gz
***********************************************************************
Set the system property GATK_STACKTRACE_ON_USER_EXCEPTION (--javaOptions '-DGATK_STACKTRACE_ON_USER_EXCEPTION=true') to print the stack trace. | 1.0 | GenomicsDBImport: add support for samples with spaces in the name - We are unable to run joint genotyping on the cloud when it includes a sample with a space in the name; it fails when calling GATK's GenomicsDBImport functionality (see error log below).
This practice (samples with spaces in the name) is unfortunately not uncommon, so we've had to modify our various workflows one by one to add support for this case. Please update GenomicsDBImport to handle samples with a space in the name.
Example error log:
gsutil cat gs://broad-jg-dev-cromwell-execution/JointGenotyping/6918095f-ca06-4883-bcb5-f5c2e343bb6d/call-ImportGVCFs/shard-0/ImportGVCFs-0-stderr.log
Using GATK jar /usr/gitc/gatk-package-4.beta.6-local.jar
Running:
java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=1 -Xmx4g -Xms4g -jar /usr/gitc/gatk-package-4.beta.6-local.jar GenomicsDBImport --genomicsDBWorkspace genomicsdb --batchSize 50 -L chr1:1-391754 --sampleNameMap /cromwell_root/broad-jg-dev-storage/freimer_dutch_fin_wgs_v1/v1/sample_map --readerThreads 5 -ip 500
Picked up _JAVA_OPTIONS: -Djava.io.tmpdir=/cromwell_root/tmp.H9t5pC
[December 14, 2017 7:41:30 PM UTC] GenomicsDBImport --genomicsDBWorkspace genomicsdb --batchSize 50 --sampleNameMap /cromwell_root/broad-jg-dev-storage/freimer_dutch_fin_wgs_v1/v1/sample_map --readerThreads 5 --intervals chr1:1-391754 --interval_padding 500 --genomicsDBSegmentSize 1048576 --genomicsDBVCFBufferSize 16384 --overwriteExistingGenomicsDBWorkspace false --consolidate false --validateSampleNameMap false --interval_set_rule UNION --interval_exclusion_padding 0 --interval_merging_rule ALL --readValidationStringency SILENT --secondsBetweenProgressUpdates 10.0 --disableSequenceDictionaryValidation false --createOutputBamIndex true --createOutputBamMD5 false --createOutputVariantIndex true --createOutputVariantMD5 false --lenient false --addOutputSAMProgramRecord true --addOutputVCFCommandLine true --cloudPrefetchBuffer 0 --cloudIndexPrefetchBuffer 0 --disableBamIndexCaching false --help false --version false --showHidden false --verbosity INFO --QUIET false --use_jdk_deflater false --use_jdk_inflater false --gcs_max_retries 20 --disableToolDefaultReadFilters false
[December 14, 2017 7:41:30 PM UTC] Executing as root@7ca892f01ff3 on Linux 4.9.0-0.bpo.3-amd64 amd64; OpenJDK 64-Bit Server VM 1.8.0_111-8u111-b14-2~bpo8+1-b14; Version: 4.beta.6
[December 14, 2017 7:41:30 PM UTC] org.broadinstitute.hellbender.tools.genomicsdb.GenomicsDBImport done. Elapsed time: 0.01 minutes.
Runtime.totalMemory()=4116185088
***********************************************************************
A USER ERROR has occurred: Bad input: Expected a file of format
Sample File
but found line: I-PAL_FR02_000639 001 gs://broad-gotc-prod-storage/pipeline/G87944/gvcfs/I-PAL_FR02_000639_001.023ca2f7-4fba-4617-9f65-cb989818c858.g.vcf.gz
***********************************************************************
Set the system property GATK_STACKTRACE_ON_USER_EXCEPTION (--javaOptions '-DGATK_STACKTRACE_ON_USER_EXCEPTION=true') to print the stack trace. | priority | genomicsdbimport add support for samples with spaces in the name we are unable to run joint genotyping on the cloud when it includes a sample with a space in the name it fails when calling gatk s genomicsdbimport functionality see error log below this practice samples with spaces in the name is unfortunately not uncommon so we ve had to modify our various workflows one by one to add support for this case please update genomicsdbimport to handle samples with a space in the name example error log gsutil cat gs broad jg dev cromwell execution jointgenotyping call importgvcfs shard importgvcfs stderr log using gatk jar usr gitc gatk package beta local jar running java dsamjdk use async io read samtools false dsamjdk use async io write samtools true dsamjdk use async io write tribble false dsamjdk compression level jar usr gitc gatk package beta local jar genomicsdbimport genomicsdbworkspace genomicsdb batchsize l samplenamemap cromwell root broad jg dev storage freimer dutch fin wgs sample map readerthreads ip picked up java options djava io tmpdir cromwell root tmp genomicsdbimport genomicsdbworkspace genomicsdb batchsize samplenamemap cromwell root broad jg dev storage freimer dutch fin wgs sample map readerthreads intervals interval padding genomicsdbsegmentsize genomicsdbvcfbuffersize overwriteexistinggenomicsdbworkspace false consolidate false validatesamplenamemap false interval set rule union interval exclusion padding interval merging rule all readvalidationstringency silent secondsbetweenprogressupdates disablesequencedictionaryvalidation false createoutputbamindex true false createoutputvariantindex true false lenient false addoutputsamprogramrecord true addoutputvcfcommandline true cloudprefetchbuffer cloudindexprefetchbuffer disablebamindexcaching false help false version false showhidden false verbosity info quiet false use jdk deflater false use jdk inflater false gcs max retries disabletooldefaultreadfilters false executing as root on linux bpo openjdk bit server vm version beta org broadinstitute hellbender tools genomicsdb genomicsdbimport done elapsed time minutes runtime totalmemory a user error has occurred bad input expected a file of format sample file but found line i pal gs broad gotc prod storage pipeline gvcfs i pal g vcf gz set the system property gatk stacktrace on user exception javaoptions dgatk stacktrace on user exception true to print the stack trace | 1 |
508,015 | 14,688,346,792 | IssuesEvent | 2021-01-02 02:06:18 | blchelle/collabogreat | https://api.github.com/repos/blchelle/collabogreat | closed | Design/Implement UI For the "Task Card" Component | Priority: High Status: In Progress Type: Enhancement | ## Description
The BoardCard component is used in ProjectBoard Page of the app. The BoardCard component should give the following information
* The name of the associated test
* The description of the associated task
* The name and image of the user who is assigned to the task
The board card app should also allow the user to
* Change the status of the associated task (already implemented on the front-end, will be completed in #57)
* Delete the task
* Edit the task | 1.0 | Design/Implement UI For the "Task Card" Component - ## Description
The BoardCard component is used in ProjectBoard Page of the app. The BoardCard component should give the following information
* The name of the associated test
* The description of the associated task
* The name and image of the user who is assigned to the task
The board card app should also allow the user to
* Change the status of the associated task (already implemented on the front-end, will be completed in #57)
* Delete the task
* Edit the task | priority | design implement ui for the task card component description the boardcard component is used in projectboard page of the app the boardcard component should give the following information the name of the associated test the description of the associated task the name and image of the user who is assigned to the task the board card app should also allow the user to change the status of the associated task already implemented on the front end will be completed in delete the task edit the task | 1 |
811,857 | 30,302,999,789 | IssuesEvent | 2023-07-10 07:34:31 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | opened | yb-tserver crash: Segmentation fault during sidecars flush operation | area/docdb priority/high status/awaiting-triage | ### Description
Test1:
http://stress.dev.yugabyte.com/stress_test/da7af025-6c6c-4319-9b21-64af6a27bb7d
Test2:
http://stress.dev.yugabyte.com/stress_test/b5c4b879-cdeb-4578-91a9-3a957b515760
```
* thread #1, name = 'yb-tserver', stop reason = signal SIGSEGV
* frame #0: 0x000055e27683b395 yb-tserver`yb::rpc::Sidecars::Flush(boost::container::small_vector_base<yb::RefCntSlice, void, void>*) [inlined] yb::WriteBuffer::Block::Shrink(this=<unavailable>, size=<unavailable>) at write_buffer.h:131:27
frame #1: 0x000055e27683b395 yb-tserver`yb::rpc::Sidecars::Flush(boost::container::small_vector_base<yb::RefCntSlice, void, void>*) [inlined] yb::WriteBuffer::ShrinkLastBlock(this=0x000026887d9d7c28) at write_buffer.cc:138:20
frame #2: 0x000055e27683b364 yb-tserver`yb::rpc::Sidecars::Flush(boost::container::small_vector_base<yb::RefCntSlice, void, void>*) [inlined] yb::WriteBuffer::Flush(this=0x000026887d9d7c28, output=<unavailable>) at write_buffer.cc:184:3
frame #3: 0x000055e27683b364 yb-tserver`yb::rpc::Sidecars::Flush(this=0x000026887d9d7c20, output=0x000026887cb9b160) at sidecars.cc:110:11
frame #4: 0x000055e276789e26 yb-tserver`yb::rpc::OutboundCall::Serialize(this=0x00002688791ab420, output=0x000026887cb9b160) at outbound_call.cc:212:14
frame #5: 0x000055e27683d62a yb-tserver`yb::rpc::TcpStream::Send(std::__1::shared_ptr<yb::rpc::OutboundData>) [inlined] yb::rpc::TcpStreamSendingData::TcpStreamSendingData(this=0x000026887cb9b150, data_=<unavailable>, mem_tracker=std::__1::shared_ptr<yb::MemTracker>::element_type @ 0x000026887f9206e0) at tcp_stream.cc:530:9
frame #6: 0x000055e27683d5ee yb-tserver`yb::rpc::TcpStream::Send(std::__1::shared_ptr<yb::rpc::OutboundData>) [inlined] yb::rpc::TcpStreamSendingData* std::__1::construct_at[abi:v15007]<yb::rpc::TcpStreamSendingData, std::__1::shared_ptr<yb::rpc::OutboundData>, std::__1::shared_ptr<yb::MemTracker>&, yb::rpc::TcpStreamSendingData*>(__location=0x000026887cb9b150, __args=<unavailable>, __args=std::__1::shared_ptr<yb::MemTracker>::element_type @ 0x000026887f9206e0) at construct_at.h:35:48
frame #7: 0x000055e27683d5dd yb-tserver`yb::rpc::TcpStream::Send(std::__1::shared_ptr<yb::rpc::OutboundData>) [inlined] void std::__1::allocator_traits<std::__1::allocator<yb::rpc::TcpStreamSendingData>>::construct[abi:v15007]<yb::rpc::TcpStreamSendingData, std::__1::shared_ptr<yb::rpc::OutboundData>, std::__1::shared_ptr<yb::MemTracker>&, void, void>((null)=0x000026887f8e9c88, __p=0x000026887cb9b150, __args=<unavailable>, __args=std::__1::shared_ptr<yb::MemTracker>::element_type @ 0x000026887f9206e0) at allocator_traits.h:298:9
frame #8: 0x000055e27683d5dd yb-tserver`yb::rpc::TcpStream::Send(std::__1::shared_ptr<yb::rpc::OutboundData>) [inlined] yb::rpc::TcpStreamSendingData& std::__1::deque<yb::rpc::TcpStreamSendingData, std::__1::allocator<yb::rpc::TcpStreamSendingData>>::emplace_back<std::__1::shared_ptr<yb::rpc::OutboundData>, std::__1::shared_ptr<yb::MemTracker>&>(this=0x000026887f8e9c60, __args=<unavailable>, __args=std::__1::shared_ptr<yb::MemTracker>::element_type @ 0x000026887f9206e0) at deque:1984:5
frame #9: 0x000055e27683cbe9 yb-tserver`yb::rpc::TcpStream::Send(this=0x000026887f8e9b80, data=<unavailable>) at tcp_stream.cc:468:12
frame #10: 0x000055e2767a2c0c yb-tserver`yb::rpc::RefinedStream::Send(this=<unavailable>, data=<unavailable>) at refined_stream.cc:89:27
frame #11: 0x000055e2767a4b37 yb-tserver`yb::rpc::RefinedStream::Established(this=0x00002688775b8000, state=<unavailable>) at refined_stream.cc:215:5
frame #12: 0x000055e276763fbf yb-tserver`yb::rpc::(anonymous namespace)::CompressedRefiner::Handshake(this=0x0000268878bd01c0) at compressed_stream.cc:0
frame #13: 0x000055e2767a3993 yb-tserver`yb::rpc::RefinedStream::Connected(this=0x00002688775b8000) at refined_stream.cc:172:24
frame #14: 0x000055e276841513 yb-tserver`yb::rpc::TcpStream::Handler(this=0x000026887f8e9b80, watcher=<unavailable>, revents=<unavailable>) at tcp_stream.cc:286:17
frame #15: 0x000055e275b4f81b yb-tserver`ev_invoke_pending + 91
frame #16: 0x000055e275b52b99 yb-tserver`ev_run + 3577
frame #17: 0x000055e27679e147 yb-tserver`yb::rpc::Reactor::RunThread() [inlined] ev::loop_ref::run(this=0x000026887f9be638, flags=0) at ev++.h:211:7
frame #18: 0x000055e27679e13e yb-tserver`yb::rpc::Reactor::RunThread(this=0x000026887f9be600) at reactor.cc:630:9
frame #19: 0x000055e276f40f22 yb-tserver`yb::Thread::SuperviseThread(void*) [inlined] std::__1::__function::__value_func<void ()>::operator(this=0x000026887f939800)[abi:v15007]() const at function.h:512:16
frame #20: 0x000055e276f40f0c yb-tserver`yb::Thread::SuperviseThread(void*) [inlined] std::__1::function<void ()>::operator(this=0x000026887f939800)() const at function.h:1197:12
frame #21: 0x000055e276f40f0c yb-tserver`yb::Thread::SuperviseThread(arg=0x000026887f9397a0) at thread.cc:842:3
frame #22: 0x00007f058c994694 libpthread.so.0`start_thread(arg=0x00007f058555f700) at pthread_create.c:333
frame #23: 0x00007f058cc9141d libc.so.6`__clone at clone.S:109
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information. | 1.0 | yb-tserver crash: Segmentation fault during sidecars flush operation - ### Description
Test1:
http://stress.dev.yugabyte.com/stress_test/da7af025-6c6c-4319-9b21-64af6a27bb7d
Test2:
http://stress.dev.yugabyte.com/stress_test/b5c4b879-cdeb-4578-91a9-3a957b515760
```
* thread #1, name = 'yb-tserver', stop reason = signal SIGSEGV
* frame #0: 0x000055e27683b395 yb-tserver`yb::rpc::Sidecars::Flush(boost::container::small_vector_base<yb::RefCntSlice, void, void>*) [inlined] yb::WriteBuffer::Block::Shrink(this=<unavailable>, size=<unavailable>) at write_buffer.h:131:27
frame #1: 0x000055e27683b395 yb-tserver`yb::rpc::Sidecars::Flush(boost::container::small_vector_base<yb::RefCntSlice, void, void>*) [inlined] yb::WriteBuffer::ShrinkLastBlock(this=0x000026887d9d7c28) at write_buffer.cc:138:20
frame #2: 0x000055e27683b364 yb-tserver`yb::rpc::Sidecars::Flush(boost::container::small_vector_base<yb::RefCntSlice, void, void>*) [inlined] yb::WriteBuffer::Flush(this=0x000026887d9d7c28, output=<unavailable>) at write_buffer.cc:184:3
frame #3: 0x000055e27683b364 yb-tserver`yb::rpc::Sidecars::Flush(this=0x000026887d9d7c20, output=0x000026887cb9b160) at sidecars.cc:110:11
frame #4: 0x000055e276789e26 yb-tserver`yb::rpc::OutboundCall::Serialize(this=0x00002688791ab420, output=0x000026887cb9b160) at outbound_call.cc:212:14
frame #5: 0x000055e27683d62a yb-tserver`yb::rpc::TcpStream::Send(std::__1::shared_ptr<yb::rpc::OutboundData>) [inlined] yb::rpc::TcpStreamSendingData::TcpStreamSendingData(this=0x000026887cb9b150, data_=<unavailable>, mem_tracker=std::__1::shared_ptr<yb::MemTracker>::element_type @ 0x000026887f9206e0) at tcp_stream.cc:530:9
frame #6: 0x000055e27683d5ee yb-tserver`yb::rpc::TcpStream::Send(std::__1::shared_ptr<yb::rpc::OutboundData>) [inlined] yb::rpc::TcpStreamSendingData* std::__1::construct_at[abi:v15007]<yb::rpc::TcpStreamSendingData, std::__1::shared_ptr<yb::rpc::OutboundData>, std::__1::shared_ptr<yb::MemTracker>&, yb::rpc::TcpStreamSendingData*>(__location=0x000026887cb9b150, __args=<unavailable>, __args=std::__1::shared_ptr<yb::MemTracker>::element_type @ 0x000026887f9206e0) at construct_at.h:35:48
frame #7: 0x000055e27683d5dd yb-tserver`yb::rpc::TcpStream::Send(std::__1::shared_ptr<yb::rpc::OutboundData>) [inlined] void std::__1::allocator_traits<std::__1::allocator<yb::rpc::TcpStreamSendingData>>::construct[abi:v15007]<yb::rpc::TcpStreamSendingData, std::__1::shared_ptr<yb::rpc::OutboundData>, std::__1::shared_ptr<yb::MemTracker>&, void, void>((null)=0x000026887f8e9c88, __p=0x000026887cb9b150, __args=<unavailable>, __args=std::__1::shared_ptr<yb::MemTracker>::element_type @ 0x000026887f9206e0) at allocator_traits.h:298:9
frame #8: 0x000055e27683d5dd yb-tserver`yb::rpc::TcpStream::Send(std::__1::shared_ptr<yb::rpc::OutboundData>) [inlined] yb::rpc::TcpStreamSendingData& std::__1::deque<yb::rpc::TcpStreamSendingData, std::__1::allocator<yb::rpc::TcpStreamSendingData>>::emplace_back<std::__1::shared_ptr<yb::rpc::OutboundData>, std::__1::shared_ptr<yb::MemTracker>&>(this=0x000026887f8e9c60, __args=<unavailable>, __args=std::__1::shared_ptr<yb::MemTracker>::element_type @ 0x000026887f9206e0) at deque:1984:5
frame #9: 0x000055e27683cbe9 yb-tserver`yb::rpc::TcpStream::Send(this=0x000026887f8e9b80, data=<unavailable>) at tcp_stream.cc:468:12
frame #10: 0x000055e2767a2c0c yb-tserver`yb::rpc::RefinedStream::Send(this=<unavailable>, data=<unavailable>) at refined_stream.cc:89:27
frame #11: 0x000055e2767a4b37 yb-tserver`yb::rpc::RefinedStream::Established(this=0x00002688775b8000, state=<unavailable>) at refined_stream.cc:215:5
frame #12: 0x000055e276763fbf yb-tserver`yb::rpc::(anonymous namespace)::CompressedRefiner::Handshake(this=0x0000268878bd01c0) at compressed_stream.cc:0
frame #13: 0x000055e2767a3993 yb-tserver`yb::rpc::RefinedStream::Connected(this=0x00002688775b8000) at refined_stream.cc:172:24
frame #14: 0x000055e276841513 yb-tserver`yb::rpc::TcpStream::Handler(this=0x000026887f8e9b80, watcher=<unavailable>, revents=<unavailable>) at tcp_stream.cc:286:17
frame #15: 0x000055e275b4f81b yb-tserver`ev_invoke_pending + 91
frame #16: 0x000055e275b52b99 yb-tserver`ev_run + 3577
frame #17: 0x000055e27679e147 yb-tserver`yb::rpc::Reactor::RunThread() [inlined] ev::loop_ref::run(this=0x000026887f9be638, flags=0) at ev++.h:211:7
frame #18: 0x000055e27679e13e yb-tserver`yb::rpc::Reactor::RunThread(this=0x000026887f9be600) at reactor.cc:630:9
frame #19: 0x000055e276f40f22 yb-tserver`yb::Thread::SuperviseThread(void*) [inlined] std::__1::__function::__value_func<void ()>::operator(this=0x000026887f939800)[abi:v15007]() const at function.h:512:16
frame #20: 0x000055e276f40f0c yb-tserver`yb::Thread::SuperviseThread(void*) [inlined] std::__1::function<void ()>::operator(this=0x000026887f939800)() const at function.h:1197:12
frame #21: 0x000055e276f40f0c yb-tserver`yb::Thread::SuperviseThread(arg=0x000026887f9397a0) at thread.cc:842:3
frame #22: 0x00007f058c994694 libpthread.so.0`start_thread(arg=0x00007f058555f700) at pthread_create.c:333
frame #23: 0x00007f058cc9141d libc.so.6`__clone at clone.S:109
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information. | priority | yb tserver crash segmentation fault during sidecars flush operation description thread name yb tserver stop reason signal sigsegv frame yb tserver yb rpc sidecars flush boost container small vector base yb writebuffer block shrink this size at write buffer h frame yb tserver yb rpc sidecars flush boost container small vector base yb writebuffer shrinklastblock this at write buffer cc frame yb tserver yb rpc sidecars flush boost container small vector base yb writebuffer flush this output at write buffer cc frame yb tserver yb rpc sidecars flush this output at sidecars cc frame yb tserver yb rpc outboundcall serialize this output at outbound call cc frame yb tserver yb rpc tcpstream send std shared ptr yb rpc tcpstreamsendingdata tcpstreamsendingdata this data mem tracker std shared ptr element type at tcp stream cc frame yb tserver yb rpc tcpstream send std shared ptr yb rpc tcpstreamsendingdata std construct at std shared ptr yb rpc tcpstreamsendingdata location args args std shared ptr element type at construct at h frame yb tserver yb rpc tcpstream send std shared ptr void std allocator traits construct std shared ptr void void null p args args std shared ptr element type at allocator traits h frame yb tserver yb rpc tcpstream send std shared ptr yb rpc tcpstreamsendingdata std deque emplace back std shared ptr this args args std shared ptr element type at deque frame yb tserver yb rpc tcpstream send this data at tcp stream cc frame yb tserver yb rpc refinedstream send this data at refined stream cc frame yb tserver yb rpc refinedstream established this state at refined stream cc frame yb tserver yb rpc anonymous namespace compressedrefiner handshake this at compressed stream cc frame yb tserver yb rpc refinedstream connected this at refined stream cc frame yb tserver yb rpc tcpstream handler this watcher revents at tcp stream cc frame yb tserver ev invoke pending frame yb tserver ev run frame yb tserver yb rpc reactor runthread ev loop ref run this flags at ev h frame yb tserver yb rpc reactor runthread this at reactor cc frame yb tserver yb thread supervisethread void std function value func operator this const at function h frame yb tserver yb thread supervisethread void std function operator this const at function h frame yb tserver yb thread supervisethread arg at thread cc frame libpthread so start thread arg at pthread create c frame libc so clone at clone s warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
680,158 | 23,260,969,731 | IssuesEvent | 2022-08-04 13:28:51 | DDMAL/cantus | https://api.github.com/repos/DDMAL/cantus | closed | Improve volpiano spacing | High Priority | Volpiano excerpts sometimes have overlapping text.
<img width="440" alt="image" src="https://user-images.githubusercontent.com/11023634/180817413-0de38f70-2848-43fe-b1e7-74c4ba070192.png">
| 1.0 | Improve volpiano spacing - Volpiano excerpts sometimes have overlapping text.
<img width="440" alt="image" src="https://user-images.githubusercontent.com/11023634/180817413-0de38f70-2848-43fe-b1e7-74c4ba070192.png">
| priority | improve volpiano spacing volpiano excerpts sometimes have overlapping text img width alt image src | 1 |
457,827 | 13,162,965,683 | IssuesEvent | 2020-08-10 22:56:40 | zulip/zulip-mobile | https://api.github.com/repos/zulip/zulip-mobile | closed | When message fetch fails, show error instead of loading-animation | P1 high-priority a-data-sync a-message list | When you go to read a conversation, we load the messages that are in it if we don't already have them. While we're doing that fetch, we show a "loading" animation of placeholder messages. But then if that fetch fails with an exception, we carry on showing that loading-animation forever.
That's bad because after the fetch has failed, we are not in fact still working on loading the messages, so the animation is effectively telling the user something that isn't true. It's also misleading when trying to debug, as it obscures the fact there was an error and not just something taking a long time.
Cases where we've seen this come up recently:
* #4156 [turned out to be](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/.23M4156.20Message.20List.20placeholders) a server bug, where the server returned garbled data on fetching any messages that had emoji reactions.
* We'd actually seen this bug [before](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/Streams.20not.20loading.20in.20android/near/883392), when it was live on chat.zulip.org and caused the same symptom. It was fixed within a couple of days, but perhaps unsurprisingly there was a deployment that happened to have upgraded to master within that window of a couple of days, and stayed there.
* #4033 may in part reflect another case of this -- at least when the loading is "endless" and not merely long. (Though because that one goes away on quitting and relaunching the app, it's definitely not the same server bug and probably is a purely client-side bug.)
Instead, when the fetch fails we should show a widget that's not animated and says there was an error. Preferably also with a button (low-emphasis, like a [text button](https://material.io/components/buttons#text-button)) to retry.
Some chat discussion of how to implement this starts [here](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/.23M4156.20Message.20List.20placeholders/near/928698), and particularly [here](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/.23M4156.20Message.20List.20placeholders/near/928758) and after.
| 1.0 | When message fetch fails, show error instead of loading-animation - When you go to read a conversation, we load the messages that are in it if we don't already have them. While we're doing that fetch, we show a "loading" animation of placeholder messages. But then if that fetch fails with an exception, we carry on showing that loading-animation forever.
That's bad because after the fetch has failed, we are not in fact still working on loading the messages, so the animation is effectively telling the user something that isn't true. It's also misleading when trying to debug, as it obscures the fact there was an error and not just something taking a long time.
Cases where we've seen this come up recently:
* #4156 [turned out to be](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/.23M4156.20Message.20List.20placeholders) a server bug, where the server returned garbled data on fetching any messages that had emoji reactions.
* We'd actually seen this bug [before](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/Streams.20not.20loading.20in.20android/near/883392), when it was live on chat.zulip.org and caused the same symptom. It was fixed within a couple of days, but perhaps unsurprisingly there was a deployment that happened to have upgraded to master within that window of a couple of days, and stayed there.
* #4033 may in part reflect another case of this -- at least when the loading is "endless" and not merely long. (Though because that one goes away on quitting and relaunching the app, it's definitely not the same server bug and probably is a purely client-side bug.)
Instead, when the fetch fails we should show a widget that's not animated and says there was an error. Preferably also with a button (low-emphasis, like a [text button](https://material.io/components/buttons#text-button)) to retry.
Some chat discussion of how to implement this starts [here](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/.23M4156.20Message.20List.20placeholders/near/928698), and particularly [here](https://chat.zulip.org/#narrow/stream/243-mobile-team/topic/.23M4156.20Message.20List.20placeholders/near/928758) and after.
| priority | when message fetch fails show error instead of loading animation when you go to read a conversation we load the messages that are in it if we don t already have them while we re doing that fetch we show a loading animation of placeholder messages but then if that fetch fails with an exception we carry on showing that loading animation forever that s bad because after the fetch has failed we are not in fact still working on loading the messages so the animation is effectively telling the user something that isn t true it s also misleading when trying to debug as it obscures the fact there was an error and not just something taking a long time cases where we ve seen this come up recently a server bug where the server returned garbled data on fetching any messages that had emoji reactions we d actually seen this bug when it was live on chat zulip org and caused the same symptom it was fixed within a couple of days but perhaps unsurprisingly there was a deployment that happened to have upgraded to master within that window of a couple of days and stayed there may in part reflect another case of this at least when the loading is endless and not merely long though because that one goes away on quitting and relaunching the app it s definitely not the same server bug and probably is a purely client side bug instead when the fetch fails we should show a widget that s not animated and says there was an error preferably also with a button low emphasis like a to retry some chat discussion of how to implement this starts and particularly and after | 1 |
85,495 | 3,691,026,643 | IssuesEvent | 2016-02-25 22:14:16 | littleweaver/django-brambling | https://api.github.com/repos/littleweaver/django-brambling | opened | Move deployment code into django-brambling | high priority | This includes making a fabfile and copying over the salt code from the dancerfly.com repo. And some additional documentation. | 1.0 | Move deployment code into django-brambling - This includes making a fabfile and copying over the salt code from the dancerfly.com repo. And some additional documentation. | priority | move deployment code into django brambling this includes making a fabfile and copying over the salt code from the dancerfly com repo and some additional documentation | 1 |
414,082 | 12,098,617,444 | IssuesEvent | 2020-04-20 10:38:03 | ooni/probe | https://api.github.com/repos/ooni/probe | closed | Test card description font size changes at launch | bug effort/S ooni/probe-desktop priority/high | ## Expected Behavior
The font size should be applied before rendering the descriptions (or other areas).
## Actual Behavior
Text is rendered with some default font size when the window opens and then gets updated to a larger size within the first second.
It seems like this is because nextjs fails to apply styles on server side and sends additional CSS to be applied after the window finishes loading. We seemed to addressed this as per the [`with-styled-components`](https://github.com/zeit/next.js/tree/canary/examples/with-styled-components) example for nextjs. The implementation is slightly different in the newer version of the same example.
## Steps to Reproduce the Problem
1. Launch app
## Specifications
- Version: 3.0.0-rc.7
- Platform: all
| 1.0 | Test card description font size changes at launch - ## Expected Behavior
The font size should be applied before rendering the descriptions (or other areas).
## Actual Behavior
Text is rendered with some default font size when the window opens and then gets updated to a larger size within the first second.
It seems like this is because nextjs fails to apply styles on server side and sends additional CSS to be applied after the window finishes loading. We seemed to addressed this as per the [`with-styled-components`](https://github.com/zeit/next.js/tree/canary/examples/with-styled-components) example for nextjs. The implementation is slightly different in the newer version of the same example.
## Steps to Reproduce the Problem
1. Launch app
## Specifications
- Version: 3.0.0-rc.7
- Platform: all
| priority | test card description font size changes at launch expected behavior the font size should be applied before rendering the descriptions or other areas actual behavior text is rendered with some default font size when the window opens and then gets updated to a larger size within the first second it seems like this is because nextjs fails to apply styles on server side and sends additional css to be applied after the window finishes loading we seemed to addressed this as per the example for nextjs the implementation is slightly different in the newer version of the same example steps to reproduce the problem launch app specifications version rc platform all | 1 |
642,137 | 20,868,235,330 | IssuesEvent | 2022-03-22 09:29:21 | AY2122S2-CS2103-F11-1/tp | https://api.github.com/repos/AY2122S2-CS2103-F11-1/tp | closed | Tagging Task to Employees & Priority Tagging for Tasks | enhancement priority.High Logic Model Storage | 1. Implement ability to tag Tasks to Employees
- Tag Model needed
- Storage Affected
- Tag command is needed
- UI needs update
2. Task could be tagged to differenet Priority such as "Low", "High", "Medium"
| 1.0 | Tagging Task to Employees & Priority Tagging for Tasks - 1. Implement ability to tag Tasks to Employees
- Tag Model needed
- Storage Affected
- Tag command is needed
- UI needs update
2. Task could be tagged to differenet Priority such as "Low", "High", "Medium"
| priority | tagging task to employees priority tagging for tasks implement ability to tag tasks to employees tag model needed storage affected tag command is needed ui needs update task could be tagged to differenet priority such as low high medium | 1 |
166,352 | 6,303,262,535 | IssuesEvent | 2017-07-21 13:18:43 | der-On/XPlane2Blender | https://api.github.com/repos/der-On/XPlane2Blender | closed | Problem with exported lights | Bug priority high | I just tried to export the Rotate MD-80 with the last version of the exporter. Only one issue came up until now. The exported exterior lights are different than with previous versions of the exporter.
Attached are a test blender file, and two exported objs, with the different exporters. I have used version 3.20 (this is working ok) and the last version 3.3.13. (I took it from here
https://github.com/der-On/XPlane2Blender/tree/259-fix-attribute-reset)
In the last version of the exporter, the lights point somewhere unexpected.
[Rotate-MD-80-lights-test.zip](https://github.com/der-On/XPlane2Blender/files/1092103/Rotate-MD-80-lights-test.zip)
The exterior lights you find in the blend file, correspond to the configuration that was found to give better results in the former 3.20, that is, pointing the lights using bones to direct them, and the internal light's parameters to control aperture, color, ...
I am not sure if a better workflow is recommended now with the new versions of the exporter.
Please, let me know if you need me to provide more information.
| 1.0 | Problem with exported lights - I just tried to export the Rotate MD-80 with the last version of the exporter. Only one issue came up until now. The exported exterior lights are different than with previous versions of the exporter.
Attached are a test blender file, and two exported objs, with the different exporters. I have used version 3.20 (this is working ok) and the last version 3.3.13. (I took it from here
https://github.com/der-On/XPlane2Blender/tree/259-fix-attribute-reset)
In the last version of the exporter, the lights point somewhere unexpected.
[Rotate-MD-80-lights-test.zip](https://github.com/der-On/XPlane2Blender/files/1092103/Rotate-MD-80-lights-test.zip)
The exterior lights you find in the blend file, correspond to the configuration that was found to give better results in the former 3.20, that is, pointing the lights using bones to direct them, and the internal light's parameters to control aperture, color, ...
I am not sure if a better workflow is recommended now with the new versions of the exporter.
Please, let me know if you need me to provide more information.
| priority | problem with exported lights i just tried to export the rotate md with the last version of the exporter only one issue came up until now the exported exterior lights are different than with previous versions of the exporter attached are a test blender file and two exported objs with the different exporters i have used version this is working ok and the last version i took it from here in the last version of the exporter the lights point somewhere unexpected the exterior lights you find in the blend file correspond to the configuration that was found to give better results in the former that is pointing the lights using bones to direct them and the internal light s parameters to control aperture color i am not sure if a better workflow is recommended now with the new versions of the exporter please let me know if you need me to provide more information | 1 |
20,811 | 2,631,365,598 | IssuesEvent | 2015-03-07 01:21:14 | OSU-Net/cyder | https://api.github.com/repos/OSU-Net/cyder | closed | Enter key doesn't properly submit forms | bug CRITICAL high priority | In some fields, it does nothing. In others, including the Domain form name field, it causes a CSRF failure. | 1.0 | Enter key doesn't properly submit forms - In some fields, it does nothing. In others, including the Domain form name field, it causes a CSRF failure. | priority | enter key doesn t properly submit forms in some fields it does nothing in others including the domain form name field it causes a csrf failure | 1 |
360,754 | 10,696,932,124 | IssuesEvent | 2019-10-23 15:33:42 | canonical-web-and-design/ubuntu.com | https://api.github.com/repos/canonical-web-and-design/ubuntu.com | closed | /advantage page displays truncated instructions with my account | Priority: High | 
Hopefully the screenshot above shows where it breaks down. | 1.0 | /advantage page displays truncated instructions with my account - 
Hopefully the screenshot above shows where it breaks down. | priority | advantage page displays truncated instructions with my account hopefully the screenshot above shows where it breaks down | 1 |
271,474 | 8,484,112,065 | IssuesEvent | 2018-10-26 00:41:15 | naveego/plugin-pub-mssql | https://api.github.com/repos/naveego/plugin-pub-mssql | opened | Auto-shape discovery isn't working | bug priority: high | While working with the MSSQL plugin lately I noticed that the auto-discovery has stopped working. For a little while the *Add Discovered Shapes* button was being populated with the tables and views, but the past few days it doesn't seem to be working.

### Chrome Console Errors


| 1.0 | Auto-shape discovery isn't working - While working with the MSSQL plugin lately I noticed that the auto-discovery has stopped working. For a little while the *Add Discovered Shapes* button was being populated with the tables and views, but the past few days it doesn't seem to be working.

### Chrome Console Errors


| priority | auto shape discovery isn t working while working with the mssql plugin lately i noticed that the auto discovery has stopped working for a little while the add discovered shapes button was being populated with the tables and views but the past few days it doesn t seem to be working chrome console errors | 1 |
342,462 | 10,317,458,942 | IssuesEvent | 2019-08-30 12:47:08 | erxes/erxes | https://api.github.com/repos/erxes/erxes | closed | Number of segments incorrect | priority: High type: bug | **To Reproduce**
Steps to reproduce the behavior:
1. Go to 'nmg.app.erxes.io>Customer detail'
2. See the sidebar > Segment section> Gerege.agency > 2829
3. Click on Manage segments > Gerege.agency> 0
4. Error: Number is not equal


| 1.0 | Number of segments incorrect - **To Reproduce**
Steps to reproduce the behavior:
1. Go to 'nmg.app.erxes.io>Customer detail'
2. See the sidebar > Segment section> Gerege.agency > 2829
3. Click on Manage segments > Gerege.agency> 0
4. Error: Number is not equal


| priority | number of segments incorrect to reproduce steps to reproduce the behavior go to nmg app erxes io customer detail see the sidebar segment section gerege agency click on manage segments gerege agency error number is not equal | 1 |
689,377 | 23,618,252,418 | IssuesEvent | 2022-08-24 17:53:03 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | Release v1.7.0 follow-up | component: distribution priority: high | Following up from #17758.
Please manually tag docker images and upload the releases to S3.
```
binary date: 20220822
commit SHA: 7abea0556ede980a5077fe1a8cfbae59b57c7c27
release tag: v1.7.0
``` | 1.0 | Release v1.7.0 follow-up - Following up from #17758.
Please manually tag docker images and upload the releases to S3.
```
binary date: 20220822
commit SHA: 7abea0556ede980a5077fe1a8cfbae59b57c7c27
release tag: v1.7.0
``` | priority | release follow up following up from please manually tag docker images and upload the releases to binary date commit sha release tag | 1 |
110,964 | 4,446,001,014 | IssuesEvent | 2016-08-20 11:41:12 | dmusican/Elegit | https://api.github.com/repos/dmusican/Elegit | opened | Permission from users to log usage | bug priority high | We need to make sure we ask permission from users the first time they start Elegit if it's ok to log usage data, and if not ok, to not do it. A popup when they start Elegit the first time would do it; store it in a user preference.
Secondarily, we should have a preferences screen where they can change this option. We'll undoubtedly have other preferences to add later. | 1.0 | Permission from users to log usage - We need to make sure we ask permission from users the first time they start Elegit if it's ok to log usage data, and if not ok, to not do it. A popup when they start Elegit the first time would do it; store it in a user preference.
Secondarily, we should have a preferences screen where they can change this option. We'll undoubtedly have other preferences to add later. | priority | permission from users to log usage we need to make sure we ask permission from users the first time they start elegit if it s ok to log usage data and if not ok to not do it a popup when they start elegit the first time would do it store it in a user preference secondarily we should have a preferences screen where they can change this option we ll undoubtedly have other preferences to add later | 1 |
788,577 | 27,757,379,346 | IssuesEvent | 2023-03-16 04:28:55 | AY2223S2-CS2103T-T13-4/tp | https://api.github.com/repos/AY2223S2-CS2103T-T13-4/tp | closed | As a user, I can clear the entire bookmark library | type.Story priority.High | ... so that I may restart from an empty bookmark library. | 1.0 | As a user, I can clear the entire bookmark library - ... so that I may restart from an empty bookmark library. | priority | as a user i can clear the entire bookmark library so that i may restart from an empty bookmark library | 1 |
338,142 | 10,224,844,969 | IssuesEvent | 2019-08-16 13:48:23 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | Add page to update existing plan with PlanForm | Priority: High | We need to add the ability to update existing plans. | 1.0 | Add page to update existing plan with PlanForm - We need to add the ability to update existing plans. | priority | add page to update existing plan with planform we need to add the ability to update existing plans | 1 |
284,174 | 8,736,307,140 | IssuesEvent | 2018-12-11 19:10:26 | aowen87/TicketTester | https://api.github.com/repos/aowen87/TicketTester | closed | SPH Operator not working with the Volume Plot | bug crash likelihood high priority reviewed severity high wrong results | The SPH Operator is generating a SEGV when applied to a Volume Plot.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2469
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: SPH Operator not working with the Volume Plot
Assigned to: Kevin Griffin
Category:
Target version: 2.10.1
Author: Kevin Griffin
Start: 12/03/2015
Due date:
% Done: 100
Estimated time:
Created: 12/03/2015 11:20 pm
Updated: 01/14/2016 08:31 pm
Likelihood: 5 - Always
Severity: 4 - Crash / Wrong Results
Found in version: 2.10.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
The SPH Operator is generating a SEGV when applied to a Volume Plot.
Comments:
Hello:Ive fixed the issue with the SPH Resample Operator not working with the Volume Plot.2.10RC:Sending operators/SPHResample/avtSPHResampleFilter.CSending resources/help/en_US/relnotes2.10.1.htmlTransmitting file data ..Committed revision 27928.Trunk:Sending operators/SPHResample/avtSPHResampleFilter.CSending resources/help/en_US/relnotes2.10.1.htmlTransmitting file data ..Committed revision 27930.-Kevin
| 1.0 | SPH Operator not working with the Volume Plot - The SPH Operator is generating a SEGV when applied to a Volume Plot.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2469
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: SPH Operator not working with the Volume Plot
Assigned to: Kevin Griffin
Category:
Target version: 2.10.1
Author: Kevin Griffin
Start: 12/03/2015
Due date:
% Done: 100
Estimated time:
Created: 12/03/2015 11:20 pm
Updated: 01/14/2016 08:31 pm
Likelihood: 5 - Always
Severity: 4 - Crash / Wrong Results
Found in version: 2.10.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
The SPH Operator is generating a SEGV when applied to a Volume Plot.
Comments:
Hello:Ive fixed the issue with the SPH Resample Operator not working with the Volume Plot.2.10RC:Sending operators/SPHResample/avtSPHResampleFilter.CSending resources/help/en_US/relnotes2.10.1.htmlTransmitting file data ..Committed revision 27928.Trunk:Sending operators/SPHResample/avtSPHResampleFilter.CSending resources/help/en_US/relnotes2.10.1.htmlTransmitting file data ..Committed revision 27930.-Kevin
| priority | sph operator not working with the volume plot the sph operator is generating a segv when applied to a volume plot redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject sph operator not working with the volume plot assigned to kevin griffin category target version author kevin griffin start due date done estimated time created pm updated pm likelihood always severity crash wrong results found in version impact expected use os all support group any description the sph operator is generating a segv when applied to a volume plot comments hello ive fixed the issue with the sph resample operator not working with the volume plot sending operators sphresample avtsphresamplefilter csending resources help en us htmltransmitting file data committed revision trunk sending operators sphresample avtsphresamplefilter csending resources help en us htmltransmitting file data committed revision kevin | 1 |
657,971 | 21,873,767,469 | IssuesEvent | 2022-05-19 08:19:06 | AyeCode/geodirectory | https://api.github.com/repos/AyeCode/geodirectory | closed | Classifieds/Real-Estate Sold Settings not possible to undo | Priority: High Type: Bug For: Developer | It is not possible to undo the "mark as under offer" feature, u suggest making the button be ticked and highlighted, and then unticking removed it.
Also, doing this should also try to set the "sale status" custom field if exists such as with dummy data. | 1.0 | Classifieds/Real-Estate Sold Settings not possible to undo - It is not possible to undo the "mark as under offer" feature, u suggest making the button be ticked and highlighted, and then unticking removed it.
Also, doing this should also try to set the "sale status" custom field if exists such as with dummy data. | priority | classifieds real estate sold settings not possible to undo it is not possible to undo the mark as under offer feature u suggest making the button be ticked and highlighted and then unticking removed it also doing this should also try to set the sale status custom field if exists such as with dummy data | 1 |
567,158 | 16,848,968,333 | IssuesEvent | 2021-06-20 05:01:55 | containerd/nerdctl | https://api.github.com/repos/containerd/nerdctl | closed | support of nerdctl inspect {imagename} | enhancement priority/high | > nerdctl inspect - Return low-level information on objects. Currently, only supports container objects.
Currently nerctl does not support to inspect images.
Will that be added in future?
Background:
Yet I am adding support for nerdctl in [x11docker](https://github.com/mviereck/x11docker). For several features I need to inspect the image. The most important informations are `ENTRYPOINT` and `CMD`. Further checks are done for `WORKDIR`, `USER` and the image architecture (mostly amd64).
Is there another way to get information about `ENTRYPOINT` and `CMD` before starting the container? | 1.0 | support of nerdctl inspect {imagename} - > nerdctl inspect - Return low-level information on objects. Currently, only supports container objects.
Currently nerctl does not support to inspect images.
Will that be added in future?
Background:
Yet I am adding support for nerdctl in [x11docker](https://github.com/mviereck/x11docker). For several features I need to inspect the image. The most important informations are `ENTRYPOINT` and `CMD`. Further checks are done for `WORKDIR`, `USER` and the image architecture (mostly amd64).
Is there another way to get information about `ENTRYPOINT` and `CMD` before starting the container? | priority | support of nerdctl inspect imagename nerdctl inspect return low level information on objects currently only supports container objects currently nerctl does not support to inspect images will that be added in future background yet i am adding support for nerdctl in for several features i need to inspect the image the most important informations are entrypoint and cmd further checks are done for workdir user and the image architecture mostly is there another way to get information about entrypoint and cmd before starting the container | 1 |
250,464 | 7,977,439,874 | IssuesEvent | 2018-07-17 15:19:30 | esteemapp/esteem-surfer | https://api.github.com/repos/esteemapp/esteem-surfer | reopened | Activities open filters out link/button | high priority | Instead of dropdown to filter, have filters text out like in image list of words, allow quick filtering... of course in smaller fonts, image is just to give an idea... @fildunsky might give some mockups

| 1.0 | Activities open filters out link/button - Instead of dropdown to filter, have filters text out like in image list of words, allow quick filtering... of course in smaller fonts, image is just to give an idea... @fildunsky might give some mockups

| priority | activities open filters out link button instead of dropdown to filter have filters text out like in image list of words allow quick filtering of course in smaller fonts image is just to give an idea fildunsky might give some mockups | 1 |
700,974 | 24,081,021,749 | IssuesEvent | 2022-09-19 06:34:32 | CS3219-AY2223S1/cs3219-project-ay2223s1-g33 | https://api.github.com/repos/CS3219-AY2223S1/cs3219-project-ay2223s1-g33 | closed | [Collaboration UI] Synchronization | Module/Front-End Status/High-Priority Type/Feature | ## Description
Add network logic to
- [ ] Send changes to the collaboration API
- [ ] Update editor based on data received from the collaboration API
## Parent Task
- #72 | 1.0 | [Collaboration UI] Synchronization - ## Description
Add network logic to
- [ ] Send changes to the collaboration API
- [ ] Update editor based on data received from the collaboration API
## Parent Task
- #72 | priority | synchronization description add network logic to send changes to the collaboration api update editor based on data received from the collaboration api parent task | 1 |
260,354 | 8,209,099,283 | IssuesEvent | 2018-09-04 06:10:16 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | opened | Error: null is not an object (evaluating 'a.contains') | Category: Runtime P1: High Priority Type: Bug | Error at: go/ampe/CL2OxsHFsuevSg, occurs 10k / day.
Stacktrace:
```
Error: null is not an object (evaluating 'a.contains')
at transferTo (https://raw.githubusercontent.com/ampproject/amphtml/1535579255907/src/service/fixed-layer.js:767)
at (https://raw.githubusercontent.com/ampproject/amphtml/1535579255907/src/service/fixed-layer.js:767)
at setupElement_ (https://raw.githubusercontent.com/ampproject/amphtml/1535579255907/src/service/fixed-layer.js:476)
at (https://raw.githubusercontent.com/ampproject/amphtml/1535579255907/src/service.js:172)
```
/cc @jridgewell | 1.0 | Error: null is not an object (evaluating 'a.contains') - Error at: go/ampe/CL2OxsHFsuevSg, occurs 10k / day.
Stacktrace:
```
Error: null is not an object (evaluating 'a.contains')
at transferTo (https://raw.githubusercontent.com/ampproject/amphtml/1535579255907/src/service/fixed-layer.js:767)
at (https://raw.githubusercontent.com/ampproject/amphtml/1535579255907/src/service/fixed-layer.js:767)
at setupElement_ (https://raw.githubusercontent.com/ampproject/amphtml/1535579255907/src/service/fixed-layer.js:476)
at (https://raw.githubusercontent.com/ampproject/amphtml/1535579255907/src/service.js:172)
```
/cc @jridgewell | priority | error null is not an object evaluating a contains error at go ampe occurs day stacktrace error null is not an object evaluating a contains at transferto at at setupelement at cc jridgewell | 1 |
403,705 | 11,845,106,676 | IssuesEvent | 2020-03-24 07:38:17 | aau-giraf/wiki | https://api.github.com/repos/aau-giraf/wiki | closed | Setup Github Pages | Gruppe 10 priority: high type: chore | For a better reading experience of the wiki, a website can be setup up for the repository by using [Github Pages](https://pages.github.com/). | 1.0 | Setup Github Pages - For a better reading experience of the wiki, a website can be setup up for the repository by using [Github Pages](https://pages.github.com/). | priority | setup github pages for a better reading experience of the wiki a website can be setup up for the repository by using | 1 |
580,646 | 17,263,127,783 | IssuesEvent | 2021-07-22 10:20:51 | EQAR/eqar_backend | https://api.github.com/repos/EQAR/eqar_backend | opened | Institutions: classify locations (physical location vs. belonging to HE system) | connect API enhancement high priority web API | We need to classify institution locations, i.e. between the main seat and the HE system to which the institution belongs, vs. other locations where they have campuses, possibly in other countries, but are not recognised locally.
As discussed, we should use the 'country_verified' field as a flag for a 'official seat'.
The field should be included in the Web API (list + detail) and Connect API endpoints showing institutions. | 1.0 | Institutions: classify locations (physical location vs. belonging to HE system) - We need to classify institution locations, i.e. between the main seat and the HE system to which the institution belongs, vs. other locations where they have campuses, possibly in other countries, but are not recognised locally.
As discussed, we should use the 'country_verified' field as a flag for a 'official seat'.
The field should be included in the Web API (list + detail) and Connect API endpoints showing institutions. | priority | institutions classify locations physical location vs belonging to he system we need to classify institution locations i e between the main seat and the he system to which the institution belongs vs other locations where they have campuses possibly in other countries but are not recognised locally as discussed we should use the country verified field as a flag for a official seat the field should be included in the web api list detail and connect api endpoints showing institutions | 1 |
68,182 | 3,284,648,107 | IssuesEvent | 2015-10-28 17:24:10 | washingtontrails/vms | https://api.github.com/repos/washingtontrails/vms | opened | VMS: Landmanager Access to Summary Reports in MBP Crew Corner | Bug High Priority Plone Pyramid VMS BUDGET | Land managers need to be able to see all Summary Reports for work done on trails in their lands.
Can we add a Summary Reports link, as we have for Crew Leaders, on the My Backpack, Crew Corner area so that Land Managers can see all these reports?
Julie reports:
On Story #95 - This method for Land Managers does not work: "Navigate to a work party which has trails for which the user is a land manager." It requires there to be a future, published work party on one of their trail segments in order for them to get in. The land managers often want to look back at these reports at the end of the year, when there will not be any future parties posted on their trail segments. They can't use custom dates to reference a past work party, because past work parties are no longer accessible via Custom Dates after they are switched to "complete."
I suggest land managers have the "Summary Reports" link in MyBackpack just like the CL's and staff have, and it just show the work parties they have access to. If that's too expensive / complicated, land managers should just have access to ALL summary reports like they do now. Is anyone opposed to that? It's more info than they need to see, and it's oversharing, but the alternative, as it's built now, does not work. | 1.0 | VMS: Landmanager Access to Summary Reports in MBP Crew Corner - Land managers need to be able to see all Summary Reports for work done on trails in their lands.
Can we add a Summary Reports link, as we have for Crew Leaders, on the My Backpack, Crew Corner area so that Land Managers can see all these reports?
Julie reports:
On Story #95 - This method for Land Managers does not work: "Navigate to a work party which has trails for which the user is a land manager." It requires there to be a future, published work party on one of their trail segments in order for them to get in. The land managers often want to look back at these reports at the end of the year, when there will not be any future parties posted on their trail segments. They can't use custom dates to reference a past work party, because past work parties are no longer accessible via Custom Dates after they are switched to "complete."
I suggest land managers have the "Summary Reports" link in MyBackpack just like the CL's and staff have, and it just show the work parties they have access to. If that's too expensive / complicated, land managers should just have access to ALL summary reports like they do now. Is anyone opposed to that? It's more info than they need to see, and it's oversharing, but the alternative, as it's built now, does not work. | priority | vms landmanager access to summary reports in mbp crew corner land managers need to be able to see all summary reports for work done on trails in their lands can we add a summary reports link as we have for crew leaders on the my backpack crew corner area so that land managers can see all these reports julie reports on story this method for land managers does not work navigate to a work party which has trails for which the user is a land manager it requires there to be a future published work party on one of their trail segments in order for them to get in the land managers often want to look back at these reports at the end of the year when there will not be any future parties posted on their trail segments they can t use custom dates to reference a past work party because past work parties are no longer accessible via custom dates after they are switched to complete i suggest land managers have the summary reports link in mybackpack just like the cl s and staff have and it just show the work parties they have access to if that s too expensive complicated land managers should just have access to all summary reports like they do now is anyone opposed to that it s more info than they need to see and it s oversharing but the alternative as it s built now does not work | 1 |
276,101 | 8,584,183,304 | IssuesEvent | 2018-11-13 21:57:18 | json-schema-org/json-schema-spec | https://api.github.com/repos/json-schema-org/json-schema-spec | closed | Recursive schema composition | Priority: High Status: Accepted Type: Enhancement core vocabulary | ## TL;DR:
Re-using recursive schemas is a challenge.
* `$recurse` is a specialized version of `$ref` with a context-dependent target
* The target is the root schema of the document where schema processing began
* Processing can be either static schema walking or dynamic evaluation with an instance
* The value of `$recurse` is always `true` (discussed in the "alternatives" section)
* This is based on a keyword we have long used in [Doca](https://github.com/cloudflare/doca)
## Example
_APPARENTLY MANDATORY DISCLAIMER: This is a minimal contrived example, please do not point out all of the ways in which it is unrealistic or fails to be a convincing use case because you can refactor it. It's just showing the mechanism._
foo-schema:
```JSON
{
"$id": "http://example.com/foo-schema",
"properties": {
"type": "object",
"foo": {"$recurse": true}
}
}
```
bar-schema:
```JSON
{
"$id": "http://example.com/bar-schema",
"allOf": [{"$ref": "http://example.com/foo"}],
"required": ["bar"],
"properties": {"bar": {"type": "boolean"}}
}
```
The instance:
```JSON
{
"bar": true,
"foo": {
"bar": false,
"foo": {
"foo": {}
}
}
}
```
is valid against the first schema, but not the second.
It is valid against foo-schema because the `"$recurse": true` is in foo-schema, which is the same document that we started processing. Therefore it behaves exactly like `"$ref": "#"`. The recursive "foo" works as you'd expect with `"$ref": "#"`, and foo-schema doesn't care about "bar" being there (additional properties are not forbidden).
However, it is not valid against bar-schema because in that case, the `"$recurse": true` in foo-schema behaves like `"$ref": "http://example.com/bar-schema"`, as bar-schema is the document that we started processing. Taking this step by step from the top down:
* Processing the root of the instance, we have the "bar" property required by bar-schema; we got this directly from the root schema of bar-schema, without `$recurse` being involved
* Looking inside "foo", processing follows the `allOf` and `$ref` to foo-schema. The top-level instance is an object, so we pass the `type` constraint
* Still processing foo-schema, for the contents of the "foo" property, we have `"$recurse": true. Since we started processing with bar-schema, this is the equivalent of `"$ref": "bar-schema"
* So now we apply bar-schema to the contents of foo. This works fine: there is a boolean "bar", and we follow `allOf` and `$ref` back to foo-schema, and pass the `"type": "object" constraint
* Now, once again, we look at `"$recurse": true` to go into the next level "foo", and once again this is treated as `"$ref": "bar-schema"`
* Now validation fails, because the innermost "foo" does not have the required "bar" property.
## Use cases
The primary use case for this meta-schemas. For example, the [hyper-schema meta-schema](https://github.com/json-schema-org/json-schema-spec/blob/master/hyper-schema.json) has to re-define all of the applicator keywords from the [core and validation meta-schema](https://github.com/json-schema-org/json-schema-spec/blob/master/schema.json). And if something wanted to extend hyper-schema, not only would they have to re-declare all of the core applicators a ***third*** time, but also re-declare all of the LDO keywords that use `"$ref": "#"`.
As we make more vocabularies and encourage more extensions, this rapidly becomes untenable.
I will show what the hyper-schema meta-schema would look like with `$recurse` in a subsequent comment.
There are some other use cases in hypermedia with common response formats, but they are all simpler than the meta-schema use case.
## Alternatives
### Doca's `cfRecurse`
This is a simplified version of an extension keyword, `cfRecurse`, used with [Doca](https://github.com/cloudflare/doca). That keyword takes a JSON Pointer (not a URI fragment) that is evaluated with respect to the post-`$ref`-resolution in-memory data structure. [EDIT: Although don't try it right now, it's broken, long story that is totally irrelevant to the proposal.]
If that has you scratching your head, that's part of why I'm not proposing `cfRecurse`'s exact behavior.
In fact, Doca only supports `""` (the root JSON Pointer) as a `cfRecurse` value, and no one has ever asked for any other path. The use case really just comes up for us with pure recursion.
Specifying any other pointer requires knowing the structure of the in-memory document. And when the whole point is that you don't know what your original root schema (where processing began) will be until runtime, you cannot know that structure.
One could treat the JSON Pointer as an interface constraint- "this schema may only be used with an initial document that has a `/definitions/foo` schema", but that is a lot of complexity for something that has never come up in practice.
For this reason, `$recurse` does not take a meaningful value. I chose `true` because `false` or `null` would be counter-intuitive (you'd expect those values to *not* do things), and a number, string, array, or object would be much more subject to error or misinterpretation.
### Parametrized schemas
#322 proposes a general schema parametrization feature, which could possibly be used to implement this feature. It would look something like:
Parameterized schema for `oneOf`:
```JSON
{
"$id": "http://example.com/oneof",
"properties": {
"oneOf": {
"items": {"$ref": {"$param": "rootPointer"}}
}
}
}
```
Using the parametrized schema:
```JSON
{
"$id": "http://example.com/caller",
"allOf": [
{
"$ref": "http://example.com/oneof",
"$params": {
"rootPointer": "http://example.com/caller"
}
}
],
...
}
```
See #322 for an explanation of how this works.
I'd rather not open the schema parametrization can of worms right now. `$recurse` is a much simpler and easy to implement proposal and meets the core need for meta-schema extensibility. It does not preclude implementing schema parametrization, either in a later draft or as an extension vocabulary of some sort (it makes an interesting test case for vocabulary support, actually).
## Summary
* extending recursive schemas is a fundamental use case of JSON Schema as seen in meta-schemas, which happens to require knowledge of where runtime processing started
* referring to something inside a schema document determined at runtime adds a lot of complexity and has no apparent use case (neither from Doca nor from any issue I've ever seen here), so let's not do it
Runtime resolution (whether `$recurse` or parametrized schemas) is sufficiently new and powerful that I feel we should lock it down to the simplest case with a clear need. We can always extend it later, but it's hard to pull these things back.
| 1.0 | Recursive schema composition - ## TL;DR:
Re-using recursive schemas is a challenge.
* `$recurse` is a specialized version of `$ref` with a context-dependent target
* The target is the root schema of the document where schema processing began
* Processing can be either static schema walking or dynamic evaluation with an instance
* The value of `$recurse` is always `true` (discussed in the "alternatives" section)
* This is based on a keyword we have long used in [Doca](https://github.com/cloudflare/doca)
## Example
_APPARENTLY MANDATORY DISCLAIMER: This is a minimal contrived example, please do not point out all of the ways in which it is unrealistic or fails to be a convincing use case because you can refactor it. It's just showing the mechanism._
foo-schema:
```JSON
{
"$id": "http://example.com/foo-schema",
"properties": {
"type": "object",
"foo": {"$recurse": true}
}
}
```
bar-schema:
```JSON
{
"$id": "http://example.com/bar-schema",
"allOf": [{"$ref": "http://example.com/foo"}],
"required": ["bar"],
"properties": {"bar": {"type": "boolean"}}
}
```
The instance:
```JSON
{
"bar": true,
"foo": {
"bar": false,
"foo": {
"foo": {}
}
}
}
```
is valid against the first schema, but not the second.
It is valid against foo-schema because the `"$recurse": true` is in foo-schema, which is the same document that we started processing. Therefore it behaves exactly like `"$ref": "#"`. The recursive "foo" works as you'd expect with `"$ref": "#"`, and foo-schema doesn't care about "bar" being there (additional properties are not forbidden).
However, it is not valid against bar-schema because in that case, the `"$recurse": true` in foo-schema behaves like `"$ref": "http://example.com/bar-schema"`, as bar-schema is the document that we started processing. Taking this step by step from the top down:
* Processing the root of the instance, we have the "bar" property required by bar-schema; we got this directly from the root schema of bar-schema, without `$recurse` being involved
* Looking inside "foo", processing follows the `allOf` and `$ref` to foo-schema. The top-level instance is an object, so we pass the `type` constraint
* Still processing foo-schema, for the contents of the "foo" property, we have `"$recurse": true. Since we started processing with bar-schema, this is the equivalent of `"$ref": "bar-schema"
* So now we apply bar-schema to the contents of foo. This works fine: there is a boolean "bar", and we follow `allOf` and `$ref` back to foo-schema, and pass the `"type": "object" constraint
* Now, once again, we look at `"$recurse": true` to go into the next level "foo", and once again this is treated as `"$ref": "bar-schema"`
* Now validation fails, because the innermost "foo" does not have the required "bar" property.
## Use cases
The primary use case for this meta-schemas. For example, the [hyper-schema meta-schema](https://github.com/json-schema-org/json-schema-spec/blob/master/hyper-schema.json) has to re-define all of the applicator keywords from the [core and validation meta-schema](https://github.com/json-schema-org/json-schema-spec/blob/master/schema.json). And if something wanted to extend hyper-schema, not only would they have to re-declare all of the core applicators a ***third*** time, but also re-declare all of the LDO keywords that use `"$ref": "#"`.
As we make more vocabularies and encourage more extensions, this rapidly becomes untenable.
I will show what the hyper-schema meta-schema would look like with `$recurse` in a subsequent comment.
There are some other use cases in hypermedia with common response formats, but they are all simpler than the meta-schema use case.
## Alternatives
### Doca's `cfRecurse`
This is a simplified version of an extension keyword, `cfRecurse`, used with [Doca](https://github.com/cloudflare/doca). That keyword takes a JSON Pointer (not a URI fragment) that is evaluated with respect to the post-`$ref`-resolution in-memory data structure. [EDIT: Although don't try it right now, it's broken, long story that is totally irrelevant to the proposal.]
If that has you scratching your head, that's part of why I'm not proposing `cfRecurse`'s exact behavior.
In fact, Doca only supports `""` (the root JSON Pointer) as a `cfRecurse` value, and no one has ever asked for any other path. The use case really just comes up for us with pure recursion.
Specifying any other pointer requires knowing the structure of the in-memory document. And when the whole point is that you don't know what your original root schema (where processing began) will be until runtime, you cannot know that structure.
One could treat the JSON Pointer as an interface constraint- "this schema may only be used with an initial document that has a `/definitions/foo` schema", but that is a lot of complexity for something that has never come up in practice.
For this reason, `$recurse` does not take a meaningful value. I chose `true` because `false` or `null` would be counter-intuitive (you'd expect those values to *not* do things), and a number, string, array, or object would be much more subject to error or misinterpretation.
### Parametrized schemas
#322 proposes a general schema parametrization feature, which could possibly be used to implement this feature. It would look something like:
Parameterized schema for `oneOf`:
```JSON
{
"$id": "http://example.com/oneof",
"properties": {
"oneOf": {
"items": {"$ref": {"$param": "rootPointer"}}
}
}
}
```
Using the parametrized schema:
```JSON
{
"$id": "http://example.com/caller",
"allOf": [
{
"$ref": "http://example.com/oneof",
"$params": {
"rootPointer": "http://example.com/caller"
}
}
],
...
}
```
See #322 for an explanation of how this works.
I'd rather not open the schema parametrization can of worms right now. `$recurse` is a much simpler and easy to implement proposal and meets the core need for meta-schema extensibility. It does not preclude implementing schema parametrization, either in a later draft or as an extension vocabulary of some sort (it makes an interesting test case for vocabulary support, actually).
## Summary
* extending recursive schemas is a fundamental use case of JSON Schema as seen in meta-schemas, which happens to require knowledge of where runtime processing started
* referring to something inside a schema document determined at runtime adds a lot of complexity and has no apparent use case (neither from Doca nor from any issue I've ever seen here), so let's not do it
Runtime resolution (whether `$recurse` or parametrized schemas) is sufficiently new and powerful that I feel we should lock it down to the simplest case with a clear need. We can always extend it later, but it's hard to pull these things back.
| priority | recursive schema composition tl dr re using recursive schemas is a challenge recurse is a specialized version of ref with a context dependent target the target is the root schema of the document where schema processing began processing can be either static schema walking or dynamic evaluation with an instance the value of recurse is always true discussed in the alternatives section this is based on a keyword we have long used in example apparently mandatory disclaimer this is a minimal contrived example please do not point out all of the ways in which it is unrealistic or fails to be a convincing use case because you can refactor it it s just showing the mechanism foo schema json id properties type object foo recurse true bar schema json id allof required properties bar type boolean the instance json bar true foo bar false foo foo is valid against the first schema but not the second it is valid against foo schema because the recurse true is in foo schema which is the same document that we started processing therefore it behaves exactly like ref the recursive foo works as you d expect with ref and foo schema doesn t care about bar being there additional properties are not forbidden however it is not valid against bar schema because in that case the recurse true in foo schema behaves like ref as bar schema is the document that we started processing taking this step by step from the top down processing the root of the instance we have the bar property required by bar schema we got this directly from the root schema of bar schema without recurse being involved looking inside foo processing follows the allof and ref to foo schema the top level instance is an object so we pass the type constraint still processing foo schema for the contents of the foo property we have recurse true since we started processing with bar schema this is the equivalent of ref bar schema so now we apply bar schema to the contents of foo this works fine there is a boolean bar and we follow allof and ref back to foo schema and pass the type object constraint now once again we look at recurse true to go into the next level foo and once again this is treated as ref bar schema now validation fails because the innermost foo does not have the required bar property use cases the primary use case for this meta schemas for example the has to re define all of the applicator keywords from the and if something wanted to extend hyper schema not only would they have to re declare all of the core applicators a third time but also re declare all of the ldo keywords that use ref as we make more vocabularies and encourage more extensions this rapidly becomes untenable i will show what the hyper schema meta schema would look like with recurse in a subsequent comment there are some other use cases in hypermedia with common response formats but they are all simpler than the meta schema use case alternatives doca s cfrecurse this is a simplified version of an extension keyword cfrecurse used with that keyword takes a json pointer not a uri fragment that is evaluated with respect to the post ref resolution in memory data structure if that has you scratching your head that s part of why i m not proposing cfrecurse s exact behavior in fact doca only supports the root json pointer as a cfrecurse value and no one has ever asked for any other path the use case really just comes up for us with pure recursion specifying any other pointer requires knowing the structure of the in memory document and when the whole point is that you don t know what your original root schema where processing began will be until runtime you cannot know that structure one could treat the json pointer as an interface constraint this schema may only be used with an initial document that has a definitions foo schema but that is a lot of complexity for something that has never come up in practice for this reason recurse does not take a meaningful value i chose true because false or null would be counter intuitive you d expect those values to not do things and a number string array or object would be much more subject to error or misinterpretation parametrized schemas proposes a general schema parametrization feature which could possibly be used to implement this feature it would look something like parameterized schema for oneof json id properties oneof items ref param rootpointer using the parametrized schema json id allof ref params rootpointer see for an explanation of how this works i d rather not open the schema parametrization can of worms right now recurse is a much simpler and easy to implement proposal and meets the core need for meta schema extensibility it does not preclude implementing schema parametrization either in a later draft or as an extension vocabulary of some sort it makes an interesting test case for vocabulary support actually summary extending recursive schemas is a fundamental use case of json schema as seen in meta schemas which happens to require knowledge of where runtime processing started referring to something inside a schema document determined at runtime adds a lot of complexity and has no apparent use case neither from doca nor from any issue i ve ever seen here so let s not do it runtime resolution whether recurse or parametrized schemas is sufficiently new and powerful that i feel we should lock it down to the simplest case with a clear need we can always extend it later but it s hard to pull these things back | 1 |
275,518 | 8,576,523,061 | IssuesEvent | 2018-11-12 20:39:44 | zulip/zulip-mobile | https://api.github.com/repos/zulip/zulip-mobile | closed | Unable to remove emoji reaction when just added | P1 high-priority bug upstream: other | (edited by @gnprice after discussion below)
Tapping an emoji reaction you've made should remove it. But:
* It doesn't work if I just added the reaction here (the most common case in practice, I'm pretty sure).
* Also doesn't work if I just added it through the webapp.
* It *does* work if I previously added it, and only then went and loaded these messages.
---
(original report:)
Tapping on a reaction to a message by you does not remove it if it is the only reaction of that particular type on a message. | 1.0 | Unable to remove emoji reaction when just added - (edited by @gnprice after discussion below)
Tapping an emoji reaction you've made should remove it. But:
* It doesn't work if I just added the reaction here (the most common case in practice, I'm pretty sure).
* Also doesn't work if I just added it through the webapp.
* It *does* work if I previously added it, and only then went and loaded these messages.
---
(original report:)
Tapping on a reaction to a message by you does not remove it if it is the only reaction of that particular type on a message. | priority | unable to remove emoji reaction when just added edited by gnprice after discussion below tapping an emoji reaction you ve made should remove it but it doesn t work if i just added the reaction here the most common case in practice i m pretty sure also doesn t work if i just added it through the webapp it does work if i previously added it and only then went and loaded these messages original report tapping on a reaction to a message by you does not remove it if it is the only reaction of that particular type on a message | 1 |
665,100 | 22,299,516,509 | IssuesEvent | 2022-06-13 07:21:19 | COS301-SE-2022/Twitter-Summariser | https://api.github.com/repos/COS301-SE-2022/Twitter-Summariser | closed | CICD: Cloudfront Enhancement | priority:high status:ready role:dev-op type:enhance scope:cicd | Invalidate the cache in the S3 bucket on cloudfront to have the latest frontend code whenever we deploy | 1.0 | CICD: Cloudfront Enhancement - Invalidate the cache in the S3 bucket on cloudfront to have the latest frontend code whenever we deploy | priority | cicd cloudfront enhancement invalidate the cache in the bucket on cloudfront to have the latest frontend code whenever we deploy | 1 |
333,989 | 10,134,362,583 | IssuesEvent | 2019-08-02 07:18:14 | oxyplot/oxyplot | https://api.github.com/repos/oxyplot/oxyplot | closed | Annotations ignore LineStyle.None and draw as if Solid | easy-fix help wanted high-priority unconfirmed-bug | ### Steps to reproduce
1. Add an Annotation with `LineStyle` property set to `LineStyle.None`
2. Make sure Stoke and StokeThickness are set to display something
Platform: Windows 7 SP1 64-bit
.NET version: 4.7.2
### Expected behaviour
No lines are drawn.
### Actual behaviour
Lines are drawn as if `LineStyle.Solid` was set.

Generated code
```C#
[Example("Untitled")]
public static PlotModel Untitled()
{
var plotModel1 = new PlotModel();
var linearAxis1 = new LinearAxis();
plotModel1.Axes.Add(linearAxis1);
var linearAxis2 = new LinearAxis();
linearAxis2.Position = AxisPosition.Bottom;
plotModel1.Axes.Add(linearAxis2);
var lineAnnotation1 = new LineAnnotation();
lineAnnotation1.Type = LineAnnotationType.Horizontal;
lineAnnotation1.Y = 30;
lineAnnotation1.LineStyle = LineStyle.Solid;
lineAnnotation1.Text = "Solid";
plotModel1.Annotations.Add(lineAnnotation1);
var lineAnnotation2 = new LineAnnotation();
lineAnnotation2.Type = LineAnnotationType.Horizontal;
lineAnnotation2.Y = 40;
lineAnnotation2.LineStyle = LineStyle.None;
lineAnnotation2.Text = "None";
plotModel1.Annotations.Add(lineAnnotation2);
var arrowAnnotation1 = new ArrowAnnotation();
arrowAnnotation1.EndPoint = new DataPoint(40,10);
arrowAnnotation1.StartPoint = new DataPoint(20,10);
arrowAnnotation1.Text = "Solid";
plotModel1.Annotations.Add(arrowAnnotation1);
var arrowAnnotation2 = new ArrowAnnotation();
arrowAnnotation2.EndPoint = new DataPoint(90,10);
arrowAnnotation2.LineStyle = LineStyle.None;
arrowAnnotation2.StartPoint = new DataPoint(70,10);
arrowAnnotation2.Text = "None";
plotModel1.Annotations.Add(arrowAnnotation2);
var polygonAnnotation1 = new PolygonAnnotation();
polygonAnnotation1.StrokeThickness = 2;
polygonAnnotation1.Text = "Solid";
polygonAnnotation1.Points.Add(new DataPoint(10,50));
polygonAnnotation1.Points.Add(new DataPoint(30,50));
polygonAnnotation1.Points.Add(new DataPoint(30,70));
polygonAnnotation1.Points.Add(new DataPoint(10,70));
plotModel1.Annotations.Add(polygonAnnotation1);
var polygonAnnotation2 = new PolygonAnnotation();
polygonAnnotation2.LineStyle = LineStyle.None;
polygonAnnotation2.StrokeThickness = 2;
polygonAnnotation2.Text = "None";
polygonAnnotation2.Points.Add(new DataPoint(60,50));
polygonAnnotation2.Points.Add(new DataPoint(80,50));
polygonAnnotation2.Points.Add(new DataPoint(80,70));
polygonAnnotation2.Points.Add(new DataPoint(60,70));
plotModel1.Annotations.Add(polygonAnnotation2);
return plotModel1;
}
```
Report
```
P L O T R E P O R T
=====================
=== PlotModel ===
Table 1. PlotModel
| DefaultFont | Segoe UI |
| DefaultFontSize | 12 |
| ActualCulture | de-DE |
| ActualPlotMargins | (29, 0, 0, 26) |
| PlotView | OxyPlot.Wpf.PlotView |
| Annotations | OxyPlot.ElementCollection`1[OxyPlot.Annotations.Annotation] |
| Axes | OxyPlot.ElementCollection`1[OxyPlot.Axes.Axis] |
| Background | #00000000 |
| Culture | |
| DefaultColors | System.Collections.Generic.List`1[OxyPlot.OxyColor] |
| IsLegendVisible | True |
| LegendArea | (422, 16, 0, 0) |
| LegendBackground | #00000000 |
| LegendBorder | #00000000 |
| LegendBorderThickness | 1 |
| LegendColumnSpacing | 8 |
| LegendFont | |
| LegendFontSize | 12 |
| LegendTextColor | #00000001 |
| LegendFontWeight | 400 |
| LegendItemAlignment | Left |
| LegendItemOrder | Normal |
| LegendItemSpacing | 24 |
| LegendLineSpacing | 0 |
| LegendMargin | 8 |
| LegendMaxWidth | NaN |
| LegendMaxHeight | NaN |
| LegendOrientation | Vertical |
| LegendPadding | 8 |
| LegendPlacement | Inside |
| LegendPosition | RightTop |
| LegendSymbolLength | 16 |
| LegendSymbolMargin | 4 |
| LegendSymbolPlacement | Left |
| LegendTitle | |
| LegendTitleColor | #00000001 |
| LegendTitleFont | |
| LegendTitleFontSize | 12 |
| LegendTitleFontWeight | 700 |
| Padding | (8, 8, 8, 8) |
| Width | 438 |
| Height | 448.04 |
| PlotAndAxisArea | (8, 8, 422, 432.04) |
| PlotArea | (37, 8, 393, 406.04) |
| AxisTierDistance | 4 |
| PlotAreaBackground | #00000000 |
| PlotAreaBorderColor | #ff000000 |
| PlotAreaBorderThickness | (1, 1, 1, 1) |
| PlotMargins | (NaN, NaN, NaN, NaN) |
| PlotType | XY |
| Series | OxyPlot.ElementCollection`1[OxyPlot.Series.Series] |
| RenderingDecorator | |
| Subtitle | |
| SubtitleFont | |
| SubtitleFontSize | 14 |
| SubtitleFontWeight | 400 |
| TextColor | #ff000000 |
| Title | |
| TitleToolTip | |
| TitleColor | #00000001 |
| SubtitleColor | #00000001 |
| TitleHorizontalAlignment | CenteredWithinPlotArea |
| TitleArea | (37, 8, 393, 12) |
| TitleFont | |
| TitleFontSize | 18 |
| TitleFontWeight | 700 |
| TitlePadding | 6 |
| DefaultAngleAxis | |
| DefaultMagnitudeAxis | |
| DefaultXAxis | LinearAxis(Bottom, 0, 100, 20) |
| DefaultYAxis | LinearAxis(Left, 0, 100, 20) |
| DefaultColorAxis | |
| SyncRoot | System.Object |
| SelectionColor | #ffffff00 |
=== Axes ===
Table 2. LinearAxis
| FormatAsFractions | False |
| FractionUnit | 1 |
| FractionUnitSymbol | |
| AbsoluteMaximum | 1.79769313486232E+308 |
| AbsoluteMinimum | -1.79769313486232E+308 |
| ActualMajorStep | 20 |
| ActualMaximum | 100 |
| ActualMinimum | 0 |
| ActualMinorStep | 5 |
| ActualStringFormat | g6 |
| ActualTitle | |
| Angle | 0 |
| AxisTickToLabelDistance | 4 |
| AxisTitleDistance | 4 |
| AxisDistance | 0 |
| AxislineColor | #ff000000 |
| AxislineStyle | None |
| AxislineThickness | 1 |
| ClipTitle | True |
| CropGridlines | False |
| DataMaximum | NaN |
| DataMinimum | NaN |
| EndPosition | 1 |
| ExtraGridlineColor | #ff000000 |
| ExtraGridlineStyle | Solid |
| ExtraGridlineThickness | 1 |
| ExtraGridlines | |
| FilterFunction | |
| FilterMaxValue | 1.79769313486232E+308 |
| FilterMinValue | -1.79769313486232E+308 |
| IntervalLength | 60 |
| IsAxisVisible | True |
| IsPanEnabled | True |
| IsReversed | False |
| IsZoomEnabled | True |
| Key | |
| LabelFormatter | |
| Layer | BelowSeries |
| MajorGridlineColor | #40000000 |
| MajorGridlineStyle | None |
| MajorGridlineThickness | 1 |
| MajorStep | NaN |
| MajorTickSize | 7 |
| Maximum | NaN |
| MaximumPadding | 0.01 |
| MaximumRange | Infinity |
| Minimum | NaN |
| MinimumMajorStep | 0 |
| MinimumMinorStep | 0 |
| MinimumPadding | 0.01 |
| MinimumRange | 0 |
| MinorGridlineColor | #20000000 |
| MinorGridlineStyle | None |
| MinorGridlineThickness | 1 |
| MinorStep | NaN |
| MinorTicklineColor | #00000001 |
| MinorTickSize | 4 |
| Offset | 101.970249236528 |
| Position | Left |
| PositionAtZeroCrossing | False |
| PositionTier | 0 |
| Scale | -4.0604 |
| ScreenMax | 8 414,04 |
| ScreenMin | 414,04 8 |
| StartPosition | 0 |
| StringFormat | |
| TickStyle | Outside |
| TicklineColor | #ff000000 |
| Title | |
| TitleClippingLength | 0.9 |
| TitleColor | #00000001 |
| TitleFont | |
| TitleFontSize | NaN |
| TitleFontWeight | 400 |
| TitleFormatString | {0} [{1}] |
| TitlePosition | 0.5 |
| Unit | |
| UseSuperExponentialFormat | False |
| DesiredSize | (29, 0) |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 3. LinearAxis
| FormatAsFractions | False |
| FractionUnit | 1 |
| FractionUnitSymbol | |
| AbsoluteMaximum | 1.79769313486232E+308 |
| AbsoluteMinimum | -1.79769313486232E+308 |
| ActualMajorStep | 20 |
| ActualMaximum | 100 |
| ActualMinimum | 0 |
| ActualMinorStep | 5 |
| ActualStringFormat | g6 |
| ActualTitle | |
| Angle | 0 |
| AxisTickToLabelDistance | 4 |
| AxisTitleDistance | 4 |
| AxisDistance | 0 |
| AxislineColor | #ff000000 |
| AxislineStyle | None |
| AxislineThickness | 1 |
| ClipTitle | True |
| CropGridlines | False |
| DataMaximum | NaN |
| DataMinimum | NaN |
| EndPosition | 1 |
| ExtraGridlineColor | #ff000000 |
| ExtraGridlineStyle | Solid |
| ExtraGridlineThickness | 1 |
| ExtraGridlines | |
| FilterFunction | |
| FilterMaxValue | 1.79769313486232E+308 |
| FilterMinValue | -1.79769313486232E+308 |
| IntervalLength | 60 |
| IsAxisVisible | True |
| IsPanEnabled | True |
| IsReversed | False |
| IsZoomEnabled | True |
| Key | |
| LabelFormatter | |
| Layer | BelowSeries |
| MajorGridlineColor | #40000000 |
| MajorGridlineStyle | None |
| MajorGridlineThickness | 1 |
| MajorStep | NaN |
| MajorTickSize | 7 |
| Maximum | NaN |
| MaximumPadding | 0.01 |
| MaximumRange | Infinity |
| Minimum | NaN |
| MinimumMajorStep | 0 |
| MinimumMinorStep | 0 |
| MinimumPadding | 0.01 |
| MinimumRange | 0 |
| MinorGridlineColor | #20000000 |
| MinorGridlineStyle | None |
| MinorGridlineThickness | 1 |
| MinorStep | NaN |
| MinorTicklineColor | #00000001 |
| MinorTickSize | 4 |
| Offset | -9.4147582697201 |
| Position | Bottom |
| PositionAtZeroCrossing | False |
| PositionTier | 0 |
| Scale | 3.93 |
| ScreenMax | 430 37 |
| ScreenMin | 37 430 |
| StartPosition | 0 |
| StringFormat | |
| TickStyle | Outside |
| TicklineColor | #ff000000 |
| Title | |
| TitleClippingLength | 0.9 |
| TitleColor | #00000001 |
| TitleFont | |
| TitleFontSize | NaN |
| TitleFontWeight | 400 |
| TitleFormatString | {0} [{1}] |
| TitlePosition | 0.5 |
| Unit | |
| UseSuperExponentialFormat | False |
| DesiredSize | (0, 26) |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
=== Annotations ===
Table 4. LineAnnotation
| Intercept | 0 |
| Slope | 0 |
| Type | Horizontal |
| X | 0 |
| Y | 30 |
| Color | #ff0000ff |
| LineJoin | Miter |
| LineStyle | Solid |
| MaximumX | 1.79769313486232E+308 |
| MaximumY | 1.79769313486232E+308 |
| MinimumX | -1.79769313486232E+308 |
| MinimumY | -1.79769313486232E+308 |
| StrokeThickness | 1 |
| TextMargin | 12 |
| TextPadding | 0 |
| TextOrientation | AlongLine |
| TextLinePosition | 1 |
| ClipText | True |
| ClipByXAxis | True |
| ClipByYAxis | True |
| Text | Solid |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Right |
| TextVerticalAlignment | Top |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 5. LineAnnotation
| Intercept | 0 |
| Slope | 0 |
| Type | Horizontal |
| X | 0 |
| Y | 40 |
| Color | #ff0000ff |
| LineJoin | Miter |
| LineStyle | None |
| MaximumX | 1.79769313486232E+308 |
| MaximumY | 1.79769313486232E+308 |
| MinimumX | -1.79769313486232E+308 |
| MinimumY | -1.79769313486232E+308 |
| StrokeThickness | 1 |
| TextMargin | 12 |
| TextPadding | 0 |
| TextOrientation | AlongLine |
| TextLinePosition | 1 |
| ClipText | True |
| ClipByXAxis | True |
| ClipByYAxis | True |
| Text | None |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Right |
| TextVerticalAlignment | Top |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 6. ArrowAnnotation
| ArrowDirection | 0 0 |
| Color | #ff0000ff |
| EndPoint | 40 10 |
| HeadLength | 10 |
| HeadWidth | 3 |
| LineJoin | Miter |
| LineStyle | Solid |
| StartPoint | 20 10 |
| StrokeThickness | 2 |
| Veeness | 0 |
| Text | Solid |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Center |
| TextVerticalAlignment | Middle |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 7. ArrowAnnotation
| ArrowDirection | 0 0 |
| Color | #ff0000ff |
| EndPoint | 90 10 |
| HeadLength | 10 |
| HeadWidth | 3 |
| LineJoin | Miter |
| LineStyle | None |
| StartPoint | 70 10 |
| StrokeThickness | 2 |
| Veeness | 0 |
| Text | None |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Center |
| TextVerticalAlignment | Middle |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 8. PolygonAnnotation
| LineJoin | Miter |
| LineStyle | Solid |
| Points | System.Collections.Generic.List`1[OxyPlot.DataPoint] |
| Fill | #ffadd8e6 |
| Stroke | #ff000000 |
| StrokeThickness | 2 |
| Text | Solid |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Center |
| TextVerticalAlignment | Middle |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 9. PolygonAnnotation
| LineJoin | Miter |
| LineStyle | None |
| Points | System.Collections.Generic.List`1[OxyPlot.DataPoint] |
| Fill | #ffadd8e6 |
| Stroke | #ff000000 |
| StrokeThickness | 2 |
| Text | None |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Center |
| TextVerticalAlignment | Middle |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
=== Series ===
Report generated by OxyPlot 1.0.0
``` | 1.0 | Annotations ignore LineStyle.None and draw as if Solid - ### Steps to reproduce
1. Add an Annotation with `LineStyle` property set to `LineStyle.None`
2. Make sure Stoke and StokeThickness are set to display something
Platform: Windows 7 SP1 64-bit
.NET version: 4.7.2
### Expected behaviour
No lines are drawn.
### Actual behaviour
Lines are drawn as if `LineStyle.Solid` was set.

Generated code
```C#
[Example("Untitled")]
public static PlotModel Untitled()
{
var plotModel1 = new PlotModel();
var linearAxis1 = new LinearAxis();
plotModel1.Axes.Add(linearAxis1);
var linearAxis2 = new LinearAxis();
linearAxis2.Position = AxisPosition.Bottom;
plotModel1.Axes.Add(linearAxis2);
var lineAnnotation1 = new LineAnnotation();
lineAnnotation1.Type = LineAnnotationType.Horizontal;
lineAnnotation1.Y = 30;
lineAnnotation1.LineStyle = LineStyle.Solid;
lineAnnotation1.Text = "Solid";
plotModel1.Annotations.Add(lineAnnotation1);
var lineAnnotation2 = new LineAnnotation();
lineAnnotation2.Type = LineAnnotationType.Horizontal;
lineAnnotation2.Y = 40;
lineAnnotation2.LineStyle = LineStyle.None;
lineAnnotation2.Text = "None";
plotModel1.Annotations.Add(lineAnnotation2);
var arrowAnnotation1 = new ArrowAnnotation();
arrowAnnotation1.EndPoint = new DataPoint(40,10);
arrowAnnotation1.StartPoint = new DataPoint(20,10);
arrowAnnotation1.Text = "Solid";
plotModel1.Annotations.Add(arrowAnnotation1);
var arrowAnnotation2 = new ArrowAnnotation();
arrowAnnotation2.EndPoint = new DataPoint(90,10);
arrowAnnotation2.LineStyle = LineStyle.None;
arrowAnnotation2.StartPoint = new DataPoint(70,10);
arrowAnnotation2.Text = "None";
plotModel1.Annotations.Add(arrowAnnotation2);
var polygonAnnotation1 = new PolygonAnnotation();
polygonAnnotation1.StrokeThickness = 2;
polygonAnnotation1.Text = "Solid";
polygonAnnotation1.Points.Add(new DataPoint(10,50));
polygonAnnotation1.Points.Add(new DataPoint(30,50));
polygonAnnotation1.Points.Add(new DataPoint(30,70));
polygonAnnotation1.Points.Add(new DataPoint(10,70));
plotModel1.Annotations.Add(polygonAnnotation1);
var polygonAnnotation2 = new PolygonAnnotation();
polygonAnnotation2.LineStyle = LineStyle.None;
polygonAnnotation2.StrokeThickness = 2;
polygonAnnotation2.Text = "None";
polygonAnnotation2.Points.Add(new DataPoint(60,50));
polygonAnnotation2.Points.Add(new DataPoint(80,50));
polygonAnnotation2.Points.Add(new DataPoint(80,70));
polygonAnnotation2.Points.Add(new DataPoint(60,70));
plotModel1.Annotations.Add(polygonAnnotation2);
return plotModel1;
}
```
Report
```
P L O T R E P O R T
=====================
=== PlotModel ===
Table 1. PlotModel
| DefaultFont | Segoe UI |
| DefaultFontSize | 12 |
| ActualCulture | de-DE |
| ActualPlotMargins | (29, 0, 0, 26) |
| PlotView | OxyPlot.Wpf.PlotView |
| Annotations | OxyPlot.ElementCollection`1[OxyPlot.Annotations.Annotation] |
| Axes | OxyPlot.ElementCollection`1[OxyPlot.Axes.Axis] |
| Background | #00000000 |
| Culture | |
| DefaultColors | System.Collections.Generic.List`1[OxyPlot.OxyColor] |
| IsLegendVisible | True |
| LegendArea | (422, 16, 0, 0) |
| LegendBackground | #00000000 |
| LegendBorder | #00000000 |
| LegendBorderThickness | 1 |
| LegendColumnSpacing | 8 |
| LegendFont | |
| LegendFontSize | 12 |
| LegendTextColor | #00000001 |
| LegendFontWeight | 400 |
| LegendItemAlignment | Left |
| LegendItemOrder | Normal |
| LegendItemSpacing | 24 |
| LegendLineSpacing | 0 |
| LegendMargin | 8 |
| LegendMaxWidth | NaN |
| LegendMaxHeight | NaN |
| LegendOrientation | Vertical |
| LegendPadding | 8 |
| LegendPlacement | Inside |
| LegendPosition | RightTop |
| LegendSymbolLength | 16 |
| LegendSymbolMargin | 4 |
| LegendSymbolPlacement | Left |
| LegendTitle | |
| LegendTitleColor | #00000001 |
| LegendTitleFont | |
| LegendTitleFontSize | 12 |
| LegendTitleFontWeight | 700 |
| Padding | (8, 8, 8, 8) |
| Width | 438 |
| Height | 448.04 |
| PlotAndAxisArea | (8, 8, 422, 432.04) |
| PlotArea | (37, 8, 393, 406.04) |
| AxisTierDistance | 4 |
| PlotAreaBackground | #00000000 |
| PlotAreaBorderColor | #ff000000 |
| PlotAreaBorderThickness | (1, 1, 1, 1) |
| PlotMargins | (NaN, NaN, NaN, NaN) |
| PlotType | XY |
| Series | OxyPlot.ElementCollection`1[OxyPlot.Series.Series] |
| RenderingDecorator | |
| Subtitle | |
| SubtitleFont | |
| SubtitleFontSize | 14 |
| SubtitleFontWeight | 400 |
| TextColor | #ff000000 |
| Title | |
| TitleToolTip | |
| TitleColor | #00000001 |
| SubtitleColor | #00000001 |
| TitleHorizontalAlignment | CenteredWithinPlotArea |
| TitleArea | (37, 8, 393, 12) |
| TitleFont | |
| TitleFontSize | 18 |
| TitleFontWeight | 700 |
| TitlePadding | 6 |
| DefaultAngleAxis | |
| DefaultMagnitudeAxis | |
| DefaultXAxis | LinearAxis(Bottom, 0, 100, 20) |
| DefaultYAxis | LinearAxis(Left, 0, 100, 20) |
| DefaultColorAxis | |
| SyncRoot | System.Object |
| SelectionColor | #ffffff00 |
=== Axes ===
Table 2. LinearAxis
| FormatAsFractions | False |
| FractionUnit | 1 |
| FractionUnitSymbol | |
| AbsoluteMaximum | 1.79769313486232E+308 |
| AbsoluteMinimum | -1.79769313486232E+308 |
| ActualMajorStep | 20 |
| ActualMaximum | 100 |
| ActualMinimum | 0 |
| ActualMinorStep | 5 |
| ActualStringFormat | g6 |
| ActualTitle | |
| Angle | 0 |
| AxisTickToLabelDistance | 4 |
| AxisTitleDistance | 4 |
| AxisDistance | 0 |
| AxislineColor | #ff000000 |
| AxislineStyle | None |
| AxislineThickness | 1 |
| ClipTitle | True |
| CropGridlines | False |
| DataMaximum | NaN |
| DataMinimum | NaN |
| EndPosition | 1 |
| ExtraGridlineColor | #ff000000 |
| ExtraGridlineStyle | Solid |
| ExtraGridlineThickness | 1 |
| ExtraGridlines | |
| FilterFunction | |
| FilterMaxValue | 1.79769313486232E+308 |
| FilterMinValue | -1.79769313486232E+308 |
| IntervalLength | 60 |
| IsAxisVisible | True |
| IsPanEnabled | True |
| IsReversed | False |
| IsZoomEnabled | True |
| Key | |
| LabelFormatter | |
| Layer | BelowSeries |
| MajorGridlineColor | #40000000 |
| MajorGridlineStyle | None |
| MajorGridlineThickness | 1 |
| MajorStep | NaN |
| MajorTickSize | 7 |
| Maximum | NaN |
| MaximumPadding | 0.01 |
| MaximumRange | Infinity |
| Minimum | NaN |
| MinimumMajorStep | 0 |
| MinimumMinorStep | 0 |
| MinimumPadding | 0.01 |
| MinimumRange | 0 |
| MinorGridlineColor | #20000000 |
| MinorGridlineStyle | None |
| MinorGridlineThickness | 1 |
| MinorStep | NaN |
| MinorTicklineColor | #00000001 |
| MinorTickSize | 4 |
| Offset | 101.970249236528 |
| Position | Left |
| PositionAtZeroCrossing | False |
| PositionTier | 0 |
| Scale | -4.0604 |
| ScreenMax | 8 414,04 |
| ScreenMin | 414,04 8 |
| StartPosition | 0 |
| StringFormat | |
| TickStyle | Outside |
| TicklineColor | #ff000000 |
| Title | |
| TitleClippingLength | 0.9 |
| TitleColor | #00000001 |
| TitleFont | |
| TitleFontSize | NaN |
| TitleFontWeight | 400 |
| TitleFormatString | {0} [{1}] |
| TitlePosition | 0.5 |
| Unit | |
| UseSuperExponentialFormat | False |
| DesiredSize | (29, 0) |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 3. LinearAxis
| FormatAsFractions | False |
| FractionUnit | 1 |
| FractionUnitSymbol | |
| AbsoluteMaximum | 1.79769313486232E+308 |
| AbsoluteMinimum | -1.79769313486232E+308 |
| ActualMajorStep | 20 |
| ActualMaximum | 100 |
| ActualMinimum | 0 |
| ActualMinorStep | 5 |
| ActualStringFormat | g6 |
| ActualTitle | |
| Angle | 0 |
| AxisTickToLabelDistance | 4 |
| AxisTitleDistance | 4 |
| AxisDistance | 0 |
| AxislineColor | #ff000000 |
| AxislineStyle | None |
| AxislineThickness | 1 |
| ClipTitle | True |
| CropGridlines | False |
| DataMaximum | NaN |
| DataMinimum | NaN |
| EndPosition | 1 |
| ExtraGridlineColor | #ff000000 |
| ExtraGridlineStyle | Solid |
| ExtraGridlineThickness | 1 |
| ExtraGridlines | |
| FilterFunction | |
| FilterMaxValue | 1.79769313486232E+308 |
| FilterMinValue | -1.79769313486232E+308 |
| IntervalLength | 60 |
| IsAxisVisible | True |
| IsPanEnabled | True |
| IsReversed | False |
| IsZoomEnabled | True |
| Key | |
| LabelFormatter | |
| Layer | BelowSeries |
| MajorGridlineColor | #40000000 |
| MajorGridlineStyle | None |
| MajorGridlineThickness | 1 |
| MajorStep | NaN |
| MajorTickSize | 7 |
| Maximum | NaN |
| MaximumPadding | 0.01 |
| MaximumRange | Infinity |
| Minimum | NaN |
| MinimumMajorStep | 0 |
| MinimumMinorStep | 0 |
| MinimumPadding | 0.01 |
| MinimumRange | 0 |
| MinorGridlineColor | #20000000 |
| MinorGridlineStyle | None |
| MinorGridlineThickness | 1 |
| MinorStep | NaN |
| MinorTicklineColor | #00000001 |
| MinorTickSize | 4 |
| Offset | -9.4147582697201 |
| Position | Bottom |
| PositionAtZeroCrossing | False |
| PositionTier | 0 |
| Scale | 3.93 |
| ScreenMax | 430 37 |
| ScreenMin | 37 430 |
| StartPosition | 0 |
| StringFormat | |
| TickStyle | Outside |
| TicklineColor | #ff000000 |
| Title | |
| TitleClippingLength | 0.9 |
| TitleColor | #00000001 |
| TitleFont | |
| TitleFontSize | NaN |
| TitleFontWeight | 400 |
| TitleFormatString | {0} [{1}] |
| TitlePosition | 0.5 |
| Unit | |
| UseSuperExponentialFormat | False |
| DesiredSize | (0, 26) |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
=== Annotations ===
Table 4. LineAnnotation
| Intercept | 0 |
| Slope | 0 |
| Type | Horizontal |
| X | 0 |
| Y | 30 |
| Color | #ff0000ff |
| LineJoin | Miter |
| LineStyle | Solid |
| MaximumX | 1.79769313486232E+308 |
| MaximumY | 1.79769313486232E+308 |
| MinimumX | -1.79769313486232E+308 |
| MinimumY | -1.79769313486232E+308 |
| StrokeThickness | 1 |
| TextMargin | 12 |
| TextPadding | 0 |
| TextOrientation | AlongLine |
| TextLinePosition | 1 |
| ClipText | True |
| ClipByXAxis | True |
| ClipByYAxis | True |
| Text | Solid |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Right |
| TextVerticalAlignment | Top |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 5. LineAnnotation
| Intercept | 0 |
| Slope | 0 |
| Type | Horizontal |
| X | 0 |
| Y | 40 |
| Color | #ff0000ff |
| LineJoin | Miter |
| LineStyle | None |
| MaximumX | 1.79769313486232E+308 |
| MaximumY | 1.79769313486232E+308 |
| MinimumX | -1.79769313486232E+308 |
| MinimumY | -1.79769313486232E+308 |
| StrokeThickness | 1 |
| TextMargin | 12 |
| TextPadding | 0 |
| TextOrientation | AlongLine |
| TextLinePosition | 1 |
| ClipText | True |
| ClipByXAxis | True |
| ClipByYAxis | True |
| Text | None |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Right |
| TextVerticalAlignment | Top |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 6. ArrowAnnotation
| ArrowDirection | 0 0 |
| Color | #ff0000ff |
| EndPoint | 40 10 |
| HeadLength | 10 |
| HeadWidth | 3 |
| LineJoin | Miter |
| LineStyle | Solid |
| StartPoint | 20 10 |
| StrokeThickness | 2 |
| Veeness | 0 |
| Text | Solid |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Center |
| TextVerticalAlignment | Middle |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 7. ArrowAnnotation
| ArrowDirection | 0 0 |
| Color | #ff0000ff |
| EndPoint | 90 10 |
| HeadLength | 10 |
| HeadWidth | 3 |
| LineJoin | Miter |
| LineStyle | None |
| StartPoint | 70 10 |
| StrokeThickness | 2 |
| Veeness | 0 |
| Text | None |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Center |
| TextVerticalAlignment | Middle |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 8. PolygonAnnotation
| LineJoin | Miter |
| LineStyle | Solid |
| Points | System.Collections.Generic.List`1[OxyPlot.DataPoint] |
| Fill | #ffadd8e6 |
| Stroke | #ff000000 |
| StrokeThickness | 2 |
| Text | Solid |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Center |
| TextVerticalAlignment | Middle |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
Table 9. PolygonAnnotation
| LineJoin | Miter |
| LineStyle | None |
| Points | System.Collections.Generic.List`1[OxyPlot.DataPoint] |
| Fill | #ffadd8e6 |
| Stroke | #ff000000 |
| StrokeThickness | 2 |
| Text | None |
| TextPosition | n. def. n. def. |
| TextHorizontalAlignment | Center |
| TextVerticalAlignment | Middle |
| TextRotation | 0 |
| Layer | AboveSeries |
| XAxis | LinearAxis(Bottom, 0, 100, 20) |
| XAxisKey | |
| YAxis | LinearAxis(Left, 0, 100, 20) |
| YAxisKey | |
| Font | |
| FontSize | NaN |
| FontWeight | 400 |
| PlotModel | |
| Tag | |
| TextColor | #00000001 |
| ToolTip | |
| Selectable | True |
| SelectionMode | All |
| Parent | |
=== Series ===
Report generated by OxyPlot 1.0.0
``` | priority | annotations ignore linestyle none and draw as if solid steps to reproduce add an annotation with linestyle property set to linestyle none make sure stoke and stokethickness are set to display something platform windows bit net version expected behaviour no lines are drawn actual behaviour lines are drawn as if linestyle solid was set generated code c public static plotmodel untitled var new plotmodel var new linearaxis axes add var new linearaxis position axisposition bottom axes add var new lineannotation type lineannotationtype horizontal y linestyle linestyle solid text solid annotations add var new lineannotation type lineannotationtype horizontal y linestyle linestyle none text none annotations add var new arrowannotation endpoint new datapoint startpoint new datapoint text solid annotations add var new arrowannotation endpoint new datapoint linestyle linestyle none startpoint new datapoint text none annotations add var new polygonannotation strokethickness text solid points add new datapoint points add new datapoint points add new datapoint points add new datapoint annotations add var new polygonannotation linestyle linestyle none strokethickness text none points add new datapoint points add new datapoint points add new datapoint points add new datapoint annotations add return report p l o t r e p o r t plotmodel table plotmodel defaultfont segoe ui defaultfontsize actualculture de de actualplotmargins plotview oxyplot wpf plotview annotations oxyplot elementcollection axes oxyplot elementcollection background culture defaultcolors system collections generic list islegendvisible true legendarea legendbackground legendborder legendborderthickness legendcolumnspacing legendfont legendfontsize legendtextcolor legendfontweight legenditemalignment left legenditemorder normal legenditemspacing legendlinespacing legendmargin legendmaxwidth nan legendmaxheight nan legendorientation vertical legendpadding legendplacement inside legendposition righttop legendsymbollength legendsymbolmargin legendsymbolplacement left legendtitle legendtitlecolor legendtitlefont legendtitlefontsize legendtitlefontweight padding width height plotandaxisarea plotarea axistierdistance plotareabackground plotareabordercolor plotareaborderthickness plotmargins nan nan nan nan plottype xy series oxyplot elementcollection renderingdecorator subtitle subtitlefont subtitlefontsize subtitlefontweight textcolor title titletooltip titlecolor subtitlecolor titlehorizontalalignment centeredwithinplotarea titlearea titlefont titlefontsize titlefontweight titlepadding defaultangleaxis defaultmagnitudeaxis defaultxaxis linearaxis bottom defaultyaxis linearaxis left defaultcoloraxis syncroot system object selectioncolor axes table linearaxis formatasfractions false fractionunit fractionunitsymbol absolutemaximum absoluteminimum actualmajorstep actualmaximum actualminimum actualminorstep actualstringformat actualtitle angle axisticktolabeldistance axistitledistance axisdistance axislinecolor axislinestyle none axislinethickness cliptitle true cropgridlines false datamaximum nan dataminimum nan endposition extragridlinecolor extragridlinestyle solid extragridlinethickness extragridlines filterfunction filtermaxvalue filterminvalue intervallength isaxisvisible true ispanenabled true isreversed false iszoomenabled true key labelformatter layer belowseries majorgridlinecolor majorgridlinestyle none majorgridlinethickness majorstep nan majorticksize maximum nan maximumpadding maximumrange infinity minimum nan minimummajorstep minimumminorstep minimumpadding minimumrange minorgridlinecolor minorgridlinestyle none minorgridlinethickness minorstep nan minorticklinecolor minorticksize offset position left positionatzerocrossing false positiontier scale screenmax screenmin startposition stringformat tickstyle outside ticklinecolor title titleclippinglength titlecolor titlefont titlefontsize nan titlefontweight titleformatstring titleposition unit usesuperexponentialformat false desiredsize font fontsize nan fontweight plotmodel tag textcolor tooltip selectable true selectionmode all parent table linearaxis formatasfractions false fractionunit fractionunitsymbol absolutemaximum absoluteminimum actualmajorstep actualmaximum actualminimum actualminorstep actualstringformat actualtitle angle axisticktolabeldistance axistitledistance axisdistance axislinecolor axislinestyle none axislinethickness cliptitle true cropgridlines false datamaximum nan dataminimum nan endposition extragridlinecolor extragridlinestyle solid extragridlinethickness extragridlines filterfunction filtermaxvalue filterminvalue intervallength isaxisvisible true ispanenabled true isreversed false iszoomenabled true key labelformatter layer belowseries majorgridlinecolor majorgridlinestyle none majorgridlinethickness majorstep nan majorticksize maximum nan maximumpadding maximumrange infinity minimum nan minimummajorstep minimumminorstep minimumpadding minimumrange minorgridlinecolor minorgridlinestyle none minorgridlinethickness minorstep nan minorticklinecolor minorticksize offset position bottom positionatzerocrossing false positiontier scale screenmax screenmin startposition stringformat tickstyle outside ticklinecolor title titleclippinglength titlecolor titlefont titlefontsize nan titlefontweight titleformatstring titleposition unit usesuperexponentialformat false desiredsize font fontsize nan fontweight plotmodel tag textcolor tooltip selectable true selectionmode all parent annotations table lineannotation intercept slope type horizontal x y color linejoin miter linestyle solid maximumx maximumy minimumx minimumy strokethickness textmargin textpadding textorientation alongline textlineposition cliptext true clipbyxaxis true clipbyyaxis true text solid textposition n def n def texthorizontalalignment right textverticalalignment top textrotation layer aboveseries xaxis linearaxis bottom xaxiskey yaxis linearaxis left yaxiskey font fontsize nan fontweight plotmodel tag textcolor tooltip selectable true selectionmode all parent table lineannotation intercept slope type horizontal x y color linejoin miter linestyle none maximumx maximumy minimumx minimumy strokethickness textmargin textpadding textorientation alongline textlineposition cliptext true clipbyxaxis true clipbyyaxis true text none textposition n def n def texthorizontalalignment right textverticalalignment top textrotation layer aboveseries xaxis linearaxis bottom xaxiskey yaxis linearaxis left yaxiskey font fontsize nan fontweight plotmodel tag textcolor tooltip selectable true selectionmode all parent table arrowannotation arrowdirection color endpoint headlength headwidth linejoin miter linestyle solid startpoint strokethickness veeness text solid textposition n def n def texthorizontalalignment center textverticalalignment middle textrotation layer aboveseries xaxis linearaxis bottom xaxiskey yaxis linearaxis left yaxiskey font fontsize nan fontweight plotmodel tag textcolor tooltip selectable true selectionmode all parent table arrowannotation arrowdirection color endpoint headlength headwidth linejoin miter linestyle none startpoint strokethickness veeness text none textposition n def n def texthorizontalalignment center textverticalalignment middle textrotation layer aboveseries xaxis linearaxis bottom xaxiskey yaxis linearaxis left yaxiskey font fontsize nan fontweight plotmodel tag textcolor tooltip selectable true selectionmode all parent table polygonannotation linejoin miter linestyle solid points system collections generic list fill stroke strokethickness text solid textposition n def n def texthorizontalalignment center textverticalalignment middle textrotation layer aboveseries xaxis linearaxis bottom xaxiskey yaxis linearaxis left yaxiskey font fontsize nan fontweight plotmodel tag textcolor tooltip selectable true selectionmode all parent table polygonannotation linejoin miter linestyle none points system collections generic list fill stroke strokethickness text none textposition n def n def texthorizontalalignment center textverticalalignment middle textrotation layer aboveseries xaxis linearaxis bottom xaxiskey yaxis linearaxis left yaxiskey font fontsize nan fontweight plotmodel tag textcolor tooltip selectable true selectionmode all parent series report generated by oxyplot | 1 |
465,684 | 13,390,369,368 | IssuesEvent | 2020-09-02 20:26:47 | SlapGaming/SlapBot | https://api.github.com/repos/SlapGaming/SlapBot | opened | Improve storage | High priority enhancement new feature | Move from flatfile/YAML to proper sql backend
- [ ] Setup hibernate + JPA
- [ ] NSA logging to storage
- [ ] LTG beans
- [ ] Event/LAN beans | 1.0 | Improve storage - Move from flatfile/YAML to proper sql backend
- [ ] Setup hibernate + JPA
- [ ] NSA logging to storage
- [ ] LTG beans
- [ ] Event/LAN beans | priority | improve storage move from flatfile yaml to proper sql backend setup hibernate jpa nsa logging to storage ltg beans event lan beans | 1 |
480,733 | 13,866,064,599 | IssuesEvent | 2020-10-16 06:02:49 | wso2/product-is | https://api.github.com/repos/wso2/product-is | opened | Description field is named as Type in the userstore creation wizard | Affected/5.11.0-Beta Priority/High Severity/Minor bug ux | **Describe the issue:**
The userstore description field is titled "Type" instead of "**Description**". The placeholder of the input field is correct.

**How to reproduce:**
Login to console.
Navigate to Manage Tab > User stores. Click on Add new user store and select one userstore type.
**Expected behavior:**
The above-marked input field should be titled as "Description".
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: IS 5.11.0-Beta
- OS: Linux
- Database: H2
- Userstore:LDAP
---
| 1.0 | Description field is named as Type in the userstore creation wizard - **Describe the issue:**
The userstore description field is titled "Type" instead of "**Description**". The placeholder of the input field is correct.

**How to reproduce:**
Login to console.
Navigate to Manage Tab > User stores. Click on Add new user store and select one userstore type.
**Expected behavior:**
The above-marked input field should be titled as "Description".
**Environment information** (_Please complete the following information; remove any unnecessary fields_) **:**
- Product Version: IS 5.11.0-Beta
- OS: Linux
- Database: H2
- Userstore:LDAP
---
| priority | description field is named as type in the userstore creation wizard describe the issue the userstore description field is titled type instead of description the placeholder of the input field is correct how to reproduce login to console navigate to manage tab user stores click on add new user store and select one userstore type expected behavior the above marked input field should be titled as description environment information please complete the following information remove any unnecessary fields product version is beta os linux database userstore ldap | 1 |
393,341 | 11,613,906,458 | IssuesEvent | 2020-02-26 11:36:50 | hajkmap/Hajk | https://api.github.com/repos/hajkmap/Hajk | closed | Creating new map config in admin should result in some working defaults | difficulty:easy module:backend priority:high | # Problem
Currently the result of a call to `/mapservice/config/create/{mapconfigName}` is not satisfying. The default map contains parts that are problematic. In order to create new map configs, we always end up with copying an existing, working, map configuration instead of using the Create New button in admin.
# Call out for testers
Please try out the map config below. Any comments and modifications should be posted in this thread. The idea is to have an empty but working Hajk config. The only thing left from here is to add a layer from own local source and it should display fine. Regarding center, extend and resolutions, we can of course discuss that. Below defaults is what works for us in Halmstad.
# Proposed Solution
Preferably, the mapservice component that responds to `config/create` call is modified to result in something like this:
```javascript
// Basic configuration with an empty LayerSwitcher, Search and Infoclick plugins configured
{
"projections": [
{
"code": "EPSG:3006",
"definition": "+proj=utm +zone=33 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
181896.33,
6101648.07,
864416.0,
7689478.3
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3006",
"definition": "+proj=utm +zone=33 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
181896.33,
6101648.07,
864416.0,
7689478.3
],
"units": null
},
{
"code": "EPSG:3007",
"definition": "+proj=tmerc +lat_0=0 +lon_0=12 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
60436.5084,
6192389.565,
217643.4713,
6682784.4276
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3007",
"definition": "+proj=tmerc +lat_0=0 +lon_0=12 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
60436.5084,
6192389.565,
217643.4713,
6682784.4276
],
"units": null
},
{
"code": "EPSG:3008",
"definition": "+proj=tmerc +lat_0=0 +lon_0=13.5 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs ",
"extent": [
60857.4994,
6120098.8505,
223225.0217,
6906693.7888
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3008",
"definition": "+proj=tmerc +lat_0=0 +lon_0=13.5 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs ",
"extent": [
60857.4994,
6120098.8505,
223225.0217,
6906693.7888
],
"units": null
},
{
"code": "EPSG:3009",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
56294.0365,
6203542.5282,
218719.0581,
6835499.2391
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3009",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
56294.0365,
6203542.5282,
218719.0581,
6835499.2391
],
"units": null
},
{
"code": "EPSG:3010",
"definition": "+proj=tmerc +lat_0=0 +lon_0=16.5 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
97213.6352,
6228930.1419,
225141.8681,
6916524.0785
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3010",
"definition": "+proj=tmerc +lat_0=0 +lon_0=16.5 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
97213.6352,
6228930.1419,
225141.8681,
6916524.0785
],
"units": null
},
{
"code": "EPSG:3011",
"definition": "+proj=tmerc +lat_0=0 +lon_0=18 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
96664.5565,
6509617.2232,
220146.6914,
6727103.5879
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3011",
"definition": "+proj=tmerc +lat_0=0 +lon_0=18 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
96664.5565,
6509617.2232,
220146.6914,
6727103.5879
],
"units": null
},
{
"code": "EPSG:3012",
"definition": "+proj=tmerc +lat_0=0 +lon_0=14.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
30462.5263,
6829647.9842,
216416.1584,
7154168.0208
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3012",
"definition": "+proj=tmerc +lat_0=0 +lon_0=14.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
30462.5263,
6829647.9842,
216416.1584,
7154168.0208
],
"units": null
},
{
"code": "EPSG:3013",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs ",
"extent": [
34056.6264,
6710433.2884,
218692.0214,
7224144.732
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3013",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs ",
"extent": [
34056.6264,
6710433.2884,
218692.0214,
7224144.732
],
"units": null
},
{
"code": "EPSG:3014",
"definition": "+proj=tmerc +lat_0=0 +lon_0=17.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
-1420.28,
6888655.5779,
212669.1333,
7459585.3378
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3014",
"definition": "+proj=tmerc +lat_0=0 +lon_0=17.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
-1420.28,
6888655.5779,
212669.1333,
7459585.3378
],
"units": null
},
{
"code": "EPSG:3015",
"definition": "+proj=tmerc +lat_0=0 +lon_0=18.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
58479.4774,
6304213.2147,
241520.5226,
7276832.4419
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3015",
"definition": "+proj=tmerc +lat_0=0 +lon_0=18.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
58479.4774,
6304213.2147,
241520.5226,
7276832.4419
],
"units": null
},
{
"code": "EPSG:3016",
"definition": "+proj=tmerc +lat_0=0 +lon_0=20.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
-93218.3385,
7034909.8738,
261434.6246,
7676279.8691
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3016",
"definition": "+proj=tmerc +lat_0=0 +lon_0=20.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
-93218.3385,
7034909.8738,
261434.6246,
7676279.8691
],
"units": null
},
{
"code": "EPSG:3017",
"definition": "+proj=tmerc +lat_0=0 +lon_0=21.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
67451.0699,
7211342.8483,
145349.5699,
7254837.254
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3017",
"definition": "+proj=tmerc +lat_0=0 +lon_0=21.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
67451.0699,
7211342.8483,
145349.5699,
7254837.254
],
"units": null
},
{
"code": "EPSG:3018",
"definition": "+proj=tmerc +lat_0=0 +lon_0=23.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
38920.7048,
7267405.2323,
193050.246,
7597992.2419
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3018",
"definition": "+proj=tmerc +lat_0=0 +lon_0=23.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
38920.7048,
7267405.2323,
193050.246,
7597992.2419
],
"units": null
},
{
"code": "EPSG:3021",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15.80827777777778 +k=1 +x_0=1500000 +y_0=0 +ellps=bessel +units=m +no_defs",
"extent": [
1392811.0743,
6208496.7665,
1570600.8906,
7546077.6984
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3021",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15.80827777777778 +k=1 +x_0=1500000 +y_0=0 +ellps=bessel +units=m +no_defs",
"extent": [
1392811.0743,
6208496.7665,
1570600.8906,
7546077.6984
],
"units": null
}
],
"tools": [
{
"type": "layerswitcher",
"options": {
"baselayers": [],
"groups": [],
"active": true,
"visibleAtStart": true,
"backgroundSwitcherBlack": true,
"backgroundSwitcherWhite": true,
"showBreadcrumbs": true,
"target": "left",
"title": "",
"description": "",
"dropdownThemeMaps": true,
"themeMapHeaderCaption": "",
"instruction": "",
"visibleForGroups": []
},
"index": 0
},
{
"type": "infoclick",
"options": {
"displayPopup": true,
"markerImg": "marker.png",
"anchor": [
16,
32
],
"imgSize": [
32,
32
],
"popupOffsetY": 0,
"visibleForGroups": []
},
"index": 0
},
{
"type": "search",
"options": {
"target": "toolbar",
"onMap": false,
"enableViewTogglePopupInSnabbsok": true,
"toolbar": "bottom",
"maxZoom": 14,
"markerImg": "marker.png",
"kmlExportUrl": "/mapservice/export/kml",
"excelExportUrl": "/mapservice/export/excel",
"displayPopup": true,
"selectionTools": true,
"selectionSearch": false,
"radiusSearch": false,
"polygonSearch": false,
"searchSettings": false,
"base64Encode": false,
"instruction": "",
"filterVisible": true,
"anchor": [
16,
32
],
"imgSize": [
32,
32
],
"tooltip": "Sök...",
"searchWithinButtonText": "Markera i kartan",
"toolDescription": "<div>Sök innehåll i kartan</div>",
"popupOffsetY": 0,
"visibleForGroups": [],
"layers": [],
"selectedSources": []
},
"index": 0
}
],
"map": {
"target": "map",
"center": [
110600,
6283796
],
"title": "Ange kartans titel",
"projection": "EPSG:3008",
"zoom": 1,
"maxZoom": 9,
"minZoom": 0,
"resolutions": [
55.999999999999993,
27.999999999999996,
13.999999999999998,
6.9999999999999991,
2.8,
1.4,
0.55999999999999994,
0.28,
0.112,
0.056
],
"origin": [
72595.7168,
6269051.84184
],
"extent": [
72595.7168,
6269051.84184,
139282.1317,
6313912.82227
],
"constrainOnlyCenter": false,
"logo": "logo.png",
"geoserverLegendOptions": "fontName:Roboto;fontAntiAliasing:true;fontColor:0x333333;fontSize:14;dpi:90;forceLabels:on",
"mapselector": true,
"mapcleaner": true,
"drawerVisible": true,
"drawerPermanent": true,
"colors": {
"primaryColor": "#4a4a4a",
"secondaryColor": "#f5a623"
}
}
}
``` | 1.0 | Creating new map config in admin should result in some working defaults - # Problem
Currently the result of a call to `/mapservice/config/create/{mapconfigName}` is not satisfying. The default map contains parts that are problematic. In order to create new map configs, we always end up with copying an existing, working, map configuration instead of using the Create New button in admin.
# Call out for testers
Please try out the map config below. Any comments and modifications should be posted in this thread. The idea is to have an empty but working Hajk config. The only thing left from here is to add a layer from own local source and it should display fine. Regarding center, extend and resolutions, we can of course discuss that. Below defaults is what works for us in Halmstad.
# Proposed Solution
Preferably, the mapservice component that responds to `config/create` call is modified to result in something like this:
```javascript
// Basic configuration with an empty LayerSwitcher, Search and Infoclick plugins configured
{
"projections": [
{
"code": "EPSG:3006",
"definition": "+proj=utm +zone=33 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
181896.33,
6101648.07,
864416.0,
7689478.3
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3006",
"definition": "+proj=utm +zone=33 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
181896.33,
6101648.07,
864416.0,
7689478.3
],
"units": null
},
{
"code": "EPSG:3007",
"definition": "+proj=tmerc +lat_0=0 +lon_0=12 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
60436.5084,
6192389.565,
217643.4713,
6682784.4276
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3007",
"definition": "+proj=tmerc +lat_0=0 +lon_0=12 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
60436.5084,
6192389.565,
217643.4713,
6682784.4276
],
"units": null
},
{
"code": "EPSG:3008",
"definition": "+proj=tmerc +lat_0=0 +lon_0=13.5 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs ",
"extent": [
60857.4994,
6120098.8505,
223225.0217,
6906693.7888
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3008",
"definition": "+proj=tmerc +lat_0=0 +lon_0=13.5 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs ",
"extent": [
60857.4994,
6120098.8505,
223225.0217,
6906693.7888
],
"units": null
},
{
"code": "EPSG:3009",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
56294.0365,
6203542.5282,
218719.0581,
6835499.2391
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3009",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
56294.0365,
6203542.5282,
218719.0581,
6835499.2391
],
"units": null
},
{
"code": "EPSG:3010",
"definition": "+proj=tmerc +lat_0=0 +lon_0=16.5 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
97213.6352,
6228930.1419,
225141.8681,
6916524.0785
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3010",
"definition": "+proj=tmerc +lat_0=0 +lon_0=16.5 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
97213.6352,
6228930.1419,
225141.8681,
6916524.0785
],
"units": null
},
{
"code": "EPSG:3011",
"definition": "+proj=tmerc +lat_0=0 +lon_0=18 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
96664.5565,
6509617.2232,
220146.6914,
6727103.5879
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3011",
"definition": "+proj=tmerc +lat_0=0 +lon_0=18 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
96664.5565,
6509617.2232,
220146.6914,
6727103.5879
],
"units": null
},
{
"code": "EPSG:3012",
"definition": "+proj=tmerc +lat_0=0 +lon_0=14.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
30462.5263,
6829647.9842,
216416.1584,
7154168.0208
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3012",
"definition": "+proj=tmerc +lat_0=0 +lon_0=14.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
30462.5263,
6829647.9842,
216416.1584,
7154168.0208
],
"units": null
},
{
"code": "EPSG:3013",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs ",
"extent": [
34056.6264,
6710433.2884,
218692.0214,
7224144.732
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3013",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs ",
"extent": [
34056.6264,
6710433.2884,
218692.0214,
7224144.732
],
"units": null
},
{
"code": "EPSG:3014",
"definition": "+proj=tmerc +lat_0=0 +lon_0=17.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
-1420.28,
6888655.5779,
212669.1333,
7459585.3378
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3014",
"definition": "+proj=tmerc +lat_0=0 +lon_0=17.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
-1420.28,
6888655.5779,
212669.1333,
7459585.3378
],
"units": null
},
{
"code": "EPSG:3015",
"definition": "+proj=tmerc +lat_0=0 +lon_0=18.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
58479.4774,
6304213.2147,
241520.5226,
7276832.4419
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3015",
"definition": "+proj=tmerc +lat_0=0 +lon_0=18.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
58479.4774,
6304213.2147,
241520.5226,
7276832.4419
],
"units": null
},
{
"code": "EPSG:3016",
"definition": "+proj=tmerc +lat_0=0 +lon_0=20.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
-93218.3385,
7034909.8738,
261434.6246,
7676279.8691
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3016",
"definition": "+proj=tmerc +lat_0=0 +lon_0=20.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
-93218.3385,
7034909.8738,
261434.6246,
7676279.8691
],
"units": null
},
{
"code": "EPSG:3017",
"definition": "+proj=tmerc +lat_0=0 +lon_0=21.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
67451.0699,
7211342.8483,
145349.5699,
7254837.254
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3017",
"definition": "+proj=tmerc +lat_0=0 +lon_0=21.75 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
67451.0699,
7211342.8483,
145349.5699,
7254837.254
],
"units": null
},
{
"code": "EPSG:3018",
"definition": "+proj=tmerc +lat_0=0 +lon_0=23.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
38920.7048,
7267405.2323,
193050.246,
7597992.2419
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3018",
"definition": "+proj=tmerc +lat_0=0 +lon_0=23.25 +k=1 +x_0=150000 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs",
"extent": [
38920.7048,
7267405.2323,
193050.246,
7597992.2419
],
"units": null
},
{
"code": "EPSG:3021",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15.80827777777778 +k=1 +x_0=1500000 +y_0=0 +ellps=bessel +units=m +no_defs",
"extent": [
1392811.0743,
6208496.7665,
1570600.8906,
7546077.6984
],
"units": null
},
{
"code": "http://www.opengis.net/gml/srs/epsg.xml#3021",
"definition": "+proj=tmerc +lat_0=0 +lon_0=15.80827777777778 +k=1 +x_0=1500000 +y_0=0 +ellps=bessel +units=m +no_defs",
"extent": [
1392811.0743,
6208496.7665,
1570600.8906,
7546077.6984
],
"units": null
}
],
"tools": [
{
"type": "layerswitcher",
"options": {
"baselayers": [],
"groups": [],
"active": true,
"visibleAtStart": true,
"backgroundSwitcherBlack": true,
"backgroundSwitcherWhite": true,
"showBreadcrumbs": true,
"target": "left",
"title": "",
"description": "",
"dropdownThemeMaps": true,
"themeMapHeaderCaption": "",
"instruction": "",
"visibleForGroups": []
},
"index": 0
},
{
"type": "infoclick",
"options": {
"displayPopup": true,
"markerImg": "marker.png",
"anchor": [
16,
32
],
"imgSize": [
32,
32
],
"popupOffsetY": 0,
"visibleForGroups": []
},
"index": 0
},
{
"type": "search",
"options": {
"target": "toolbar",
"onMap": false,
"enableViewTogglePopupInSnabbsok": true,
"toolbar": "bottom",
"maxZoom": 14,
"markerImg": "marker.png",
"kmlExportUrl": "/mapservice/export/kml",
"excelExportUrl": "/mapservice/export/excel",
"displayPopup": true,
"selectionTools": true,
"selectionSearch": false,
"radiusSearch": false,
"polygonSearch": false,
"searchSettings": false,
"base64Encode": false,
"instruction": "",
"filterVisible": true,
"anchor": [
16,
32
],
"imgSize": [
32,
32
],
"tooltip": "Sök...",
"searchWithinButtonText": "Markera i kartan",
"toolDescription": "<div>Sök innehåll i kartan</div>",
"popupOffsetY": 0,
"visibleForGroups": [],
"layers": [],
"selectedSources": []
},
"index": 0
}
],
"map": {
"target": "map",
"center": [
110600,
6283796
],
"title": "Ange kartans titel",
"projection": "EPSG:3008",
"zoom": 1,
"maxZoom": 9,
"minZoom": 0,
"resolutions": [
55.999999999999993,
27.999999999999996,
13.999999999999998,
6.9999999999999991,
2.8,
1.4,
0.55999999999999994,
0.28,
0.112,
0.056
],
"origin": [
72595.7168,
6269051.84184
],
"extent": [
72595.7168,
6269051.84184,
139282.1317,
6313912.82227
],
"constrainOnlyCenter": false,
"logo": "logo.png",
"geoserverLegendOptions": "fontName:Roboto;fontAntiAliasing:true;fontColor:0x333333;fontSize:14;dpi:90;forceLabels:on",
"mapselector": true,
"mapcleaner": true,
"drawerVisible": true,
"drawerPermanent": true,
"colors": {
"primaryColor": "#4a4a4a",
"secondaryColor": "#f5a623"
}
}
}
``` | priority | creating new map config in admin should result in some working defaults problem currently the result of a call to mapservice config create mapconfigname is not satisfying the default map contains parts that are problematic in order to create new map configs we always end up with copying an existing working map configuration instead of using the create new button in admin call out for testers please try out the map config below any comments and modifications should be posted in this thread the idea is to have an empty but working hajk config the only thing left from here is to add a layer from own local source and it should display fine regarding center extend and resolutions we can of course discuss that below defaults is what works for us in halmstad proposed solution preferably the mapservice component that responds to config create call is modified to result in something like this javascript basic configuration with an empty layerswitcher search and infoclick plugins configured projections code epsg definition proj utm zone ellps units m no defs extent units null code definition proj utm zone ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps units m no defs extent units null code definition proj tmerc lat lon k x y ellps units m no defs extent units null code epsg definition proj tmerc lat lon k x y ellps bessel units m no defs extent units null code definition proj tmerc lat lon k x y ellps bessel units m no defs extent units null tools type layerswitcher options baselayers groups active true visibleatstart true backgroundswitcherblack true backgroundswitcherwhite true showbreadcrumbs true target left title description dropdownthememaps true thememapheadercaption instruction visibleforgroups index type infoclick options displaypopup true markerimg marker png anchor imgsize popupoffsety visibleforgroups index type search options target toolbar onmap false enableviewtogglepopupinsnabbsok true toolbar bottom maxzoom markerimg marker png kmlexporturl mapservice export kml excelexporturl mapservice export excel displaypopup true selectiontools true selectionsearch false radiussearch false polygonsearch false searchsettings false false instruction filtervisible true anchor imgsize tooltip sök searchwithinbuttontext markera i kartan tooldescription sök innehåll i kartan popupoffsety visibleforgroups layers selectedsources index map target map center title ange kartans titel projection epsg zoom maxzoom minzoom resolutions origin extent constrainonlycenter false logo logo png geoserverlegendoptions fontname roboto fontantialiasing true fontcolor fontsize dpi forcelabels on mapselector true mapcleaner true drawervisible true drawerpermanent true colors primarycolor secondarycolor | 1 |
736,558 | 25,478,536,549 | IssuesEvent | 2022-11-25 17:03:26 | DCS-LCSR/ASL-DAI | https://api.github.com/repos/DCS-LCSR/ASL-DAI | closed | IMPORTANT: Some signs are missing from dev1 | high priority | Related to #361
Missing compounds that start with. e.g., (1):
dev1 on the left; DAI on the right here
<img width="426" alt="Screen Shot 2022-11-14 at 7 28 21 PM" src="https://user-images.githubusercontent.com/13629122/201796548-f8764eb2-fec8-48e9-baf5-7e983933a8f6.png">
<img width="428" alt="Screen Shot 2022-11-14 at 7 31 04 PM" src="https://user-images.githubusercontent.com/13629122/201796766-0d0dbb78-fdfa-4f8e-9d90-a01675d8bd2c.png">
| 1.0 | IMPORTANT: Some signs are missing from dev1 - Related to #361
Missing compounds that start with. e.g., (1):
dev1 on the left; DAI on the right here
<img width="426" alt="Screen Shot 2022-11-14 at 7 28 21 PM" src="https://user-images.githubusercontent.com/13629122/201796548-f8764eb2-fec8-48e9-baf5-7e983933a8f6.png">
<img width="428" alt="Screen Shot 2022-11-14 at 7 31 04 PM" src="https://user-images.githubusercontent.com/13629122/201796766-0d0dbb78-fdfa-4f8e-9d90-a01675d8bd2c.png">
| priority | important some signs are missing from related to missing compounds that start with e g on the left dai on the right here img width alt screen shot at pm src img width alt screen shot at pm src | 1 |
525,495 | 15,254,768,579 | IssuesEvent | 2021-02-20 13:24:23 | sevmardi/sxro | https://api.github.com/repos/sevmardi/sxro | opened | 0004464: Cannot edit wafer | bug high-priority | I added a wafer called unknow_wafer. That worked. However I cannot edit it. I enclose a screenshot of the problem. | 1.0 | 0004464: Cannot edit wafer - I added a wafer called unknow_wafer. That worked. However I cannot edit it. I enclose a screenshot of the problem. | priority | cannot edit wafer i added a wafer called unknow wafer that worked however i cannot edit it i enclose a screenshot of the problem | 1 |
612,679 | 19,028,218,542 | IssuesEvent | 2021-11-24 07:42:38 | Wrap-and-Go/Wrap-nd-Go | https://api.github.com/repos/Wrap-and-Go/Wrap-nd-Go | reopened | Search bar UI Needed | good first issue help wanted UX/UI high priority | The basic design of the search bar is given in the link: https://whimsical.com/food-8PBqBtCftsetbN27UNpFJC
Just waiting 🤞 Contributor✨
Make sure to merge your code in the website.css file which is present in the code section in the main branch. | 1.0 | Search bar UI Needed - The basic design of the search bar is given in the link: https://whimsical.com/food-8PBqBtCftsetbN27UNpFJC
Just waiting 🤞 Contributor✨
Make sure to merge your code in the website.css file which is present in the code section in the main branch. | priority | search bar ui needed the basic design of the search bar is given in the link just waiting 🤞 contributor✨ make sure to merge your code in the website css file which is present in the code section in the main branch | 1 |
250,022 | 7,966,574,686 | IssuesEvent | 2018-07-15 00:30:20 | OriginProtocol/origin-js | https://api.github.com/repos/OriginProtocol/origin-js | opened | Change IPFS images from data URIs to ipfs | enhancement high priority ipfs javascript origin.js | See https://github.com/OriginProtocol/origin-bridge/issues/18. This may or may not involve migrating existing listings to have different `pictures` values.
Relevant: https://github.com/OriginProtocol/origin-js/pull/216 | 1.0 | Change IPFS images from data URIs to ipfs - See https://github.com/OriginProtocol/origin-bridge/issues/18. This may or may not involve migrating existing listings to have different `pictures` values.
Relevant: https://github.com/OriginProtocol/origin-js/pull/216 | priority | change ipfs images from data uris to ipfs see this may or may not involve migrating existing listings to have different pictures values relevant | 1 |
649,984 | 21,331,430,322 | IssuesEvent | 2022-04-18 08:55:12 | eclipse/dirigible | https://api.github.com/repos/eclipse/dirigible | closed | [Workspace] Enumerating files ignores the soft links | bug component-workspace component-repository priority-high efforts-low | Enumerating files ignores the soft links. It is a must for Git based projects. | 1.0 | [Workspace] Enumerating files ignores the soft links - Enumerating files ignores the soft links. It is a must for Git based projects. | priority | enumerating files ignores the soft links enumerating files ignores the soft links it is a must for git based projects | 1 |
188,850 | 6,782,556,240 | IssuesEvent | 2017-10-30 08:39:33 | ballerinalang/composer | https://api.github.com/repos/ballerinalang/composer | closed | Cannot add connector params in the design view | 0.94-M1 Priority/Highest Severity/Critical | Browser: Chrome Version 62.0.3202.62 (Official Build) (64-bit)
**Steps**
1. Add a client connector
As shown in the below image, connector parameters can be added by source but not the design.

| 1.0 | Cannot add connector params in the design view - Browser: Chrome Version 62.0.3202.62 (Official Build) (64-bit)
**Steps**
1. Add a client connector
As shown in the below image, connector parameters can be added by source but not the design.

| priority | cannot add connector params in the design view browser chrome version official build bit steps add a client connector as shown in the below image connector parameters can be added by source but not the design | 1 |
322,125 | 9,813,261,938 | IssuesEvent | 2019-06-13 07:31:54 | alphagov/govuk-frontend | https://api.github.com/repos/alphagov/govuk-frontend | closed | Focus indicator is potentially confusing for tabs and accordion components | Effort: days Priority: high accessibility audit: may-2019 | This issue is from a May 2019 external accessibility audit report.
**WCAG Reference**: Usability feedback only, there is no WCAG related guidelines.
**Issue ID**: DAC_Issue21
**URLs**:
- http://govuk-frontend-v3.herokuapp.com/full-page-examples/bank-holidays#tab_england-and-wales
- http://govuk-frontend-v3.herokuapp.com/full-page-examples/service-manual-topic
## Screen Shots


The focus indicator did not surround elements as expected and only appeared at the top edge of the tab. This was confusing to keyboard only users in some cases as it was not clear which element was in focus.
## Current Code Ref(s)
```html
.js-enabled .govuk-tabs__tab--selected:focus, .js-enabled .govuk-tabs__tab--selected.\:focus { box-shadow: inset 0 4px 0 0 #ffdd00, inset 0 8px 0 0 #0b0c0c;
```
## Keyboard only comments
> “The highlighting being used on the tabs consists of a line appearing above it. A keyboard only user would expect highlighting to appear around the whole element.”
> “After the ‘click to expand link’ was selected, focus went to the line above it. At this point it is not obvious when the line was selected and that it will close the ‘click to expand’ element. As it appeared at the top of the link, I was not sure which link it was highlighting.”
## Solution
It is recommended that the entire area receives a visual focus indication, to ensure that it is clear which element is active. | 1.0 | Focus indicator is potentially confusing for tabs and accordion components - This issue is from a May 2019 external accessibility audit report.
**WCAG Reference**: Usability feedback only, there is no WCAG related guidelines.
**Issue ID**: DAC_Issue21
**URLs**:
- http://govuk-frontend-v3.herokuapp.com/full-page-examples/bank-holidays#tab_england-and-wales
- http://govuk-frontend-v3.herokuapp.com/full-page-examples/service-manual-topic
## Screen Shots


The focus indicator did not surround elements as expected and only appeared at the top edge of the tab. This was confusing to keyboard only users in some cases as it was not clear which element was in focus.
## Current Code Ref(s)
```html
.js-enabled .govuk-tabs__tab--selected:focus, .js-enabled .govuk-tabs__tab--selected.\:focus { box-shadow: inset 0 4px 0 0 #ffdd00, inset 0 8px 0 0 #0b0c0c;
```
## Keyboard only comments
> “The highlighting being used on the tabs consists of a line appearing above it. A keyboard only user would expect highlighting to appear around the whole element.”
> “After the ‘click to expand link’ was selected, focus went to the line above it. At this point it is not obvious when the line was selected and that it will close the ‘click to expand’ element. As it appeared at the top of the link, I was not sure which link it was highlighting.”
## Solution
It is recommended that the entire area receives a visual focus indication, to ensure that it is clear which element is active. | priority | focus indicator is potentially confusing for tabs and accordion components this issue is from a may external accessibility audit report wcag reference usability feedback only there is no wcag related guidelines issue id dac urls screen shots the focus indicator did not surround elements as expected and only appeared at the top edge of the tab this was confusing to keyboard only users in some cases as it was not clear which element was in focus current code ref s html js enabled govuk tabs tab selected focus js enabled govuk tabs tab selected focus box shadow inset inset keyboard only comments “the highlighting being used on the tabs consists of a line appearing above it a keyboard only user would expect highlighting to appear around the whole element ” “after the ‘click to expand link’ was selected focus went to the line above it at this point it is not obvious when the line was selected and that it will close the ‘click to expand’ element as it appeared at the top of the link i was not sure which link it was highlighting ” solution it is recommended that the entire area receives a visual focus indication to ensure that it is clear which element is active | 1 |
406,086 | 11,886,733,840 | IssuesEvent | 2020-03-27 22:48:37 | mit-cml/appinventor-sources | https://api.github.com/repos/mit-cml/appinventor-sources | closed | Dictionaries lookups fail when using joined strings | affects: ucr bug issue: accepted issue: pending component release priority: high | **Describe the bug**
[From the forum](https://community.appinventor.mit.edu/t/remove-entry-for-key-doesnt-work/3334/4?u=ewpatton): Removing the key "key3" works fine, but using the key `join("key", 3)` does not. This is because HashMap evaluates the equality of the two elements, and since String("key3") != FString("key3") (Kawa's String type), the removal fails.
**Affects**
<!--
Please check off the part of the system that is affected by the bug.
-->
- [ ] Designer
- [ ] Blocks editor
- [x] Companion
- [x] Compiled apps
- [ ] Buildserver
- [ ] Other... (please describe)
**Expected behavior**
<!--
Please describe what you expected to happen before you encountered the bug.
-->
Removing the key should work regardless of whether it is a String or FString passed in. We should normalize to String.
**Steps to reproduce**
<!--
Please describe the steps needed to reproduce the bug. If possible, please include a minimal example project that demonstrates the issue.
-->
1. Create a dictionary
2. Call the remove entry block but pass a key composed using the join block
3. Check the size of the dictionary and observe that it hasn't changed | 1.0 | Dictionaries lookups fail when using joined strings - **Describe the bug**
[From the forum](https://community.appinventor.mit.edu/t/remove-entry-for-key-doesnt-work/3334/4?u=ewpatton): Removing the key "key3" works fine, but using the key `join("key", 3)` does not. This is because HashMap evaluates the equality of the two elements, and since String("key3") != FString("key3") (Kawa's String type), the removal fails.
**Affects**
<!--
Please check off the part of the system that is affected by the bug.
-->
- [ ] Designer
- [ ] Blocks editor
- [x] Companion
- [x] Compiled apps
- [ ] Buildserver
- [ ] Other... (please describe)
**Expected behavior**
<!--
Please describe what you expected to happen before you encountered the bug.
-->
Removing the key should work regardless of whether it is a String or FString passed in. We should normalize to String.
**Steps to reproduce**
<!--
Please describe the steps needed to reproduce the bug. If possible, please include a minimal example project that demonstrates the issue.
-->
1. Create a dictionary
2. Call the remove entry block but pass a key composed using the join block
3. Check the size of the dictionary and observe that it hasn't changed | priority | dictionaries lookups fail when using joined strings describe the bug removing the key works fine but using the key join key does not this is because hashmap evaluates the equality of the two elements and since string fstring kawa s string type the removal fails affects please check off the part of the system that is affected by the bug designer blocks editor companion compiled apps buildserver other please describe expected behavior please describe what you expected to happen before you encountered the bug removing the key should work regardless of whether it is a string or fstring passed in we should normalize to string steps to reproduce please describe the steps needed to reproduce the bug if possible please include a minimal example project that demonstrates the issue create a dictionary call the remove entry block but pass a key composed using the join block check the size of the dictionary and observe that it hasn t changed | 1 |
617,070 | 19,341,315,290 | IssuesEvent | 2021-12-15 05:09:14 | Qiskit-Partners/qiskit-ibm | https://api.github.com/repos/Qiskit-Partners/qiskit-ibm | closed | FAIL: test_cancel_job_queued | type: bug priority: high | <!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit IBM Provider version**:
- **Python version**:
- **Operating system**:
### What is the current behavior?
This test keeps failing intermittently.
https://github.com/Qiskit-Partners/qiskit-ibm/runs/4186032439?check_suite_focus=true
```
======================================================================
834 FAIL: test_cancel_job_queued (test.ibm.runtime.test_runtime_integration.TestRuntimeIntegration)
835 Test canceling a queued job.
836 ----------------------------------------------------------------------
837 Traceback (most recent call last):
838 File "/Users/runner/work/qiskit-ibm/qiskit-ibm/test/ibm/runtime/test_runtime_integration.py", line 438, in test_cancel_job_queued
839 self.assertEqual(rjob.status(), JobStatus.CANCELLED)
840 AssertionError: <JobStatus.RUNNING: 'job is actively running'> != <JobStatus.CANCELLED: 'job has been cancelled'>
```
### Steps to reproduce the problem
### What is the expected behavior?
### Suggested solutions
| 1.0 | FAIL: test_cancel_job_queued - <!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit IBM Provider version**:
- **Python version**:
- **Operating system**:
### What is the current behavior?
This test keeps failing intermittently.
https://github.com/Qiskit-Partners/qiskit-ibm/runs/4186032439?check_suite_focus=true
```
======================================================================
834 FAIL: test_cancel_job_queued (test.ibm.runtime.test_runtime_integration.TestRuntimeIntegration)
835 Test canceling a queued job.
836 ----------------------------------------------------------------------
837 Traceback (most recent call last):
838 File "/Users/runner/work/qiskit-ibm/qiskit-ibm/test/ibm/runtime/test_runtime_integration.py", line 438, in test_cancel_job_queued
839 self.assertEqual(rjob.status(), JobStatus.CANCELLED)
840 AssertionError: <JobStatus.RUNNING: 'job is actively running'> != <JobStatus.CANCELLED: 'job has been cancelled'>
```
### Steps to reproduce the problem
### What is the expected behavior?
### Suggested solutions
| priority | fail test cancel job queued information qiskit ibm provider version python version operating system what is the current behavior this test keeps failing intermittently fail test cancel job queued test ibm runtime test runtime integration testruntimeintegration test canceling a queued job traceback most recent call last file users runner work qiskit ibm qiskit ibm test ibm runtime test runtime integration py line in test cancel job queued self assertequal rjob status jobstatus cancelled assertionerror steps to reproduce the problem what is the expected behavior suggested solutions | 1 |
341,569 | 10,298,864,807 | IssuesEvent | 2019-08-28 11:17:59 | pmem/issues | https://api.github.com/repos/pmem/issues | closed | test: obj_tx_lock/TEST0: SETUP (all/pmem/debug/helgrind) fails | Exposure: High OS: Linux Priority: 2 high Type: Bug | <!--
Before creating new issue, ensure that similar issue wasn't already created
* Search: https://github.com/pmem/issues/issues
Note that if you do not provide enough information to reproduce the issue, we may not be able to take action on your report.
Remember this is just a minimal template. You can extend it with data you think may be useful.
-->
# ISSUE: <!-- fill the title of issue -->
## Environment Information
- PMDK package version(s): 1.4.3-rc1
- OS(es) version(s): SLES 12.4
- ndctl version(s): 61.2
- kernel version(s): 4.12.14-95.29-default
## Please provide a reproduction of the bug:
```
./RUNTESTS obj_tx_lock -s TEST0 -t all -e force-enable
```
## How often bug is revealed: (always, often, rare): always
<!-- describe special circumstances in section above -->
## Actual behavior:
```
[2019-08-20T23:48:39.024Z] obj_tx_lock/TEST0: SETUP (all/pmem/debug/helgrind)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 failed with Valgrind. See helgrind0.log. First 20 lines below.
[2019-08-20T23:48:39.960Z] yes: standard output: Broken pipe
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== Thread #1 is the program's root thread
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446==
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== ----------------------------------------------------------------
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446==
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== Thread #1's call to pthread_rwlock_wrlock failed
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== with error code 35 (EDEADLK: Resource deadlock would occur)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== at 0x4C2FBF7: pthread_rwlock_wrlock_WRK (hg_intercepts.c:2164)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4C32693: pthread_rwlock_wrlock (hg_intercepts.c:2175)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4E46D5B: os_rwlock_wrlock (os_thread_posix.c:188)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4E7A83D: pmemobj_rwlock_wrlock (sync.c:402)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4E7DD53: add_to_tx_and_lock (tx.c:976)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4E7EE58: pmemobj_tx_lock (tx.c:1291)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4036BD: do_tx_add_taken_lock (obj_tx_lock.c:158)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x403973: main (obj_tx_lock.c:188)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446==
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446==
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== For counts of detected and suppressed errors, rerun with: -v
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== Use --history-level=approx or =none to gain increased speed, at
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== the cost of reduced accuracy of conflicting-access information
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
[2019-08-20T23:48:39.960Z] RUNTESTS: stopping: obj_tx_lock/TEST0 failed, TEST=all FS=pmem BUILD=debug
[2019-08-20T23:48:39.960Z] tput: No value for $TERM and no -T specified
[2019-08-20T23:48:39.960Z] tput: No value for $TERM and no -T specified
[2019-08-20T23:48:39.960Z] 1 tests failed: obj_tx_lock/TEST0
[2019-08-20T23:48:39.960Z] ../Makefile.inc:467: recipe for target 'TEST0' failed
[2019-08-20T23:48:39.960Z] make[3]: *** [TEST0] Error 1
[2019-08-20T23:48:39.960Z] make[3]: Target 'pcheck' not remade because of errors.
[2019-08-20T23:48:39.960Z] make[3]: Leaving directory '/home/jenkins/workspace/1.6.1/1.4.3/helgrind_143/src/test/obj_tx_lock'
[2019-08-20T23:48:39.960Z] Makefile:462: recipe for target 'obj_tx_lock' failed
[2019-08-20T23:48:39.960Z] make[2]: *** [obj_tx_lock] Error 2
[2019-08-20T23:48:39.960Z] make -C obj_tx_locks pcheck
[2019-08-20T23:48:39.960Z] make[3]: Entering directory '/home/jenkins/workspace/1.6.1/1.4.3/helgrind_143/src/test/obj_tx_locks'
```
## Expected behavior:
Test should pass.
## Details
<!-- fill this out -->
## Additional information about Priority and Help Requested:
Are you willing to submit a pull request with a proposed change? (Yes, No) <!-- check one if possible -->
Requested priority: (Showstopper, High, Medium, Low) <!-- check one if possible -->
| 1.0 | test: obj_tx_lock/TEST0: SETUP (all/pmem/debug/helgrind) fails - <!--
Before creating new issue, ensure that similar issue wasn't already created
* Search: https://github.com/pmem/issues/issues
Note that if you do not provide enough information to reproduce the issue, we may not be able to take action on your report.
Remember this is just a minimal template. You can extend it with data you think may be useful.
-->
# ISSUE: <!-- fill the title of issue -->
## Environment Information
- PMDK package version(s): 1.4.3-rc1
- OS(es) version(s): SLES 12.4
- ndctl version(s): 61.2
- kernel version(s): 4.12.14-95.29-default
## Please provide a reproduction of the bug:
```
./RUNTESTS obj_tx_lock -s TEST0 -t all -e force-enable
```
## How often bug is revealed: (always, often, rare): always
<!-- describe special circumstances in section above -->
## Actual behavior:
```
[2019-08-20T23:48:39.024Z] obj_tx_lock/TEST0: SETUP (all/pmem/debug/helgrind)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 failed with Valgrind. See helgrind0.log. First 20 lines below.
[2019-08-20T23:48:39.960Z] yes: standard output: Broken pipe
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== Thread #1 is the program's root thread
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446==
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== ----------------------------------------------------------------
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446==
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== Thread #1's call to pthread_rwlock_wrlock failed
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== with error code 35 (EDEADLK: Resource deadlock would occur)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== at 0x4C2FBF7: pthread_rwlock_wrlock_WRK (hg_intercepts.c:2164)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4C32693: pthread_rwlock_wrlock (hg_intercepts.c:2175)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4E46D5B: os_rwlock_wrlock (os_thread_posix.c:188)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4E7A83D: pmemobj_rwlock_wrlock (sync.c:402)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4E7DD53: add_to_tx_and_lock (tx.c:976)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4E7EE58: pmemobj_tx_lock (tx.c:1291)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x4036BD: do_tx_add_taken_lock (obj_tx_lock.c:158)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== by 0x403973: main (obj_tx_lock.c:188)
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446==
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446==
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== For counts of detected and suppressed errors, rerun with: -v
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== Use --history-level=approx or =none to gain increased speed, at
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== the cost of reduced accuracy of conflicting-access information
[2019-08-20T23:48:39.960Z] obj_tx_lock/TEST0 helgrind0.log ==52446== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
[2019-08-20T23:48:39.960Z] RUNTESTS: stopping: obj_tx_lock/TEST0 failed, TEST=all FS=pmem BUILD=debug
[2019-08-20T23:48:39.960Z] tput: No value for $TERM and no -T specified
[2019-08-20T23:48:39.960Z] tput: No value for $TERM and no -T specified
[2019-08-20T23:48:39.960Z] 1 tests failed: obj_tx_lock/TEST0
[2019-08-20T23:48:39.960Z] ../Makefile.inc:467: recipe for target 'TEST0' failed
[2019-08-20T23:48:39.960Z] make[3]: *** [TEST0] Error 1
[2019-08-20T23:48:39.960Z] make[3]: Target 'pcheck' not remade because of errors.
[2019-08-20T23:48:39.960Z] make[3]: Leaving directory '/home/jenkins/workspace/1.6.1/1.4.3/helgrind_143/src/test/obj_tx_lock'
[2019-08-20T23:48:39.960Z] Makefile:462: recipe for target 'obj_tx_lock' failed
[2019-08-20T23:48:39.960Z] make[2]: *** [obj_tx_lock] Error 2
[2019-08-20T23:48:39.960Z] make -C obj_tx_locks pcheck
[2019-08-20T23:48:39.960Z] make[3]: Entering directory '/home/jenkins/workspace/1.6.1/1.4.3/helgrind_143/src/test/obj_tx_locks'
```
## Expected behavior:
Test should pass.
## Details
<!-- fill this out -->
## Additional information about Priority and Help Requested:
Are you willing to submit a pull request with a proposed change? (Yes, No) <!-- check one if possible -->
Requested priority: (Showstopper, High, Medium, Low) <!-- check one if possible -->
| priority | test obj tx lock setup all pmem debug helgrind fails before creating new issue ensure that similar issue wasn t already created search note that if you do not provide enough information to reproduce the issue we may not be able to take action on your report remember this is just a minimal template you can extend it with data you think may be useful issue environment information pmdk package version s os es version s sles ndctl version s kernel version s default please provide a reproduction of the bug runtests obj tx lock s t all e force enable how often bug is revealed always often rare always actual behavior obj tx lock setup all pmem debug helgrind obj tx lock failed with valgrind see log first lines below yes standard output broken pipe obj tx lock log thread is the program s root thread obj tx lock log obj tx lock log obj tx lock log obj tx lock log thread s call to pthread rwlock wrlock failed obj tx lock log with error code edeadlk resource deadlock would occur obj tx lock log at pthread rwlock wrlock wrk hg intercepts c obj tx lock log by pthread rwlock wrlock hg intercepts c obj tx lock log by os rwlock wrlock os thread posix c obj tx lock log by pmemobj rwlock wrlock sync c obj tx lock log by add to tx and lock tx c obj tx lock log by pmemobj tx lock tx c obj tx lock log by do tx add taken lock obj tx lock c obj tx lock log by main obj tx lock c obj tx lock log obj tx lock log obj tx lock log for counts of detected and suppressed errors rerun with v obj tx lock log use history level approx or none to gain increased speed at obj tx lock log the cost of reduced accuracy of conflicting access information obj tx lock log error summary errors from contexts suppressed from runtests stopping obj tx lock failed test all fs pmem build debug tput no value for term and no t specified tput no value for term and no t specified tests failed obj tx lock makefile inc recipe for target failed make error make target pcheck not remade because of errors make leaving directory home jenkins workspace helgrind src test obj tx lock makefile recipe for target obj tx lock failed make error make c obj tx locks pcheck make entering directory home jenkins workspace helgrind src test obj tx locks expected behavior test should pass details additional information about priority and help requested are you willing to submit a pull request with a proposed change yes no requested priority showstopper high medium low | 1 |
665,249 | 22,304,742,560 | IssuesEvent | 2022-06-13 12:03:25 | ubtue/tuefind | https://api.github.com/repos/ubtue/tuefind | closed | Liste der OJS Journals nicht erreichbar | bug high priority caused by VuFind upgrade | Folgender Link funktioniert nicht:
https://www.ixtheo.de/OpenJournals
Vielleicht ein VuFind upgrade Problem?
<img width="939" alt="Bildschirmfoto 2022-06-13 um 13 19 43" src="https://user-images.githubusercontent.com/14169098/173343086-f99be775-e922-4d62-a4e1-b66e8ffb09bd.png">
<img width="974" alt="Bildschirmfoto 2022-06-13 um 13 21 33" src="https://user-images.githubusercontent.com/14169098/173343111-f121a0f9-6acd-496a-a1fe-2a7b26eaa94b.png">
| 1.0 | Liste der OJS Journals nicht erreichbar - Folgender Link funktioniert nicht:
https://www.ixtheo.de/OpenJournals
Vielleicht ein VuFind upgrade Problem?
<img width="939" alt="Bildschirmfoto 2022-06-13 um 13 19 43" src="https://user-images.githubusercontent.com/14169098/173343086-f99be775-e922-4d62-a4e1-b66e8ffb09bd.png">
<img width="974" alt="Bildschirmfoto 2022-06-13 um 13 21 33" src="https://user-images.githubusercontent.com/14169098/173343111-f121a0f9-6acd-496a-a1fe-2a7b26eaa94b.png">
| priority | liste der ojs journals nicht erreichbar folgender link funktioniert nicht vielleicht ein vufind upgrade problem img width alt bildschirmfoto um src img width alt bildschirmfoto um src | 1 |
109,606 | 4,396,192,944 | IssuesEvent | 2016-08-10 00:36:42 | camd67/moebot | https://api.github.com/repos/camd67/moebot | opened | Rate limit commands on a per-user basis | enhancement high-priority | Users should only be able to post a certain number of commands per minute | 1.0 | Rate limit commands on a per-user basis - Users should only be able to post a certain number of commands per minute | priority | rate limit commands on a per user basis users should only be able to post a certain number of commands per minute | 1 |
88,549 | 3,778,921,623 | IssuesEvent | 2016-03-18 04:12:22 | duriana/testlink-code | https://api.github.com/repos/duriana/testlink-code | opened | add ability to copy test steps | enhancement HIGH - priority | copy steps a-b c-d from test case A to test case B after step c
automatically create a new test cases from test case A with step ranges a-b c-d .. | 1.0 | add ability to copy test steps - copy steps a-b c-d from test case A to test case B after step c
automatically create a new test cases from test case A with step ranges a-b c-d .. | priority | add ability to copy test steps copy steps a b c d from test case a to test case b after step c automatically create a new test cases from test case a with step ranges a b c d | 1 |
162,473 | 6,154,049,051 | IssuesEvent | 2017-06-28 11:39:22 | mapbox/mapbox-gl-js | https://api.github.com/repos/mapbox/mapbox-gl-js | closed | Runtime styling sometimes doesn't take effect | bug high priority | When using a GeoJSON source, and setting runtime styling properties, then updating the GeoJSON source data, the changes to the style don't always take effect.
https://jsfiddle.net/ygxuk5zy/ contains a reduced test case (replace your access token). This is how it *should* look:

This is how it actually looks:

| 1.0 | Runtime styling sometimes doesn't take effect - When using a GeoJSON source, and setting runtime styling properties, then updating the GeoJSON source data, the changes to the style don't always take effect.
https://jsfiddle.net/ygxuk5zy/ contains a reduced test case (replace your access token). This is how it *should* look:

This is how it actually looks:

| priority | runtime styling sometimes doesn t take effect when using a geojson source and setting runtime styling properties then updating the geojson source data the changes to the style don t always take effect contains a reduced test case replace your access token this is how it should look this is how it actually looks | 1 |
31,192 | 2,732,763,918 | IssuesEvent | 2015-04-17 09:07:35 | cylc/cylc | https://api.github.com/repos/cylc/cylc | closed | Bad use of broadcast can bring down a suite. | bug priority high | E.g. in a date-time cycling suite:
```
% cylc broadcast -s 'command scripting=true' -p 1 -n foo-1 foo
# ... (suite shuts down)
% cylc cat-log -e foo | tail -n 31
Traceback (most recent call last):
File "/home/oliverh/cylc/cylc.git/lib/cylc/run.py", line 75, in main
server.run()
File "/home/oliverh/cylc/cylc.git/lib/cylc/scheduler.py", line 908, in run
self.pool.wireless.expire( self.pool.get_min_point() )
File "/home/oliverh/cylc/cylc.git/lib/cylc/broadcast.py", line 139, in expire
get_point(point_string) < cutoff_point):
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 116, in __cmp__
return self._iso_point_cmp(self.value, other.value)
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 85, in _wrapper
results = function(*args)
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 154, in _iso_point_cmp
point = point_parse(point_string)
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 695, in point_parse
return _point_parse(point_string).copy()
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 85, in _wrapper
results = function(*args)
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 715, in _point_parse
point = SuiteSpecifics.point_parser.parse(point_string) # Fail?
File "/home/oliverh/cylc/cylc.git/lib/isodatetime/parsers.py", line 236, in parse
date_info, time_info, parsed_expr = self.get_info(timepoint_string)
File "/home/oliverh/cylc/cylc.git/lib/isodatetime/parsers.py", line 432, in get_info
keys, date_info = self.get_date_info(date)
File "/home/oliverh/cylc/cylc.git/lib/isodatetime/parsers.py", line 392, in get_date_info
raise ISO8601SyntaxError("date", date_string)
ISO8601SyntaxError: Invalid ISO 8601 date representation: 1
ERROR CAUGHT: cleaning up before exit
2015-04-15T22:12:23+12 WARNING - some active tasks will be orphaned
THE ERROR WAS:
Invalid ISO 8601 date representation: 1
use --debug to turn on exception tracebacks)
``` | 1.0 | Bad use of broadcast can bring down a suite. - E.g. in a date-time cycling suite:
```
% cylc broadcast -s 'command scripting=true' -p 1 -n foo-1 foo
# ... (suite shuts down)
% cylc cat-log -e foo | tail -n 31
Traceback (most recent call last):
File "/home/oliverh/cylc/cylc.git/lib/cylc/run.py", line 75, in main
server.run()
File "/home/oliverh/cylc/cylc.git/lib/cylc/scheduler.py", line 908, in run
self.pool.wireless.expire( self.pool.get_min_point() )
File "/home/oliverh/cylc/cylc.git/lib/cylc/broadcast.py", line 139, in expire
get_point(point_string) < cutoff_point):
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 116, in __cmp__
return self._iso_point_cmp(self.value, other.value)
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 85, in _wrapper
results = function(*args)
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 154, in _iso_point_cmp
point = point_parse(point_string)
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 695, in point_parse
return _point_parse(point_string).copy()
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 85, in _wrapper
results = function(*args)
File "/home/oliverh/cylc/cylc.git/lib/cylc/cycling/iso8601.py", line 715, in _point_parse
point = SuiteSpecifics.point_parser.parse(point_string) # Fail?
File "/home/oliverh/cylc/cylc.git/lib/isodatetime/parsers.py", line 236, in parse
date_info, time_info, parsed_expr = self.get_info(timepoint_string)
File "/home/oliverh/cylc/cylc.git/lib/isodatetime/parsers.py", line 432, in get_info
keys, date_info = self.get_date_info(date)
File "/home/oliverh/cylc/cylc.git/lib/isodatetime/parsers.py", line 392, in get_date_info
raise ISO8601SyntaxError("date", date_string)
ISO8601SyntaxError: Invalid ISO 8601 date representation: 1
ERROR CAUGHT: cleaning up before exit
2015-04-15T22:12:23+12 WARNING - some active tasks will be orphaned
THE ERROR WAS:
Invalid ISO 8601 date representation: 1
use --debug to turn on exception tracebacks)
``` | priority | bad use of broadcast can bring down a suite e g in a date time cycling suite cylc broadcast s command scripting true p n foo foo suite shuts down cylc cat log e foo tail n traceback most recent call last file home oliverh cylc cylc git lib cylc run py line in main server run file home oliverh cylc cylc git lib cylc scheduler py line in run self pool wireless expire self pool get min point file home oliverh cylc cylc git lib cylc broadcast py line in expire get point point string cutoff point file home oliverh cylc cylc git lib cylc cycling py line in cmp return self iso point cmp self value other value file home oliverh cylc cylc git lib cylc cycling py line in wrapper results function args file home oliverh cylc cylc git lib cylc cycling py line in iso point cmp point point parse point string file home oliverh cylc cylc git lib cylc cycling py line in point parse return point parse point string copy file home oliverh cylc cylc git lib cylc cycling py line in wrapper results function args file home oliverh cylc cylc git lib cylc cycling py line in point parse point suitespecifics point parser parse point string fail file home oliverh cylc cylc git lib isodatetime parsers py line in parse date info time info parsed expr self get info timepoint string file home oliverh cylc cylc git lib isodatetime parsers py line in get info keys date info self get date info date file home oliverh cylc cylc git lib isodatetime parsers py line in get date info raise date date string invalid iso date representation error caught cleaning up before exit warning some active tasks will be orphaned the error was invalid iso date representation use debug to turn on exception tracebacks | 1 |
412,481 | 12,042,909,462 | IssuesEvent | 2020-04-14 11:27:38 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Need to give an option to disable one signal on the pages. | NEXT UPDATE [Priority: HIGH] bug | Ref: https://secure.helpscout.net/conversation/1126741795/120333?folderId=1060556
We have removed the one signal support for the pages in ticket no:
https://github.com/ahmedkaludi/accelerated-mobile-pages/issues/3874
We need to create an option to disable one signal support on pages. If the user wants then only he can disable it otherwise it will show. | 1.0 | Need to give an option to disable one signal on the pages. - Ref: https://secure.helpscout.net/conversation/1126741795/120333?folderId=1060556
We have removed the one signal support for the pages in ticket no:
https://github.com/ahmedkaludi/accelerated-mobile-pages/issues/3874
We need to create an option to disable one signal support on pages. If the user wants then only he can disable it otherwise it will show. | priority | need to give an option to disable one signal on the pages ref we have removed the one signal support for the pages in ticket no we need to create an option to disable one signal support on pages if the user wants then only he can disable it otherwise it will show | 1 |
424,130 | 12,306,240,987 | IssuesEvent | 2020-05-12 00:50:51 | aspnetboilerplate/aspnetboilerplate | https://api.github.com/repos/aspnetboilerplate/aspnetboilerplate | closed | AbpSession.UserId null in SignalR Hub | priority:high problem | **Abp Package Version**: Latest (Creating new template from site)
**Base Framework**: .Net Core
**Steps need to reproduce:** I have created a repo on GitHub that has a project to test.
https://github.com/chadsmith12/SignalRTest/tree/master
But essentially when connecting to SignalR the following way:
```
new signalR.HubConnectionBuilder()
.withUrl(abp.appPath + 'signalr-TestHub', {
transport: signalR.HttpTransportType.LongPolling,
accessTokenFactory: () => {
return 'CURRENT JWT TOKEN HERE';
}
}).build();
```
And using the example Hub from the documentation like so:
```
[AbpMvcAuthorize]
public class TestHub : Hub, ITransientDependency
{
public IAbpSession AbpSession { get; set; }
public ILogger Logger { get; set; }
public TestHub()
{
AbpSession = NullAbpSession.Instance;
Logger = NullLogger.Instance;
}
public async Task SendMessage(string message)
{
var userId = AbpSession.UserId;
var connection = Context.User.Identity.GetUserId();
await Clients.All.SendAsync("getMessage", string.Format("User {0}: {1}", userId, message));
}
public override async Task OnConnectedAsync()
{
await base.OnConnectedAsync();
Debug.WriteLine("UserId" + AbpSession.UserId);
Logger.Debug("A client connected to MyChatHub: " + Context.ConnectionId);
}
public override async Task OnDisconnectedAsync(Exception exception)
{
await base.OnDisconnectedAsync(exception);
Logger.Debug("A client disconnected from MyChatHub: " + Context.ConnectionId);
}
}
```
And then connecting to a Hub that has the `[AbpMvcAuthorize]` attribute over the Hub. You will be authorized correctly but at random times the `UserId` inside of `AbpSession` will be null, even though `Context.User.Identity.GetUserId();` will give you the correct UserId. You can look at the logs for SignalR and notice that JWT is being sent correctly it seems each time and you will always get authorized when sending the JWT, but it is just that the `AbpSession` is empty at random times.
I also had a stackoverflow post a couple days ago that describes it.
https://stackoverflow.com/questions/53125573/aspnetboilerplate-signalr-jwt-authentication/ | 1.0 | AbpSession.UserId null in SignalR Hub - **Abp Package Version**: Latest (Creating new template from site)
**Base Framework**: .Net Core
**Steps need to reproduce:** I have created a repo on GitHub that has a project to test.
https://github.com/chadsmith12/SignalRTest/tree/master
But essentially when connecting to SignalR the following way:
```
new signalR.HubConnectionBuilder()
.withUrl(abp.appPath + 'signalr-TestHub', {
transport: signalR.HttpTransportType.LongPolling,
accessTokenFactory: () => {
return 'CURRENT JWT TOKEN HERE';
}
}).build();
```
And using the example Hub from the documentation like so:
```
[AbpMvcAuthorize]
public class TestHub : Hub, ITransientDependency
{
public IAbpSession AbpSession { get; set; }
public ILogger Logger { get; set; }
public TestHub()
{
AbpSession = NullAbpSession.Instance;
Logger = NullLogger.Instance;
}
public async Task SendMessage(string message)
{
var userId = AbpSession.UserId;
var connection = Context.User.Identity.GetUserId();
await Clients.All.SendAsync("getMessage", string.Format("User {0}: {1}", userId, message));
}
public override async Task OnConnectedAsync()
{
await base.OnConnectedAsync();
Debug.WriteLine("UserId" + AbpSession.UserId);
Logger.Debug("A client connected to MyChatHub: " + Context.ConnectionId);
}
public override async Task OnDisconnectedAsync(Exception exception)
{
await base.OnDisconnectedAsync(exception);
Logger.Debug("A client disconnected from MyChatHub: " + Context.ConnectionId);
}
}
```
And then connecting to a Hub that has the `[AbpMvcAuthorize]` attribute over the Hub. You will be authorized correctly but at random times the `UserId` inside of `AbpSession` will be null, even though `Context.User.Identity.GetUserId();` will give you the correct UserId. You can look at the logs for SignalR and notice that JWT is being sent correctly it seems each time and you will always get authorized when sending the JWT, but it is just that the `AbpSession` is empty at random times.
I also had a stackoverflow post a couple days ago that describes it.
https://stackoverflow.com/questions/53125573/aspnetboilerplate-signalr-jwt-authentication/ | priority | abpsession userid null in signalr hub abp package version latest creating new template from site base framework net core steps need to reproduce i have created a repo on github that has a project to test but essentially when connecting to signalr the following way new signalr hubconnectionbuilder withurl abp apppath signalr testhub transport signalr httptransporttype longpolling accesstokenfactory return current jwt token here build and using the example hub from the documentation like so public class testhub hub itransientdependency public iabpsession abpsession get set public ilogger logger get set public testhub abpsession nullabpsession instance logger nulllogger instance public async task sendmessage string message var userid abpsession userid var connection context user identity getuserid await clients all sendasync getmessage string format user userid message public override async task onconnectedasync await base onconnectedasync debug writeline userid abpsession userid logger debug a client connected to mychathub context connectionid public override async task ondisconnectedasync exception exception await base ondisconnectedasync exception logger debug a client disconnected from mychathub context connectionid and then connecting to a hub that has the attribute over the hub you will be authorized correctly but at random times the userid inside of abpsession will be null even though context user identity getuserid will give you the correct userid you can look at the logs for signalr and notice that jwt is being sent correctly it seems each time and you will always get authorized when sending the jwt but it is just that the abpsession is empty at random times i also had a stackoverflow post a couple days ago that describes it | 1 |
230,529 | 7,611,204,373 | IssuesEvent | 2018-05-01 12:53:03 | ropensci/drake | https://api.github.com/repos/ropensci/drake | opened | A quicker end to staged parallelism | priority: high topic: efficiency | ## The problem
With the exception of `make(parallelism = "future")` and `make(parallelism = "Makefile")`, `drake` has an embarrassing approach to parallelism, both literally and figuratively. It divides the dependency graph into conditionally independent stages and then runs those stages sequentially. Targets only run in parallel within those stages. In other words, `drake` waits for the slowest target to finish before moving to the next stage.

At the time, I could not find an R package to traverse an arbitrary network of jobs in parallel. (**Stop me right here if such a tool actually exists!**) Parallel computing options like `foreach`, `parLapply`, `mclapply`, and `future_lapply` all assume the workers can act independently. So I thought staged parallelism was natural. However, I was thinking within the constraints of the available technology, and parallel efficiency suffered.
## Long-term work
In late February, I started the [`workers`](https://github.com/wlandau/workers) package. If you give it a job network and a parallel computing technology, the goal is to traverse the graph using a state-of-the-art algorithm. It has a proof-of-concept implementation and an initial [design specification](https://github.com/wlandau/workers/tree/master/specification), but it has a long way to go. At this stage, it really needs an R-focused [message queue](http://queues.io/). I was hoping [`liteq`](https://github.com/r-lib/liteq) would work, but due to an [apparent race condition](https://github.com/r-lib/liteq/issues/17) and some speed/overhead issues on my end, I am stuck for the time being.
## Now
I say we rework the `make(parallelism = "mclapply")` and `make(parallelism = "parLapply")` backends natively. As I see it, `make(parallelism = "mclapply", jobs = 2)` should fork 3 processes: 2 workers and 1 master. The master sends jobs to the workers in the correct order at the correct time, and the workers keep coming back for more. The goal is to drive overhead down as much as possible, so the workers are persistent.
The implementation will be similar to `make(parallelism = "future", caching = "worker")` (see [`future.R`](https://github.com/ropensci/drake/blob/master/R/future.R)). I think the implementation should go into `drake` itself because
1. The code should be relatively short, and
2. I do not want to lock in the design of [`workers`](https://github.com/wlandau/workers) until I get the chance to do an extensive lit review of parallel graph algorithms.
| 1.0 | A quicker end to staged parallelism - ## The problem
With the exception of `make(parallelism = "future")` and `make(parallelism = "Makefile")`, `drake` has an embarrassing approach to parallelism, both literally and figuratively. It divides the dependency graph into conditionally independent stages and then runs those stages sequentially. Targets only run in parallel within those stages. In other words, `drake` waits for the slowest target to finish before moving to the next stage.

At the time, I could not find an R package to traverse an arbitrary network of jobs in parallel. (**Stop me right here if such a tool actually exists!**) Parallel computing options like `foreach`, `parLapply`, `mclapply`, and `future_lapply` all assume the workers can act independently. So I thought staged parallelism was natural. However, I was thinking within the constraints of the available technology, and parallel efficiency suffered.
## Long-term work
In late February, I started the [`workers`](https://github.com/wlandau/workers) package. If you give it a job network and a parallel computing technology, the goal is to traverse the graph using a state-of-the-art algorithm. It has a proof-of-concept implementation and an initial [design specification](https://github.com/wlandau/workers/tree/master/specification), but it has a long way to go. At this stage, it really needs an R-focused [message queue](http://queues.io/). I was hoping [`liteq`](https://github.com/r-lib/liteq) would work, but due to an [apparent race condition](https://github.com/r-lib/liteq/issues/17) and some speed/overhead issues on my end, I am stuck for the time being.
## Now
I say we rework the `make(parallelism = "mclapply")` and `make(parallelism = "parLapply")` backends natively. As I see it, `make(parallelism = "mclapply", jobs = 2)` should fork 3 processes: 2 workers and 1 master. The master sends jobs to the workers in the correct order at the correct time, and the workers keep coming back for more. The goal is to drive overhead down as much as possible, so the workers are persistent.
The implementation will be similar to `make(parallelism = "future", caching = "worker")` (see [`future.R`](https://github.com/ropensci/drake/blob/master/R/future.R)). I think the implementation should go into `drake` itself because
1. The code should be relatively short, and
2. I do not want to lock in the design of [`workers`](https://github.com/wlandau/workers) until I get the chance to do an extensive lit review of parallel graph algorithms.
| priority | a quicker end to staged parallelism the problem with the exception of make parallelism future and make parallelism makefile drake has an embarrassing approach to parallelism both literally and figuratively it divides the dependency graph into conditionally independent stages and then runs those stages sequentially targets only run in parallel within those stages in other words drake waits for the slowest target to finish before moving to the next stage at the time i could not find an r package to traverse an arbitrary network of jobs in parallel stop me right here if such a tool actually exists parallel computing options like foreach parlapply mclapply and future lapply all assume the workers can act independently so i thought staged parallelism was natural however i was thinking within the constraints of the available technology and parallel efficiency suffered long term work in late february i started the package if you give it a job network and a parallel computing technology the goal is to traverse the graph using a state of the art algorithm it has a proof of concept implementation and an initial but it has a long way to go at this stage it really needs an r focused i was hoping would work but due to an and some speed overhead issues on my end i am stuck for the time being now i say we rework the make parallelism mclapply and make parallelism parlapply backends natively as i see it make parallelism mclapply jobs should fork processes workers and master the master sends jobs to the workers in the correct order at the correct time and the workers keep coming back for more the goal is to drive overhead down as much as possible so the workers are persistent the implementation will be similar to make parallelism future caching worker see i think the implementation should go into drake itself because the code should be relatively short and i do not want to lock in the design of until i get the chance to do an extensive lit review of parallel graph algorithms | 1 |
703,120 | 24,146,902,441 | IssuesEvent | 2022-09-21 19:38:43 | svthalia/concrexit | https://api.github.com/repos/svthalia/concrexit | closed | Partner page partner blocks keep showing the loading animation | priority: high bug | ### Describe the bug
Partner page partner blocks keep showing the loading animation.
### How to reproduce
Steps to reproduce the behaviour:
1. Go to [the Thalia partner page](https://thalia.nu/career/).
2. Scroll down and see the loading animation on the partner blocks.
### Expected behaviour
The loading animation should stop when the partners have been loaded.
| 1.0 | Partner page partner blocks keep showing the loading animation - ### Describe the bug
Partner page partner blocks keep showing the loading animation.
### How to reproduce
Steps to reproduce the behaviour:
1. Go to [the Thalia partner page](https://thalia.nu/career/).
2. Scroll down and see the loading animation on the partner blocks.
### Expected behaviour
The loading animation should stop when the partners have been loaded.
| priority | partner page partner blocks keep showing the loading animation describe the bug partner page partner blocks keep showing the loading animation how to reproduce steps to reproduce the behaviour go to scroll down and see the loading animation on the partner blocks expected behaviour the loading animation should stop when the partners have been loaded | 1 |
290,504 | 8,896,015,583 | IssuesEvent | 2019-01-16 10:16:55 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | Using "E-Mail" from the action menu does not include variables from the templates | High Priority Resolved: Next Release bug | After implementing the fix from pull request #1168, the emails themselves will send, but the don't pull the variables mentioned in the email templates (e.g. Instead of "John", the email is addressed to "prospect_first_name").
| 1.0 | Using "E-Mail" from the action menu does not include variables from the templates - After implementing the fix from pull request #1168, the emails themselves will send, but the don't pull the variables mentioned in the email templates (e.g. Instead of "John", the email is addressed to "prospect_first_name").
| priority | using e mail from the action menu does not include variables from the templates after implementing the fix from pull request the emails themselves will send but the don t pull the variables mentioned in the email templates e g instead of john the email is addressed to prospect first name | 1 |
609,791 | 18,887,423,684 | IssuesEvent | 2021-11-15 09:30:40 | woocommerce/woocommerce-android | https://api.github.com/repos/woocommerce/woocommerce-android | closed | ParseException: Unparseable date: "2021-10-22" | feature: stats type: crash priority: high impact: high | Sentry Issue: [WOOCOMMERCE-ANDROID-2CA](https://sentry.io/organizations/a8c/issues/2669218598/?referrer=github_integration)
```
ParseException: Unparseable date: "2021-10-22"
at com.woocommerce.android.util.DateUtils.getShortHourString(DateUtils.kt:187)
at com.woocommerce.android.ui.mystore.MyStoreStatsView$StartEndDateAxisFormatter.getLabelValue(MyStoreStatsView.kt:607)
at com.woocommerce.android.ui.mystore.MyStoreStatsView$StartEndDateAxisFormatter.getFormattedValue(MyStoreStatsView.kt:593)
...
(107 additional frame(s) were not displayed)
``` | 1.0 | ParseException: Unparseable date: "2021-10-22" - Sentry Issue: [WOOCOMMERCE-ANDROID-2CA](https://sentry.io/organizations/a8c/issues/2669218598/?referrer=github_integration)
```
ParseException: Unparseable date: "2021-10-22"
at com.woocommerce.android.util.DateUtils.getShortHourString(DateUtils.kt:187)
at com.woocommerce.android.ui.mystore.MyStoreStatsView$StartEndDateAxisFormatter.getLabelValue(MyStoreStatsView.kt:607)
at com.woocommerce.android.ui.mystore.MyStoreStatsView$StartEndDateAxisFormatter.getFormattedValue(MyStoreStatsView.kt:593)
...
(107 additional frame(s) were not displayed)
``` | priority | parseexception unparseable date sentry issue parseexception unparseable date at com woocommerce android util dateutils getshorthourstring dateutils kt at com woocommerce android ui mystore mystorestatsview startenddateaxisformatter getlabelvalue mystorestatsview kt at com woocommerce android ui mystore mystorestatsview startenddateaxisformatter getformattedvalue mystorestatsview kt additional frame s were not displayed | 1 |
711,377 | 24,461,325,722 | IssuesEvent | 2022-10-07 11:25:59 | owncloud/ocis | https://api.github.com/repos/owncloud/ocis | closed | Add ParentId in OCS and webdav REPORT responses | Priority:p2-high GA-Blocker | Web offers a navigation into parent folders in various pages / situations:
- for file listings on search result page or in the search preview items (each file can and likely has a different parent)
- share listings on shared with others / shared via link pages (each shared file could have a different parent)
- in personal/project spaces when looking at the `Shares` section in the right sidebar of a subfolder or file of a shared parent folder
- (in the future: favorites listing)
In order to solve https://github.com/owncloud/web/issues/6247 we need to be able to include the file id of the respective parent into the navigation.
As a result, the GA blocking requirement is: Web needs the parent id of files/folders listed in OCS share and webdav REPORT responses.
Edit: The property already exists in the OCS APi and is called `file_parent`, hence a fitting name for WebDAV would be `oc:file-parent`.
## Solution
### Webdav PROPFIND
```xml
<d:multistatus xmlns:s="http://sabredav.org/ns" xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">
<d:response>
<d:href>/dav/spaces/1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d/</d:href>
<d:propstat>
<d:prop>
<oc:id>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:id>
<oc:fileid>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:fileid>
<oc:spaceid>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:spaceid>
<d:getetag>"616a5099844d1048bf438783e33f21eb"</d:getetag>
<oc:permissions>SRDNVCK</oc:permissions>
<d:resourcetype>
<d:collection/>
</d:resourcetype>
<oc:size>197</oc:size>
<d:getlastmodified>Fri, 23 Sep 2022 08:17:43 GMT</d:getlastmodified>
<oc:favorite>0</oc:favorite>
</d:prop>
<d:status>HTTP/1.1 200 OK</d:status>
</d:propstat>
<d:propstat>
<d:prop>
<oc:file-parent></oc:file-parent>
</d:prop>
<d:status>HTTP/1.1 404 Not Found</d:status>
</d:propstat>
</d:response>
<d:response>
<d:href>/dav/spaces/1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d/.space/</d:href>
<d:propstat>
<d:prop>
<oc:id>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!0741cf1f-1e87-45fa-9296-b21ee44ba741</oc:id>
<oc:fileid>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!0741cf1f-1e87-45fa-9296-b21ee44ba741</oc:fileid>
<oc:spaceid>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:spaceid>
<oc:file-parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:file-parent>
<oc:name>.space</oc:name>
<d:getetag>"9a44b1cfc90376074b6b5399f49ae226"</d:getetag>
<oc:permissions>RDNVCK</oc:permissions>
<d:resourcetype>
<d:collection/>
</d:resourcetype>
<oc:size>197</oc:size>
<d:getlastmodified>Fri, 23 Sep 2022 06:50:34 GMT</d:getlastmodified>
<oc:favorite>0</oc:favorite>
</d:prop>
<d:status>HTTP/1.1 200 OK</d:status>
</d:propstat>
</d:response>
<d:response>
<d:href>/dav/spaces/1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d/Folder/</d:href>
<d:propstat>
<d:prop>
<oc:id>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!3e519da2-1812-4e44-a9b1-390144bb3feb</oc:id>
<oc:fileid>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!3e519da2-1812-4e44-a9b1-390144bb3feb</oc:fileid>
<oc:spaceid>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:spaceid>
<oc:file-parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:file-parent>
<oc:name>Folder</oc:name>
<d:getetag>"2a601c7ce774949cd1dd786f19a5b359"</d:getetag>
<oc:permissions>RDNVCK</oc:permissions>
<d:resourcetype>
<d:collection/>
</d:resourcetype>
<oc:size>0</oc:size>
<d:getlastmodified>Fri, 23 Sep 2022 08:12:37 GMT</d:getlastmodified>
<oc:favorite>0</oc:favorite>
</d:prop>
<d:status>HTTP/1.1 200 OK</d:status>
</d:propstat>
</d:response>
<d:response>
<d:href>/dav/spaces/1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d/Shared%20Folder/</d:href>
<d:propstat>
<d:prop>
<oc:id>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</oc:id>
<oc:fileid>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</oc:fileid>
<oc:spaceid>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:spaceid>
<oc:file-parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:file-parent>
<oc:name>Shared Folder</oc:name>
<d:getetag>"c171eb07817781f23493db5084736335"</d:getetag>
<oc:permissions>SRDNVCK</oc:permissions>
<d:resourcetype>
<d:collection/>
</d:resourcetype>
<oc:size>0</oc:size>
<d:getlastmodified>Fri, 23 Sep 2022 06:50:43 GMT</d:getlastmodified>
<oc:favorite>0</oc:favorite>
</d:prop>
<d:status>HTTP/1.1 200 OK</d:status>
</d:propstat>
</d:response>
</d:multistatus>
```
#### Note: the space root has no parent
### OCS
#### List outgoing shares
```xml
<ocs>
<meta>
<status>ok</status>
<statuscode>100</statuscode>
<message>OK</message>
</meta>
<data>
<element>
<id>CyvwfRqbrzpgJvA</id>
<share_type>3</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>1</permissions>
<stime>1664204880</stime>
<parent></parent>
<expiration></expiration>
<token>ySZgJetXqXoweTG</token>
<uid_file_owner>admin</uid_file_owner>
<displayname_file_owner>Admin</displayname_file_owner>
<additional_info_file_owner>admin@example.org</additional_info_file_owner>
<state>0</state>
<path>/Neue Datei.txt</path>
<item_type>file</item_type>
<mimetype>text/plain</mimetype>
<storage_id>shared::/Neue Datei.txt</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!ad92d8e2-3059-45db-a34d-267c57ee4c4e</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!ad92d8e2-3059-45db-a34d-267c57ee4c4e</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!some-admin-user-id-0000-000000000000</file_parent>
<file_target>/Neue Datei.txt</file_target>
<share_with_user_type>0</share_with_user_type>
<share_with_additional_info></share_with_additional_info>
<mail_send>0</mail_send>
<name>Quicklink</name>
<url>https://localhost:9200/s/ySZgJetXqXoweTG</url>
<quicklink>true</quicklink>
</element>
<element>
<id>MsSvqCkvGcjdhal</id>
<share_type>3</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>1</permissions>
<stime>1664309312</stime>
<parent></parent>
<expiration></expiration>
<token>JxKTODpBohwWQnd</token>
<uid_file_owner>admin</uid_file_owner>
<displayname_file_owner>Admin</displayname_file_owner>
<additional_info_file_owner>admin@example.org</additional_info_file_owner>
<state>0</state>
<path>/Neue Datei.docx</path>
<item_type>file</item_type>
<mimetype>application/vnd.openxmlformats-officedocument.wordprocessingml.document</mimetype>
<storage_id>shared::/Neue Datei.docx</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!34c6d699-bbf1-46ed-b544-89d1b91758a0</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!34c6d699-bbf1-46ed-b544-89d1b91758a0</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!some-admin-user-id-0000-000000000000</file_parent>
<file_target>/Neue Datei.docx</file_target>
<share_with_user_type>0</share_with_user_type>
<share_with_additional_info></share_with_additional_info>
<mail_send>0</mail_send>
<name>Quicklink</name>
<url>https://localhost:9200/s/JxKTODpBohwWQnd</url>
<quicklink>true</quicklink>
</element>
<element>
<id>1284d238-aa92-42ce-bdc4-0b0000009157:cf792ee7-cdf7-4bc3-99db-7de27ae01d2d:0b334991-066a-4d08-b9dd-00629c32ce62</id>
<share_type>0</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>31</permissions>
<stime>1663915914</stime>
<parent></parent>
<expiration></expiration>
<token></token>
<uid_file_owner>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</uid_file_owner>
<displayname_file_owner></displayname_file_owner>
<additional_info_file_owner></additional_info_file_owner>
<state>0</state>
<path>/Shared Folder</path>
<item_type>folder</item_type>
<mimetype>httpd/unix-directory</mimetype>
<storage_id>shared::/Shares/Shared Folder</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</file_parent>
<file_target>/Shares/Shared Folder</file_target>
<share_with_user_type>0</share_with_user_type>
<share_with_additional_info></share_with_additional_info>
<mail_send>0</mail_send>
<name></name>
</element>
<element>
<id>1284d238-aa92-42ce-bdc4-0b0000009157:cf792ee7-cdf7-4bc3-99db-7de27ae01d2d:1b90eac6-d497-4904-9e5f-915af7f2b32b</id>
<share_type>0</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>31</permissions>
<stime>1663920764</stime>
<parent></parent>
<expiration></expiration>
<token></token>
<uid_file_owner>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</uid_file_owner>
<displayname_file_owner></displayname_file_owner>
<additional_info_file_owner></additional_info_file_owner>
<state>0</state>
<path>/Folder</path>
<item_type>folder</item_type>
<mimetype>httpd/unix-directory</mimetype>
<storage_id>shared::/Shares/Folder</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!3e519da2-1812-4e44-a9b1-390144bb3feb</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!3e519da2-1812-4e44-a9b1-390144bb3feb</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</file_parent>
<file_target>/Shares/Folder</file_target>
<share_with>marie</share_with>
<share_with_user_type>0</share_with_user_type>
<share_with_displayname>Marie Skłodowska Curie</share_with_displayname>
<share_with_additional_info>marie@example.org</share_with_additional_info>
<mail_send>0</mail_send>
<name></name>
</element>
<element>
<id>1284d238-aa92-42ce-bdc4-0b0000009157:cf792ee7-cdf7-4bc3-99db-7de27ae01d2d:c36a7974-f6ad-4531-b1fa-41f03d6bd499</id>
<share_type>0</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>17</permissions>
<stime>1663920729</stime>
<parent></parent>
<expiration></expiration>
<token></token>
<uid_file_owner>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</uid_file_owner>
<displayname_file_owner></displayname_file_owner>
<additional_info_file_owner></additional_info_file_owner>
<state>0</state>
<path>/Shared Folder</path>
<item_type>folder</item_type>
<mimetype>httpd/unix-directory</mimetype>
<storage_id>shared::/Shares/Shared Folder</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</file_parent>
<file_target>/Shares/Shared Folder</file_target>
<share_with>marie</share_with>
<share_with_user_type>0</share_with_user_type>
<share_with_displayname>Marie Skłodowska Curie</share_with_displayname>
<share_with_additional_info>marie@example.org</share_with_additional_info>
<mail_send>0</mail_send>
<name></name>
</element>
</data>
</ocs>
``` | 1.0 | Add ParentId in OCS and webdav REPORT responses - Web offers a navigation into parent folders in various pages / situations:
- for file listings on search result page or in the search preview items (each file can and likely has a different parent)
- share listings on shared with others / shared via link pages (each shared file could have a different parent)
- in personal/project spaces when looking at the `Shares` section in the right sidebar of a subfolder or file of a shared parent folder
- (in the future: favorites listing)
In order to solve https://github.com/owncloud/web/issues/6247 we need to be able to include the file id of the respective parent into the navigation.
As a result, the GA blocking requirement is: Web needs the parent id of files/folders listed in OCS share and webdav REPORT responses.
Edit: The property already exists in the OCS APi and is called `file_parent`, hence a fitting name for WebDAV would be `oc:file-parent`.
## Solution
### Webdav PROPFIND
```xml
<d:multistatus xmlns:s="http://sabredav.org/ns" xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">
<d:response>
<d:href>/dav/spaces/1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d/</d:href>
<d:propstat>
<d:prop>
<oc:id>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:id>
<oc:fileid>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:fileid>
<oc:spaceid>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:spaceid>
<d:getetag>"616a5099844d1048bf438783e33f21eb"</d:getetag>
<oc:permissions>SRDNVCK</oc:permissions>
<d:resourcetype>
<d:collection/>
</d:resourcetype>
<oc:size>197</oc:size>
<d:getlastmodified>Fri, 23 Sep 2022 08:17:43 GMT</d:getlastmodified>
<oc:favorite>0</oc:favorite>
</d:prop>
<d:status>HTTP/1.1 200 OK</d:status>
</d:propstat>
<d:propstat>
<d:prop>
<oc:file-parent></oc:file-parent>
</d:prop>
<d:status>HTTP/1.1 404 Not Found</d:status>
</d:propstat>
</d:response>
<d:response>
<d:href>/dav/spaces/1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d/.space/</d:href>
<d:propstat>
<d:prop>
<oc:id>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!0741cf1f-1e87-45fa-9296-b21ee44ba741</oc:id>
<oc:fileid>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!0741cf1f-1e87-45fa-9296-b21ee44ba741</oc:fileid>
<oc:spaceid>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:spaceid>
<oc:file-parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:file-parent>
<oc:name>.space</oc:name>
<d:getetag>"9a44b1cfc90376074b6b5399f49ae226"</d:getetag>
<oc:permissions>RDNVCK</oc:permissions>
<d:resourcetype>
<d:collection/>
</d:resourcetype>
<oc:size>197</oc:size>
<d:getlastmodified>Fri, 23 Sep 2022 06:50:34 GMT</d:getlastmodified>
<oc:favorite>0</oc:favorite>
</d:prop>
<d:status>HTTP/1.1 200 OK</d:status>
</d:propstat>
</d:response>
<d:response>
<d:href>/dav/spaces/1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d/Folder/</d:href>
<d:propstat>
<d:prop>
<oc:id>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!3e519da2-1812-4e44-a9b1-390144bb3feb</oc:id>
<oc:fileid>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!3e519da2-1812-4e44-a9b1-390144bb3feb</oc:fileid>
<oc:spaceid>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:spaceid>
<oc:file-parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:file-parent>
<oc:name>Folder</oc:name>
<d:getetag>"2a601c7ce774949cd1dd786f19a5b359"</d:getetag>
<oc:permissions>RDNVCK</oc:permissions>
<d:resourcetype>
<d:collection/>
</d:resourcetype>
<oc:size>0</oc:size>
<d:getlastmodified>Fri, 23 Sep 2022 08:12:37 GMT</d:getlastmodified>
<oc:favorite>0</oc:favorite>
</d:prop>
<d:status>HTTP/1.1 200 OK</d:status>
</d:propstat>
</d:response>
<d:response>
<d:href>/dav/spaces/1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d/Shared%20Folder/</d:href>
<d:propstat>
<d:prop>
<oc:id>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</oc:id>
<oc:fileid>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</oc:fileid>
<oc:spaceid>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:spaceid>
<oc:file-parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</oc:file-parent>
<oc:name>Shared Folder</oc:name>
<d:getetag>"c171eb07817781f23493db5084736335"</d:getetag>
<oc:permissions>SRDNVCK</oc:permissions>
<d:resourcetype>
<d:collection/>
</d:resourcetype>
<oc:size>0</oc:size>
<d:getlastmodified>Fri, 23 Sep 2022 06:50:43 GMT</d:getlastmodified>
<oc:favorite>0</oc:favorite>
</d:prop>
<d:status>HTTP/1.1 200 OK</d:status>
</d:propstat>
</d:response>
</d:multistatus>
```
#### Note: the space root has no parent
### OCS
#### List outgoing shares
```xml
<ocs>
<meta>
<status>ok</status>
<statuscode>100</statuscode>
<message>OK</message>
</meta>
<data>
<element>
<id>CyvwfRqbrzpgJvA</id>
<share_type>3</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>1</permissions>
<stime>1664204880</stime>
<parent></parent>
<expiration></expiration>
<token>ySZgJetXqXoweTG</token>
<uid_file_owner>admin</uid_file_owner>
<displayname_file_owner>Admin</displayname_file_owner>
<additional_info_file_owner>admin@example.org</additional_info_file_owner>
<state>0</state>
<path>/Neue Datei.txt</path>
<item_type>file</item_type>
<mimetype>text/plain</mimetype>
<storage_id>shared::/Neue Datei.txt</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!ad92d8e2-3059-45db-a34d-267c57ee4c4e</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!ad92d8e2-3059-45db-a34d-267c57ee4c4e</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!some-admin-user-id-0000-000000000000</file_parent>
<file_target>/Neue Datei.txt</file_target>
<share_with_user_type>0</share_with_user_type>
<share_with_additional_info></share_with_additional_info>
<mail_send>0</mail_send>
<name>Quicklink</name>
<url>https://localhost:9200/s/ySZgJetXqXoweTG</url>
<quicklink>true</quicklink>
</element>
<element>
<id>MsSvqCkvGcjdhal</id>
<share_type>3</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>1</permissions>
<stime>1664309312</stime>
<parent></parent>
<expiration></expiration>
<token>JxKTODpBohwWQnd</token>
<uid_file_owner>admin</uid_file_owner>
<displayname_file_owner>Admin</displayname_file_owner>
<additional_info_file_owner>admin@example.org</additional_info_file_owner>
<state>0</state>
<path>/Neue Datei.docx</path>
<item_type>file</item_type>
<mimetype>application/vnd.openxmlformats-officedocument.wordprocessingml.document</mimetype>
<storage_id>shared::/Neue Datei.docx</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!34c6d699-bbf1-46ed-b544-89d1b91758a0</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!34c6d699-bbf1-46ed-b544-89d1b91758a0</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$some-admin-user-id-0000-000000000000!some-admin-user-id-0000-000000000000</file_parent>
<file_target>/Neue Datei.docx</file_target>
<share_with_user_type>0</share_with_user_type>
<share_with_additional_info></share_with_additional_info>
<mail_send>0</mail_send>
<name>Quicklink</name>
<url>https://localhost:9200/s/JxKTODpBohwWQnd</url>
<quicklink>true</quicklink>
</element>
<element>
<id>1284d238-aa92-42ce-bdc4-0b0000009157:cf792ee7-cdf7-4bc3-99db-7de27ae01d2d:0b334991-066a-4d08-b9dd-00629c32ce62</id>
<share_type>0</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>31</permissions>
<stime>1663915914</stime>
<parent></parent>
<expiration></expiration>
<token></token>
<uid_file_owner>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</uid_file_owner>
<displayname_file_owner></displayname_file_owner>
<additional_info_file_owner></additional_info_file_owner>
<state>0</state>
<path>/Shared Folder</path>
<item_type>folder</item_type>
<mimetype>httpd/unix-directory</mimetype>
<storage_id>shared::/Shares/Shared Folder</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</file_parent>
<file_target>/Shares/Shared Folder</file_target>
<share_with_user_type>0</share_with_user_type>
<share_with_additional_info></share_with_additional_info>
<mail_send>0</mail_send>
<name></name>
</element>
<element>
<id>1284d238-aa92-42ce-bdc4-0b0000009157:cf792ee7-cdf7-4bc3-99db-7de27ae01d2d:1b90eac6-d497-4904-9e5f-915af7f2b32b</id>
<share_type>0</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>31</permissions>
<stime>1663920764</stime>
<parent></parent>
<expiration></expiration>
<token></token>
<uid_file_owner>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</uid_file_owner>
<displayname_file_owner></displayname_file_owner>
<additional_info_file_owner></additional_info_file_owner>
<state>0</state>
<path>/Folder</path>
<item_type>folder</item_type>
<mimetype>httpd/unix-directory</mimetype>
<storage_id>shared::/Shares/Folder</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!3e519da2-1812-4e44-a9b1-390144bb3feb</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!3e519da2-1812-4e44-a9b1-390144bb3feb</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</file_parent>
<file_target>/Shares/Folder</file_target>
<share_with>marie</share_with>
<share_with_user_type>0</share_with_user_type>
<share_with_displayname>Marie Skłodowska Curie</share_with_displayname>
<share_with_additional_info>marie@example.org</share_with_additional_info>
<mail_send>0</mail_send>
<name></name>
</element>
<element>
<id>1284d238-aa92-42ce-bdc4-0b0000009157:cf792ee7-cdf7-4bc3-99db-7de27ae01d2d:c36a7974-f6ad-4531-b1fa-41f03d6bd499</id>
<share_type>0</share_type>
<uid_owner>admin</uid_owner>
<displayname_owner>Admin</displayname_owner>
<additional_info_owner>admin@example.org</additional_info_owner>
<permissions>17</permissions>
<stime>1663920729</stime>
<parent></parent>
<expiration></expiration>
<token></token>
<uid_file_owner>cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</uid_file_owner>
<displayname_file_owner></displayname_file_owner>
<additional_info_file_owner></additional_info_file_owner>
<state>0</state>
<path>/Shared Folder</path>
<item_type>folder</item_type>
<mimetype>httpd/unix-directory</mimetype>
<storage_id>shared::/Shares/Shared Folder</storage_id>
<storage>0</storage>
<item_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</item_source>
<file_source>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!5e683056-2338-49b1-add0-bb1c41b5a1dc</file_source>
<file_parent>1284d238-aa92-42ce-bdc4-0b0000009157$cf792ee7-cdf7-4bc3-99db-7de27ae01d2d!cf792ee7-cdf7-4bc3-99db-7de27ae01d2d</file_parent>
<file_target>/Shares/Shared Folder</file_target>
<share_with>marie</share_with>
<share_with_user_type>0</share_with_user_type>
<share_with_displayname>Marie Skłodowska Curie</share_with_displayname>
<share_with_additional_info>marie@example.org</share_with_additional_info>
<mail_send>0</mail_send>
<name></name>
</element>
</data>
</ocs>
``` | priority | add parentid in ocs and webdav report responses web offers a navigation into parent folders in various pages situations for file listings on search result page or in the search preview items each file can and likely has a different parent share listings on shared with others shared via link pages each shared file could have a different parent in personal project spaces when looking at the shares section in the right sidebar of a subfolder or file of a shared parent folder in the future favorites listing in order to solve we need to be able to include the file id of the respective parent into the navigation as a result the ga blocking requirement is web needs the parent id of files folders listed in ocs share and webdav report responses edit the property already exists in the ocs api and is called file parent hence a fitting name for webdav would be oc file parent solution webdav propfind xml d multistatus xmlns s xmlns d dav xmlns oc dav spaces srdnvck fri sep gmt http ok http not found dav spaces space space rdnvck fri sep gmt http ok dav spaces folder folder rdnvck fri sep gmt http ok dav spaces shared shared folder srdnvck fri sep gmt http ok note the space root has no parent ocs list outgoing shares xml ok ok cyvwfrqbrzpgjva admin admin admin example org yszgjetxqxowetg admin admin admin example org neue datei txt file text plain shared neue datei txt some admin user id some admin user id some admin user id some admin user id neue datei txt quicklink true mssvqckvgcjdhal admin admin admin example org jxktodpbohwwqnd admin admin admin example org neue datei docx file application vnd openxmlformats officedocument wordprocessingml document shared neue datei docx some admin user id some admin user id some admin user id some admin user id neue datei docx quicklink true admin admin admin example org shared folder folder httpd unix directory shared shares shared folder shares shared folder admin admin admin example org folder folder httpd unix directory shared shares folder shares folder marie marie skłodowska curie marie example org admin admin admin example org shared folder folder httpd unix directory shared shares shared folder shares shared folder marie marie skłodowska curie marie example org | 1 |
566,117 | 16,796,434,722 | IssuesEvent | 2021-06-16 04:42:42 | CLIxIndia-Dev/CLIxDashboard | https://api.github.com/repos/CLIxIndia-Dev/CLIxDashboard | closed | Footer Links opening with a different favicon | bug frontend high severity low priority | **FAQ Question on Hover Color Change**
As a Test Engineer, I want to see the footer link open with original clix fav icon
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'https://staging-clixdashboard.tiss.edu/home'
2. Scroll on 'Footer or https://staging-clixdashboard.tiss.edu/static/media/mit.8342a293.svg'
3. See error
**Expected behavior**
FavIcon should be proper
**Screenshots**

**Desktop (please complete the following information):**
- OS: [Windows 10]
- Browser [chrome]
- Version [89.0]
| 1.0 | Footer Links opening with a different favicon - **FAQ Question on Hover Color Change**
As a Test Engineer, I want to see the footer link open with original clix fav icon
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'https://staging-clixdashboard.tiss.edu/home'
2. Scroll on 'Footer or https://staging-clixdashboard.tiss.edu/static/media/mit.8342a293.svg'
3. See error
**Expected behavior**
FavIcon should be proper
**Screenshots**

**Desktop (please complete the following information):**
- OS: [Windows 10]
- Browser [chrome]
- Version [89.0]
| priority | footer links opening with a different favicon faq question on hover color change as a test engineer i want to see the footer link open with original clix fav icon to reproduce steps to reproduce the behavior go to scroll on footer or see error expected behavior favicon should be proper screenshots desktop please complete the following information os browser version | 1 |
568,113 | 16,959,674,675 | IssuesEvent | 2021-06-29 00:38:57 | tallguyjenks/PyRM | https://api.github.com/repos/tallguyjenks/PyRM | closed | 📦️: Add Test suite capabilities to CI/CD pipeline | ✨️ goal: improve 🔢️ points: 5 🟠️ priority: high | - [x] so tests are ran automatically with new code pushes
- [x] Python unit testing
- [x] see output if not passed and work this into the CI/CD process
- [x] utilize custom azure pipeline from work for boilerplate on this | 1.0 | 📦️: Add Test suite capabilities to CI/CD pipeline - - [x] so tests are ran automatically with new code pushes
- [x] Python unit testing
- [x] see output if not passed and work this into the CI/CD process
- [x] utilize custom azure pipeline from work for boilerplate on this | priority | 📦️ add test suite capabilities to ci cd pipeline so tests are ran automatically with new code pushes python unit testing see output if not passed and work this into the ci cd process utilize custom azure pipeline from work for boilerplate on this | 1 |
324,714 | 9,908,087,869 | IssuesEvent | 2019-06-27 17:23:17 | python/mypy | https://api.github.com/repos/python/mypy | closed | New semantic analyzer: infinite loop in daemon with decorators, nested classes | bug new-semantic-analyzer priority-0-high topic-fine-grained-incremental | Minimized from an example reported by a user at dropbox.
```
[case testInfiniteLoop]
# Only happens with new analyzer???
# flags: --new-semantic-analyzer
[file a.py]
from b import f
from typing import Callable, TypeVar
F = TypeVar('F', bound=Callable)
def dec(x: F) -> F: return x
@dec
def foo(self):
class A:
@classmethod
def asdf(cls, x: 'A') -> None: pass
@dec
def bar(self):
class B:
@classmethod
def asdf(cls, x: 'B') -> None: pass
f()
[file b.py]
def f() -> int: pass
[file b.py.2]
def f() -> str: pass
[builtins fixtures/classmethod.pyi]
[out]
==
```
The issue seems to be that if a nested class isn't reprocessed and the toplevel gets reprocessed, it disappears from the symbol table and triggers its dependencies. If the toplevel depends on the nested class, then this can cause ping-ponging where the two nested classes alternate disappearing and reappearing.
The decorator and the method referencing the class are there because the create the needed dependencies.
| 1.0 | New semantic analyzer: infinite loop in daemon with decorators, nested classes - Minimized from an example reported by a user at dropbox.
```
[case testInfiniteLoop]
# Only happens with new analyzer???
# flags: --new-semantic-analyzer
[file a.py]
from b import f
from typing import Callable, TypeVar
F = TypeVar('F', bound=Callable)
def dec(x: F) -> F: return x
@dec
def foo(self):
class A:
@classmethod
def asdf(cls, x: 'A') -> None: pass
@dec
def bar(self):
class B:
@classmethod
def asdf(cls, x: 'B') -> None: pass
f()
[file b.py]
def f() -> int: pass
[file b.py.2]
def f() -> str: pass
[builtins fixtures/classmethod.pyi]
[out]
==
```
The issue seems to be that if a nested class isn't reprocessed and the toplevel gets reprocessed, it disappears from the symbol table and triggers its dependencies. If the toplevel depends on the nested class, then this can cause ping-ponging where the two nested classes alternate disappearing and reappearing.
The decorator and the method referencing the class are there because the create the needed dependencies.
| priority | new semantic analyzer infinite loop in daemon with decorators nested classes minimized from an example reported by a user at dropbox only happens with new analyzer flags new semantic analyzer from b import f from typing import callable typevar f typevar f bound callable def dec x f f return x dec def foo self class a classmethod def asdf cls x a none pass dec def bar self class b classmethod def asdf cls x b none pass f def f int pass def f str pass the issue seems to be that if a nested class isn t reprocessed and the toplevel gets reprocessed it disappears from the symbol table and triggers its dependencies if the toplevel depends on the nested class then this can cause ping ponging where the two nested classes alternate disappearing and reappearing the decorator and the method referencing the class are there because the create the needed dependencies | 1 |
631,784 | 20,160,157,447 | IssuesEvent | 2022-02-09 20:35:03 | dtcenter/METplus | https://api.github.com/repos/dtcenter/METplus | closed | Enhance TCGen wrapper to add support for new configurations | type: enhancement priority: high alert: NEED MORE DEFINITION alert: NEED ACCOUNT KEY requestor: DTC/T&E required: FOR OFFICIAL RELEASE METplus: Configuration | Changed in PR dtcenter/MET#1967 and from future PR that closes dtcenter/MET#1810 add new settings for the tc_gen tool that should be supported in the wrapper.
## Describe the Enhancement ##
Add support for the new settings.
### Time Estimate ###
~1-2 days
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
- [X] *Add a checkbox for each sub-issue here.*
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [X] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
- [ ] Select **requestor(s)**
### Projects and Milestone ###
- [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [ ] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| 1.0 | Enhance TCGen wrapper to add support for new configurations - Changed in PR dtcenter/MET#1967 and from future PR that closes dtcenter/MET#1810 add new settings for the tc_gen tool that should be supported in the wrapper.
## Describe the Enhancement ##
Add support for the new settings.
### Time Estimate ###
~1-2 days
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
- [X] *Add a checkbox for each sub-issue here.*
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [X] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [ ] Select **component(s)**
- [ ] Select **priority**
- [ ] Select **requestor(s)**
### Projects and Milestone ###
- [ ] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [ ] Select **Milestone** as the next official version or **Future Versions**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [ ] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| priority | enhance tcgen wrapper to add support for new configurations changed in pr dtcenter met and from future pr that closes dtcenter met add new settings for the tc gen tool that should be supported in the wrapper describe the enhancement add support for the new settings time estimate days sub issues consider breaking the enhancement down into sub issues add a checkbox for each sub issue here relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or future versions define related issue s consider the impact to the other metplus components enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue | 1 |
90,028 | 3,808,265,120 | IssuesEvent | 2016-03-25 14:13:36 | afollestad/app-theme-engine | https://api.github.com/repos/afollestad/app-theme-engine | closed | Toolbar tinting | bug high priority | For whatever reason, Toolbar navigation icon, menu icon and title tinting stopped working with 1.0. Not sure if that matters but it's not an appbar. | 1.0 | Toolbar tinting - For whatever reason, Toolbar navigation icon, menu icon and title tinting stopped working with 1.0. Not sure if that matters but it's not an appbar. | priority | toolbar tinting for whatever reason toolbar navigation icon menu icon and title tinting stopped working with not sure if that matters but it s not an appbar | 1 |
449,685 | 12,973,476,026 | IssuesEvent | 2020-07-21 14:06:04 | ooni/probe-engine | https://api.github.com/repos/ooni/probe-engine | closed | Use best probe service also for orchestration et al. | bug effort/XL priority/high | Once https://github.com/ooni/probe-engine/issues/650 is done, we will be using ps.ooni.io as the base URL every time we wanna speak with a probe service. Yet some interactions are currently more robust (e.g. bouncer), while others are more fragile because we hardcode ps.ooni.io. What we should actually be doing is to use the probe service selected in `MaybeLookupBackends` also for these interactions for which we currently hardcode ps.ooni.io.
Closes https://github.com/ooni/probe-engine/issues/63 | 1.0 | Use best probe service also for orchestration et al. - Once https://github.com/ooni/probe-engine/issues/650 is done, we will be using ps.ooni.io as the base URL every time we wanna speak with a probe service. Yet some interactions are currently more robust (e.g. bouncer), while others are more fragile because we hardcode ps.ooni.io. What we should actually be doing is to use the probe service selected in `MaybeLookupBackends` also for these interactions for which we currently hardcode ps.ooni.io.
Closes https://github.com/ooni/probe-engine/issues/63 | priority | use best probe service also for orchestration et al once is done we will be using ps ooni io as the base url every time we wanna speak with a probe service yet some interactions are currently more robust e g bouncer while others are more fragile because we hardcode ps ooni io what we should actually be doing is to use the probe service selected in maybelookupbackends also for these interactions for which we currently hardcode ps ooni io closes | 1 |
326,508 | 9,957,249,811 | IssuesEvent | 2019-07-05 16:09:06 | econ-ark/OverARK | https://api.github.com/repos/econ-ark/OverARK | closed | Review OTS Technical Assessment | blocked (external) priority: high | OTS has written a technical assessment of econ-ARK and also provided a breakdown of the assessments into a priorities list. I've moved this information into [a google doc](https://docs.google.com/document/d/1xf6FUpv9DkcwNRIIYrJbWDjqAFTNh7yv6A0rFOrJzd8/edit#) where it's easier for us to collaboratively ask questions, via the comments tool. Here's the plan:
- [x] @llorracc, @mnwhite, and anyone else who is interested (@albop? @pkofod? @jackiekazil?) should read through carefully and leave any questions as comments, ideally within the next 1-2 weeks
- [x] once everyone's done this, I will give OTS the heads up that we've got questions ready for them
- [ ] OTS will answer said questions, and they or I (SGM) will incorporate those answers into a final version of the technical assessment, as needed, for future reference
- [ ] SGM will post the final assessment & priorities list somewhere, probably the wiki, and will delete the google doc so as to avoid redundant information
If you would like access to the google doc and I haven't given it to you already, please ping me here. | 1.0 | Review OTS Technical Assessment - OTS has written a technical assessment of econ-ARK and also provided a breakdown of the assessments into a priorities list. I've moved this information into [a google doc](https://docs.google.com/document/d/1xf6FUpv9DkcwNRIIYrJbWDjqAFTNh7yv6A0rFOrJzd8/edit#) where it's easier for us to collaboratively ask questions, via the comments tool. Here's the plan:
- [x] @llorracc, @mnwhite, and anyone else who is interested (@albop? @pkofod? @jackiekazil?) should read through carefully and leave any questions as comments, ideally within the next 1-2 weeks
- [x] once everyone's done this, I will give OTS the heads up that we've got questions ready for them
- [ ] OTS will answer said questions, and they or I (SGM) will incorporate those answers into a final version of the technical assessment, as needed, for future reference
- [ ] SGM will post the final assessment & priorities list somewhere, probably the wiki, and will delete the google doc so as to avoid redundant information
If you would like access to the google doc and I haven't given it to you already, please ping me here. | priority | review ots technical assessment ots has written a technical assessment of econ ark and also provided a breakdown of the assessments into a priorities list i ve moved this information into where it s easier for us to collaboratively ask questions via the comments tool here s the plan llorracc mnwhite and anyone else who is interested albop pkofod jackiekazil should read through carefully and leave any questions as comments ideally within the next weeks once everyone s done this i will give ots the heads up that we ve got questions ready for them ots will answer said questions and they or i sgm will incorporate those answers into a final version of the technical assessment as needed for future reference sgm will post the final assessment priorities list somewhere probably the wiki and will delete the google doc so as to avoid redundant information if you would like access to the google doc and i haven t given it to you already please ping me here | 1 |
433,236 | 12,504,304,575 | IssuesEvent | 2020-06-02 08:47:45 | perslab/CELLEX | https://api.github.com/repos/perslab/CELLEX | opened | gene mapping function: from symbol to ensembl | enhancement high priority |
**Problem**
Many scRNA-seq data sets come with human gene symbols. We should make it easier for users to map to Ensembl IDs, since this is used in CELLECT.
**Solution**
Write function to map from human ens to symbol.
Additionally, _consider_ updating mapping function names for more consistency:
(NEW: ens_human_symbol_to_ens --> human_symbol_to_human_ens)
ens_human_to_symbol --> human_ens_to_human_symbol
ens_mouse_to_ens_human --> mouse_ens_to_human_ens
mgi_mouse_to_ens_mouse --> mouse_symbol_to_mouse_ens
The appropriate file to make the mapping is attached (which allows for mapping genes with 'version numbers' to the appropriate Ensembl ID):
[GRCh38.ens_v90.gene_name_version2ensembl.txt.gz](https://github.com/perslab/CELLEX/files/4715827/GRCh38.ens_v90.gene_name_version2ensembl.txt.gz)
| 1.0 | gene mapping function: from symbol to ensembl -
**Problem**
Many scRNA-seq data sets come with human gene symbols. We should make it easier for users to map to Ensembl IDs, since this is used in CELLECT.
**Solution**
Write function to map from human ens to symbol.
Additionally, _consider_ updating mapping function names for more consistency:
(NEW: ens_human_symbol_to_ens --> human_symbol_to_human_ens)
ens_human_to_symbol --> human_ens_to_human_symbol
ens_mouse_to_ens_human --> mouse_ens_to_human_ens
mgi_mouse_to_ens_mouse --> mouse_symbol_to_mouse_ens
The appropriate file to make the mapping is attached (which allows for mapping genes with 'version numbers' to the appropriate Ensembl ID):
[GRCh38.ens_v90.gene_name_version2ensembl.txt.gz](https://github.com/perslab/CELLEX/files/4715827/GRCh38.ens_v90.gene_name_version2ensembl.txt.gz)
| priority | gene mapping function from symbol to ensembl problem many scrna seq data sets come with human gene symbols we should make it easier for users to map to ensembl ids since this is used in cellect solution write function to map from human ens to symbol additionally consider updating mapping function names for more consistency new ens human symbol to ens human symbol to human ens ens human to symbol human ens to human symbol ens mouse to ens human mouse ens to human ens mgi mouse to ens mouse mouse symbol to mouse ens the appropriate file to make the mapping is attached which allows for mapping genes with version numbers to the appropriate ensembl id | 1 |
367,555 | 10,855,121,255 | IssuesEvent | 2019-11-13 17:42:36 | TLTMedia/Marginalia | https://api.github.com/repos/TLTMedia/Marginalia | closed | comment box drop down restriction broken, Reply box delete broken | Frontend [HIGH] Priority | Not sure the where the problem is. Assume the new UI is why it is broken. | 1.0 | comment box drop down restriction broken, Reply box delete broken - Not sure the where the problem is. Assume the new UI is why it is broken. | priority | comment box drop down restriction broken reply box delete broken not sure the where the problem is assume the new ui is why it is broken | 1 |
29,520 | 2,716,349,358 | IssuesEvent | 2015-04-10 18:32:19 | cs2103jan2015-w11-2c/main | https://api.github.com/repos/cs2103jan2015-w11-2c/main | closed | A user can set recursions/event repetitions | Controller/Data File Storage Parser priority.high priority.low type.story UI Unlikely | ...so that the user does not need retype the event details multiple times | 2.0 | A user can set recursions/event repetitions - ...so that the user does not need retype the event details multiple times | priority | a user can set recursions event repetitions so that the user does not need retype the event details multiple times | 1 |
138,863 | 5,348,822,888 | IssuesEvent | 2017-02-18 09:26:57 | MinetestForFun/server-minetestforfun | https://api.github.com/repos/MinetestForFun/server-minetestforfun | opened | [pipeworks] Pipes' spilling malfunctioning | Modding ➤ BugFix Priority: High | This may be due to the current version of pipeworks being outdated, or one of our modifications on the mod, but upon reaching the end of an unconnected pipe line, the item entity is not dropped.
The issue was confirmed with the latest version of the server's repository, with all pipes. I determined all pipes were affected after checking that when the unconnected end pipe was a vacuuming pipe, no entity would be sucked back in, meaning no entity would be dropped at all (which has to do with the common transport core).
I'll work on a theory that could solve this quickly. | 1.0 | [pipeworks] Pipes' spilling malfunctioning - This may be due to the current version of pipeworks being outdated, or one of our modifications on the mod, but upon reaching the end of an unconnected pipe line, the item entity is not dropped.
The issue was confirmed with the latest version of the server's repository, with all pipes. I determined all pipes were affected after checking that when the unconnected end pipe was a vacuuming pipe, no entity would be sucked back in, meaning no entity would be dropped at all (which has to do with the common transport core).
I'll work on a theory that could solve this quickly. | priority | pipes spilling malfunctioning this may be due to the current version of pipeworks being outdated or one of our modifications on the mod but upon reaching the end of an unconnected pipe line the item entity is not dropped the issue was confirmed with the latest version of the server s repository with all pipes i determined all pipes were affected after checking that when the unconnected end pipe was a vacuuming pipe no entity would be sucked back in meaning no entity would be dropped at all which has to do with the common transport core i ll work on a theory that could solve this quickly | 1 |
479,192 | 13,792,508,310 | IssuesEvent | 2020-10-09 13:42:58 | Moonlet/wallet-app | https://api.github.com/repos/Moonlet/wallet-app | opened | [Ledger] Typo on Sign 1st transaction | High Priority bug staking |
Current behaviour: The sign button from the Processing/Sign transaction screen has a typo. It shows Sign 1th transaction
Expected behaviour: It should have 1st, 2nd or 3rd conversion, son on so forth.
| 1.0 | [Ledger] Typo on Sign 1st transaction -
Current behaviour: The sign button from the Processing/Sign transaction screen has a typo. It shows Sign 1th transaction
Expected behaviour: It should have 1st, 2nd or 3rd conversion, son on so forth.
| priority | typo on sign transaction current behaviour the sign button from the processing sign transaction screen has a typo it shows sign transaction expected behaviour it should have or conversion son on so forth | 1 |
68,609 | 3,291,443,306 | IssuesEvent | 2015-10-30 09:08:13 | radike/issue-tracker | https://api.github.com/repos/radike/issue-tracker | closed | Comment section | enhancement priority HIGH | The comment section should be under the issue description, also adding comment should be done on the same page (so the user is not redirected somewhere else).
After submitting a comment, the user is redirected back to the issue detail (not to the list of all comments). | 1.0 | Comment section - The comment section should be under the issue description, also adding comment should be done on the same page (so the user is not redirected somewhere else).
After submitting a comment, the user is redirected back to the issue detail (not to the list of all comments). | priority | comment section the comment section should be under the issue description also adding comment should be done on the same page so the user is not redirected somewhere else after submitting a comment the user is redirected back to the issue detail not to the list of all comments | 1 |
493,227 | 14,228,866,905 | IssuesEvent | 2020-11-18 04:55:20 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | opened | Type narrowing should be done in match patterns | Priority/High Team/CompilerFE Type/Improvement | **Description:**
$subject
Related spec issue: https://github.com/ballerina-platform/ballerina-spec/issues/152
| 1.0 | Type narrowing should be done in match patterns - **Description:**
$subject
Related spec issue: https://github.com/ballerina-platform/ballerina-spec/issues/152
| priority | type narrowing should be done in match patterns description subject related spec issue | 1 |
231,382 | 7,631,776,016 | IssuesEvent | 2018-05-05 06:48:30 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | vert.x dnsresolver does not work in che | SEV2-high area/che priority/P0 status/pending-qa-verification team/che team/launcher type/bug | if you run vert.x without use of fmp on openshift - need to set Dvertx.disableDnsResolver=true to make it work on che or in any other IDE using pods.
possible fixes:
change https://github.com/redhat-developer/rh-che/blob/master/assembly/fabric8-stacks/src/main/resources/stacks.json#L476 to have this flag set
OR
change booster pom's to have it set for vertx:run
OR
change the booster to have a main to set the flag early | 1.0 | vert.x dnsresolver does not work in che - if you run vert.x without use of fmp on openshift - need to set Dvertx.disableDnsResolver=true to make it work on che or in any other IDE using pods.
possible fixes:
change https://github.com/redhat-developer/rh-che/blob/master/assembly/fabric8-stacks/src/main/resources/stacks.json#L476 to have this flag set
OR
change booster pom's to have it set for vertx:run
OR
change the booster to have a main to set the flag early | priority | vert x dnsresolver does not work in che if you run vert x without use of fmp on openshift need to set dvertx disablednsresolver true to make it work on che or in any other ide using pods possible fixes change to have this flag set or change booster pom s to have it set for vertx run or change the booster to have a main to set the flag early | 1 |
134,091 | 5,220,030,844 | IssuesEvent | 2017-01-26 20:43:44 | vmware/vic | https://api.github.com/repos/vmware/vic | reopened | New docker client is not working with VIC [v1.13] | kind/note priority/high | New docker client is not working with VIC [version 1.13] and that needs to be documented somewhere
The original issue is tracked at https://github.com/vmware/vic/issues/3720 | 1.0 | New docker client is not working with VIC [v1.13] - New docker client is not working with VIC [version 1.13] and that needs to be documented somewhere
The original issue is tracked at https://github.com/vmware/vic/issues/3720 | priority | new docker client is not working with vic new docker client is not working with vic and that needs to be documented somewhere the original issue is tracked at | 1 |
683,650 | 23,390,037,437 | IssuesEvent | 2022-08-11 16:54:30 | deltaDAO/mvg-portal | https://api.github.com/repos/deltaDAO/mvg-portal | opened | [Feature] Increase transparency using checkboxes & disclaimers | Priority: High Type: Feature | ## Motivation / Problem
There is potential to increase transparency towards our users regarding the immutability of blockchains.
## Solution
(1) When publishing a service offering we can introduce (two) checkboxes next to the "SUBMIT" button which must be selected before the service offering can be published. Suggested text:
- I agree that my wallet address and public key will be transmitted to a smart contract stored permanently on the Gaia-X testnet, so I can prove service offering ownership and sales.
- I confirm that I did not provide personal data in the metadata, which will be stored permanently on-chain on the Gaia-X testnet.
(2) When consuming a service offering, we can add a disclaimer below the "BUY COMPUTE JOB" / "GET"button. Suggested text:
If you consume a service offering, your wallet address and public key will be stored permanently on-chain on the Gaia-X testnet. For more information, please refer to our [privacy policy](https://portal.minimal-gaia-x.eu/privacy/en).
## Alternatives
Instead of checkboxes we could also add an additional disclaimer. | 1.0 | [Feature] Increase transparency using checkboxes & disclaimers - ## Motivation / Problem
There is potential to increase transparency towards our users regarding the immutability of blockchains.
## Solution
(1) When publishing a service offering we can introduce (two) checkboxes next to the "SUBMIT" button which must be selected before the service offering can be published. Suggested text:
- I agree that my wallet address and public key will be transmitted to a smart contract stored permanently on the Gaia-X testnet, so I can prove service offering ownership and sales.
- I confirm that I did not provide personal data in the metadata, which will be stored permanently on-chain on the Gaia-X testnet.
(2) When consuming a service offering, we can add a disclaimer below the "BUY COMPUTE JOB" / "GET"button. Suggested text:
If you consume a service offering, your wallet address and public key will be stored permanently on-chain on the Gaia-X testnet. For more information, please refer to our [privacy policy](https://portal.minimal-gaia-x.eu/privacy/en).
## Alternatives
Instead of checkboxes we could also add an additional disclaimer. | priority | increase transparency using checkboxes disclaimers motivation problem there is potential to increase transparency towards our users regarding the immutability of blockchains solution when publishing a service offering we can introduce two checkboxes next to the submit button which must be selected before the service offering can be published suggested text i agree that my wallet address and public key will be transmitted to a smart contract stored permanently on the gaia x testnet so i can prove service offering ownership and sales i confirm that i did not provide personal data in the metadata which will be stored permanently on chain on the gaia x testnet when consuming a service offering we can add a disclaimer below the buy compute job get button suggested text if you consume a service offering your wallet address and public key will be stored permanently on chain on the gaia x testnet for more information please refer to our alternatives instead of checkboxes we could also add an additional disclaimer | 1 |
178,483 | 6,609,275,014 | IssuesEvent | 2017-09-19 14:03:36 | ccswbs/hjckrrh | https://api.github.com/repos/ccswbs/hjckrrh | closed | Siteimprove Issue: Naming Generic Landmarks - occurs on banner/footer (Round 2) | priority: high type: accessibility | The Siteimprove issue: Naming Generic Landmarks occurs on the banner and footer that comes with the hjckrrh platform. Multiple sites use this so will likely be seeing this error.
Potential fixes:
- Add an aria-label to the ```<section>``` elements in the slideshow banner templates (both Manual and Auto-play)
- Add an aria-label to the ```<section>``` elements in the footer mini-panel template
**Related Issues:** See https://github.com/ccswbs/hjckrrh/issues/791
# Sample
**Note:** In Siteimprove, the banner does not visually show in the cached page screenshot, but it is there on the site.

| 1.0 | Siteimprove Issue: Naming Generic Landmarks - occurs on banner/footer (Round 2) - The Siteimprove issue: Naming Generic Landmarks occurs on the banner and footer that comes with the hjckrrh platform. Multiple sites use this so will likely be seeing this error.
Potential fixes:
- Add an aria-label to the ```<section>``` elements in the slideshow banner templates (both Manual and Auto-play)
- Add an aria-label to the ```<section>``` elements in the footer mini-panel template
**Related Issues:** See https://github.com/ccswbs/hjckrrh/issues/791
# Sample
**Note:** In Siteimprove, the banner does not visually show in the cached page screenshot, but it is there on the site.

| priority | siteimprove issue naming generic landmarks occurs on banner footer round the siteimprove issue naming generic landmarks occurs on the banner and footer that comes with the hjckrrh platform multiple sites use this so will likely be seeing this error potential fixes add an aria label to the elements in the slideshow banner templates both manual and auto play add an aria label to the elements in the footer mini panel template related issues see sample note in siteimprove the banner does not visually show in the cached page screenshot but it is there on the site | 1 |
270,755 | 8,469,635,641 | IssuesEvent | 2018-10-23 23:55:44 | MrBlizzard/RCAdmins-Tracker | https://api.github.com/repos/MrBlizzard/RCAdmins-Tracker | closed | Bug fixes | bug priority:high | - size converters lore wasn't changed to reflect new odds
- Dailycatch menu needs to be updated to proper reset prices
- edit dailycatch chances, raise money chance to 120 instead of 100 | 1.0 | Bug fixes - - size converters lore wasn't changed to reflect new odds
- Dailycatch menu needs to be updated to proper reset prices
- edit dailycatch chances, raise money chance to 120 instead of 100 | priority | bug fixes size converters lore wasn t changed to reflect new odds dailycatch menu needs to be updated to proper reset prices edit dailycatch chances raise money chance to instead of | 1 |
277,505 | 8,629,184,558 | IssuesEvent | 2018-11-21 19:49:37 | franceme/rigorityj | https://api.github.com/repos/franceme/rigorityj | closed | Design Messaging Structure | Priority: High Status: In Progress Type: Enhancement | Design a "template" like structure that matches the format of the current legacy standard out. | 1.0 | Design Messaging Structure - Design a "template" like structure that matches the format of the current legacy standard out. | priority | design messaging structure design a template like structure that matches the format of the current legacy standard out | 1 |
669,935 | 22,646,542,538 | IssuesEvent | 2022-07-01 09:15:03 | TencentBlueKing/bk-iam-saas | https://api.github.com/repos/TencentBlueKing/bk-iam-saas | closed | [RBAC] 缓存模块review问题汇总 | Type: Enhancement Layer: Backend Priority: High Size: XS(Hours) backlog | ## DONE
- [x] SubjectDepartmentCache
- [x] LocalGroupSystemAuthTypeCache
- [x] SubjectSystemGroupCache
- [x] GroupResourcePolicyCache
- pkg/cacheimpls/group_resource_policy.go 需要docstring补充下, 这个hash存储的数据 key=xxx, field=xxxx, value=xxxxx
- 只set当前查询action的对应值
- [x] ActionDetailCache
- [x] LocalActionDetailCache
- 问题: 实例视图更新接口 https://bk.tencent.com/docs/document/6.0/160/8444, 如果更新了 resource_type_chain, 此时应该失效actionDetail? (但是不知道哪里被引用到? )
```sql
SELECT pk FROM action WHERE system_id = ? AND id = ? LIMIT 1;
SELECT pk, action_system_id, action_id, resource_type_system_id, resource_type_id, name_alias, name_alias_en, selection_mode, related_instance_selections FROM saas_action_resource_type WHERE action_system_id = ? AND action_id = ?;
SELECT pk, system_id, id, name, name_en, is_dynamic, resource_type_chain FROM saas_instance_selection WHERE system_id = ?;
SELECT pk, system_id, id FROM resource_type WHERE system_id = ? AND id IN (?);
```
- [x] GroupSystemAuthTypeCache
- pkg/abac/prp/group/redis.go 中 没有用 .cache(memory.go中retriever用了.cache=xxxx) => 需要保持一致, 建议memory.go中的.cache全部直接换成 cacheimpls.Localxxxxx
| 1.0 | [RBAC] 缓存模块review问题汇总 - ## DONE
- [x] SubjectDepartmentCache
- [x] LocalGroupSystemAuthTypeCache
- [x] SubjectSystemGroupCache
- [x] GroupResourcePolicyCache
- pkg/cacheimpls/group_resource_policy.go 需要docstring补充下, 这个hash存储的数据 key=xxx, field=xxxx, value=xxxxx
- 只set当前查询action的对应值
- [x] ActionDetailCache
- [x] LocalActionDetailCache
- 问题: 实例视图更新接口 https://bk.tencent.com/docs/document/6.0/160/8444, 如果更新了 resource_type_chain, 此时应该失效actionDetail? (但是不知道哪里被引用到? )
```sql
SELECT pk FROM action WHERE system_id = ? AND id = ? LIMIT 1;
SELECT pk, action_system_id, action_id, resource_type_system_id, resource_type_id, name_alias, name_alias_en, selection_mode, related_instance_selections FROM saas_action_resource_type WHERE action_system_id = ? AND action_id = ?;
SELECT pk, system_id, id, name, name_en, is_dynamic, resource_type_chain FROM saas_instance_selection WHERE system_id = ?;
SELECT pk, system_id, id FROM resource_type WHERE system_id = ? AND id IN (?);
```
- [x] GroupSystemAuthTypeCache
- pkg/abac/prp/group/redis.go 中 没有用 .cache(memory.go中retriever用了.cache=xxxx) => 需要保持一致, 建议memory.go中的.cache全部直接换成 cacheimpls.Localxxxxx
| priority | 缓存模块review问题汇总 done subjectdepartmentcache localgroupsystemauthtypecache subjectsystemgroupcache groupresourcepolicycache pkg cacheimpls group resource policy go 需要docstring补充下 这个hash存储的数据 key xxx field xxxx value xxxxx 只set当前查询action的对应值 actiondetailcache localactiondetailcache 问题 实例视图更新接口 如果更新了 resource type chain 此时应该失效actiondetail 但是不知道哪里被引用到 sql select pk from action where system id and id limit select pk action system id action id resource type system id resource type id name alias name alias en selection mode related instance selections from saas action resource type where action system id and action id select pk system id id name name en is dynamic resource type chain from saas instance selection where system id select pk system id id from resource type where system id and id in groupsystemauthtypecache pkg abac prp group redis go 中 没有用 cache memory go中retriever用了 cache xxxx 需要保持一致 建议memory go中的 cache全部直接换成 cacheimpls localxxxxx | 1 |
392,666 | 11,594,350,593 | IssuesEvent | 2020-02-24 15:11:43 | Spudnik-Group/Spudnik | https://api.github.com/repos/Spudnik-Group/Spudnik | closed | Create endpoint to get info about bot | feature priority:high | - server info
* name
* owner
* what features are enabled
* # of users
* #of channels
- bot info
* process information (ram usage, uptime, etc)
* total commands
* total guilds
* total users
* total channels | 1.0 | Create endpoint to get info about bot - - server info
* name
* owner
* what features are enabled
* # of users
* #of channels
- bot info
* process information (ram usage, uptime, etc)
* total commands
* total guilds
* total users
* total channels | priority | create endpoint to get info about bot server info name owner what features are enabled of users of channels bot info process information ram usage uptime etc total commands total guilds total users total channels | 1 |
267,162 | 8,379,689,804 | IssuesEvent | 2018-10-07 05:47:11 | pixelfed/pixelfed | https://api.github.com/repos/pixelfed/pixelfed | closed | Search suggestions for some hashtags stop after entering n characters or a # | bug priority - high ui | Example hashtag: `#cat`
Entering the following will load the hashtag as a suggestion:
`c`
`ca`
Entering the following will say "unable to find any matches"
`cat`
`#cat`
`#c`
`#ca`
`#cat`
| 1.0 | Search suggestions for some hashtags stop after entering n characters or a # - Example hashtag: `#cat`
Entering the following will load the hashtag as a suggestion:
`c`
`ca`
Entering the following will say "unable to find any matches"
`cat`
`#cat`
`#c`
`#ca`
`#cat`
| priority | search suggestions for some hashtags stop after entering n characters or a example hashtag cat entering the following will load the hashtag as a suggestion c ca entering the following will say unable to find any matches cat cat c ca cat | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.