| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
827,682 | 31,792,319,095 | IssuesEvent | 2023-09-13 05:02:44 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] BNL query on information_schema crashes in 2.16 | kind/enhancement area/ysql priority/medium | Jira Link: [DB-7018](https://yugabyte.atlassian.net/browse/DB-7018)
### Description
We see that the following query crashes in 2.16:
```
set yb_bnl_batch_size=1024;
SELECT bc.constraint_name as constraint_name, ac.column_name as column_name FROM information_schema.table_constraints bc, information_schema.key_column_usage ac WHERE bc.constraint_type = 'PRIMARY KEY' AND ac.table_name = bc.table_name AND ac.table_schema = bc.table_schema AND ac.constraint_name = bc.constraint_name AND bc.table_schema = 'public' AND bc.table_name = 'example_table' ORDER BY ac.ordinal_position ASC;
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-7018]: https://yugabyte.atlassian.net/browse/DB-7018?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] BNL query on information_schema crashes in 2.16 - Jira Link: [DB-7018](https://yugabyte.atlassian.net/browse/DB-7018)
### Description
We see that the following query crashes in 2.16:
```
set yb_bnl_batch_size=1024;
SELECT bc.constraint_name as constraint_name, ac.column_name as column_name FROM information_schema.table_constraints bc, information_schema.key_column_usage ac WHERE bc.constraint_type = 'PRIMARY KEY' AND ac.table_name = bc.table_name AND ac.table_schema = bc.table_schema AND ac.constraint_name = bc.constraint_name AND bc.table_schema = 'public' AND bc.table_name = 'example_table' ORDER BY ac.ordinal_position ASC;
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-7018]: https://yugabyte.atlassian.net/browse/DB-7018?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | bnl query on information schema crashes in jira link description we see that the following query crashes in set yb bnl batch size select bc constraint name as constraint name ac column name as column name from information schema table constraints bc information schema key column usage ac where bc constraint type primary key and ac table name bc table name and ac table schema bc table schema and ac constraint name bc constraint name and bc table schema public and bc table name example table order by ac ordinal position asc warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
688,580 | 23,588,499,942 | IssuesEvent | 2022-08-23 13:31:50 | Kong/docs.konghq.com | https://api.github.com/repos/Kong/docs.konghq.com | closed | GW API Documentation: make the 1-1-1 relationship between GatewayClass, Gateway and KIC a constraint | priority/medium area/gateway-api | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Problem Statement
In the unmanaged Gateway API mode, ask users of Gateway API TP/beta to define only one `GatewayClass` and only one `Gateway` per KIC instance.
This is because the behavior for more than 1 `Gateway` resource in a GatewayClass per KIC instance is undefined today (#2559 captures the problem).
### Proposed Solution
Document KIC's Gateway API support in the following way:
In order to use KIC with Gateway API in unmanaged mode, create one `GatewayClass` and one `Gateway` resource for a KIC instance. Do not define more than one `Gateway` in a `GatewayClass`.
### Additional information
_No response_
### Acceptance Criteria
- [ ] A Gateway API documentation entry as described above exists. | 1.0 | GW API Documentation: make the 1-1-1 relationship between GatewayClass, Gateway and KIC a constraint - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Problem Statement
In the unmanaged Gateway API mode, ask users of Gateway API TP/beta to define only one `GatewayClass` and only one `Gateway` per KIC instance.
This is because the behavior for more than 1 `Gateway` resource in a GatewayClass per KIC instance is undefined today (#2559 captures the problem).
### Proposed Solution
Document KIC's Gateway API support in the following way:
In order to use KIC with Gateway API in unmanaged mode, create one `GatewayClass` and one `Gateway` resource for a KIC instance. Do not define more than one `Gateway` in a `GatewayClass`.
### Additional information
_No response_
### Acceptance Criteria
- [ ] A Gateway API documentation entry as described above exists. | priority | gw api documentation make the relationship between gatewayclass gateway and kic a constraint is there an existing issue for this i have searched the existing issues problem statement in the unmanaged gateway api mode ask users of gateway api tp beta to define only one gatewayclass and only one gateway per kic instance this is because the behavior for more than gateway resource in a gatewayclass per kic instance is undefined today captures the problem proposed solution document kic s gateway api support in the following way in order to use kic with gateway api in unmanaged mode create one gatewayclass and one gateway resource for a kic instance do not define more than one gateway in a gatewayclass additional information no response acceptance criteria a gateway api documentation entry as described above exists | 1 |
48,686 | 2,999,448,850 | IssuesEvent | 2015-07-23 19:05:08 | jayway/powermock | https://api.github.com/repos/jayway/powermock | opened | Investigate if mock policies fails when applied in a test suite | imported Milestone-Release1.5 Priority-Medium Type-Task | _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on November 03, 2009 21:12:07_
From phatblat:
I have a few related test classes running great under the
PowerMockRunner but today I threw them together into a JUnit4 suite
and am finding that the Log4jMockPolicy I have applied to them is not
being honored. The tests still run and pass, the mock policy just
suppresses the following distracting errors:
log4j:ERROR A "org.apache.log4j.xml.DOMConfigurator" object is not
assignable to a "org.apache.log4j.spi.Configurator" variable.
log4j:ERROR The class "org.apache.log4j.spi.Configurator" was loaded
by
log4j:ERROR [org.powermock.core.classloader.MockClassLoader@11121f6]
whereas object of type
log4j:ERROR "org.apache.log4j.xml.DOMConfigurator" was loaded by
[sun.misc.Launcher$AppClassLoader@92e78c].
log4j:ERROR Could not instantiate configurator
[org.apache.log4j.xml.DOMConfigurator].
I thought this issue might have been related to the hierarchy of my
test class setup. I have an abstract base class which has all the mock
construction and PowerMock class-level annotations. The test
subclasses have no class-level annotations. I have experimented with
moving the @MockPolicy(Log4jMockPolicy.class) annotation to the
subclasses with no change in behavior, I still get the above errors
which mean the policy is not being applied. If any of the other
PowerMock annotations were not being picked up the tests would
certainly fail.
Below is a summary of the classes involved (each defined in own file,
class body removed for brevity):
@RunWith(Suite.class)
@SuiteClasses( { HCUploaderManagerImplReconciliationTest.class,
HCUploaderManagerImplDupsInEnteredStateTest.class })
public class HCUploaderManagerTestSuite {}
@RunWith(PowerMockRunner.class)
@PrepareForTest(BusinessManagerImpl.class)
@SuppressStaticInitializationFor
("com.company.business.BusinessManagerImpl")
@MockPolicy(Log4jMockPolicy.class)
public class HCUploaderManagerImplAbstractTest {...}
public class HCUploaderManagerImplReconciliationTest extends
HCUploaderManagerImplAbstractTest {...}
public class HCUploaderManagerImplDupsInEnteredStateTest extends
HCUploaderManagerImplAbstractTest {...}
My question is whether I have set up this test suite incorrectly or
does PowerMock not currently support mock policies when run within a
test suite?
_Original issue: http://code.google.com/p/powermock/issues/detail?id=191_ | 1.0 | Investigate if mock policies fails when applied in a test suite - _From [johan.ha...@gmail.com](https://code.google.com/u/105676376875942041029/) on November 03, 2009 21:12:07_
From phatblat:
I have a few related test classes running great under the
PowerMockRunner but today I threw them together into a JUnit4 suite
and am finding that the Log4jMockPolicy I have applied to them is not
being honored. The tests still run and pass, the mock policy just
suppresses the following distracting errors:
log4j:ERROR A "org.apache.log4j.xml.DOMConfigurator" object is not
assignable to a "org.apache.log4j.spi.Configurator" variable.
log4j:ERROR The class "org.apache.log4j.spi.Configurator" was loaded
by
log4j:ERROR [org.powermock.core.classloader.MockClassLoader@11121f6]
whereas object of type
log4j:ERROR "org.apache.log4j.xml.DOMConfigurator" was loaded by
[sun.misc.Launcher$AppClassLoader@92e78c].
log4j:ERROR Could not instantiate configurator
[org.apache.log4j.xml.DOMConfigurator].
I thought this issue might have been related to the hierarchy of my
test class setup. I have an abstract base class which has all the mock
construction and PowerMock class-level annotations. The test
subclasses have no class-level annotations. I have experimented with
moving the @MockPolicy(Log4jMockPolicy.class) annotation to the
subclasses with no change in behavior, I still get the above errors
which mean the policy is not being applied. If any of the other
PowerMock annotations were not being picked up the tests would
certainly fail.
Below is a summary of the classes involved (each defined in own file,
class body removed for brevity):
@RunWith(Suite.class)
@SuiteClasses( { HCUploaderManagerImplReconciliationTest.class,
HCUploaderManagerImplDupsInEnteredStateTest.class })
public class HCUploaderManagerTestSuite {}
@RunWith(PowerMockRunner.class)
@PrepareForTest(BusinessManagerImpl.class)
@SuppressStaticInitializationFor
("com.company.business.BusinessManagerImpl")
@MockPolicy(Log4jMockPolicy.class)
public class HCUploaderManagerImplAbstractTest {...}
public class HCUploaderManagerImplReconciliationTest extends
HCUploaderManagerImplAbstractTest {...}
public class HCUploaderManagerImplDupsInEnteredStateTest extends
HCUploaderManagerImplAbstractTest {...}
My question is whether I have set up this test suite incorrectly or
does PowerMock not currently support mock policies when run within a
test suite?
_Original issue: http://code.google.com/p/powermock/issues/detail?id=191_ | priority | investigate if mock policies fails when applied in a test suite from on november from phatblat i have a few related test classes running great under the powermockrunner but today i threw them together into a suite and am finding that the i have applied to them is not being honored the tests still run and pass the mock policy just suppresses the following distracting errors error a org apache xml domconfigurator object is not assignable to a org apache spi configurator variable error the class org apache spi configurator was loaded by error whereas object of type error org apache xml domconfigurator was loaded by error could not instantiate configurator i thought this issue might have been related to the hierarchy of my test class setup i have an abstract base class which has all the mock construction and powermock class level annotations the test subclasses have no class level annotations i have experimented with moving the mockpolicy class annotation to the subclasses with no change in behavior i still get the above errors which mean the policy is not being applied if any of the other powermock annotations were not being picked up the tests would certainly fail below is a summary of the classes involved each defined in own file class body removed for brevity runwith suite class suiteclasses hcuploadermanagerimplreconciliationtest class hcuploadermanagerimpldupsinenteredstatetest class public class hcuploadermanagertestsuite runwith powermockrunner class preparefortest businessmanagerimpl class suppressstaticinitializationfor com company business businessmanagerimpl mockpolicy class public class hcuploadermanagerimplabstracttest public class hcuploadermanagerimplreconciliationtest extends hcuploadermanagerimplabstracttest public class hcuploadermanagerimpldupsinenteredstatetest extends hcuploadermanagerimplabstracttest my question is whether i have set up this test suite incorrectly or does powermock not currently support mock policies when run within a test suite original issue | 1 |
778,604 | 27,322,095,978 | IssuesEvent | 2023-02-24 20:53:24 | authzed/spicedb-operator | https://api.github.com/repos/authzed/spicedb-operator | closed | Persistent customization strategy | priority/2 medium state/needs discussion | The operator creates kube resources on behalf of the users: deployments, services, serviceaccounts, rbac, etc, which may require some additional modification by the user:
- Adding extra labels or annotations to integrate with other tools (i.e. GKE workload identity)
- Directing workloads in specific ways (tolerations, nodeselectors, affinity/anti-affinity, topologySpreadConstraints, etc)
- Capacity planning (resource requests / limits)
- Other unforeseen future needs due to new SpiceDB features, tooling (HPA?), or the evolution of Kubernetes
All of these modifications are possible today by modifying operator-created resources after (or before!) they have been created. The operator uses Server Side Apply and will not touch fields it does not own. Users can query for which fields are owned by reading the fieldmanager metadata on a given resource.
But modifying the resources after creation makes git-ops workflows difficult, it would be nice if there was a way to persist such modifications in `SpiceDBCluster` or other native Kube resources.
There are some native methods for persisting this type of change, but only for specific fields of specific resources:
- resource requests can be added automatically via a [limitrange](https://kubernetes.io/docs/concepts/policy/limit-range/) on the namespace to set a default
- tolerations can be added with a [default toleration](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podtolerationrestriction) setting for the namespace
- volumes and env vars could be injected with `PodPreset` (which are deprecated and no longer available)
With the background out of the way, this leaves some general approaches we could take:
1. **Add new fields** to `SpiceDBCluster` to cover any needs as they come up. This is the approach that most operators seem to take, but doing this for more than a couple of fields leads to huge schemas with dozens of options for customizing specific parts of downstream resources. I don't personally favor this approach - it seems at odds with the fieldmanager tracking that Kube introduced for SSA, and it brings things into the operator's scope that it doesn't actually have an opinion on (all such config is passed blindly to other resources).
2. **Admission Controllers**: this is the general form of the `PodPreset` solution, where external config can modify the resource before it is persisted. There are a couple of competing projects with no clear (to me) leader: [Kyverno](https://kyverno.io/docs/writing-policies/mutate/) and [Gatekeeper](https://open-policy-agent.github.io/gatekeeper/website/docs/mutation/) both support "mutation" policies that can inject arbitrary data into a resource on creation. This approach can be used with the operator today, but we have no example policies for users to lean on, and it requires installing and running one of these projects as well.
3. **Embed generic customizations**: instead of providing specific fields for specific customizations, we could provide a hook to allow users to provide arbitrary customizations. This could look like a single `kustomization: <configMapName>` field with Kustomize manifests (that the operator parses and applies, similar to [kubebuilder-declarative-pattern](https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/blob/master/docs/addon/walkthrough/README.md)), or it might look more like a Kyverno/Gatekeeper API but with a smaller, spicedb-operator focused scope.
| 1.0 | Persistent customization strategy - The operator creates kube resources on behalf of the users: deployments, services, serviceaccounts, rbac, etc, which may require some additional modification by the user:
- Adding extra labels or annotations to integrate with other tools (i.e. GKE workload identity)
- Directing workloads in specific ways (tolerations, nodeselectors, affinity/anti-affinity, topologySpreadConstraints, etc)
- Capacity planning (resource requests / limits)
- Other unforeseen future needs due to new SpiceDB features, tooling (HPA?), or the evolution of Kubernetes
All of these modifications are possible today by modifying operator-created resources after (or before!) they have been created. The operator uses Server Side Apply and will not touch fields it does not own. Users can query for which fields are owned by reading the fieldmanager metadata on a given resource.
But modifying the resources after creation makes git-ops workflows difficult, it would be nice if there was a way to persist such modifications in `SpiceDBCluster` or other native Kube resources.
There are some native methods for persisting this type of change, but only for specific fields of specific resources:
- resource requests can be added automatically via a [limitrange](https://kubernetes.io/docs/concepts/policy/limit-range/) on the namespace to set a default
- tolerations can be added with a [default toleration](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podtolerationrestriction) setting for the namespace
- volumes and env vars could be injected with `PodPreset` (which are deprecated and no longer available)
With the background out of the way, this leaves some general approaches we could take:
1. **Add new fields** to `SpiceDBCluster` to cover any needs as they come up. This is the approach that most operators seem to take, but doing this for more than a couple of fields leads to huge schemas with dozens of options for customizing specific parts of downstream resources. I don't personally favor this approach - it seems at odds with the fieldmanager tracking that Kube introduced for SSA, and it brings things into the operator's scope that it doesn't actually have an opinion on (all such config is passed blindly to other resources).
2. **Admission Controllers**: this is the general form of the `PodPreset` solution, where external config can modify the resource before it is persisted. There are a couple of competing projects with no clear (to me) leader: [Kyverno](https://kyverno.io/docs/writing-policies/mutate/) and [Gatekeeper](https://open-policy-agent.github.io/gatekeeper/website/docs/mutation/) both support "mutation" policies that can inject arbitrary data into a resource on creation. This approach can be used with the operator today, but we have no example policies for users to lean on, and it requires installing and running one of these projects as well.
3. **Embed generic customizations**: instead of providing specific fields for specific customizations, we could provide a hook to allow users to provide arbitrary customizations. This could look like a single `kustomization: <configMapName>` field with Kustomize manifests (that the operator parses and applies, similar to [kubebuilder-declarative-pattern](https://github.com/kubernetes-sigs/kubebuilder-declarative-pattern/blob/master/docs/addon/walkthrough/README.md)), or it might look more like a Kyverno/Gatekeeper API but with a smaller, spicedb-operator focused scope.
| priority | persistent customization strategy the operator creates kube resources on behalf of the users deployments services serviceaccounts rbac etc which may require some additional modification by the user adding extra labels or annotations to integrate with other tools i e gke workload identity directing workloads in specific ways tolerations nodeselectors affinity anti affinity topologyspreadconstraints etc capacity planning resource requests limits other unforeseen future needs due to new spicedb features tooling hpa or the evolution of kubernetes all of these modifications are possible today by modifying operator created resources after or before they have been created the operator uses server side apply and will not touch fields it does not own users can query for which fields are owned by reading the fieldmanager metadata on a given resource but modifying the resources after creation makes git ops workflows difficult it would be nice if there was a way to persist such modifications in spicedbcluster or other native kube resources there are some native methods for persisting this type of change but only for specific fields of specific resources resource requests can be added automatically via a on the namespace to set a default tolerations can be added with a setting for the namespace volumes and env vars could be injected with podpreset which are deprecated and no longer available with the background out of the way this leaves some general approaches we could take add new fields to spicedbcluster to cover any needs as they come up this is the approach that most operators seem to take but doing this for more than a couple of fields leads to huge schemas with dozens of options for customizing specific parts of downstream resources i don t personally favor this approach it seems at odds with the fieldmanager tracking that kube introduced for ssa and it brings things into the operator s scope that it doesn t actually have an opinion on all such config is passed blindly to other resources admission controllers this is the general form of the podpreset solution where external config can modify the resource before it is persisted there are a couple of competing projects with no clear to me leader and both support mutation policies that can inject arbitrary data into a resource on creation this approach can be used with the operator today but we have no example policies for users to lean on and it requires installing and running one of these projects as well embed generic customizations instead of providing specific fields for specific customizations we could provide a hook to allow users to provide arbitrary customizations this could look like a single kustomization field with kustomize manifests that the operator parses and applies similar to or it might look more like a kyverno gatekeeper api but with a smaller spicedb operator focused scope | 1 |
33,308 | 2,763,834,780 | IssuesEvent | 2015-04-29 12:18:54 | handsontable/hot-table | https://api.github.com/repos/handsontable/hot-table | closed | Better keyboard arrow keys support for nested tables | Enhancement Priority: medium To review | It should be better support for navigating through nested tables via arrows keys. For now it's not possible to navigate between different tables without using mouse. | 1.0 | Better keyboard arrow keys support for nested tables - It should be better support for navigating through nested tables via arrows keys. For now it's not possible to navigate between different tables without using mouse. | priority | better keyboard arrow keys support for nested tables it should be better support for navigating through nested tables via arrows keys for now it s not possible to navigate between different tables without using mouse | 1 |
408,587 | 11,949,561,943 | IssuesEvent | 2020-04-03 13:51:16 | AY1920S2-CS2103T-W12-3/main | https://api.github.com/repos/AY1920S2-CS2103T-W12-3/main | closed | As an organised student I can categorise my spending | priority.Medium type.Story | ... so that I know the proportions of my spending. | 1.0 | As an organised student I can categorise my spending - ... so that I know the proportions of my spending. | priority | as an organised student i can categorise my spending so that i know the proportions of my spending | 1 |
106,336 | 4,270,115,550 | IssuesEvent | 2016-07-13 05:10:14 | mmisw/orr-portal | https://api.github.com/repos/mmisw/orr-portal | opened | m2r: check and avoid triple duplications | bug m2r Priority-Medium | since triples could duplicated, upon removing one occurrence, make sure to remove all corresp duplicates | 1.0 | m2r: check and avoid triple duplications - since triples could duplicated, upon removing one occurrence, make sure to remove all corresp duplicates | priority | check and avoid triple duplications since triples could duplicated upon removing one occurrence make sure to remove all corresp duplicates | 1 |
40,723 | 2,868,938,429 | IssuesEvent | 2015-06-05 22:04:34 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Add pub commands for managing the uploader list | enhancement Fixed Priority-Medium | **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#7363_
----
The server has URLs for this, but there's no UI to hit those URLs. I'm thinking:
$ pub uploader[s]
Lists the uploaders for the current package and tells the user if they are in the list.
$ pub uploader[s] add <email>
Adds the given user to the uploader list, if this user has permission to do so.
$ pub uploader[s] remove <email>
Removes the given user from the uploader list, if this user has permission to do so. | 1.0 | Add pub commands for managing the uploader list - **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#7363_
----
The server has URLs for this, but there's no UI to hit those URLs. I'm thinking:
$ pub uploader[s]
Lists the uploaders for the current package and tells the user if they are in the list.
$ pub uploader[s] add <email>
Adds the given user to the uploader list, if this user has permission to do so.
$ pub uploader[s] remove <email>
Removes the given user from the uploader list, if this user has permission to do so. | priority | add pub commands for managing the uploader list issue by originally opened as dart lang sdk the server has urls for this but there s no ui to hit those urls i m thinking nbsp nbsp pub uploader lists the uploaders for the current package and tells the user if they are in the list nbsp nbsp pub uploader add lt email gt adds the given user to the uploader list if this user has permission to do so nbsp nbsp pub uploader remove lt email gt removes the given user from the uploader list if this user has permission to do so | 1 |
718,038 | 24,701,720,500 | IssuesEvent | 2022-10-19 15:44:27 | bounswe/bounswe2022group2 | https://api.github.com/repos/bounswe/bounswe2022group2 | closed | Revising the Diagrams based on Lecture Structure | priority-medium type-enhancement status-needreview diagrams | ### Issue Description
As we decided in our second meeting, I made changes under "Lecture Structure" title of our requirements. Following these changes, design diagrams should be updated to be consistent with our requirements. This issue corresponds to updating the diagrams according to changes made in issue #341.
### Step Details
Steps that will be performed:
- [ ] Update diagrams according to newly updated Lecture Structure requirements
### Final Actions
After necessary changes is determined, I will update all parts related to Lecture Structure in the diagrams accordingly.
### Deadline of the Issue
16.10.2022 - Sunday - 23:59
### Reviewer
Muhammed Enes Sürmeli
### Deadline for the Review
17.10.2022 - Monday - 23:59 | 1.0 | Revising the Diagrams based on Lecture Structure - ### Issue Description
As we decided in our second meeting, I made changes under "Lecture Structure" title of our requirements. Following these changes, design diagrams should be updated to be consistent with our requirements. This issue corresponds to updating the diagrams according to changes made in issue #341.
### Step Details
Steps that will be performed:
- [ ] Update diagrams according to newly updated Lecture Structure requirements
### Final Actions
After necessary changes is determined, I will update all parts related to Lecture Structure in the diagrams accordingly.
### Deadline of the Issue
16.10.2022 - Sunday - 23:59
### Reviewer
Muhammed Enes Sürmeli
### Deadline for the Review
17.10.2022 - Monday - 23:59 | priority | revising the diagrams based on lecture structure issue description as we decided in our second meeting i made changes under lecture structure title of our requirements following these changes design diagrams should be updated to be consistent with our requirements this issue corresponds to updating the diagrams according to changes made in issue step details steps that will be performed update diagrams according to newly updated lecture structure requirements final actions after necessary changes is determined i will update all parts related to lecture structure in the diagrams accordingly deadline of the issue sunday reviewer muhammed enes sürmeli deadline for the review monday | 1 |
324,322 | 9,887,683,026 | IssuesEvent | 2019-06-25 09:44:18 | minio/minio | https://api.github.com/repos/minio/minio | closed | Expected behaviour of prometheus Metrics for network sent/received | community priority: medium triage | I have a dashboard which displays the `minio_network_sent_bytes_total` and `minio_network_received_bytes_total` overlayed with the same results from `node_exporter` for the host sent/received bytes.
We have a 4 node minio cluster and I am only graphing metrics from one of the nodes which is receiving the data. I would expect to see received bytes that match closely with the hosts received bytes, and transmit bytes that match closely with the hosts transmit bytes. However, it appears that only the receive matches while the transmit is close to 0.
**Is this a bug?**
There is definitely data being transmitted to the other nodes (Minio is replicating data between nodes).
**OR - will the transmit bytes only show when a client pulls data from minio?**
## Expected Behavior
Minio Sent/Received should be similar to the Hosts Sent/Received bytes
## Current Behavior
Minio Received matches Host Received but Minio Transmitted is almost 0
## Steps to Reproduce (for bugs)
Grafana dashboard showing Minio sent/received overlayed with `node_exporter` values of `node_network_transmit_bytes_total` and `node_network_received_bytes_total`

## Context
I am just trying to accurately show Minio and Host network usage.
## Regression
n/a
| 1.0 | Expected behaviour of prometheus Metrics for network sent/received - I have a dashboard which displays the `minio_network_sent_bytes_total` and `minio_network_received_bytes_total` overlayed with the same results from `node_exporter` for the host sent/received bytes.
We have a 4 node minio cluster and I am only graphing metrics from one of the nodes which is receiving the data. I would expect to see received bytes that match closely with the hosts received bytes, and transmit bytes that match closely with the hosts transmit bytes. However, it appears that only the receive matches while the transmit is close to 0.
**Is this a bug?**
There is definitely data being transmitted to the other nodes (Minio is replicating data between nodes).
**OR - will the transmit bytes only show when a client pulls data from minio?**
## Expected Behavior
Minio Sent/Received should be similar to the Hosts Sent/Received bytes
## Current Behavior
Minio Received matches Host Received but Minio Transmitted is almost 0
## Steps to Reproduce (for bugs)
Grafana dashboard showing Minio sent/received overlayed with `node_exporter` values of `node_network_transmit_bytes_total` and `node_network_received_bytes_total`

## Context
I am just trying to accurately show Minio and Host network usage.
## Regression
n/a
| priority | expected behaviour of prometheus metrics for network sent received i have a dashboard which displays the minio network sent bytes total and minio network received bytes total overlayed with the same results from node exporter for the host sent received bytes we have a node minio cluster and i am only graphing metrics from one of the nodes which is receiving the data i would expect to see received bytes that match closely with the hosts received bytes and transmit bytes that match closely with the hosts transmit bytes however it appears that only the receive matches while the transmit is close to is this a bug there is definitely data being transmitted to the other nodes minio is replicating data between nodes or will the transmit bytes only show when a client pulls data from minio expected behavior minio sent received should be similar to the hosts sent received bytes current behavior minio received matches host received but minio transmitted is almost steps to reproduce for bugs grafana dashboard showing minio sent received overlayed with node exporter values of node network transmit bytes total and node network received bytes total context i am just trying to accurately show minio and host network usage regression n a | 1 |
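The dashboard comparison this report describes boils down to computing per-second rates from two monotonic counters and checking that they roughly agree. A minimal sketch, with made-up sample values and a made-up 60 s scrape interval (the real series would be `minio_network_sent_bytes_total` and `node_network_transmit_bytes_total`):

```python
def counter_rate(prev, curr, interval_s):
    """Per-second rate between two samples of a monotonic counter.

    Treats a decrease the way Prometheus's rate() does: assume the
    counter reset (e.g. process restart) and restarted from zero.
    """
    delta = curr - prev
    if delta < 0:  # counter reset
        delta = curr
    return delta / interval_s

# Made-up samples taken 60 s apart, in bytes.
minio_sent = counter_rate(1_000_000, 4_000_000, 60)   # MinIO-reported transmit
host_sent = counter_rate(50_000_000, 53_600_000, 60)  # node_exporter transmit
print(f"minio: {minio_sent:.0f} B/s, host: {host_sent:.0f} B/s")
```

If inter-node replication traffic were counted by MinIO, the two rates should be of the same order; a MinIO transmit rate near zero while the host transmit rate is large is exactly the symptom reported above.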
746,394 | 26,028,694,641 | IssuesEvent | 2022-12-21 18:48:02 | encorelab/ck-board | https://api.github.com/repos/encorelab/ck-board | closed | Comment Task Bugs in Workspace UI | bug medium priority | These bugs are related to tasks with a "comment at least once" requirement in the Workspace UI:
1. The average group progress bar value is incorrectly showing the current group's progress
2. Commenting on a post from the Workspace UI increments the number of comments by two or more in the Workspace UI
For example, this post has 2 comments and I comment one more time:
The pop-up correctly shows 3 comments.
<img width="421" alt="Screen Shot 2022-11-11 at 6 41 58 PM" src="https://user-images.githubusercontent.com/6416247/201444862-37abff5b-60ee-40e9-9840-7a9e65abdc37.png">
However, the Workspace UI incorrectly shows that there are 4 comments
<img width="689" alt="Screen Shot 2022-11-11 at 6 42 17 PM" src="https://user-images.githubusercontent.com/6416247/201444878-64fbf339-3d29-4f45-93e5-3ba35524311f.png">
| 1.0 | Comment Task Bugs in Workspace UI - These bugs are related to tasks with a "comment at least once" requirement in the Workspace UI:
1. The average group progress bar value is incorrectly showing the current group's progress
2. Commenting on a post from the Workspace UI increments the number of comments by two or more in the Workspace UI
For example, this post has 2 comments and I comment one more time:
The pop-up correctly shows 3 comments.
<img width="421" alt="Screen Shot 2022-11-11 at 6 41 58 PM" src="https://user-images.githubusercontent.com/6416247/201444862-37abff5b-60ee-40e9-9840-7a9e65abdc37.png">
However, the Workspace UI incorrectly shows that there are 4 comments
<img width="689" alt="Screen Shot 2022-11-11 at 6 42 17 PM" src="https://user-images.githubusercontent.com/6416247/201444878-64fbf339-3d29-4f45-93e5-3ba35524311f.png">
| priority | comment task bugs in workspace ui these bugs are related to tasks with a comment at least once requirement in the workspace ui the average group progress bar value is incorrectly showing the current group s progress commenting on a post from the workspace ui increments the number of comments by two or more in the workspace ui for example this post has comments and i comment one more time the pop up correctly shows comments img width alt screen shot at pm src however the workspace ui incorrectly shows that there are comments img width alt screen shot at pm src | 1 |
434,190 | 12,515,368,528 | IssuesEvent | 2020-06-03 07:32:41 | canonical-web-and-design/build.snapcraft.io | https://api.github.com/repos/canonical-web-and-design/build.snapcraft.io | closed | "All set up…" and progress bar briefly appear | Priority: Medium | **To reproduce:**
* After going through the first-time flow, add an additional repository.
* Add a snapcraft.yaml and register the snap name.
**What happens:**
* "All set up…" and the first-time flow progress bar briefly appear (~1-2s).
This is confusing. I'm shown progress for steps unrelated to my current task.
**What should happen:**
* Neither the "All set up…" message nor the first-time flow progress bar appear. | 1.0 | "All set up…" and progress bar briefly appear - **To reproduce:**
* After going through the first-time flow, add an additional repository.
* Add a snapcraft.yaml and register the snap name.
**What happens:**
* "All set up…" and the first-time flow progress bar briefly appear (~1-2s).
This is confusing. I'm shown progress for steps unrelated to my current task.
**What should happen:**
* Neither the "All set up…" message nor the first-time flow progress bar appear. | priority | all set up… and progress bar briefly appear to reproduce after going through the first time flow add an additional repository add a snapcraft yaml and register the snap name what happens all set up… and the first time flow progress bar briefly appear this is confusing i m shown progress for steps unrelated to my current task what should happen neither the all set up… message nor the first time flow progress bar appear | 1 |
502,601 | 14,562,727,459 | IssuesEvent | 2020-12-17 00:45:07 | nih-cfde/training-and-engagement | https://api.github.com/repos/nih-cfde/training-and-engagement | closed | Upload files to Cavatica using the command line uploader thing | Dec-2020 MediumPriority | This tutorial is the official tutorial but it has got some confusing parts: https://docs.cavatica.org/v1.0/docs/upload-via-the-command-line.
For other tutorials we need to be able to transfer fastq data files from AWS to Cavatica. Here are some steps:
Install java on AWS
Download the uploader onto AWS
Figure out how to use it
Related to simulated data issue #191 | 1.0 | Upload files to Cavatica using the command line uploader thing - This tutorial is the official tutorial but it has got some confusing parts: https://docs.cavatica.org/v1.0/docs/upload-via-the-command-line.
For other tutorials we need to be able to transfer fastq data files from AWS to Cavatica. Here are some steps:
Install java on AWS
Download the uploader onto AWS
Figure out how to use it
Related to simulated data issue #191 | priority | upload files to cavatica using the command line uploader thing this tutorial is the official tutorial but it has got some confusing parts for other tutorials we need to be able to transfer fastq data files from aws to cavatica here are some steps install java on aws download the uploader onto aws figure out how to use it related to simulated data issue | 1 |
4,343 | 2,550,445,376 | IssuesEvent | 2015-02-01 15:28:56 | olga-jane/prizm | https://api.github.com/repos/olga-jane/prizm | opened | Impossible to send release note with "empty" railcars | Coding MEDIUM priority Mill railcar | This feature is artefact of "railcar" implementation. It should be impossible to send release note when no pipes are in *release note* itself. | 1.0 | Impossible to send release note with "empty" railcars - This feature is artefact of "railcar" implementation. It should be impossible to send release note when no pipes are in *release note* itself. | priority | impossible to send release note with empty railcars this feature is artefact of railcar implementation it should be impossible to send release note when no pipes are in release note itself | 1 |
92,737 | 3,873,250,932 | IssuesEvent | 2016-04-11 16:18:57 | duckduckgo/p5-app-duckpan | https://api.github.com/repos/duckduckgo/p5-app-duckpan | opened | Incorrect detection of Instant Answer files | Bug Low-Hanging Fruit Priority: Medium | Some commands, such as `duckpan server`, will pick up any `.pm` files in `lib/DDG/(Goodie/Spice/...)` and treat them as Instant Answer files; making it incompatible with multi-file Instant Answers (see https://github.com/duckduckgo/zeroclickinfo-goodies/pull/1927#issuecomment-208041735). There is no issue if the Instant Answer is specified (see https://github.com/duckduckgo/zeroclickinfo-goodies/pull/1927#issuecomment-208042511 and https://github.com/duckduckgo/zeroclickinfo-goodies/pull/1927#issuecomment-208042609), as in `duckpan server MyIA`. | 1.0 | Incorrect detection of Instant Answer files - Some commands, such as `duckpan server`, will pick up any `.pm` files in `lib/DDG/(Goodie/Spice/...)` and treat them as Instant Answer files; making it incompatible with multi-file Instant Answers (see https://github.com/duckduckgo/zeroclickinfo-goodies/pull/1927#issuecomment-208041735). There is no issue if the Instant Answer is specified (see https://github.com/duckduckgo/zeroclickinfo-goodies/pull/1927#issuecomment-208042511 and https://github.com/duckduckgo/zeroclickinfo-goodies/pull/1927#issuecomment-208042609), as in `duckpan server MyIA`. | priority | incorrect detection of instant answer files some commands such as duckpan server will pick up any pm files in lib ddg goodie spice and treat them as instant answer files making it incompatible with multi file instant answers see there is no issue if the instant answer is specified see and as in duckpan server myia | 1 |
493,108 | 14,226,516,032 | IssuesEvent | 2020-11-17 23:10:32 | moonwards1/Moonwards-Virtual-Moon | https://api.github.com/repos/moonwards1/Moonwards-Virtual-Moon | closed | Make left clicking the mouse interact with interactables. | Department: Gameplay Department: UI/UX Priority: Medium Type: Feature | It feels awkward having the Interactables being interacted with by F when clickables get pressed by the mouse.
Keep the F key but add left mouse clicking as an action for "use". (use being the action for interacting.) | 1.0 | Make left clicking the mouse interact with interactables. - It feels awkward having the Interactables being interacted with by F when clickables get pressed by the mouse.
Keep the F key but add left mouse clicking as an action for "use". (use being the action for interacting.) | priority | make left clicking the mouse interact with interactables it feels awkward having the interactables being interacted with by f when clickables get pressed by the mouse keep the f key but add left mouse clicking as an action for use use being the action for interacting | 1 |
158,949 | 6,038,000,870 | IssuesEvent | 2017-06-09 20:12:30 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | closed | Create messaging mode for hoot command line | Category: Core Priority: Medium Status: New/Undefined Type: Task | Expose a mode where hoot takes messages on stdin and writes messages back to stdout. This will help simplify the interface between services & core.
| 1.0 | Create messaging mode for hoot command line - Expose a mode where hoot takes messages on stdin and writes messages back to stdout. This will help simplify the interface between services & core.
| priority | create messaging mode for hoot command line expose a mode where hoot takes messages on stdin and writes messages back to stdout this will help simplify the interface between services core | 1 |
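One common shape for the stdin/stdout messaging mode requested above is a line-delimited JSON request/response loop. This is only a sketch of that shape — the message fields (`id`, `command`, `status`) are hypothetical, not an actual hoot protocol:

```python
import json
import sys

def serve(stdin=sys.stdin, stdout=sys.stdout):
    """Read one JSON message per line, write one JSON reply per line."""
    for line in stdin:
        line = line.strip()
        if not line:
            continue
        msg = json.loads(line)
        # Hypothetical reply: echo the command back with a status.
        reply = {"id": msg.get("id"), "status": "ok", "echo": msg.get("command")}
        stdout.write(json.dumps(reply) + "\n")
        stdout.flush()  # flush per message so the services side never blocks
```

Flushing after every reply matters here: the consuming service reads line-by-line, so buffered output would stall the conversation.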
187,254 | 6,750,475,093 | IssuesEvent | 2017-10-23 05:09:56 | opencurrents/opencurrents | https://api.github.com/repos/opencurrents/opencurrents | opened | record hours: Let admins track hours for volunteers | priority medium | Requested by Bike Austin when their volunteers don't submit hours or don't have an account. | 1.0 | record hours: Let admins track hours for volunteers - Requested by Bike Austin when their volunteers don't submit hours or don't have an account. | priority | record hours let admins track hours for volunteers requested by bike austin when their volunteers don t submit hours or don t have an account | 1 |
353,850 | 10,559,628,502 | IssuesEvent | 2019-10-04 12:05:43 | bounswe/bounswe2019group8 | https://api.github.com/repos/bounswe/bounswe2019group8 | opened | Design the database model | Backend Diagrams Effort: High Group work Planning Priority: Medium Project Plan Status: Available | **Actions:**
1. Develop currently existing User model in database. Specify required fields for an User object.
2. Design models, collections and relations according to the upcoming milestone requirements.
3. Create a document explaining the results of **1** and **2**
**Deadline:** 12.10.2019 - 21.00
| 1.0 | Design the database model - **Actions:**
1. Develop currently existing User model in database. Specify required fields for an User object.
2. Design models, collections and relations according to the upcoming milestone requirements.
3. Create a document explaining the results of **1** and **2**
**Deadline:** 12.10.2019 - 21.00
| priority | design the database model actions develop currently existing user model in database specify required fields for an user object design models collections and relations according to the upcoming milestone requirements create a document explaining the results of and deadline | 1 |
702,959 | 24,142,911,289 | IssuesEvent | 2022-09-21 16:08:50 | ufs-community/regional_workflow | https://api.github.com/repos/ufs-community/regional_workflow | closed | Clean up regional_workflow wiki and add/remove certain sections | enhancement Work in Progress medium priority | Update the regional_workflow wiki with any relevant changes. Decide whether some portions should be migrated to the ufs-srweather-app or whether it can be removed all together (such as the FV3-LAM Workflow Setup and Execution section) | 1.0 | Clean up regional_workflow wiki and add/remove certain sections - Update the regional_workflow wiki with any relevant changes. Decide whether some portions should be migrated to the ufs-srweather-app or whether it can be removed all together (such as the FV3-LAM Workflow Setup and Execution section) | priority | clean up regional workflow wiki and add remove certain sections update the regional workflow wiki with any relevant changes decide whether some portions should be migrated to the ufs srweather app or whether it can be removed all together such as the lam workflow setup and execution section | 1 |
237,542 | 7,761,465,844 | IssuesEvent | 2018-06-01 09:57:41 | Repair-DeskPOS/RepairDesk-Bugs | https://api.github.com/repos/Repair-DeskPOS/RepairDesk-Bugs | closed | Duplicate Barcodes | Medium Priority enhancement | Hi just curious, is there anyway for the system not to use the same UPC or SKU twice? I added two products with the same barcode would be nice to get a warning etc of duplicate codes. | 1.0 | Duplicate Barcodes - Hi just curious, is there anyway for the system not to use the same UPC or SKU twice? I added two products with the same barcode would be nice to get a warning etc of duplicate codes. | priority | duplicate barcodes hi just curious is there anyway for the system not to use the same upc or sku twice i added two products with the same barcode would be nice to get a warning etc of duplicate codes | 1 |
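A duplicate-barcode warning like the one requested above is essentially a uniqueness check at save time. A minimal sketch — the field names are hypothetical, not RepairDesk's actual schema:

```python
from collections import Counter

def duplicate_codes(products):
    """Return barcode values (UPC/SKU) shared by more than one product."""
    counts = Counter(p["upc"] for p in products if p.get("upc"))
    return sorted(code for code, n in counts.items() if n > 1)

items = [
    {"name": "Screen A", "upc": "012345678905"},
    {"name": "Screen B", "upc": "012345678905"},  # same barcode -> should warn
    {"name": "Battery", "upc": "036000291452"},
]
print(duplicate_codes(items))  # -> ['012345678905']
```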
657,444 | 21,794,202,542 | IssuesEvent | 2022-05-15 11:31:23 | stackturing/tekton-visualise | https://api.github.com/repos/stackturing/tekton-visualise | opened | Add basic deployment as a part of the CRD | area/dev stage/baseline priority/medium complexity/medium | Add a deployment with served blank webpage as a part of the CRD | 1.0 | Add basic deployment as a part of the CRD - Add a deployment with served blank webpage as a part of the CRD | priority | add basic deployment as a part of the crd add a deployment with served blank webpage as a part of the crd | 1 |
509,575 | 14,739,917,707 | IssuesEvent | 2021-01-07 08:10:41 | rubyforgood/casa | https://api.github.com/repos/rubyforgood/casa | closed | Add seeding for hearing types | :woman_judge: Court Reports Good First Issue Help Wanted Priority: Medium | **What type of user is this for? volunteer/supervisor/admin/all**
Developer
**Description**
Add seeding for hearing types.
Per #1000 we have added an interface for hearing types per organization, however there was a conversation in #928 regarding seeding the data for this which was not added to the PR.
| 1.0 | Add seeding for hearing types - **What type of user is this for? volunteer/supervisor/admin/all**
Developer
**Description**
Add seeding for hearing types.
Per #1000 we have added an interface for hearing types per organization, however there was a conversation in #928 regarding seeding the data for this which was not added to the PR.
| priority | add seeding for hearing types what type of user is this for volunteer supervisor admin all developer description add seeding for hearing types per we have added an interface for hearing types per organization however there was a conversation in regarding seeding the data for this which was not added to the pr | 1 |
470,414 | 13,537,053,875 | IssuesEvent | 2020-09-16 09:55:21 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | BB REST API - Create the Lost Password Endpoint | feature: enhancement priority: medium | **Is your feature request related to a problem? Please describe.**
Could you please add a route in the REST API which helps a user to reset his password when he loses it.
If that can help you, here is the code I used to use with BuddyPress :
[class-pf-lost-password-endpoints.php](https://github.com/buddyboss/buddyboss-platform/files/5231323/class-pf-lost-password-endpoints.txt)
**Describe the solution you'd like**
Create an endpoint to reset lost password
**Describe alternatives you've considered**
none
**Support ticket links**
none
| 1.0 | BB REST API - Create the Lost Password Endpoint - **Is your feature request related to a problem? Please describe.**
Could you please add a route in the REST API which helps a user to reset his password when he loses it.
If that can help you, here is the code I used to use with BuddyPress :
[class-pf-lost-password-endpoints.php](https://github.com/buddyboss/buddyboss-platform/files/5231323/class-pf-lost-password-endpoints.txt)
**Describe the solution you'd like**
Create an endpoint to reset lost password
**Describe alternatives you've considered**
none
**Support ticket links**
none
| priority | bb rest api create the lost password endpoint is your feature request related to a problem please describe could you please add a route in the rest api which helps a user to reset his password when he lose it if that can help you here is the code i used to use with buddypress describe the solution you d like create an endpoint to reset lost password describe alternatives you ve considered none support ticket links none | 1 |
706,113 | 24,260,777,113 | IssuesEvent | 2022-09-27 22:26:56 | objectify/objectify | https://api.github.com/repos/objectify/objectify | closed | make the docs downloadable | Priority-Medium Type-Task | Original [issue 172](https://code.google.com/p/objectify-appengine/issues/detail?id=172) created by objectify on 2013-08-13T12:36:04.000Z:
pls include the v4 guide somewhere than can be downloaded. thanks. :)
| 1.0 | make the docs downloadable - Original [issue 172](https://code.google.com/p/objectify-appengine/issues/detail?id=172) created by objectify on 2013-08-13T12:36:04.000Z:
pls include the v4 guide somewhere than can be downloaded. thanks. :)
| priority | make the docs downloadable original created by objectify on pls include the guide somewhere than can be downloaded thanks | 1 |
58,045 | 3,087,110,070 | IssuesEvent | 2015-08-25 09:27:11 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Bug with /switch | bug imported Priority-Medium | _From [mnr...@gmail.com](https://code.google.com/u/114542743364409907977/) on October 15, 2013 13:10:57_
On startup, Fly sometimes swaps the chat/user-list panes on its own, even though the layout was "correct" when the program was closed.
This is easy to see when starting with 500 hubs.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1342_ | 1.0 | Bug with /switch - _From [mnr...@gmail.com](https://code.google.com/u/114542743364409907977/) on October 15, 2013 13:10:57_
On startup, Fly sometimes swaps the chat/user-list panes on its own, even though the layout was "correct" when the program was closed.
This is easy to see when starting with 500 hubs.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1342_ | priority | bug with switch from on october on startup fly sometimes swaps the chat user list panes on its own even though the layout was correct when the program was closed this is easy to see when starting with hubs original issue | 1 |
1,955 | 2,522,175,050 | IssuesEvent | 2015-01-19 19:59:12 | couchbase/couchbase-lite-net | https://api.github.com/repos/couchbase/couchbase-lite-net | closed | [iOS] TestContinuousPushReplicationGoesIdle fails only in release mode | bug P4: minor priority-low size-medium | Appears to deadlock, causing the CountdownLatch timeout to fire. | 1.0 | [iOS] TestContinuousPushReplicationGoesIdle fails only in release mode - Appears to deadlock, causing the CountdownLatch timeout to fire. | priority | testcontinuouspushreplicationgoesidle fails only in release mode appears to deadlock causing the countdownlatch timeout to fire | 1 |
128,266 | 5,051,962,927 | IssuesEvent | 2016-12-20 23:48:39 | vanowm/MasterPasswordPlus | https://api.github.com/repos/vanowm/MasterPasswordPlus | closed | Replace pre-prompt with prompt. | auto-migrated enhancement Priority-Medium | ```
I would like to see "Click here or press any key to unlock" replaced with the
next step "Please enter the master password...etc...etc..."
My idea is to use only one screen with "enter password to unlock".
If used regularly there is two passwords: startup, lock. Lock has extra key (or
click).
To me it seems this is a mental hindrance, more steps than necessary.
```
Original issue reported on code.google.com by `flipyali...@gmail.com` on 16 Jul 2015 at 8:46
| 1.0 | Replace pre-prompt with prompt. - ```
I would like to see "Click here or press any key to unlock" replaced with the
next step "Please enter the master password...etc...etc..."
My idea is to use only one screen with "enter password to unlock".
If used regularly there is two passwords: startup, lock. Lock has extra key (or
click).
To me it seems this is a mental hindrance, more steps than necessary.
```
Original issue reported on code.google.com by `flipyali...@gmail.com` on 16 Jul 2015 at 8:46
| priority | replace pre prompt with prompt i would like to see click here or press any key to unlock replaced with the next step please enter the master password etc etc my idea is to use only one screen with enter password to unlock if used regularly there is two passwords startup lock lock has extra key or click to me it seems this is a mental hindrance more steps than necessary original issue reported on code google com by flipyali gmail com on jul at | 1 |
265,543 | 8,355,737,742 | IssuesEvent | 2018-10-02 16:27:41 | otrv4/pidgin-otrng | https://api.github.com/repos/otrv4/pidgin-otrng | opened | Fingerprint verification - the privacy status changes too early | bug medium priority | When doing manual fingerprint verification, you switch the drop down from "I have not" to "I have". At this point the plugin will switch the privacy status to "Private", print this to the conversation window, and print "trusted" in the otr4.fingerprints file. This all happens BEFORE the "authenticate" button has been pressed.
Correct behavior should be to not do anything until the "authenticate" button is pressed. | 1.0 | Fingerprint verification - the privacy status changes too early - When doing manual fingerprint verification, you switch the drop down from "I have not" to "I have". At this point the plugin will switch the privacy status to "Private", print this to the conversation window, and print "trusted" in the otr4.fingerprints file. This all happens BEFORE the "authenticate" button has been pressed.
Correct behavior should be to not do anything until the "authenticate" button is pressed. | priority | fingerprint verification the privacy status changes too early when doing manual fingerprint verification you switch the drop down from i have not to i have at this point the plugin will switch the privacy status to private print this to the conversation window and print trusted in the fingerprints file this all happens before the authenticate button has been pressed correct behavior should be to not do anything until the authenticate button is pressed | 1 |
271,877 | 8,491,769,168 | IssuesEvent | 2018-10-27 16:18:25 | INET-Complexity/housing-model | https://api.github.com/repos/INET-Complexity/housing-model | closed | Averaging of time on market to consider different quality bands | enhancement high-priority medium-time | This is mostly for consistency with the averaging of sale prices that household look at when making their decisions. | 1.0 | Averaging of time on market to consider different quality bands - This is mostly for consistency with the averaging of sale prices that household look at when making their decisions. | priority | averaging of time on market to consider different quality bands this is mostly for consistency with the averaging of sale prices that household look at when making their decisions | 1 |
666,449 | 22,356,059,140 | IssuesEvent | 2022-06-15 15:44:26 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | opened | Review 2D/3D switch in the map viewer | enhancement Priority: Medium | ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
The switch mechanism between 2D/3D in the map viewer is related to the mapType path inside the viewer url and this information is not persisted in the map config.
Some map could contain layer only visible in a specific context, for example the 3D tiles layers that could be available only in a 3D scene. In this specific case it would be nice to be able to save the map with the information related to the current map type in use.
- [ ] Find a solution to store this property in the state. Should we store it as a 2D/3D flag or be explicit about the map type in use (openlayers, leaflet, cesium)? In some cases we switch the 2D library based on the device
- [ ] Evaluate if it makes sense to deprecate /viewer/:mapType/:mapId in favor of /viewer/:mapId, where the mapType is managed only via config or internal state
- [ ] Review if query params need some adjustment based on the previous improvement. In particular, whether we need to be explicit with a map_type param or guess the map type based on the query content
**What kind of improvement you want to add?** (check one with "x", remove the others)
- [x] Minor changes to existing features
## Other useful information
see https://github.com/geosolutions-it/MapStore2/pull/8320#pullrequestreview-1007647980
| 1.0 | Review 2D/3D switch in the map viewer - ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
The switch mechanism between 2D/3D in the map viewer is related to the mapType path inside the viewer url and this information is not persisted in the map config.
Some map could contain layer only visible in a specific context, for example the 3D tiles layers that could be available only in a 3D scene. In this specific case it would be nice to be able to save the map with the information related to the current map type in use.
- [ ] Find a solution to store this property in the state. Should we store it as a 2D/3D flag or be explicit about the map type in use (openlayers, leaflet, cesium)? In some cases we switch the 2D library based on the device
- [ ] Evaluate if it makes sense to deprecate /viewer/:mapType/:mapId in favor of /viewer/:mapId, where the mapType is managed only via config or internal state
- [ ] Review if query params need some adjustment based on the previous improvement. In particular, whether we need to be explicit with a map_type param or guess the map type based on the query content
**What kind of improvement you want to add?** (check one with "x", remove the others)
- [x] Minor changes to existing features
## Other useful information
see https://github.com/geosolutions-it/MapStore2/pull/8320#pullrequestreview-1007647980
| priority | review switch in the map viewer description the switch mechanism between in the map viewer is related to the maptype path inside the viewer url and this information is not persisted in the map config some map could contain layer only visible in a specific context for example the tiles layers that could be available only in a scene in this specific case it would be nice to be able to save the map with the information related to the current map type in use find a solution to store this property in the state should we store it as flag or be explicit with the map type in use openlayers leaflet cesium in some case we switch the library based on device evaluate if it makes sense to deprecate the viewer maptype mapid in favor of viewer mapid where the maptype is managed only via config or internal state review if query params needs some adjustment based on previous improvement in particular if we need to be explicit with a map type param or guess the map type based on the query content what kind of improvement you want to add check one with x remove the others minor changes to existing features other useful information see | 1 |
195,874 | 6,919,452,162 | IssuesEvent | 2017-11-29 15:30:34 | uracreative/task-management | https://api.github.com/repos/uracreative/task-management | opened | Blog post: working in the open | Internal: Social Media Internal: Website Priority: Medium | Following the discussion and decisions during our work week regarding our way of working in the open, please proceed with a blog post explaining how our new approach about working in the open is now defined. Please also work with @AnXh3L0 on the visuals needed for the blog post.
Deadline: 20.12.2017 | 1.0 | Blog post: working in the open - Following the discussion and decisions during our work week regarding our way of working in the open, please proceed with a blog post explaining how our new approach about working in the open is now defined. Please also work with @AnXh3L0 on the visuals needed for the blog post.
Deadline: 20.12.2017 | priority | blog post working in the open following the discussion and decisions during our work week regarding our way of working in the open please proceed with a blog post explaining how our new approach about working in the open is now defined please also work with on the visuals needed for the blog post deadline | 1 |
189,611 | 6,799,909,498 | IssuesEvent | 2017-11-02 12:08:10 | revilheart/ESGST | https://api.github.com/repos/revilheart/ESGST | closed | Exporting to dropbox - errors | Enhancement Medium Priority | I am not able to export all the settings to dropbox anymore. In the attached image, you can see three errors.
First is when I clicked on Select all.
The second is when I clicked on Export (Dropbox.)
Third is after a few minutes of importing.

| 1.0 | Exporting to dropbox - errors - I am not able to export all the settings to dropbox anymore. In the attached image, you can see three errors.
First is when I clicked on Select all.
The second is when I clicked on Export (Dropbox.)
Third is after a few minutes of importing.

| priority | exporting to dropbox errors i am not able to export all the settings to dropbox anymore in the attached image you can see three errors first is when i clicked on select all the second is when i clicked on export dropbox third is after a few minutes of importing | 1 |
193,811 | 6,888,215,500 | IssuesEvent | 2017-11-22 04:19:12 | vedantswain/vedantswain.github.io | https://api.github.com/repos/vedantswain/vedantswain.github.io | closed | Remove stupid staircase lists | enhancement priority: medium | They aren't responsive, and add little to progression. Consider replacing with unordered/ordered list. | 1.0 | Remove stupid staircase lists - They aren't responsive, and add little to progression. Consider replacing with unordered/ordered list. | priority | remove stupid staircase lists they aren t responsive and add little to progression consider replacing with unordered ordered list | 1 |
468,461 | 13,483,390,822 | IssuesEvent | 2020-09-11 03:48:12 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | Use Rucio list_dataset_replicas with a list of DIDs instead of single DID calls | Enhancement Feature change Medium Priority Rucio Transition WMAgent WorkQueue | **Impact of the new feature**
WMCore in general
**Is your feature request related to a problem? Please describe.**
Now that Rucio supports listing dataset replicas in bulk, as reported in this issue:
https://github.com/rucio/rucio/issues/2459
made available through this client API `list_dataset_replicas_bulk`.
We should update the Rucio wrapper `getReplicaInfoForBlocks` method and make HTTP calls in bulk, instead of making single block HTTP calls.
**Describe the solution you'd like**
Once we have a list of blocks to list their replicas, use the `list_dataset_replicas_bulk` client API to retrieve replicas for all of them with a single HTTP call.
This change should be transparent, but we better tag Stefano once a solution gets implemented.
**Describe alternatives you've considered**
not touch anything and keep hitting the Rucio server with many "unneeded" calls.
**Additional context**
none
| 1.0 | Use Rucio list_dataset_replicas with a list of DIDs instead of single DID calls - **Impact of the new feature**
WMCore in general
**Is your feature request related to a problem? Please describe.**
Now that Rucio supports listing dataset replicas in bulk, as reported in this issue:
https://github.com/rucio/rucio/issues/2459
made available through this client API `list_dataset_replicas_bulk`.
We should update the Rucio wrapper `getReplicaInfoForBlocks` method and make HTTP calls in bulk, instead of making single block HTTP calls.
**Describe the solution you'd like**
Once we have a list of blocks to list their replicas, use the `list_dataset_replicas_bulk` client API to retrieve replicas for all of them with a single HTTP call.
This change should be transparent, but we better tag Stefano once a solution gets implemented.
**Describe alternatives you've considered**
not touch anything and keep hitting the Rucio server with many "unneeded" calls.
**Additional context**
none
| priority | use rucio list dataset replicas with a list of dids instead of single did calls impact of the new feature wmcore in general is your feature request related to a problem please describe now that rucio supports listing dataset replicas in bulk as reported in this issue made available through this client api list dataset replicas bulk we should update the rucio wrapper getreplicainfoforblocks method and make http calls in bulk instead of making single block http calls describe the solution you d like once we have a list of blocks to list their replicas use the list dataset replicas bulk client api to retrieve replicas for all of them with a single http call this change should be transparent but we better tag stefano once a solution gets implemented describe alternatives you ve considered not touch anything and keep hitting the rucio server with many unneeded calls additional context none | 1 |
452,569 | 13,055,915,683 | IssuesEvent | 2020-07-30 03:06:35 | kubesphere/console | https://api.github.com/repos/kubesphere/console | opened | ws custom role missing info | area/console area/iam kind/bug priority/medium | **Describe the bug**
`ws-tester6` is the user with the custom role of Workspace Members View / Workspace Roles View / Workspace Roles Management. Use `ws-tester6` to log in and check the user `ws-self-provisioner` but found no project there. Actually `ws-self-provisioner` is the admin of a project.



**Versions used(KubeSphere/Kubernetes)**
KubeSphere: 3.0.0-dev | 1.0 | ws custom role missing info - **Describe the bug**
`ws-tester6` is the user with the custom role of Workspace Members View / Workspace Roles View / Workspace Roles Management. Use `ws-tester6` to log in and check the user `ws-self-provisioner` but found no project there. Actually `ws-self-provisioner` is the admin of a project.



**Versions used(KubeSphere/Kubernetes)**
KubeSphere: 3.0.0-dev | priority | ws custom role missing info describe the bug ws is the user with the custom role of workspace members view workspace roles view workspace roles management use ws to log in and check the user ws self provisioner but found not project over there actually ws self provisioner is the admin of a project versions used kubesphere kubernetes kubesphere dev | 1 |
45,881 | 2,941,817,117 | IssuesEvent | 2015-07-02 10:25:51 | google/google-api-dotnet-client | https://api.github.com/repos/google/google-api-dotnet-client | closed | Google Analytics in mvc | auto-migrated Component-Samples Priority-Medium | ```
public IList<int> GetStats()
{
string scope = AnalyticsService.Scopes.AnalyticsReadonly.GetStringValue();
//UPDATE this to match your developer account address. Note, you also need to add this address
//as a user on your Google Analytics profile which you want to extract data from (this may take
//up to 15 mins to recognise)
//string client_id = "493694130884-m0jlcf6dabtnnjf5ji3jpfk5uh1m5ose@developer.gserviceaccount.com";
string client_id = "372469488083-8imjoam8une71sevtmn3urtvulq9kcom.apps.googleusercontent.com";
//836224293376-3524as8ngv6jia4l9qsf7dd4snr1utds@developer.gserviceaccount.com
//UPDATE this to match the path to your certificate
//string key_file = @"E:\8678acc035aa1965cbf36543dcf30f330c7549d2-privatekey.p12";
string key_file = @"E:\7b6826181f30240ce74035b2d09d6c2869885b6c-privatekey.p12";
//ddd39a925dad7f817b609931ddd91cbf03aa7e81-privatekey
//string key_pass = "notasecret";
string key_pass = "notasecret";
AuthorizationServerDescription desc = GoogleAuthenticationServer.Description;
X509Certificate2 key = new X509Certificate2(key_file, key_pass, X509KeyStorageFlags.Exportable);
AssertionFlowClient client =
new AssertionFlowClient(desc, key) { ServiceAccountId = client_id, Scope = scope };
OAuth2Authenticator<AssertionFlowClient> auth =
new OAuth2Authenticator<AssertionFlowClient>(client, AssertionFlowClient.GetState);
AnalyticsService gas = new AnalyticsService(new BaseClientService.Initializer() { Authenticator = auth });
//UPDATE the ga:nnnnnnnn string to match your profile Id from Google Analytics
DataResource.GaResource.GetRequest r =
gas.Data.Ga.Get("ga:88028792", "2014-01-01", "2014-07-31", "ga:visitors");
r.Dimensions = "ga:pagePath";
r.Sort = "-ga:visitors";
r.MaxResults = 5;
GaData d = r.Execute();
IList<int> stats = new List<int>();
for (int y = 0; y < d.Rows.Count; y++)
{
stats.Add(Convert.ToInt32(d.Rows[y][1]));
}
return stats;
}
This is giving error of the auth.token (server error 404)
```
Original issue reported on code.google.com by `rakhi001...@gmail.com` on 3 Jul 2014 at 10:09 | 1.0 | Google Analytics in mvc - ```
public IList<int> GetStats()
{
string scope = AnalyticsService.Scopes.AnalyticsReadonly.GetStringValue();
//UPDATE this to match your developer account address. Note, you also need to add this address
//as a user on your Google Analytics profile which you want to extract data from (this may take
//up to 15 mins to recognise)
//string client_id = "493694130884-m0jlcf6dabtnnjf5ji3jpfk5uh1m5ose@developer.gserviceaccount.com";
string client_id = "372469488083-8imjoam8une71sevtmn3urtvulq9kcom.apps.googleusercontent.com";
//836224293376-3524as8ngv6jia4l9qsf7dd4snr1utds@developer.gserviceaccount.com
//UPDATE this to match the path to your certificate
//string key_file = @"E:\8678acc035aa1965cbf36543dcf30f330c7549d2-privatekey.p12";
string key_file = @"E:\7b6826181f30240ce74035b2d09d6c2869885b6c-privatekey.p12";
//ddd39a925dad7f817b609931ddd91cbf03aa7e81-privatekey
//string key_pass = "notasecret";
string key_pass = "notasecret";
AuthorizationServerDescription desc = GoogleAuthenticationServer.Description;
X509Certificate2 key = new X509Certificate2(key_file, key_pass, X509KeyStorageFlags.Exportable);
AssertionFlowClient client =
new AssertionFlowClient(desc, key) { ServiceAccountId = client_id, Scope = scope };
OAuth2Authenticator<AssertionFlowClient> auth =
new OAuth2Authenticator<AssertionFlowClient>(client, AssertionFlowClient.GetState);
AnalyticsService gas = new AnalyticsService(new BaseClientService.Initializer() { Authenticator = auth });
//UPDATE the ga:nnnnnnnn string to match your profile Id from Google Analytics
DataResource.GaResource.GetRequest r =
gas.Data.Ga.Get("ga:88028792", "2014-01-01", "2014-07-31", "ga:visitors");
r.Dimensions = "ga:pagePath";
r.Sort = "-ga:visitors";
r.MaxResults = 5;
GaData d = r.Execute();
IList<int> stats = new List<int>();
for (int y = 0; y < d.Rows.Count; y++)
{
stats.Add(Convert.ToInt32(d.Rows[y][1]));
}
return stats;
}
This is giving error of the auth.token (server error 404)
```
Original issue reported on code.google.com by `rakhi001...@gmail.com` on 3 Jul 2014 at 10:09 | priority | google analytics in mvc public ilist getstats string scope analyticsservice scopes analyticsreadonly getstringvalue update this to match your developer account address note you also need to add this address as a user on your google analytics profile which you want to extract data from this may take up to mins to recognise string client id developer gserviceaccount com string client id apps googleusercontent com developer gserviceaccount com update this to match the path to your certificate string key file e privatekey string key file e privatekey privatekey string key pass notasecret string key pass notasecret authorizationserverdescription desc googleauthenticationserver description key new key file key pass exportable assertionflowclient client new assertionflowclient desc key serviceaccountid client id scope scope auth new client assertionflowclient getstate analyticsservice gas new analyticsservice new baseclientservice initializer authenticator auth update the ga nnnnnnnn string to match your profile id from google analytics dataresource garesource getrequest r gas data ga get ga ga visitors r dimensions ga pagepath r sort ga visitors r maxresults gadata d r execute ilist stats new list for int y y d rows count y stats add convert d rows return stats this is giving error of the auth token server error original issue reported on code google com by gmail com on jul at | 1 |
598,510 | 18,246,649,371 | IssuesEvent | 2021-10-01 19:23:33 | fosscord/fosscord-server | https://api.github.com/repos/fosscord/fosscord-server | opened | Admin API/Controlled accounts route: PUT /users/ | enhancement route api medium priority admin dashboard | This route shall create a pre-prepared user with the details supplied in the JSON body. Request body shall follow the same format as the one returned by `GET /users/@me`.
Use restricted to server operators and account controllers. | 1.0 | Admin API/Controlled accounts route: PUT /users/ - This route shall create a pre-prepared user with the details supplied in the JSON body. Request body shall follow the same format as the one returned by `GET /users/@me`.
Use restricted to server operators and account controllers. | priority | admin api controlled accounts route put users this route shall create a pre prepared user with the details supplied in the json body request body shall follow the same format as the one returned by get users me use restricted to server operators and account controllers | 1 |
268,320 | 8,406,021,033 | IssuesEvent | 2018-10-11 16:43:56 | CS2113-AY1819S1-F09-2/main | https://api.github.com/repos/CS2113-AY1819S1-F09-2/main | opened | Implement undo command | priority.medium | Implement the undo commands (and some other commands) based on their implementation in AB4. | 1.0 | Implement undo command - Implement the undo commands (and some other commands) based on their implementation in AB4. | priority | implement undo command implement the undo commands and some other commands based on their implementation in | 1 |
312,408 | 9,546,747,439 | IssuesEvent | 2019-05-01 20:51:50 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | PnL Chart regression | Priority: Medium | Think we fixed this and closed, looks like it went back to the old way....
0 eth needs to be solid line and dotted line needs to be the high/low or both.
****The starting pt needs to be at 0 and adjust based over time to the profit or loss...see screenshot for what it currently looks like
 | 1.0 | PnL Chart regression - Think we fixed this and closed, looks like it went back to the old way....
0 eth needs to be solid line and dotted line needs to be the high/low or both.
****The starting pt needs to be at 0 and adjust based over time to the profit or loss...see screenshot for what it currently looks like
 | priority | pnl chart regression think we fixed this and closed looks like it went back to the old way eth needs to be solid line and dotted line needs to be the high low or both the starting pt needs to be at and adjust based over time to the profit or loss see screenshot for what it currently looks like | 1 |
557,622 | 16,513,547,350 | IssuesEvent | 2021-05-26 07:52:55 | nim-lang/Nim | https://api.github.com/repos/nim-lang/Nim | reopened | parseCmdLine doesn't handle quoting correctly (and also prevents passing empty arguments) | Medium Priority std.os | parseCmdLine doesn't handle quoting correctly. IMO current behavior, even if "works as designed according to docs" is not useful and should be replaced by a more intuitive behavior.
### Example 1
```nim
import os, unittest
proc main()=
# let a = "foo '--nimcache:hi world" # ok
let a = "foo --nimcache:'hi world'" # BUG
# let a = "foo --nimcache:\"hi world\"" # ditto
let s = parseCmdLine(a)
check s == @["foo", "--nimcache:hi world"]
main()
```
### Current Output
fails
### Expected Output
works
### Example 2
this bug prevents passing empty arguments, eg `--batch:''` or `--nimcache:''` (and is the reason why `--threads: on` "works" by accident but shouldn't)
see ``` # BUG: with initOptParser, `--batch:'' all` interprets `all` as the argument of --batch` ```
(refs https://github.com/nim-lang/Nim/pull/14823/files#diff-0bc1750c146a1dfef2e41e4aa555789f310d35a44d7ee97abd35ef3fd128389bR5530)
### Example 3
root cause of https://github.com/nim-lang/Nim/issues/18077
### Possible Solution
We don't need to reproduce everything the shell does (which is complicated) but at least we should handle quoting correctly. If there is a valid use case to keep existing behavior, I suggest we add a new proc, but I doubt there's a valid use case.
### Additional Information
* 5fb40af57ef66618d2003dae1091d80e8f026a1c | 1.0 | parseCmdLine doesn't handle quoting correctly (and also prevents passing empty arguments) - parseCmdLine doesn't handle quoting correctly. IMO current behavior, even if "works as designed according to docs" is not useful and should be replaced by a more intuitive behavior.
### Example 1
```nim
import os, unittest
proc main()=
# let a = "foo '--nimcache:hi world" # ok
let a = "foo --nimcache:'hi world'" # BUG
# let a = "foo --nimcache:\"hi world\"" # ditto
let s = parseCmdLine(a)
check s == @["foo", "--nimcache:hi world"]
main()
```
### Current Output
fails
### Expected Output
works
### Example 2
this bug prevents passing empty arguments, eg `--batch:''` or `--nimcache:''` (and is the reason why `--threads: on` "works" by accident but shouldn't)
see ``` # BUG: with initOptParser, `--batch:'' all` interprets `all` as the argument of --batch` ```
(refs https://github.com/nim-lang/Nim/pull/14823/files#diff-0bc1750c146a1dfef2e41e4aa555789f310d35a44d7ee97abd35ef3fd128389bR5530)
### Example 3
root cause of https://github.com/nim-lang/Nim/issues/18077
### Possible Solution
We don't need to reproduce everything the shell does (which is complicated) but at least we should handle quoting correctly. If there is a valid use case to keep existing behavior, I suggest we add a new proc, but I doubt there's a valid use case.
### Additional Information
* 5fb40af57ef66618d2003dae1091d80e8f026a1c | priority | parsecmdline doesn t handle quoting correctly and also prevents passing empty arguments parsecmdline doesn t handle quoting correctly imo current behavior even if works as designed according to docs is not uesful and should be replaced by a more intuitive behavior example nim import os unittest proc main let a foo nimcache hi world ok let a foo nimcache hi world bug let a foo nimcache hi world ditto let s parsecmdline a check s main current output fails expected output works example this bug prevents passing empty arguments eg batch or nimcache and is the reason why threads on works by accident but shouldn t see bug with initoptparser batch all interprets all as the argument of batch refs example root cause of possible solution we don t need to reproduce everything the shell does which is complicated but at least we should handle quoting correctly if there is valid use case to keep existing behavior i suggest we add a new proc but i doubt there s a valid use case additional information | 1 |
428,161 | 12,403,989,681 | IssuesEvent | 2020-05-21 14:48:57 | department-of-veterans-affairs/caseflow | https://api.github.com/repos/department-of-veterans-affairs/caseflow | opened | Preserve Counsel notes written in ConfirmScheduleHearing Task | Priority: Medium Product: caseflow-hearings Stakeholder: BVA Team: Tango 💃 | ## Description
Preserve the notes in the tasks that Counsel writes in ConfirmScheduleHearing Task on Case timeline.
## Acceptance criteria
- [ ] Update task documentation
## Background/context/resources
Because notes don't persist in the task, the Hearing Management Branch team is copy/pasting counsel's comments into an Excel spreadsheet.
Notes and task history aren't captured in Case timeline, so when the task is cleared from the Hearing Management Branch queue the notes and history don't persist. While it's not such a big deal when it goes to be rescheduled, the problem is when it's canceled. None of those details are captured so when it goes back to Counsel they won't have their own history and it might come right back to us in a new ConfirmScheduleHearing Task.
## Technical notes
| 1.0 | Preserve Counsel notes written in ConfirmScheduleHearing Task - ## Description
Preserve the notes in the tasks that Counsel writes in ConfirmScheduleHearing Task on Case timeline.
## Acceptance criteria
- [ ] Update task documentation
## Background/context/resources
Because notes don't persist in the task, the Hearing Management Branch team is copy/pasting counsel's comments into an Excel spreadsheet.
Notes and task history aren't captured in Case timeline, so when the task is cleared from the Hearing Management Branch queue the notes and history don't persist. While it's not such a big deal when it goes to be rescheduled, the problem is when it's canceled. None of those details are captured so when it goes back to Counsel they won't have their own history and it might come right back to us in a new ConfirmScheduleHearing Task.
## Technical notes
| priority | preserve counsel notes written in confirmschedulehearing task description preserve the notes in the tasks that counsel writes in confirmschedulehearing task on case timeline acceptance criteria update task documentation background context resources because notes don t persist in the task the hearing management branch team is copy pasting counsel s comments into an excel spreadsheet notes and task history aren t captured in case timeline so when the task is cleared from the hearing management branch queue the notes and history don t persist while it s not such a big deal when goes to be rescheduled the problem is when it s canceled none of those details are captured so when it goes back to counsel they won t have their own history and it might come right back to us in a new confirmschedulehearing task technical notes | 1 |
323,107 | 9,842,900,825 | IssuesEvent | 2019-06-18 10:18:41 | OpenSRP/opensrp-client-native-form | https://api.github.com/repos/OpenSRP/opensrp-client-native-form | opened | Fix bugs in Checkbox widget | Priority: Medium bug |
- [ ] Use value attribute keys array to pre-select defaults
- [ ] Fix bug multiple select/deselect does not reset values | 1.0 | Fix bugs in Checkbox widget -
- [ ] Use value attribute keys array to pre-select defaults
- [ ] Fix bug multiple select/deselect does not reset values | priority | fix bugs in checkbox widget use value attribute keys array to pre select defaults fix bug multiple select deselect does not reset values | 1 |
208,252 | 7,137,559,135 | IssuesEvent | 2018-01-23 11:26:54 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | Error undefined is not a function (evaluating 'i(e)') shown when tapping more link in group chat | bug fix them all medium-priority to be automated | ### Description
*Type*: Bug
*Summary*: Having more than 4 member in a group chat I can go to settings and should be able to view all members in the chat by tapping `more` link. When I do that i receive an error `undefined is not a function (evaluating 'i(e)')`
#### Expected behavior
List of members can be viewed
#### Actual behavior

### Reproduction
- Open Status
- Create group chat with more than 4 members
- Open this group chat
- Tap chat avatar
- Tap settings
- Tap `more` link
### Additional Information
* Status version: release 0.9.13, also happens in last develop 0.9.11d598 (2155)
* Operating System: Android, iOS
* https://app.testfairy.com/projects/4803590-status/builds/7533374/sessions/6/?accessToken=-7E6IbEAUAi9fQOFRObSo/qeJ9o | 1.0 | Error undefined is not a function (evaluating 'i(e)') shown when tapping more link in group chat - ### Description
*Type*: Bug
*Summary*: Having more than 4 member in a group chat I can go to settings and should be able to view all members in the chat by tapping `more` link. When I do that i receive an error `undefined is not a function (evaluating 'i(e)')`
#### Expected behavior
List of members can be viewed
#### Actual behavior

### Reproduction
- Open Status
- Create group chat with more than 4 members
- Open this group chat
- Tap chat avatar
- Tap settings
- Tap `more` link
### Additional Information
* Status version: release 0.9.13, also happens in last develop 0.9.11d598 (2155)
* Operating System: Android, iOS
* https://app.testfairy.com/projects/4803590-status/builds/7533374/sessions/6/?accessToken=-7E6IbEAUAi9fQOFRObSo/qeJ9o | priority | error undefined is not a function evaluating i e shown when tapping more link in group chat description type bug summary having more than member in a group chat i can go to settings and should be able to view all members in the chat by tapping more link when i do that i receive an error undefined is not a function evaluating i e expected behavior list of members can be viewed actual behavior reproduction open status create goup chat with more than memers open this group chat tap chat avatar tap settings tap more link additional information status version release also happens in last develop operating system android ios | 1 |
798,225 | 28,240,553,249 | IssuesEvent | 2023-04-06 06:45:41 | AstarTeam/hufs_baekjoon_front | https://api.github.com/repos/AstarTeam/hufs_baekjoon_front | opened | [HOME] - Implement the problem list | ✨Feat 🚀API 🖐Priority: Medium | ## Additional feature description
[HOME] - Implement the problem list
- The main page shows problems not yet solved by HUFS students
- The problem list must show the problem number, title, difficulty, and number of challengers.
- When logged in, the problem list shows a column with my challenge status.
- Clicking a problem title opens that problem's page on the Baekjoon site.
- Clicking the "not solved yet" button → the status changes to "challenging" and the challenger count goes up.
- Clicking "challenging" → the status changes back to "not solved yet" and the challenger count goes down
<br/>
- [Sort] When logged in, show the problems the member is challenging sorted first.
- [Sort] When logged in, show the problems the member has not solved sorted first.
- [Sort] When the "challenging" or "unsolved" sort is selected without login, show a warning that login is required.
- [Sort] Show the problems sorted by difficulty.
- [Sort] Show the problems sorted by number of challengers.
<br/>
- Problem list pagination
## To do
- [ ] Implement the problem list table UI
- [ ] Implement the select box component
- [ ] Implement the "challenging" / "not solved yet" button components
- [ ] Implement the pagination button component
<br/>
- [ ] Fetch the problem list from the API
- [ ] Show or hide the challenge status depending on login state
- [ ] Increase the HUFS challenger count when the "challenging" button is clicked
- [ ] Go to the Baekjoon link when a specific problem is selected
- [ ] Sort the problem list
- [ ] Implement problem list pagination
## ETC
 | 1.0 | [HOME] - Implement the problem list - ## Additional feature description
[HOME] - Implement the problem list
- The main page shows problems not yet solved by HUFS students
- The problem list must show the problem number, title, difficulty, and number of challengers.
- When logged in, the problem list shows a column with my challenge status.
- Clicking a problem title opens that problem's page on the Baekjoon site.
- Clicking the "not solved yet" button → the status changes to "challenging" and the challenger count goes up.
- Clicking "challenging" → the status changes back to "not solved yet" and the challenger count goes down
<br/>
- [Sort] When logged in, show the problems the member is challenging sorted first.
- [Sort] When logged in, show the problems the member has not solved sorted first.
- [Sort] When the "challenging" or "unsolved" sort is selected without login, show a warning that login is required.
- [Sort] Show the problems sorted by difficulty.
- [Sort] Show the problems sorted by number of challengers.
<br/>
- Problem list pagination
## To do
- [ ] Implement the problem list table UI
- [ ] Implement the select box component
- [ ] Implement the "challenging" / "not solved yet" button components
- [ ] Implement the pagination button component
<br/>
- [ ] Fetch the problem list from the API
- [ ] Show or hide the challenge status depending on login state
- [ ] Increase the HUFS challenger count when the "challenging" button is clicked
- [ ] Go to the Baekjoon link when a specific problem is selected
- [ ] Sort the problem list
- [ ] Implement problem list pagination
## ETC
 | priority | implement problem list additional feature description implement problem list the main page shows problems not yet solved by hufs students the problem list must show the problem number title difficulty and number of challengers when logged in the problem list shows a column with my challenge status clicking a problem title opens that problem s page on the baekjoon site clicking the not solved yet button → the status changes to challenging and the challenger count goes up clicking challenging → the status changes back to not solved yet and the challenger count goes down when logged in show the problems the member is challenging sorted first when logged in show the problems the member has not solved sorted first when the challenging or unsolved sort is selected without login show a warning that login is required show the problems sorted by difficulty show the problems sorted by number of challengers problem list pagination to do implement the problem list table ui implement the select box component implement the challenging not solved yet button components implement the pagination button component fetch the problem list from the api show or hide the challenge status depending on login state increase the hufs challenger count when the challenging button is clicked go to the baekjoon link when a specific problem is selected sort the problem list implement problem list pagination etc | 1 |
321,355 | 9,797,514,255 | IssuesEvent | 2019-06-11 10:07:03 | wso2/product-is | https://api.github.com/repos/wso2/product-is | opened | Hard to find the Mgt Console URL since some other log print after that | Affected/5.4.0 Complexity/Medium Priority/Normal Severity/Major Type/Improvement | Hard to find the Mgt Console URL since some other log print after that. IMO Mgt Console URL should be the last line.
Moved from https://wso2.org/jira/browse/IDENTITY-7194 | 1.0 | Hard to find the Mgt Console URL since some other log print after that - Hard to find the Mgt Console URL since some other log print after that. IMO Mgt Console URL should be the last line.
Moved from https://wso2.org/jira/browse/IDENTITY-7194 | priority | hard to find the mgt console url since some other log print after that hard to find the mgt console url since some other log print after that imo mgt console url should be the last line moved from | 1 |
403,029 | 11,834,683,662 | IssuesEvent | 2020-03-23 09:21:09 | geosolutions-it/ogc-testbed | https://api.github.com/repos/geosolutions-it/ogc-testbed | opened | [Tiles API] syria_vtp layer group is working as WMS but not with tiles API | Priority: Medium VTP2 bug | syria_vtp WMS
http://vtp2.geo-solutions.it/geoserver/syria_vtp/wms?service=WMS&version=1.1.0&request=GetMap&layers=syria_vtp%3Asyria_vtp&bbox=-180.0%2C-90.0%2C180.0%2C90.0&width=768&height=384&srs=EPSG%3A4326&format=application/openlayers
syria_vtp Tiles API
https://vtp2.geo-solutions.it/geoserver/ogc/tiles/collections/syria_vtp%3Asyria_vtp/tiles/WebMercatorQuad/9/204/309?f=application%2Fvnd.mapbox-vector-tile
```
{
"code": "NoApplicableCode",
"description": "Problem communicating with GeoServer\nExpected: RenderedImageMap, got null"
}
```
| 1.0 | [Tiles API] syria_vtp layer group is working as WMS but not with tiles API - syria_vtp WMS
http://vtp2.geo-solutions.it/geoserver/syria_vtp/wms?service=WMS&version=1.1.0&request=GetMap&layers=syria_vtp%3Asyria_vtp&bbox=-180.0%2C-90.0%2C180.0%2C90.0&width=768&height=384&srs=EPSG%3A4326&format=application/openlayers
syria_vtp Tiles API
https://vtp2.geo-solutions.it/geoserver/ogc/tiles/collections/syria_vtp%3Asyria_vtp/tiles/WebMercatorQuad/9/204/309?f=application%2Fvnd.mapbox-vector-tile
```
{
"code": "NoApplicableCode",
"description": "Problem communicating with GeoServer\nExpected: RenderedImageMap, got null"
}
```
| priority | syria vtp layer group is working as wms but not with tiles api syria vtp wms syria vtp tiles api code noapplicablecode description problem communicating with geoserver nexpected renderedimagemap got null | 1 |
828,233 | 31,817,414,784 | IssuesEvent | 2023-09-13 21:43:30 | LBPUnion/ProjectLighthouse | https://api.github.com/repos/LBPUnion/ProjectLighthouse | closed | Expired cases don't automatically get dismissed | bug priority:medium server:website | Created from a CrashHelper report submitted by @vilijur:
```
When you ban a user for a set amount of time, a case is made, and the user should be banned
until the case expires, however the user remains banned despite this.
An expired case should automatically be dismissed, so a server moderator or admin doesn't
need to manually dismiss it. This has happened on Beacon with case #94. Their case is
expired, yet the user is still banned and unable to use Beacon. This causes an inconvenience
to both the user and the moderator/admin who has to manually dismiss the case.
What should happen is that once the case expires, it should be dismissed. Without automatic
dismissal, all punishments are essentially permanent until someone dismisses it.
Expired cases don't automatically get dismissed - Created from a CrashHelper report submitted by @vilijur:
```
When you ban a user for a set amount of time, a case is made, and the user should be banned
until the case expires, however the user remains banned despite this.
An expired case should automatically be dismissed, so a server moderator or admin doesn't
need to manually dismiss it. This has happened on Beacon with case #94. Their case is
expired, yet the user is still banned and unable to use Beacon. This causes an inconvenience
to both the user and the moderator/admin who has to manually dismiss the case.
What should happen is that once the case expires, it should be dismissed. Without automatic
dismissal, all punishments are essentially permanent until someone dismisses it.
``` | priority | expired cased don t automatically get dismissed created from a crashhelper report submitted by vilijur when you ban a user for a set amount of time a case is made and the user should be banned until the case expires however the user remains banned despite this an expired case should automatically be dismissed so a server moderator or admin doesn t need to automatically dismiss it this has happened on beacon with case their case is expired yet the user is still banned and unable to use beacon this causes an inconvenience to both the user and the moderator admin who has to manually dismiss the case what should happen is that once the case expires it should be dismissed without automatic dismissal all punishments are essentially permanent until someone dismisses it | 1 |
399,418 | 11,748,056,512 | IssuesEvent | 2020-03-12 14:37:39 | oslc-op/oslc-specs | https://api.github.com/repos/oslc-op/oslc-specs | closed | Behaviour if oslc.orderBy specifies an unsupported ordering property | Core: Query Priority: Medium Xtra: Jira | Some implementations might not support sorting by all queryable properties. If a client specifies a property that is not supported for sorting, the spec should define the behaviour.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-149 (opened by @DavidJHoney; previously assigned to @jamsden)_
| 1.0 | Behaviour if oslc.orderBy specifies an unsupported ordering property - Some implementations might not support sorting by all queryable properties. If a client specifies a property that is not supported for sorting, the spec should define the behaviour.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-149 (opened by @DavidJHoney; previously assigned to @jamsden)_
| priority | behaviour if oslc orderby specifies an unsupported ordering property some implementations might not support sorting by all queryable properties if a client specifies a property that is not supported for sorting the spec should define the behaviour migrated from opened by davidjhoney previously assigned to jamsden | 1 |
284,420 | 8,738,258,537 | IssuesEvent | 2018-12-12 02:14:51 | uvasomrc/ithriv | https://api.github.com/repos/uvasomrc/ithriv | closed | Search error | 1 - Medium Priority | When opening the search window the following error appears in the console:
ERROR Error: "ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: 'mat-form-field-should-float: false'. Current value: 'mat-form-field-should-float: true'."
While this isn't an error that affects the user experience at this point, it would be good to fix.
| 1.0 | Search error - When opening the search window the following error appears in the console:
ERROR Error: "ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: 'mat-form-field-should-float: false'. Current value: 'mat-form-field-should-float: true'."
While this isn't an error that affects the user experience at this point, it would be good to fix.
| priority | search error when opening the search window the following error appears in the console error error expressionchangedafterithasbeencheckederror expression has changed after it was checked previous value mat form field should float false current value mat form field should float true while this isn t an error that effects the user experience at this point it would be good to fix | 1 |
786,604 | 27,660,194,912 | IssuesEvent | 2023-03-12 12:39:25 | dgd03146/anabada-refactoring | https://api.github.com/repos/dgd03146/anabada-refactoring | opened | Migration of notification components to TypeScript with Next.js | Status: In Progress Priority: Medium Type: Refactor/Function | ## Description
**A clear and concise description of what the problem is.**
The goal of this issue is to migrate notification components to TypeScript with Next.js. This will involve converting existing JavaScript code to TypeScript, which will help catch type errors and improve the maintainability of the codebase. Additionally, I will separate the React-Query hook for separate concerns and to reduce complicated code.
## To-do
**A clear and concise description of what you have to do.**
- [ ] Convert existing JavaScript code to TypeScript
- [ ] Separate React-Query hook for separate concerns
- [ ] Refactor code to reduce complexity
## Additional context
**Add any other context or screenshots about the feature request here.**
## Describe alternatives you've considered
**A clear and concise description of any alternative solutions or features you've considered.**
| 1.0 | Migration of notification components to TypeScript with Next.js - ## Description
**A clear and concise description of what the problem is.**
The goal of this issue is to migrate notification components to TypeScript with Next.js. This will involve converting existing JavaScript code to TypeScript, which will help catch type errors and improve the maintainability of the codebase. Additionally, I will separate the React-Query hook for separate concerns and to reduce complicated code.
## To-do
**A clear and concise description of what you have to do.**
- [ ] Convert existing JavaScript code to TypeScript
- [ ] Separate React-Query hook for separate concerns
- [ ] Refactor code to reduce complexity
## Additional context
**Add any other context or screenshots about the feature request here.**
## Describe alternatives you've considered
**A clear and concise description of any alternative solutions or features you've considered.**
| priority | migration of notification components to typescript with next js description a clear and concise description of what the problem is the goal of this issue is to migrate notification components to typescript with next js this will involve converting existing javascript code to typescript which will help catch type errors and improve the maintainability of the codebase additionally i will separate the react query hook for separate concerns and to reduce complicated code to do a clear and concise description of what you have to do convert existing javascript code to typescript separate react query hook for separate concerns refactor code to reduce complexity additional context add any other context or screenshots about the feature request here describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered | 1 |
213,107 | 7,246,059,539 | IssuesEvent | 2018-02-14 20:15:16 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [craftercms] Add new gradle task to run the scripts on src/test-suite for pretest check out | new feature priority: medium | ### Request
The new task should execute the following scripts on src/test-suite (according to the platform where it will be executed):
-initialSetupCheckForTestScript.sh
-initialSetupCheckForTestScript.bat
### How it would work
* Go to craftercms folder and we need to execute the pretest check (environment setup check) before the execution of automated test framework (**here we need new task of gradle**)
Note:
According to the following ticket, we need the pre-test task:
https://github.com/craftercms/craftercms/issues/1431 | 1.0 | [craftercms] Add new gradle task to run the scripts on src/test-suite for pretest check out - ### Request
The new task should execute the following scripts on src/test-suite (according to the platform where it will be executed):
-initialSetupCheckForTestScript.sh
-initialSetupCheckForTestScript.bat
### How it would work
* Go to craftercms folder and we need to execute the pretest check (environment setup check) before the execution of automated test framework (**here we need new task of gradle**)
Note:
According to the following ticket, we need the pre-test task:
https://github.com/craftercms/craftercms/issues/1431 | priority | add new gradle task to run the scripts on src test suite for pretest check out request the new task should execute the next scripts on src test suite according to with the platform where it will be executed initialsetupcheckfortestscript sh initialsetupcheckfortestscript bat how it would work go to craftercms folder and we need to execute the pretest check environment setup check before the execution of automated test framework here we need new task of gradle note according to with the following ticket we need the pre test task | 1 |
143,374 | 5,515,124,257 | IssuesEvent | 2017-03-17 16:40:09 | music-encoding/sibmei | https://api.github.com/repos/music-encoding/sibmei | closed | problem with grace note groupings | Priority: Medium Status: Needs Patch | This is a problem that only comes up if there are multiple notes with grace notes in a single bar.
If the first grace note in the bar is a single grace note, it works fine, but all the remaining grace notes, no matter what their original grouping was, get grouped together (with the right pitches, duration and order) onto the second note that contains a grace note.
However, if the first grace note is a group of grace notes, all the rest in the bar appear on that first note right away.


| 1.0 | problem with grace note groupings - This is a problem that only comes up if there are multiple notes with grace notes in a single bar.
If the first grace note in the bar is a single grace note, it works fine, but all the remaining grace notes, no matter what their original grouping was, get grouped together (with the right pitches, duration and order) onto the second note that contains a grace note.
However, if the first grace note is a group of grace notes, all the rest in the bar appear on that first note right away.


| priority | problem with grace note groupings this is a problem that only comes up if there are multiple notes with grace notes in a single bar if the first grace note in the bar is a single grace note it works fine but all the remaining grace notes no matter what their original grouping was get grouped together with the right pitches duration and order onto the second note that contains a grace note however if the first grace note is a group of grace notes all the rest in the bar appear on that first note right away | 1 |
324,509 | 9,904,821,075 | IssuesEvent | 2019-06-27 10:01:42 | EyeSeeTea/pictureapp | https://api.github.com/repos/EyeSeeTea/pictureapp | closed | Jumping between questions is not consistent (next drop downs don't open) | complexity - high (5+hr) eReferrals priority - medium type - maintenance | - [ ] Make jumping between questions consistent (for dropdowns and autocomplete open the selection as for date pickers.
- [ ] Add setting to disable jumping | 1.0 | Jumping between questions is not consistent (next drop downs don't open) - - [ ] Make jumping between questions consistent (for dropdowns and autocomplete, open the selection as for date pickers).
- [ ] Add setting to disable jumping | priority | jumping between questions is not consistent next drop downs don t open make jumping between questions consistent for dropdowns and autocomplete open the selection as for date pickers add setting to disable jumping | 1 |
272,519 | 8,514,454,470 | IssuesEvent | 2018-10-31 18:36:51 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | opened | Add translation for DNC | Category: Core Category: Translation Priority: Medium Status: In Progress Type: Feature Type: Task in progress | Add the ability to translate to/from DNC.
NOTE: We ARE NOT exporting VPF.
1) Update the DNC importer
2) Add Export | 1.0 | Add translation for DNC - Add the ability to translate to/from DNC.
NOTE: We ARE NOT exporting VPF.
1) Update the DNC importer
2) Add Export | priority | add translation for dnc add the ability to translate to from dnc note we are not exporting vpf update the dnc importer add export | 1 |
450,835 | 13,020,347,057 | IssuesEvent | 2020-07-27 02:46:53 | kubesphere/console | https://api.github.com/repos/kubesphere/console | closed | bugs of creating credentials for devops project | kind/bug kind/need-to-verify priority/medium | **Describe the bug**
1. No prompt when credential ID repeats
1. No prompt when credential ID contains Chinese or special characters
**Versions used(KubeSphere/Kubernetes)**
KubeSphere(2020-07-25)
/kind bug
/assign @harrisonliu5 | 1.0 | bugs of creating credentials for devops project - **Describe the bug**
1. No prompt when credential ID repeats
1. No prompt when credential ID contains Chinese or special characters
**Versions used(KubeSphere/Kubernetes)**
KubeSphere(2020-07-25)
/kind bug
/assign @harrisonliu5 | priority | bugs of creating credentials for devops project describe the bug no promt when credential id repeats no promt when credential id is chinese or special characters versions used kubesphere kubernetes kubesphere kind bug assign | 1 |
337,177 | 10,211,538,629 | IssuesEvent | 2019-08-14 17:11:07 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | stack_sentinel: rare ASSERTION FAIL [!(z_arch_curr_cpu()->nested != 0U)] @ ZEPHYR_BASE/kernel/thread.c:429 Threads may not be created in ISRs | area: Kernel bug priority: medium | **Describe the bug**
Intermittent failure of sanitycheck test:
```
qemu_xtensa tests/kernel/fatal/kernel.common.stack_sentinel
```
Spotted only once. Generally doesn't fail. The previous run was in the exact same conditions and didn't fail.
**To Reproduce**
Steps to reproduce the behavior:
1. Run this test in sanitycheck many times. On a loaded machine maybe?
2. Pray
**Expected behavior**
PASS
**Impact**
Low.
**Environment (please complete the following information):**
- OS: Ubuntu 18.0.4.2
- Toolchain Zephyr SDK 0.10.0
- Commit SHA or Version used: b58aa20e135a
**Additional context**
JOBS=50
**Screenshots or console output**
```
qemu_xtensa tests/kernel/fatal/kernel.common.stack_sentinel FAILED: unexpected byte (qemu)`
***** Booting Zephyr build zephyr-v1.14.0-1527-g565bf3263d3b *****
Running test suite fatal
===================================================================
starting test - test_fatal
test alt thread 1: generic CPU exception
** FATAL EXCEPTION
** CPU 0 EXCCAUSE 0 PS 0x00060020 PC 0x600044c0 VADDR 0x00000000
** A0 0xa0001a72 SP 0x600044c0 A2 0x00000000 A3 0x00000000
** A4 0x00000000 A5 0x60004b10 A6 0x00000000 A7 0x00060520
** A8 0xa0000f68 A9 0x00000040 A10 0x00000000 A11 0x60004500
** A12 0x60004b10 A13 0x60004af0 A14 0xfffffff5 A15 0x00060520
** SAR 0x00000000
Current thread ID = 0x60006088
Faulting instruction address = 0xdeaddead
Caught system error -- reason 0
test alt thread 2: initiate kernel oops
ASSERTION FAIL [!(z_arch_curr_cpu()->nested != 0U)] @ ZEPHYR_BASE/kernel/thread.c:429
Threads may not be created in ISRs
@ ZEPHYR_BASE/lib/os/assert.c:30:
***** Kernel Panic! *****
Current thread ID = 0x60006104
Faulting instruction address = 0xdeaddead
Caught system error -- reason 6
Assertion failed at ZEPHYR_BASE/tests/kernel/fatal/src/main.c:247: test_fatal: (crash_reason not equal to _NANO_ERR_KERNEL_OOPS)
bad reason code got 6 expected 5
test alt thread 3: initiate kernel panic
ASSERTION FAIL [!(z_arch_curr_cpu()->nested != 0U)] @ ZEPHYR_BASE/kernel/thread.c:429
Threads may not be created in ISRs
@ ZEPHYR_BASE/lib/os/assert.c:30:
***** Kernel Panic! *****
Current thread ID = 0x60006104
Faulting instruction address = 0xdeaddead
Caught system error -- reason 6
test stack sentinel overflow - timer irq
ASSERTION FAIL [!(z_arch_curr_cpu()->nested != 0U)] @ ZEPHYR_BASE/kernel/thread.c:429
Threads may not be created in ISRs
@ ZEPHYR_BASE/lib/os/assert.c:30:
***** Kernel Panic! *****
Current thread ID = 0x60006104
Faulting instruction address = 0xdeaddead
Caught system error -- reason 6
Assertion failed at ZEPHYR_BASE/tests/kernel/fatal/src/main.c:197: check_stack_overflow: (crash_reason not equal to _NANO_ERR_STACK_CHK_FAIL)
bad reason code got 6 expected 2
FAIL - test_fatal
===================================================================
Test suite fatal failed.
===================================================================
PROJECT EXECUTION FAILED
```
| 1.0 | stack_sentinel: rare ASSERTION FAIL [!(z_arch_curr_cpu()->nested != 0U)] @ ZEPHYR_BASE/kernel/thread.c:429 Threads may not be created in ISRs - **Describe the bug**
Intermittent failure of sanitycheck test:
```
qemu_xtensa tests/kernel/fatal/kernel.common.stack_sentinel
```
Spotted only once. Generally doesn't fail. The previous run was in the exact same conditions and didn't fail.
**To Reproduce**
Steps to reproduce the behavior:
1. Run this test in sanitycheck many times. On a loaded machine maybe?
2. Pray
**Expected behavior**
PASS
**Impact**
Low.
**Environment (please complete the following information):**
- OS: Ubuntu 18.0.4.2
- Toolchain Zephyr SDK 0.10.0
- Commit SHA or Version used: b58aa20e135a
**Additional context**
JOBS=50
**Screenshots or console output**
```
qemu_xtensa tests/kernel/fatal/kernel.common.stack_sentinel FAILED: unexpected byte (qemu)`
***** Booting Zephyr build zephyr-v1.14.0-1527-g565bf3263d3b *****
Running test suite fatal
===================================================================
starting test - test_fatal
test alt thread 1: generic CPU exception
** FATAL EXCEPTION
** CPU 0 EXCCAUSE 0 PS 0x00060020 PC 0x600044c0 VADDR 0x00000000
** A0 0xa0001a72 SP 0x600044c0 A2 0x00000000 A3 0x00000000
** A4 0x00000000 A5 0x60004b10 A6 0x00000000 A7 0x00060520
** A8 0xa0000f68 A9 0x00000040 A10 0x00000000 A11 0x60004500
** A12 0x60004b10 A13 0x60004af0 A14 0xfffffff5 A15 0x00060520
** SAR 0x00000000
Current thread ID = 0x60006088
Faulting instruction address = 0xdeaddead
Caught system error -- reason 0
test alt thread 2: initiate kernel oops
ASSERTION FAIL [!(z_arch_curr_cpu()->nested != 0U)] @ ZEPHYR_BASE/kernel/thread.c:429
Threads may not be created in ISRs
@ ZEPHYR_BASE/lib/os/assert.c:30:
***** Kernel Panic! *****
Current thread ID = 0x60006104
Faulting instruction address = 0xdeaddead
Caught system error -- reason 6
Assertion failed at ZEPHYR_BASE/tests/kernel/fatal/src/main.c:247: test_fatal: (crash_reason not equal to _NANO_ERR_KERNEL_OOPS)
bad reason code got 6 expected 5
test alt thread 3: initiate kernel panic
ASSERTION FAIL [!(z_arch_curr_cpu()->nested != 0U)] @ ZEPHYR_BASE/kernel/thread.c:429
Threads may not be created in ISRs
@ ZEPHYR_BASE/lib/os/assert.c:30:
***** Kernel Panic! *****
Current thread ID = 0x60006104
Faulting instruction address = 0xdeaddead
Caught system error -- reason 6
test stack sentinel overflow - timer irq
ASSERTION FAIL [!(z_arch_curr_cpu()->nested != 0U)] @ ZEPHYR_BASE/kernel/thread.c:429
Threads may not be created in ISRs
@ ZEPHYR_BASE/lib/os/assert.c:30:
***** Kernel Panic! *****
Current thread ID = 0x60006104
Faulting instruction address = 0xdeaddead
Caught system error -- reason 6
Assertion failed at ZEPHYR_BASE/tests/kernel/fatal/src/main.c:197: check_stack_overflow: (crash_reason not equal to _NANO_ERR_STACK_CHK_FAIL)
bad reason code got 6 expected 2
FAIL - test_fatal
===================================================================
Test suite fatal failed.
===================================================================
PROJECT EXECUTION FAILED
```
| priority | stack sentinel rare assertion fail zephyr base kernel thread c threads may not be created in isrs describe the bug intermittent failure of sanitycheck test qemu xtensa tests kernel fatal kernel common stack sentinel spotted only once generally doesn t fail the previous run was in the exact same conditions and didn t fail to reproduce steps to reproduce the behavior run this test in sanitycheck many times on a loaded machine maybe pray expected behavior pass impact low environment please complete the following information os ubuntu toolchain zephyr sdk commit sha or version used additional context jobs screenshots or console output qemu xtensa tests kernel fatal kernel common stack sentinel failed unexpected byte qemu booting zephyr build zephyr running test suite fatal starting test test fatal test alt thread generic cpu exception fatal exception cpu exccause ps pc vaddr sp sar current thread id faulting instruction address caught system error reason test alt thread initiate kernel oops assertion fail zephyr base kernel thread c threads may not be created in isrs zephyr base lib os assert c kernel panic current thread id faulting instruction address caught system error reason assertion failed at zephyr base tests kernel fatal src main c test fatal crash reason not equal to nano err kernel oops bad reason code got expected test alt thread initiate kernel panic assertion fail zephyr base kernel thread c threads may not be created in isrs zephyr base lib os assert c kernel panic current thread id faulting instruction address caught system error reason test stack sentinel overflow timer irq assertion fail zephyr base kernel thread c threads may not be created in isrs zephyr base lib os assert c kernel panic current thread id faulting instruction address caught system error reason assertion failed at zephyr base tests kernel fatal src main c check stack overflow crash reason not equal to nano err stack chk fail bad reason code got expected fail test fatal test suite fatal failed project execution failed | 1 |
384,534 | 11,394,417,547 | IssuesEvent | 2020-01-30 09:19:01 | input-output-hk/ouroboros-network | https://api.github.com/repos/input-output-hk/ouroboros-network | opened | Immutable DB validation: warn user when about to truncate a large part of the DB | consensus daedalus immutable db priority medium | When the validation finds an invalid block early in the immutable DB (either due to data corruption or due to clock changes, #1531), it might be nice to ask the user before deleting a large part of the database.
Note that this will need to be done in such a way that Daedalus can present this choice to the user.
Setting to medium priority because although the _real_ causes for such a truncation (wrong clock, data corruption) would be very rare, if we decide to truncate due to a _bug_ in our code, it would be good if the user gets a warning ahead of time. | 1.0 | Immutable DB validation: warn user when about to truncate a large part of the DB - When the validation finds an invalid block early in the immutable DB (either due to data corruption or due to clock changes, #1531), it might be nice to ask the user before deleting a large part of the database.
Note that this will need to be done in such a way that Daedalus can present this choice to the user.
Setting to medium priority because although the _real_ causes for such a truncation (wrong clock, data corruption) would be very rare, if we decide to truncate due to a _bug_ in our code, it would be good if the user gets a warning ahead of time. | priority | immutable db validation warn user when about to truncate a large part of the db when the validation finds an invalid block early in the immutable db either due to data corruption or due to clock changes it might be nice to ask the user before deleting a large part of the database note that this will need to be done in such a way that daedalus can present this choice to the user setting to medium priority because although the real causes for such a truncation wrong clock data corruption would be very rare if we decide to truncate due to a bug in our code it would be good if the user gets a warning ahead of time | 1 |
256,199 | 8,127,031,703 | IssuesEvent | 2018-08-17 06:15:31 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Inverse ghost zone operator should have an option to request ghost zones. | Expected Use: 3 - Occasional Feature Impact: 3 - Medium Priority: Normal Support Group: DOE/ASC | cq-id: VisIt00008899
cq-submitter: Cyrus Harrison
cq-submit-date: 03/06/09
This option should be enabled by default & would be very useful when debugging ghost zone issues.
For example: With pseudocolor plots you can force ghost zones by recentering a zonal quantity, but it's impossible to create a label plot of the ghost zones using zone id b/c it's a zonal quantity that doesn't require ghost zones to calculate.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 122
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Inverse ghost zone operator should have an option to request ghost zones.
Assigned to: Cyrus Harrison
Category:
Target version: 2.1
Author: Cyrus Harrison
Start:
Due date:
% Done: 0
Estimated time:
Created: 06/21/2010 07:16 pm
Updated: 07/08/2010 04:25 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: DOE/ASC
Description:
cq-id: VisIt00008899
cq-submitter: Cyrus Harrison
cq-submit-date: 03/06/09
This option should be enabled by default & would be very useful when debugging ghost zone issues.
For example: With pseudocolor plots you can force ghost zones by recentering a zonal quantity, but it's impossible to create a label plot of the ghost zones using zone id b/c it's a zonal quantity that doesn't require ghost zones to calculate.
Comments:
Assignment from LLNL team meeting.
| 1.0 | Inverse ghost zone operator should have an option to request ghost zones. - cq-id: VisIt00008899
cq-submitter: Cyrus Harrison
cq-submit-date: 03/06/09
This option should be enabled by default & would be very useful when debugging ghost zone issues.
For example: With pseudocolor plots you can force ghost zones by recentering a zonal quantity, but it's impossible to create a label plot of the ghost zones using zone id b/c it's a zonal quantity that doesn't require ghost zones to calculate.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 122
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Inverse ghost zone operator should have an option to request ghost zones.
Assigned to: Cyrus Harrison
Category:
Target version: 2.1
Author: Cyrus Harrison
Start:
Due date:
% Done: 0
Estimated time:
Created: 06/21/2010 07:16 pm
Updated: 07/08/2010 04:25 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: DOE/ASC
Description:
cq-id: VisIt00008899
cq-submitter: Cyrus Harrison
cq-submit-date: 03/06/09
This option should be enabled by default & would be very useful when debugging ghost zone issues.
For example: With pseudocolor plots you can force ghost zones by recentering a zonal quantity, but it's impossible to create a label plot of the ghost zones using zone id b/c it's a zonal quantity that doesn't require ghost zones to calculate.
Comments:
Assignment from LLNL team meeting.
| priority | inverse ghost zone operator should have an option to request ghost zones cq id cq submitter cyrus harrison cq submit date this option should be enabled by default would be very useful when debugging ghost zone issues for example with pseudocolor plots you can force ghost zones by recentering a zonal quantity but it s impossible to create a label plot of the ghost zones using zone id b c it s a zonal quantity that doesn t require ghost zones to calculate redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker feature priority normal subject inverse ghost zone operator should have an option to request ghost zones assigned to cyrus harrison category target version author cyrus harrison start due date done estimated time created pm updated pm likelihood severity found in version impact medium expected use occasional os all support group doe asc description cq id cq submitter cyrus harrison cq submit date this option should be enabled by default would be very useful when debugging ghost zone issues for example with pseudocolor plots you can force ghost zones by recentering a zonal quantity but it s impossible to create a label plot of the ghost zones using zone id b c it s a zonal quantity that doesn t require ghost zones to calculate comments assignment from llnl team meeting | 1 |
55,531 | 3,073,643,643 | IssuesEvent | 2015-08-19 23:19:11 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | Unable to see "Android Project" as option under File Menu | bug imported invalid Priority-Medium | _From [rajasekh...@gmail.com](https://code.google.com/u/109241739080889652865/) on April 17, 2013 11:23:18_
What steps will reproduce the problem? 1.Setup Android environment as explained in the tutorial
2.Click on File menu, select New and click on the Others,
3.From New window, Drag down to Android option, expand it, and i'm unable to see "Android Project" as option What is the expected output? What do you see instead? I should see "Android Project" as option What version of the product are you using? On what operating system? ECLIPSE Classic, Android SDK tools Rev 21.1, Android SDK Platform-tools Rev 16.0.2 Please provide any additional information below. I have created a workspace in C: drive and my android SDK tools is in C;\Program Files\Android JDK 7 is installed
**Attachment:** [Android_Issue.png](http://code.google.com/p/robotium/issues/detail?id=439)
_Original issue: http://code.google.com/p/robotium/issues/detail?id=439_ | 1.0 | Unable to see "Android Project" as option under File Menu - _From [rajasekh...@gmail.com](https://code.google.com/u/109241739080889652865/) on April 17, 2013 11:23:18_
What steps will reproduce the problem? 1.Setup Android environment as explained in the tutorial
2.Click on File menu, select New and click on the Others,
3.From New window, Drag down to Android option, expand it, and i'm unable to see "Android Project" as option What is the expected output? What do you see instead? I should see "Android Project" as option What version of the product are you using? On what operating system? ECLIPSE Classic, Android SDK tools Rev 21.1, Android SDK Platform-tools Rev 16.0.2 Please provide any additional information below. I have created a workspace in C: drive and my android SDK tools is in C;\Program Files\Android JDK 7 is installed
**Attachment:** [Android_Issue.png](http://code.google.com/p/robotium/issues/detail?id=439)
_Original issue: http://code.google.com/p/robotium/issues/detail?id=439_ | priority | unable to see android project as option under file menu from on april what steps will reproduce the problem setup android environment as explained in the tutorial click on file menu select new and click on the others from new window drag down to android option expand it and i m unable to see android project as option what is the expected output what do you see instead i should see android project as option what version of the product are you using on what operating system eclipse classic android sdk tools rev android sdk platform tools rev please provide any additional information below i have created a workspace in c drive and my android sdk tools is in c program files android jdk is installed attachment original issue | 1 |
249,195 | 7,954,070,700 | IssuesEvent | 2018-07-12 05:48:21 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Server listing masks servers with sub-40ms ping. | Medium Priority | My server is hosted in MS Azure US East and my users in the USA consistently have sub-30ms ping times when logged in. [](See: https://imgur.com/ARp1nxz)
Unfortunately, I end up getting many international users and not many people from US East because the server join screen masks servers with a sub-40ms ping. This is not a bad thing, but my server is very well populated and I'd like more users - and because it has a sub-40ms ping, it's not being shown. [](See: https://imgur.com/OXcwtnC) - server not showing at top of list of server; but it is in my favorites with a sub-40ms ping time. [](See: https://imgur.com/BTAwiIm)
I don't understand why the server listing page does this, but I feel like it's adversely affecting my server population during core gaming times.
| 1.0 | Server listing masks servers with sub-40ms ping. - My server is hosted in MS Azure US East and my users in the USA consistently have sub-30ms ping times when logged in. [](See: https://imgur.com/ARp1nxz)
Unfortunately, I end up getting many international users and not many people from US East because the server join screen masks servers with a sub-40ms ping. This is not a bad thing, but my server is very well populated and I'd like more users - and because it has a sub-40ms ping, it's not being shown. [](See: https://imgur.com/OXcwtnC) - server not showing at top of list of server; but it is in my favorites with a sub-40ms ping time. [](See: https://imgur.com/BTAwiIm)
I don't understand why the server listing page does this, but I feel like it's adversely affecting my server population during core gaming times.
| priority | server listing masks servers with sub ping my server is hosted in ms azure us east and my users in the usa consistently have sub ping times when logged in see unfortunately i end up getting many international users and not many people from us east because the server join screen masks servers with a sub ping this is not a bad thing but my server is very well populated and i d like more users and because it has a sub ping it s not being shown see server not showing at top of list of server but it is in my favorites with a sub ping time see i don t understand why the server listing page does this but i feel like it s adversely affecting my server population during core gaming times | 1 |
215,052 | 7,286,228,109 | IssuesEvent | 2018-02-23 08:55:44 | dirkwhoffmann/virtualc64 | https://api.github.com/repos/dirkwhoffmann/virtualc64 | closed | Keyboard issue (Commodore key, CTRL key) | Priority-Medium bug | A couple requests, if I may:
• Option on my keyboard works like the Commodore key SOMETIMES, as in with other keys (letters) to make graphics characters. It does not, however, work with shift to change the characters to lowercase, and also does not change 56321 to value 223. Wondering if these are coming, because the menu item under keyboard for Commodore DOES do that for a moment, so having a key for this would be great.
• Same for the Control key. It works with many keys to change colors and other items, but does not fully function – 56321 doesn't change and you can't slow down a listing with it. | 1.0 | Keyboard issue (Commodore key, CTRL key) - A couple requests, if I may:
• Option on my keyboard works like the Commodore key SOMETIMES, as in with other keys (letters) to make graphics characters. It does not, however, work with shift to change the characters to lowercase, and also does not change 56321 to value 223. Wondering if these are coming, because the menu item under keyboard for Commodore DOES do that for a moment, so having a key for this would be great.
• Same for the Control key. It works with many keys to change colors and other items, but does not fully function – 56321 doesn't change and you can't slow down a listing with it. | priority | keyboard issue commodore key ctrl key a couple requests if i may • option on my keyboard works like the commodore key sometimes as in with other keys letters to make graphics characters it does not however work with shift to change the characters to lowercase and also does not change to value wondering if these are coming because the menu item under keyboard for commodore does do that for a moment so having a key for this would be great • same for the control key it works with many keys to change colors and other items but does not fully function – doesn t change and you can t slow down a listing with it | 1 |
480,227 | 13,838,182,000 | IssuesEvent | 2020-10-14 05:41:10 | AY2021S1-CS2113-T13-4/tp | https://api.github.com/repos/AY2021S1-CS2113-T13-4/tp | closed | Add method to add a course | priority.Medium type.Story | As a user I can add a course so that I can record the information of the course, e.g. AU, teaching staff, etc. | 1.0 | Add method to add a course - As a user I can add a course so that I can record the information of the course, e.g. AU, teaching staff, etc. | priority | add method to add a course as a user i can add a course so that i can record the information of the course e g au teaching staff etc | 1 |
398,994 | 11,742,586,137 | IssuesEvent | 2020-03-12 01:20:27 | thaliawww/concrexit | https://api.github.com/repos/thaliawww/concrexit | closed | Thumbnails of partner images | bug priority: medium | In GitLab by @se-bastiaan on Dec 20, 2018, 18:16
### One-sentence description
Partner images do not have thumbnails
### Current behaviour / Reproducing the bug
Check all partner images
### Expected behaviour
Originals are not used | 1.0 | Thumbnails of partner images - In GitLab by @se-bastiaan on Dec 20, 2018, 18:16
### One-sentence description
Partner images do not have thumbnails
### Current behaviour / Reproducing the bug
Check all partner images
### Expected behaviour
Originals are not used | priority | thumbnails of partner images in gitlab by se bastiaan on dec one sentence description partner images do not have thumbnails current behaviour reproducing the bug check all partner images expected behaviour originals are not used | 1 |
192,342 | 6,848,919,022 | IssuesEvent | 2017-11-13 20:11:44 | Aubron/scoreshots-templates | https://api.github.com/repos/Aubron/scoreshots-templates | closed | Soccer, Head-to-Head Stat Preview Copy | Priority: Medium Status: Needs Finalization / Preview Image | ### Requested by:
UNCW, Eric Rhew
Probably public.
## Template Description:
> Tom Riordan and I were chatting this morning about graphics and one suggestion we were wondering about is the creation of a head-to-head team stats graphic that could be used before a game.
>I believe you have a head-to-head individual stats graphic in baseball, but wanted to see if there was the possibility of using an XML file to create something on a team comparison for soccer.
> I am sure this could be something to use for other sports, but soccer was the first one we thought of.
>I have attached a team stats XML file that STAT CREW produces. If you want to take a look at that file and see what data could be pulled from that, please feel free.
> I think keeping it to the basics (W-L record, Goals per game, Goals Against Average, Shots Per Game) would be best. Probably also having the option to upload each team’s XML file would be helpful as well.
> This is something that can be developed over time – not something we would need right away.
> Please let me know your thoughts when you have a chance.
## Dynamic Considerations:
Probably lots. | 1.0 | Soccer, Head-to-Head Stat Preview Copy - ### Requested by:
UNCW, Eric Rhew
Probably public.
## Template Description:
> Tom Riordan and I were chatting this morning about graphics and one suggestion we were wondering about is the creation of a head-to-head team stats graphic that could be used before a game.
>I believe you have a head-to-head individual stats graphic in baseball, but wanted to see if there was the possibility of using an XML file to create something on a team comparison for soccer.
> I am sure this could be something to use for other sports, but soccer was the first one we thought of.
>I have attached a team stats XML file that STAT CREW produces. If you want to take a look at that file and see what data could be pulled from that, please feel free.
> I think keeping it to the basics (W-L record, Goals per game, Goals Against Average, Shots Per Game) would be best. Probably also having the option to upload each team’s XML file would be helpful as well.
> This is something that can be developed over time – not something we would need right away.
> Please let me know your thoughts when you have a chance.
## Dynamic Considerations:
Probably lots. | priority | soccer head to head stat preview copy requested by uncw eric rhew probably public template description tom riordan and i were chatting this morning about graphics and one suggestion we were wondering about is the creation of a head to head team stats graphic that could be used before a game i believe you have a head to head individual stats graphic in baseball but wanted to see if there was the possibility of using an xml file to create something on a team comparison for soccer i am sure this could be something to use for other sports but soccer was the first one we thought of i have attached a team stats xml file that stat crew produces if you want to take a look at that file and see what data could be pulled from that please feel free i think keeping it to the basics w l record goals per game goals against average shots per game would be best probably also having the option to upload each team’s xml file would be helpful as well this is something that can be developed over time – not something we would need right away please let me know your thoughts when you have a chance dynamic considerations probably lots | 1 |
675,552 | 23,098,028,485 | IssuesEvent | 2022-07-26 21:48:33 | vignetteapp/SeeShark | https://api.github.com/repos/vignetteapp/SeeShark | closed | Lag of the video stream from real time. Problem of handling errors when disconnecting the camera during processing. | enhancement priority:medium | Hello! I'm interested in this code and the asynchronous processing option.
SeeShark\SeeShark\Device\VideoDevice.cs:
```
public DecodeStatus TryGetFrame(out Frame frame)
{
DecodeStatus status = decoder.TryDecodeNextFrame(out frame);
// Big brain move to avoid overloading the CPU \o/
// Decide whether we wait longer during a Thread.Sleep() when there are no frames available.
// Waiting longer would mean a full frame interval (for example ~16ms when 60 fps), 1ms otherwise.
// Always wait longer just after receiving a new frame.
bool waitLonger = status == DecodeStatus.NewFrame;
Thread.Sleep(waitLonger ? 1000 * decoder.Framerate.den / (decoder.Framerate.num + 5) : 1);
return status;
}
```
And so is this loop:
I appended an OnEndOfStream event here:
```
public event EventHandler<DecodeStatus>? OnEndOfStream;
. . .
protected void DecodeLoop()
{
DecodeStatus status;
while ((status = TryGetFrame(out var frame)) != DecodeStatus.EndOfStream)
{
OnFrame?.Invoke(this, new FrameEventArgs(frame, status));
if (!IsPlaying)
break;
}
if (status == DecodeStatus.EndOfStream)
{
OnEndOfStream?.Invoke(this, status);
}
}
```
I slightly modified the code here, adding a handler in case of an error.
The problem is that in case of an error, if the camera was disconnected from USB during processing, calling `ffmpeg.av_read_frame(FormatContext, Packet);` will return an `EIO` error with code -5:
SeeShark\SeeShark\Decode\VideoStreamDecoder.cs:
```
error = ffmpeg.av_read_frame(FormatContext, Packet);
if (error < 0)
{
nextFrame = Frame;
GC.Collect();
// We only wait longer once to make sure we catch the frame on time.
return error == eagain
? DecodeStatus.NoFrameAvailable
: DecodeStatus.EndOfStream;
}
```
EIO is not the same as EAGAIN, so DecodeStatus.EndOfStream is returned accordingly. TryGetFrame will return EndOfStream along the chain, then DecodeLoop will exit the loop and **IsPlaying will remain True** (it is not set to false on exit from the loop), but we won't get any more frames. We also don't learn about the error, i.e. **OnFrame?.Invoke will not be called** with the status DecodeStatus.EndOfStream, and there **is no way to determine that an error has occurred** and the camera is no longer working.
To solve this problem I added `OnEndOfStream` and handle it here. Another option would be to still call OnFrame with this status. But as it is now, we simply cannot find out that the camera has stopped due to an unexpected error.
Problem two...
```
bool waitLonger = status == DecodeStatus.NewFrame;
Thread.Sleep(waitLonger ? 1000 * decoder.Framerate.den / (decoder.Framerate.num + 5) : 1);
```
This loop has a delay based on the FPS of the camera, which it declares in its configuration. Many cameras are able to adapt to light levels, I found this on both the ELP IMX 179 camera and the Genius camera. The fact is that, regardless of the FPS specified in the settings, they can raise it or lower it, compensating for too dark or overexposed frames.
I tried lighting the camera with a direct light with a lamp, and it raised the FPS to over 30 fps.
Over time, I got a video lag from reality, because each successfully received frame waited for the full time to the FPS indicated by the camera.
In case of low light, FPS drops to 14FPS or even lower. As a result, the loop executes several times with a delay of 1ms, which is also not good...
One more thing, if there is at least some processing in the `OnFrame` event, then this will lead to the same lag from reality. Because the event runs on the same thread as the loop, and processing time will increase the overall total delay between frames.
I appended the following code to see what happens here:
```
public DecodeStatus TryGetFrame(out Frame frame)
{
DecodeStatus status = decoder.TryDecodeNextFrame(out frame);
bool waitLonger = status == DecodeStatus.NewFrame;
int waitTime = waitLonger ? 1000 * decoder.Framerate.den / (decoder.Framerate.num + 5) : 1;
Console.WriteLine($"wait time: {waitTime} {decoder.Framerate.den} {decoder.Framerate.num}");
Thread.Sleep(waitTime);
return status;
}
```
the output, when camera increase FPS:
```
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
```
the output, when camera reduce FPS:
```
NewFrame
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 33 333333 10000000
NewFrame
```
Thank you! | 1.0 | Lag of the video stream from real time. Problem of handling errors when disconnecting the camera during processing. - Hello! I'm interested in this code and the asynchronous processing option.
SeeShark\SeeShark\Device\VideoDevice.cs:
```
public DecodeStatus TryGetFrame(out Frame frame)
{
DecodeStatus status = decoder.TryDecodeNextFrame(out frame);
// Big brain move to avoid overloading the CPU \o/
// Decide whether we wait longer during a Thread.Sleep() when there are no frames available.
// Waiting longer would mean a full frame interval (for example ~16ms when 60 fps), 1ms otherwise.
// Always wait longer just after receiving a new frame.
bool waitLonger = status == DecodeStatus.NewFrame;
Thread.Sleep(waitLonger ? 1000 * decoder.Framerate.den / (decoder.Framerate.num + 5) : 1);
return status;
}
```
And so is this loop:
I appended an OnEndOfStream event here:
```
public event EventHandler<DecodeStatus>? OnEndOfStream;
. . .
protected void DecodeLoop()
{
DecodeStatus status;
while ((status = TryGetFrame(out var frame)) != DecodeStatus.EndOfStream)
{
OnFrame?.Invoke(this, new FrameEventArgs(frame, status));
if (!IsPlaying)
break;
}
if (status == DecodeStatus.EndOfStream)
{
OnEndOfStream?.Invoke(this, status);
}
}
```
I slightly modified the code here, adding a handler in case of an error.
The problem is that in case of an error, if the camera was disconnected from USB during processing, calling `ffmpeg.av_read_frame(FormatContext, Packet);` will return an `EIO` error with code -5:
SeeShark\SeeShark\Decode\VideoStreamDecoder.cs:
```
error = ffmpeg.av_read_frame(FormatContext, Packet);
if (error < 0)
{
nextFrame = Frame;
GC.Collect();
// We only wait longer once to make sure we catch the frame on time.
return error == eagain
? DecodeStatus.NoFrameAvailable
: DecodeStatus.EndOfStream;
}
```
EIO is not the same as EAGAIN, so DecodeStatus.EndOfStream is returned accordingly. TryGetFrame will return EndOfStream along the chain, then DecodeLoop will exit the loop and **IsPlaying will remain True** (it is not set to false on exit from the loop), but we won't get any more frames. We also don't learn about the error, i.e. **OnFrame?.Invoke will not be called** with the status DecodeStatus.EndOfStream, and there **is no way to determine that an error has occurred** and the camera is no longer working.
To solve this problem I added `OnEndOfStream` and handle it here. Another option would be to still call OnFrame with this status. But as it is now, we simply cannot find out that the camera has stopped due to an unexpected error.
Problem two...
```
bool waitLonger = status == DecodeStatus.NewFrame;
Thread.Sleep(waitLonger ? 1000 * decoder.Framerate.den / (decoder.Framerate.num + 5) : 1);
```
This loop has a delay based on the FPS of the camera, which it declares in its configuration. Many cameras are able to adapt to light levels, I found this on both the ELP IMX 179 camera and the Genius camera. The fact is that, regardless of the FPS specified in the settings, they can raise it or lower it, compensating for too dark or overexposed frames.
I tried lighting the camera with a direct light with a lamp, and it raised the FPS to over 30 fps.
Over time, I got a video lag from reality, because each successfully received frame waited for the full time to the FPS indicated by the camera.
In case of low light, FPS drops to 14FPS or even lower. As a result, the loop executes several times with a delay of 1ms, which is also not good...
One more thing, if there is at least some processing in the `OnFrame` event, then this will lead to the same lag from reality. Because the event runs on the same thread as the loop, and processing time will increase the overall total delay between frames.
I appended the following code to see what happens here:
```
public DecodeStatus TryGetFrame(out Frame frame)
{
DecodeStatus status = decoder.TryDecodeNextFrame(out frame);
bool waitLonger = status == DecodeStatus.NewFrame;
int waitTime = waitLonger ? 1000 * decoder.Framerate.den / (decoder.Framerate.num + 5) : 1;
Console.WriteLine($"wait time: {waitTime} {decoder.Framerate.den} {decoder.Framerate.num}");
Thread.Sleep(waitTime);
return status;
}
```
the output, when camera increase FPS:
```
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
NewFrame
wait time: 33 333333 10000000
```
the output, when camera reduce FPS:
```
NewFrame
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 1 333333 10000000
NoFrameAvailable
wait time: 33 333333 10000000
NewFrame
```
Thank you! | priority | lag of the video stream from real time problem of handling errors when disconnecting the camera during processing hello i m interested in this code and the asynchronous processing option seeshark seeshark device videodevice cs public decodestatus trygetframe out frame frame decodestatus status decoder trydecodenextframe out frame big brain move to avoid overloading the cpu o decide whether we wait longer during a thread sleep when there are no frames available waiting longer would mean a full frame interval for example when fps otherwise always wait longer just after receiving a new frame bool waitlonger status decodestatus newframe thread sleep waitlonger decoder framerate den decoder framerate num return status and so is this loop i append here onendofstream event public event eventhandler onendofstream protected void decodeloop decodestatus status while status trygetframe out var frame decodestatus endofstream onframe invoke this new frameeventargs frame status if isplaying break if status decodestatus endofstream onendofstream invoke this status i slightly modified the code here adding a handler in case of an error the problem is that in case of an error if the camera was disconnected from usb during processing calling ffmpeg av read frame formatcontext packet will return an eio error with code seeshark seeshark decode videostreamdecoder cs error ffmpeg av read frame formatcontext packet if error nextframe frame gc collect we only wait longer once to make sure we catch the frame on time return error eagain decodestatus noframeavailable decodestatus endofstream eio is not the same as eagain and will return decodestatus endofstream accordingly trygetframe will return endofstream along the chain then decodeloop will exit the loop and isplaying will remain true here we not set it to false on exit from loop but we won t get any more frames we also don t know about the error those onframe invoke will not be called with the status decodestatus 
endofstream and there is no way to determine that an error has occurred and the camera is no longer working to solve this problem i append here onendofstream and process it here other option still call onframe again with this status but as it is now we simply cannot find out that the camera has stopped due to an unexpected error problem two bool waitlonger status decodestatus newframe thread sleep waitlonger decoder framerate den decoder framerate num this loop has a delay based on the fps of the camera which it declares in its configuration many cameras are able to adapt to light levels i found this on both the elp imx camera and the genius camera the fact is that regardless of the fps specified in the settings they can raise it or lower it compensating for too dark or overexposed frames i tried lighting the camera with a direct light with a lamp and it raised the fps to over fps over time i got a video lag from reality because each successfully received frame waited for the full time to the fps indicated by the camera in case of low light fps drops to or even lower as a result the loop executes several times with a delay of which is also not good one more thing if there is at least some processing in the onframe event then this will lead to the same lag from reality because the event runs on the same thread as the loop and processing time will increase the overall total delay between frames i append here next code to look what happenes here public decodestatus trygetframe out frame frame decodestatus status decoder trydecodenextframe out frame bool waitlonger status decodestatus newframe int waittime waitlonger decoder framerate den decoder framerate num console writeline wait time waittime decoder framerate den decoder framerate num thread sleep waittime return status the output when camera increase fps newframe wait time newframe wait time newframe wait time newframe wait time newframe wait time newframe wait time newframe wait time newframe wait time newframe 
wait time newframe wait time the output when camera reduce fps newframe wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time noframeavailable wait time newframe thank you | 1 |
151,012 | 5,795,378,649 | IssuesEvent | 2017-05-02 16:56:54 | pingzing/digi-transit-10 | https://api.github.com/repos/pingzing/digi-transit-10 | opened | [Polish] Smarter handling of back button the SearchPage | enhancement polish priority: medium | - [ ] If we hop from a Stop Details to the Line search, the back button should take us back to the Stop details.
- [ ] Maybe something else too? I seem to remember something. | 1.0 | [Polish] Smarter handling of back button the SearchPage - - [ ] If we hop from a Stop Details to the Line search, the back button should take us back to the Stop details.
- [ ] Maybe something else too? I seem to remember something. | priority | smarter handling of back button the searchpage if we hop from a stop details to the line search the back button should take us back to the stop details maybe something else too i seem to remember something | 1 |
288,490 | 8,847,713,510 | IssuesEvent | 2019-01-08 03:05:40 | bradnoble/msc-vuejs | https://api.github.com/repos/bradnoble/msc-vuejs | closed | Admin: Not clear how to move person to new household | Component: Admin:Members Priority: Medium Status: Duplicate Type: Change | Would like to avoid deleting people from existing households in order to create a new household due to potential future use tied to payments with lodge stays and history.
Does not appear to be a way to move a person to a new household right now through the admin interface, only to delete the person and re-create, which we want to avoid.
Please comment if I missed something here. | 1.0 | Admin: Not clear how to move person to new household - Would like to avoid deleting people from existing households in order to create a new household due to potential future use tied to payments with lodge stays and history.
Does not appear to be a way to move a person to a new household right now through the admin interface, only to delete the person and re-create, which we want to avoid.
Please comment if I missed something here. | priority | admin not clear how to move person to new household would like to avoid deleting people from existing households in order to create a new household due to potential future use tied to payments with lodge stays and history does not appear to be a way to move a person to a new household right now through the admin interface only to delete the person and re create which we want to avoid please comment if i missed something here | 1 |
496,014 | 14,292,107,255 | IssuesEvent | 2020-11-24 00:13:50 | sButtons/sbuttons | https://api.github.com/repos/sButtons/sbuttons | closed | Ripple button border and fill issue | Cannot Reproduce Priority: Medium bug buttons help wanted stale-issue up-for-grabs | Ripple buttons: For whatever reason the buttons display normally but when I hover over them and cause the ripple animation to happen, some of the fill of the buttons don't reach the border of the button. I tested it out on my laptop and it happens to any button based on where the button was placed on the page. I tried fixing it but came up with nothing so hoping someone can fix it!
Below I attached a screenshot of an example of this issue. This shows up on other buttons as well and it's not consistently in the same edges of the button either. For example, sometimes it was only the bottom edge, or the right and top, or all 4 edges even. It really came in any combination as far as I was aware.

@shahednasser was saying she didn't see this error on her end so I'm not sure if this is just an issue for me or if anyone else is able to see it when they use the ripple button/load the website.
| 1.0 | Ripple button border and fill issue - Ripple buttons: For whatever reason the buttons display normally but when I hover over them and cause the ripple animation to happen, some of the fill of the buttons don't reach the border of the button. I tested it out on my laptop and it happens to any button based on where the button was placed on the page. I tried fixing it but came up with nothing so hoping someone can fix it!
Below I attached a screenshot of an example of this issue. This shows up on other buttons as well and it's not consistently in the same edges of the button either. For example, sometimes it was only the bottom edge, or the right and top, or all 4 edges even. It really came in any combination as far as I was aware.

@shahednasser was saying she didn't see this error on her end so I'm not sure if this is just an issue for me or if anyone else is able to see it when they use the ripple button/load the website.
| priority | ripple button border and fill issue ripple buttons for whatever reason the buttons display normally but when i hover over them and cause the ripple animation to happen some of the fill of the buttons don t reach the border of the button i tested it out on my laptop and it happens to any button based on where the button was placed on the page i tried fixing it but came up with nothing so hoping someone can fix it below i attached a screenshot of an example of this issue this shows up on other buttons as well and it s not consistently in the same edges of the button either for example sometimes it was only the bottom edge or the right and top or all edges even it really came in any combination as far as i was aware shahednasser was saying she didn t see this error on her end so i m not sure if this is just an issue for me or if anyone else is able to see it when they use the ripple button load the website | 1 |
818,048 | 30,668,695,343 | IssuesEvent | 2023-07-25 20:25:18 | elastic/security-docs | https://api.github.com/repos/elastic/security-docs | closed | [Detection rules] Prebuilt rules tags documentation | Feature: Rules Feature: Prebuilt rules v8.5.0 v8.6.0 v8.7.0 v8.8.0 v8.9.0 Priority: Medium Effort: Medium | ## Description
In 8.9 we have structured prebuilt rules tags better, and added additional context to their names.
We would like to add documentation about what the tags mean so that users can make better use of them.
## Related issues/PRs
https://github.com/elastic/security-team/issues/5652
https://github.com/elastic/detection-rules/pull/2725
## Context
Tag categories:
Data Source - references the data source for the rule (eg specific applications, cloud providers etc, the data can be coming from an Elastic integration, or shipped with other data shippers)
Domain - categorised data sources into higher level buckets (eg Cloud, Endpoint or Network)
OS - references host OSs, also to be used as a Data Source reference.
Resources - informs about additional rule resources like Investigation guide
Rule Type - informs the user if the rule uses Machine learning jobs, or is a higher order rule (is built on top of other rules' alerts)
Tactic - references MITRE ATT&CK tactics
Threat - references specific threats the rule is detecting (eg Cobalt Strike, BPFDoor)
Use Case - informs the user about what is the rule for (eg Threat Detection, Identity and Access Audit etc)
Expanding more on the Use Case tags:
- Use Case: Asset Visibility - rules that detect changes to specified asset types
- Use Case: Configuration Audit - rules that detect undesirable configuration changes
- Use case: Guided Onboarding - example rule, used for Elastic security guided onboarding tour
- Use Case: Identity and Access Audit - rules detecting activity related to IAM
- Use Case: Log Auditing - rules detecting activity on logs configurations or log storage
- Use Case: Network Security Monitoring - rules detecting network security configuration activity
- Use Case: Threat Detection - rules detecting threats
- Use Case: Vulnerability - rules detecting exploitation of specific vulnerabilities
| 1.0 | [Detection rules] Prebuilt rules tags documentation - ## Description
In 8.9 we have structured prebuilt rules tags better, and added additional context to their names.
We would like to add documentation about what the tags mean so that users can make better use of them.
## Related issues/PRs
https://github.com/elastic/security-team/issues/5652
https://github.com/elastic/detection-rules/pull/2725
## Context
Tag categories:
Data Source - references the data source for the rule (eg specific applications, cloud providers etc, the data can be coming from an Elastic integration, or shipped with other data shippers)
Domain - categorises data sources into higher level buckets (eg Cloud, Endpoint or Network)
OS - references host OSs, also to be used as a Data Source reference.
Resources - informs about additional rule resources like Investigation guide
Rule Type - informs the user if the rule uses Machine learning jobs, or is a higher order rule (is built on top of other rules' alerts)
Tactic - references MITRE ATT&CK tactics
Threat - references specific threats the rule is detecting (eg Cobalt Strike, BPFDoor)
Use Case - informs the user about what the rule is for (eg Threat Detection, Identity and Access Audit etc)
Expanding more on the Use Case tags:
- Use Case: Asset Visibility - rules that detect changes to specified asset types
- Use Case: Configuration Audit - rules that detect undesirable configuration changes
- Use Case: Guided Onboarding - example rule, used for Elastic security guided onboarding tour
- Use Case: Identity and Access Audit - rules detecting activity related to IAM
- Use Case: Log Auditing - rules detecting activity on logs configurations or log storage
- Use Case: Network Security Monitoring - rules detecting network security configuration activity
- Use Case: Threat Detection - rules detecting threats
- Use Case: Vulnerability - rules detecting exploitation of specific vulnerabilities
| priority | prebuilt rules tags documentation description in we have structured prebuilt rules tags better and added additional context to their names we would like to add documentation about what the tags mean so that users can make better use of them related issues prs context tag categories data source references the data source for the rule eg specific applications cloud providers etc the data can be coming from an elastic integration or shipped with other data shippers domain categorised data sources into higher level buckets eg cloud endpoint or network os references host oss also to be used as a data source reference resources informs about additional rule resources like investigation guide rule type informs the user if the rle uses machine learning jobs or is a higher order rule is built on top of other rules alerts tactic references mitre att ck tactics threat references specific threats the rule is detecting eg cobalt strike bpfdoor use case informs the user about what is the rule for eg threat detection identity and access audit etc expanding more on the use case tags use case asset visibility rules that detect changes to specified asset types use case configuration audit rules that detect undesirable configuration changes use case guided onboarding example rule used for elastic security guided onboarding tour use case identity and access audit rules detecting activity related to iam use case log auditing rules detecting activity on logs configurations or log storage use case network security monitoring rules detecting network security configuration activity use case threat detection rules detecting threats use case vulnerability rules detecting exploitation of specific vulnerabilities | 1 |
483,182 | 13,920,266,712 | IssuesEvent | 2020-10-21 10:11:25 | AY2021S1-CS2103T-W16-3/tp | https://api.github.com/repos/AY2021S1-CS2103T-W16-3/tp | opened | Fix UI display on Income Panel | priority.medium :2nd_place_medal: type.bug :bug: | When the user switches from `Overview` tab to `Income` tab, the header of the right panel is not updated accordingly. This behaviour is the same when the user switches from `Expenses` tab to `Income` tab. | 1.0 | Fix UI display on Income Panel - When the user switches from `Overview` tab to `Income` tab, the header of the right panel is not updated accordingly. This behaviour is the same when the user switches from `Expenses` tab to `Income` tab. | priority | fix ui display on income panel when the user switches from overview tab to income tab the header of the right panel is not updated accordingly this behaviour is the same when the user switches from expenses tab to income tab | 1 |
371,226 | 10,963,238,849 | IssuesEvent | 2019-11-27 19:11:05 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1262] Audio: there is no shout about falling tree | Medium Priority | Usually I will shout TIMBER! after cutting a tree, so no one gets hurt.
But now I don't shout anything, just silently watching how the tree is falling. | 1.0 | [0.9.0 staging-1262] Audio: there is no shout about falling tree - Usually I will shout TIMBER! after cutting a tree, so no one gets hurt.
But now I don't shout anything, just silently watching how tree is falling. | priority | audio there is no shout about falling tree usually i will shout timber after cutting a tree so no one get hurt but now i don t shout anything just silently watching how tree is falling | 1 |
675,649 | 23,101,076,462 | IssuesEvent | 2022-07-27 02:52:11 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [docdb] LoadBalancerMiniClusterTest.UninitializedTSDescriptorOnPendingAddTest flaky in TSAN | kind/bug area/docdb priority/medium | Jira Link: [DB-2188](https://yugabyte.atlassian.net/browse/DB-2188)
`ybd tsan --cxx-test integration-tests_load_balancer_mini_cluster-test --gtest_filter LoadBalancerMiniClusterTest.UninitializedTSDescriptorOnPendingAddTest -n 100` | 1.0 | [docdb] LoadBalancerMiniClusterTest.UninitializedTSDescriptorOnPendingAddTest flaky in TSAN - Jira Link: [DB-2188](https://yugabyte.atlassian.net/browse/DB-2188)
`ybd tsan --cxx-test integration-tests_load_balancer_mini_cluster-test --gtest_filter LoadBalancerMiniClusterTest.UninitializedTSDescriptorOnPendingAddTest -n 100` | priority | loadbalancerminiclustertest uninitializedtsdescriptoronpendingaddtest flaky in tsan jira link ybd tsan cxx test integration tests load balancer mini cluster test gtest filter loadbalancerminiclustertest uninitializedtsdescriptoronpendingaddtest n | 1 |
790,828 | 27,838,018,837 | IssuesEvent | 2023-03-20 10:49:14 | AY2223S2-CS2103-F11-3/tp | https://api.github.com/repos/AY2223S2-CS2103-F11-3/tp | closed | Assertions for Defensive Programming | priority.Medium type.Task | Required for defensive programming.
Tracked in the tP progress dashboard
See:
https://nus-cs2103-ay2223s2.github.io/website/admin/tp-w10.html#3-start-the-next-iteration


| 1.0 | Assertions for Defensive Programming - Required for defensive programming.
Tracked in the tP progress dashboard
See:
https://nus-cs2103-ay2223s2.github.io/website/admin/tp-w10.html#3-start-the-next-iteration


| priority | assertions for defensive programming required for defensive programming tracked in the tp progress dashboard see | 1 |
74,220 | 3,436,409,150 | IssuesEvent | 2015-12-12 10:51:05 | nikcross/open-forum | https://api.github.com/repos/nikcross/open-forum | closed | A Wiki Calendar | auto-migrated Priority-Medium Type-Enhancement | ```
An extension that displays a calendar on the page and automatically shows
links to blog entries that match a day.
By allowing blog entries in the future, future events could be marked.
```
Original issue reported on code.google.com by `nicholas...@gmail.com` on 15 May 2008 at 10:32 | 1.0 | A Wiki Calendar - ```
An extension that displays a calendar on the page and automatically shows
links to blog entries that match a day.
By allowing blog entries in the future, future events could be marked.
```
Original issue reported on code.google.com by `nicholas...@gmail.com` on 15 May 2008 at 10:32 | priority | a wiki calendar an extension that displays a calendar on the page and automatically shows links to blog entries that match a day by allowing blog entries in the future future events could be marked original issue reported on code google com by nicholas gmail com on may at | 1 |
754,716 | 26,399,306,378 | IssuesEvent | 2023-01-12 22:52:33 | aave/interface | https://api.github.com/repos/aave/interface | reopened | Emphasize repay with aToken feature | priority:medium enhancement | In V3, borrowed assets can be directly repaid with aTokens. Currently the feature is kind of hidden though. You have to click the arrow next to the asset, then select the aToken. Maybe this could be redesigned to emphasize it a bit more.

| 1.0 | Emphasize repay with aToken feature - In V3, borrowed assets can be directly repaid with aTokens. Currently the feature is kind of hidden though. You have to click the arrow next to the asset, then select the aToken. Maybe this could be redesigned to emphasize it a bit more.

| priority | emphasize repay with atoken feature in borrowed assets can be directly repaid with atokens currently the feature is kind of hidden though you have to click the arrow next to the asset then select the atoken maybe this could be redesigned to emphasize it a bit more | 1 |
123,066 | 4,852,092,526 | IssuesEvent | 2016-11-11 09:02:47 | MatchboxDorry/dorry-web | https://api.github.com/repos/MatchboxDorry/dorry-web | closed | [UI]test1 - app detail page background | effort: 2 (medium) feature: style sheet flag: fixed priority: 1 (urgent) type: enhancement | **System:**
Mac mini OS X El Capitan
**Browser:**
Chrome
**What I want to do**
I want to see how an app's detail page is displayed on the app page.
**Where I am**
app page
**What I have done**
I click an app's name to open its detail page and check how it looks.
**What I expect:**
The detail page background I see is blurred and not very transparent.
**What really happened**:
The background I see is too transparent, which differs from the provided design data.
| 1.0 | [UI]test1 - app detail page background - **System:**
Mac mini OS X El Capitan
**Browser:**
Chrome
**What I want to do**
I want to see how an app's detail page is displayed on the app page.
**Where I am**
app page
**What I have done**
I click an app's name to open its detail page and check how it looks.
**What I expect:**
The detail page background I see is blurred and not very transparent.
**What really happened**:
The background I see is too transparent, which differs from the provided design data.
| priority | app detail page background system mac mini os x el capitan browser chrome what i want to do i want to see how an app s detail page is displayed on the app page where i am app page what i have done i click an app s name to open its detail page and check how it looks what i expect the detail page background i see is blurred and not very transparent what really happened the background i see is too transparent which differs from the provided design data | 1 |
699,897 | 24,036,369,599 | IssuesEvent | 2022-09-15 19:35:32 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | closed | RUCSS - Revolution theme - Multiple elements break because dynamic CSS inline selectors | type: enhancement 3rd party compatibility priority: medium module: remove unused css dynamic selectors | **Before submitting an issue please check that you’ve completed the following steps:**
yes - Made sure you’re on the latest version
yes - Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
The [Revolution Theme, by fuelthemes](https://themeforest.net/item/revolution-creative-multi-purpose-theme/21758544) uses dynamic inline selectors.
So, our Used CSS won't contain the appropriate value after the cache is cleared to insert the used css.
Adding `#thb-` to `rocket_rucss_inline_content_exclusions` solves the issue
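Why that exclusion works can be pictured as a simple substring match against each inline `<style>` block. The sketch below is hypothetical Python, not WP Rocket's actual PHP implementation; the selector and pattern list are invented for illustration:

```python
def is_excluded(inline_css: str, exclusions: list[str]) -> bool:
    """Return True if an inline CSS block matches any exclusion pattern."""
    return any(pattern in inline_css for pattern in exclusions)

# Hypothetical dynamic block from the theme; the "#thb-1234" suffix
# changes on every page load, but the "#thb-" prefix is stable.
exclusions = ["#thb-"]
dynamic_block = "#thb-1234 .counter { opacity: 1; }"
print(is_excluded(dynamic_block, exclusions))  # True -> block is preserved as-is
```

Because the `#thb-` prefix is stable while the numeric suffix changes, a prefix-style exclusion preserves every variant of the dynamic block.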
**To Reproduce**
Steps to reproduce the behavior:
1. Using the Revolution Theme or this HTML template https://tinyurl.com/2jjty6vl
4. Enable Remove Unused CSS
5. See the issue.
**Expected behavior**
We should preserve these dynamic inline declarations.
**Screenshots**
The number changes on every page load:

https://i.imgur.com/vh34b3k.png
**Additional context**
ticket: https://secure.helpscout.net/conversation/1940409215/353995?folderId=2683093
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
| 1.0 | RUCSS - Revolution theme - Multiple elements break because dynamic CSS inline selectors - **Before submitting an issue please check that you’ve completed the following steps:**
yes - Made sure you’re on the latest version
yes - Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
The [Revolution Theme, by fuelthemes](https://themeforest.net/item/revolution-creative-multi-purpose-theme/21758544) uses dynamic inline selectors.
So, our Used CSS won't contain the appropriate value after the cache is cleared to insert the used css.
Adding `#thb-` to `rocket_rucss_inline_content_exclusions` solves the issue
**To Reproduce**
Steps to reproduce the behavior:
1. Using the Revolution Theme or this HTML template https://tinyurl.com/2jjty6vl
4. Enable Remove Unused CSS
5. See the issue.
**Expected behavior**
We should preserve these dynamic inline declarations.
**Screenshots**
The number changes on every page load:

https://i.imgur.com/vh34b3k.png
**Additional context**
ticket: https://secure.helpscout.net/conversation/1940409215/353995?folderId=2683093
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
| priority | rucss revolution theme multiple elements break because dynamic css inline selectors before submitting an issue please check that you’ve completed the following steps yes made sure you’re on the latest version yes used the search feature to ensure that the bug hasn’t been reported before describe the bug the uses dynamic inline selectors so our used css won t contain the appropriate value after the cache is cleared to insert the used css adding thb to rocket rucss inline content exclusions solves the issue to reproduce steps to reproduce the behavior using the revolution theme or this html template enable remove unused css see the issue expected behavior we should preserve these dynamic inline declarations screenshots the number changes on every page load additional context ticket backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort | 1 |
338,265 | 10,226,730,003 | IssuesEvent | 2019-08-16 18:39:02 | seung-lab/neuroglancer | https://api.github.com/repos/seung-lab/neuroglancer | closed | Multicut annotation management tool is buggy with many points | Priority: Medium Realm: SeungLab Status: In Progress Type: Bug | When annotating many points for a multicut the widget on the lower right (after left key press) is not shown anymore for the annotations high in the list. It still shows up for those low on the list.
Example: https://neuromancer-seung-import.appspot.com/?json_url=https://www.dynamicannotationframework.com/nglstate/4656646044909568 | 1.0 | Multicut annotation management tool is buggy with many points - When annotating many points for a multicut the widget on the lower right (after left key press) is not shown anymore for the annotations high in the list. It still shows up for those low on the list.
Example: https://neuromancer-seung-import.appspot.com/?json_url=https://www.dynamicannotationframework.com/nglstate/4656646044909568 | priority | multicut annotation management tool is buggy with many points when annotating many points for a multicut the widget on the lower right after left key press is not shown anymore for the annotations high in the list it still shows up for those low on the list example | 1 |
309,939 | 9,481,995,801 | IssuesEvent | 2019-04-21 11:03:40 | Luca1152/gravity-box | https://api.github.com/repos/Luca1152/gravity-box | opened | Sounds and music | Priority: Medium Status: Available Type: Enhancement | ## Description
Currently, the game has no sounds or music, so it is pretty silent. I should change this. Most likely, I won't make them myself, but search the web in the hope of finding something nice. | 1.0 | Sounds and music - ## Description
Currently, the game has no sounds or music, so it is pretty silent. I should change this. Most likely, I won't make them myself, but search the web in the hope of finding something nice. | priority | sounds and music description currently the game has no sounds or music so it is pretty silent i should change this most likely i won t make them myself but search the web in the hope of finding something nice | 1 |
588,293 | 17,651,561,509 | IssuesEvent | 2021-08-20 13:52:35 | eclipse/dirigible | https://api.github.com/repos/eclipse/dirigible | opened | [Editors] CSV files with # delimiter are not properly opened. | enhancement priority-medium efforts-low | **Describe the bug**
Opening CSV files with # delimiter in the CSV Editor results in everything being put in a single column.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a csv file with # delimiter
2. Open it
3. See issue
**Expected behavior**
CSV Editor should be able to open and edit CSV files with # delimiter.
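As a point of reference, parsing a '#'-delimited file only requires making the delimiter configurable, illustrated here with Python's standard `csv` module (illustrative only, unrelated to Dirigible's own editor code; the sample data is invented):

```python
import csv
import io

# Hypothetical sample content of a '#'-delimited CSV file.
data = "name#qty\nwidget#3\ngadget#5\n"

# Passing delimiter="#" splits each record into columns correctly.
rows = list(csv.reader(io.StringIO(data), delimiter="#"))
print(rows)  # [['name', 'qty'], ['widget', '3'], ['gadget', '5']]
```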
**Desktop:**
- OS: macOS 11.5
- Browser: Firefox 91
- Version: Dirigible 5.12.12
| 1.0 | [Editors] CSV files with # delimiter are not properly opened. - **Describe the bug**
Opening CSV files with # delimiter in the CSV Editor results in everything being put in a single column.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a csv file with # delimiter
2. Open it
3. See issue
**Expected behavior**
CSV Editor should be able to open and edit CSV files with # delimiter.
**Desktop:**
- OS: macOS 11.5
- Browser: Firefox 91
- Version: Dirigible 5.12.12
| priority | csv files with delimiter are not properly opened describe the bug opening csv files with delimiter in the csv editor results in everything being put in a single column to reproduce steps to reproduce the behavior create a csv file with delimiter open it see issue expected behavior csv editor should be able to open and edit csv files with delimiter desktop os macos browser firefox version dirigible | 1 |
25,980 | 2,684,076,494 | IssuesEvent | 2015-03-28 16:44:45 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Feature request: /tray command-line parameter | 1 star bug imported Priority-Medium | _From [thecybershadow](https://code.google.com/u/thecybershadow/) on May 16, 2012 06:34:29_
A command-line switch to start ConEmu in the tray would be useful for launching arbitrary console applications in the tray (e.g. at computer startup).
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=546_ | 1.0 | Feature request: /tray command-line parameter - _From [thecybershadow](https://code.google.com/u/thecybershadow/) on May 16, 2012 06:34:29_
A command-line switch to start ConEmu in the tray would be useful for launching arbitrary console applications in the tray (e.g. at computer startup).
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=546_ | priority | feature request tray command line parameter from on may a command line switch to start conemu in the tray would be useful for launching arbitrary console applications in the tray e g at computer startup original issue | 1 |
102,543 | 4,156,671,952 | IssuesEvent | 2016-06-16 18:44:01 | NREL/OpenStudio-Beopt | https://api.github.com/repos/NREL/OpenStudio-Beopt | closed | Consolidate location measures | priority medium | Consolidate all measures related to location?
- [x] Set Residential Weather File
- [x] Set Residential Mains Water Temperature
- [x] Set Residential Ground Temperatures (Why EnergyPlus measure?)
- [x] Set Residential Sizing Period (Why EnergyPlus measure?)
cc @joseph-robertson @jmaguire1 @ejhw | 1.0 | Consolidate location measures - Consolidate all measures related to location?
- [x] Set Residential Weather File
- [x] Set Residential Mains Water Temperature
- [x] Set Residential Ground Temperatures (Why EnergyPlus measure?)
- [x] Set Residential Sizing Period (Why EnergyPlus measure?)
cc @joseph-robertson @jmaguire1 @ejhw | priority | consolidate location measures consolidate all measures related to location set residential weather file set residential mains water temperature set residential ground temperatures why energyplus measure set residential sizing period why energyplus measure cc joseph robertson ejhw | 1 |
254,840 | 8,099,683,574 | IssuesEvent | 2018-08-11 12:09:04 | TV-Rename/tvrename | https://api.github.com/repos/TV-Rename/tvrename | closed | More tags for filename template editor | Priority-Medium auto-migrated enhancement | ```
- {Actors} / {GuestStars}
- {Year} tag (uses year of FirstAired)
- {SeasonYear} tag (year aired of first episode in season)
```
Original issue reported on code.google.com by `tvren...@tvrename.com` on 10 Sep 2009 at 1:13
| 1.0 | More tags for filename template editor - ```
- {Actors} / {GuestStars}
- {Year} tag (uses year of FirstAired)
- {SeasonYear} tag (year aired of first episode in season)
```
Original issue reported on code.google.com by `tvren...@tvrename.com` on 10 Sep 2009 at 1:13
| priority | more tags for filename template editor actors gueststars year tag uses year of firstaired seasonyear tag year aired of first episode in season original issue reported on code google com by tvren tvrename com on sep at | 1 |
165,580 | 6,278,526,573 | IssuesEvent | 2017-07-18 14:32:36 | cjlee112/socraticqs2 | https://api.github.com/repos/cjlee112/socraticqs2 | closed | Line return not shown as-is in FAQ edit page | Medium [Priority] | https://www.courselets.org/faq/?edit
The FAQ page doesn't seem to allow me to show line return. For example, I put in content in editing window like this:

but once it's published, line returns disappeared

Is it possible to show line returns in the published view? Thanks! | 1.0 | Line return not shown as-is in FAQ edit page - https://www.courselets.org/faq/?edit
The FAQ page doesn't seem to allow me to show line return. For example, I put in content in editing window like this:

but once it's published, line returns disappeared

Is it possible to show line returns in the published view? Thanks! | priority | line return not shown as is in faq edit page the faq page doesn t seem to allow me to show line return for example i put in content in editing window like this but once it s published line returns disappeared is it possible to show line returns in the published view thanks | 1 |
206,588 | 7,113,849,071 | IssuesEvent | 2018-01-17 21:58:05 | AdChain/AdChainRegistryDapp | https://api.github.com/repos/AdChain/AdChainRegistryDapp | closed | Create modals in table view for challenging, committing, revealing, | Priority: Medium Type: UX Enhancement | - clicking the action buttons in the domain table rows should bring up a modal to take action, instead of taking the user to the profile page. | 1.0 | Create modals in table view for challenging, committing, revealing, - - clicking the action buttons in the domain table rows should bring up a modal to take action, instead of taking the user to the profile page. | priority | create modals in table view for challenging committing revealing clicking the action buttons in the domain table rows should bring up a modal to take action instead of taking the user to the profile page | 1 |
181,373 | 6,659,112,589 | IssuesEvent | 2017-10-01 06:19:09 | AnSyn/ansyn | https://api.github.com/repos/AnSyn/ansyn | opened | base map disappears at certain zoom levels | Bug Priority: Medium Severity: High | see attached image
when zooming in too much the base map disappears.
i think we need to block zoom level in order to prevent this | 1.0 | base map disappears at certain zoom levels - see attached image
when zooming in too much the base map disappears.
i think we need to block zoom level in order to prevent this | priority | base map disappears at certain zoom levels see attached image when zooming in too much the base map disappears i think we need to block zoom level in order to prevent this | 1 |
29,655 | 2,716,757,709 | IssuesEvent | 2015-04-10 21:11:21 | CruxFramework/crux | https://api.github.com/repos/CruxFramework/crux | closed | Trouble with internacionalization using placeHolder | bug Component-UI imported Milestone-M14-C3 Module-CruxWidgets Priority-Medium TargetVersion-5.2.0 | _From [fla...@triggolabs.com](https://code.google.com/u/103356193102194777994/) on September 15, 2014 09:02:48_
The placeHolder attribute of the textBox component does not support internationalization messages.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=524_ | 1.0 | Trouble with internacionalization using placeHolder - _From [fla...@triggolabs.com](https://code.google.com/u/103356193102194777994/) on September 15, 2014 09:02:48_
The placeHolder attribute of the textBox component does not support internationalization messages.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=524_ | priority | trouble with internacionalization using placeholder from on september the placeholder attribute of the textbox component not support internacionalization messages original issue | 1 |
337,115 | 10,210,757,291 | IssuesEvent | 2019-08-14 15:24:48 | medic/medic | https://api.github.com/repos/medic/medic | closed | Incorrect XLSForm meta data values in Enketo | Priority: 2 - Medium Type: Bug | In XLSForm there are [keywords for meta data](http://xlsform.org/en/#metadata), some of which are relevant and available in our app with Enketo. Of those some provide incorrect values, such as `start` and `end` which always give midnight GMT for the given day.
**To Reproduce**
Steps to reproduce the behavior:
1. Upload the [`meta` form](https://github.com/medic/medic/files/3340296/meta.zip)
2. Open the `meta` form
3. See the incorrect values as notes
4. Submit the form and see the same incorrect values being saved
**Expected behavior**
The `start` and `end` values should have the time included with the date.
To maintain consistency with XLSForm it would be nice to also have the user's `phonenumber`, `username`, `email`. That said, those fields are less relevant in our context, and are available via `inputs` as a workaround.
**Screenshots**

```json
"fields": {
"start": "2019-06-27T20:00:00.000-04:00",
"end": "2019-06-27T20:00:00.000-04:00",
"today": "2019-06-27",
"deviceid": "deviceid not found",
"subscriberid": "subscriberid not found",
"simserial": "simserial not found",
"phonenumber": "phonenumber not found",
"username": "username not found",
"email": "email not found",
"n": "",
"meta": {
"instanceID": "uuid:f31d64b7-19ba-43c1-8e34-657c050c81a3"
},
"deprecatedID": ""
},
```
**Environment**
- Instance: gamma.dev.medicmobile.org
- Browser: Chrome
- App: webapp
- Version: 3.6 (pre-release), also tested on 2.18
**Additional context**
Being consistent with XLSForm keywords makes it easier for partners to build their own forms. In this case, a partner wanted to track the `start` and `end` time to see how long it takes users to complete a form. Alternatively, the `once(now())` could have been used to get the start time, but that would be a bit of a hack and we don't yet support `once()`. Also, having ODK events and actions (https://github.com/enketo/enketo-core/issues/577) could eventually be used for this as well, but doesn't fully override the need for supporting the XLSForm meta fields.
Also, this issue highlights the need to document differences with XLSForm and ODK XForm standards. If updating [the documentation](https://github.com/medic/medic-docs/blob/master/configuration/forms.md#custom-xpath-functions) is not covered in this issue we should create a new one for that. | 1.0 | Incorrect XLSForm meta data values in Enketo - In XLSForm there are [keywords for meta data](http://xlsform.org/en/#metadata), some of which are relevant and available in our app with Enketo. Of those some provide incorrect values, such as `start` and `end` which always give midnight GMT for the given day.
**To Reproduce**
Steps to reproduce the behavior:
1. Upload the [`meta` form](https://github.com/medic/medic/files/3340296/meta.zip)
2. Open the `meta` form
3. See the incorrect values as notes
4. Submit the form and see the same incorrect values being saved
**Expected behavior**
The `start` and `end` values should have the time included with the date.
To maintain consistency with XLSForm it would be nice to also have the user's `phonenumber`, `username`, `email`. That said, those fields are less relevant in our context, and are available via `inputs` as a workaround.
**Screenshots**

```json
"fields": {
"start": "2019-06-27T20:00:00.000-04:00",
"end": "2019-06-27T20:00:00.000-04:00",
"today": "2019-06-27",
"deviceid": "deviceid not found",
"subscriberid": "subscriberid not found",
"simserial": "simserial not found",
"phonenumber": "phonenumber not found",
"username": "username not found",
"email": "email not found",
"n": "",
"meta": {
"instanceID": "uuid:f31d64b7-19ba-43c1-8e34-657c050c81a3"
},
"deprecatedID": ""
},
```
**Environment**
- Instance: gamma.dev.medicmobile.org
- Browser: Chrome
- App: webapp
- Version: 3.6 (pre-release), also tested on 2.18
**Additional context**
Being consistent with XLSForm keywords makes it easier for partners to build their own forms. In this case, a partner wanted to track the `start` and `end` time to see how long it takes users to complete a form. Alternatively, the `once(now())` could have been used to get the start time, but that would be a bit of a hack and we don't yet support `once()`. Also, having ODK events and actions (https://github.com/enketo/enketo-core/issues/577) could eventually be used for this as well, but doesn't fully override the need for supporting the XLSForm meta fields.
Also, this issue highlights the need to document differences with XLSForm and ODK XForm standards. If updating [the documentation](https://github.com/medic/medic-docs/blob/master/configuration/forms.md#custom-xpath-functions) is not covered in this issue we should create a new one for that. | priority | incorrect xlsform meta data values in enketo in xlsform there are some of which are relevant and available in our app with enketo of those some provide incorrect values such as start and end which always give midnight gmt for the given day to reproduce steps to reproduce the behavior upload the open the meta form see the incorrect values as notes submit the form and see the same incorrect values being saved expected behavior the start and end values should have the time included with the date to maintain consistency with xlsform it would be nice to also have the user s phonenumber username email that said those fields are less relevant in our context and are available via inputs as a workaround screenshots json fields start end today deviceid deviceid not found subscriberid subscriberid not found simserial simserial not found phonenumber phonenumber not found username username not found email email not found n meta instanceid uuid deprecatedid environment instance gamma dev medicmobile org browser chrome app webapp version pre release also tested on additional context being consistent with xlsform keywords makes it easier for partners to build their own forms in this case a partner wanted to track the start and end time to see how long it takes users to complete a form alternatively the once now could have been used to get the start time but that would be a bit of a hack and we don t yet support once also having odk events and actions could eventually be used for this as well but doesn t fully override the need for supporting the xlsform meta fields also this issue highlights the need to document differences with xlsform and odk xform standards if updating is not 
covered in this issue we should create a new one for that | 1 |
800,619 | 28,372,880,229 | IssuesEvent | 2023-04-12 18:23:15 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | `minimumReleaseAge` instead of `stabilityDays` | type:feature priority-3-medium breaking status:ready | ### What would you like Renovate to be able to do?
Get a flexible way to delay looking for new updates as commented here: https://github.com/renovatebot/renovate/issues/12133#issuecomment-943966256
Related issue: https://github.com/renovatebot/renovate/issues/12310
### If you have any ideas on how this should be implemented, please tell us here.
The summary mentioned by @rarkins is:
- Rename it to be a bit more generic (e.g. "releaseDelay", but better)
- We retire/migrate stabilityDays
- Define a simple format for duration - or find one which already exists - so that it can be e.g. 1d2h3m. We might find use for such a format elsewhere
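A compact duration format like `1d2h3m` is simple to specify and parse. The sketch below is one hypothetical way to do it (illustrative only, not Renovate's implementation; the d/h/m/s unit set is an assumption):

```python
import re

UNIT_SECONDS = {"d": 86_400, "h": 3_600, "m": 60, "s": 1}

def parse_duration(text: str) -> int:
    """Parse a compact duration such as '1d2h3m' into total seconds."""
    parts = re.findall(r"(\d+)([dhms])", text)
    # Reject strings with leftover characters, e.g. '1x' or '1d?'.
    if not parts or "".join(n + u for n, u in parts) != text:
        raise ValueError(f"invalid duration: {text!r}")
    return sum(int(n) * UNIT_SECONDS[u] for n, u in parts)

print(parse_duration("1d2h3m"))  # 93780 seconds
```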
### Is this a feature you are interested in implementing yourself?
No | 1.0 | `minimumReleaseAge` instead of `stabilityDays` - ### What would you like Renovate to be able to do?
Get a flexible way to delay looking for new updates as commented here: https://github.com/renovatebot/renovate/issues/12133#issuecomment-943966256
Related issue: https://github.com/renovatebot/renovate/issues/12310
### If you have any ideas on how this should be implemented, please tell us here.
The summary mentioned by @rarkins is:
- Rename it to be a bit more generic (e.g. "releaseDelay", but better)
- We retire/migrate stabilityDays
- Define a simple format for duration - or find one which already exists - so that it can be e.g. 1d2h3m. We might find use for such a format elsewhere
### Is this a feature you are interested in implementing yourself?
No | priority | minimumreleaseage instead of stabilitydays what would you like renovate to be able to do get a flexible way to delay looking for new updates as commented here related issue if you have any ideas on how this should be implemented please tell us here the summary mentioned by rarkins is rename it to be a bit more generic e g releasedelay but better we retire migrate stabilitydays define a simple format for duration or find one which already exists so that it can be e g we might find use for such a format elsewhere is this a feature you are interested in implementing yourself no | 1 |
354,329 | 10,565,615,277 | IssuesEvent | 2019-10-05 12:53:44 | AY1920S1-CS2103T-F11-2/main | https://api.github.com/repos/AY1920S1-CS2103T-F11-2/main | opened | Add SortCommandTest | priority.Medium type.Task | - add the corresponding test class for Sort Command
- add test classes for other classes that may be associated to /dependent on Sort Command | 1.0 | Add SortCommandTest - - add the corresponding test class for Sort Command
- add test classes for other classes that may be associated to /dependent on Sort Command | priority | add sortcommandtest add the corresponding test class for sort command add test classes for other classes that may be associated to dependent on sort command | 1 |
550,461 | 16,113,125,944 | IssuesEvent | 2021-04-28 01:38:05 | spicygreenbook/greenbook-app | https://api.github.com/repos/spicygreenbook/greenbook-app | closed | About Us Page Copyedit | Priority: Medium Status: Available Type: Enhancement | The about page has been copyedited.
The full document can be found [here ](https://prismic-io.s3.amazonaws.com/spicygreenbook/b9ee4094-1fa8-4916-920a-ae3922d6c962_SGB_AboutUs.docx)
Can we use this document to change the copy on our about us page. | 1.0 | About Us Page Copyedit - The about page has been copyedited.
The full document can be found [here ](https://prismic-io.s3.amazonaws.com/spicygreenbook/b9ee4094-1fa8-4916-920a-ae3922d6c962_SGB_AboutUs.docx)
Can we use this document to change the copy on our about us page. | priority | about us page copyedit the about page has been copyedited the full document can be found can we use this document to change the copy on our about us page | 1 |
69,976 | 3,316,357,869 | IssuesEvent | 2015-11-06 16:34:17 | TeselaGen/Peony-Issue-Tracking | https://api.github.com/repos/TeselaGen/Peony-Issue-Tracking | opened | Grid panels need to be copy/pastable | Customer: DAS Phase I Priority: Medium Type: Enhancement | _From @mfero on August 27, 2015 3:32_
See j5 output for example. This can apply to all grid views.
_Copied from original issue: TeselaGen/ve#1316_ | 1.0 | Grid panels need to be copy/pastable - _From @mfero on August 27, 2015 3:32_
See j5 output for example. This can apply to all grid views.
_Copied from original issue: TeselaGen/ve#1316_ | priority | grid panels need to be copy pastable from mfero on august see output for example this can apply to all grid views copied from original issue teselagen ve | 1 |
81,776 | 3,594,393,889 | IssuesEvent | 2016-02-01 23:28:57 | OCHA-DAP/hdx-ckan | https://api.github.com/repos/OCHA-DAP/hdx-ckan | closed | New Contribute Flow: Loading images & icons | New Contribute Flow Priority-Medium | Show animated loading image during upload & icons for file type | 1.0 | New Contribute Flow: Loading images & icons - Show animated loading image during upload & icons for file type | priority | new contribute flow loading images icons show animated loading image during upload icons for file type | 1 |
555,958 | 16,472,622,043 | IssuesEvent | 2021-05-23 18:15:12 | SkriptLang/Skript | https://api.github.com/repos/SkriptLang/Skript | closed | Issue with region enter | bug completed priority: medium | ### Description
Hi there. I have a issue with enter region with Skript. When I enter region "test" the blocks are set incorrectly and are created outside a region.
### Steps to Reproduce
Create a code:
```vb
on enter region "test":
set all blocks in region "kapitulakaczynskiego" to air
on leave region "test":
set all blocks in region "kapitulakaczynskiego" to grass block
```
"Kapitulakaczynskiego" is a region which remove all blocks under player when he enters the region "test" (images).
### Expected Behavior
Should work like in 2.5.1. ShaneBee has fixed it and send me a fixed version but this it still does not work on my own server but on other works.
### Errors / Screenshots
This one is bugged:

Using 2.5.3 and ShaneBee's one.
This one is correct:

using Skript 2.5.1
<!---
If you have console errors, copy them to a paste service which won't delete them.
DON'T use Hastebin!
---> None.
<!--
Screenshots of bugs visible in-game can also be attached.
--->
Done. They are above.
### Server Information
* **Server version/platform:** This server is running Paper version git-Paper-443 (MC: 1.16.5) (Implementing API version 1.16.5-R0.1-SNAPSHOT)
[10:47:37 INFO]: Checking version, please wait...
[10:47:38 INFO]: Previous version: git-Paper-439 (MC: 1.16.5)
[10:47:38 INFO]: You are running the latest version
* **Skript version:** Skript version 2.5.1. Tested also on 2.5.3
### Additional Context
I'm using these plugins:
Plugins (20): Chunky, ChunkyMap*, Citizens, dynmap*, Essentials, EssentialsChat, HolographicDisplays, LuckPerms, Multiverse-Core, Multiverse-Inventories, PixelPrinter, ServerRestorer, Shopkeepers, Skent*, SkQuery, Skript, TuSKe*, Vault, WorldEdit, WorldGuard. The WorldEdit and WorldGuard are in beta versions. | 1.0 | Issue with region enter - ### Description
Hi there. I have a issue with enter region with Skript. When I enter region "test" the blocks are set incorrectly and are created outside a region.
### Steps to Reproduce
Create a code:
```vb
on enter region "test":
set all blocks in region "kapitulakaczynskiego" to air
on leave region "test":
set all blocks in region "kapitulakaczynskiego" to grass block
```
"Kapitulakaczynskiego" is a region which remove all blocks under player when he enters the region "test" (images).
### Expected Behavior
Should work like in 2.5.1. ShaneBee has fixed it and send me a fixed version but this it still does not work on my own server but on other works.
### Errors / Screenshots
This one is bugged:

Using 2.5.3 and ShaneBee's one.
This one is correct:

using Skript 2.5.1
<!---
If you have console errors, copy them to a paste service which won't delete them.
DON'T use Hastebin!
---> None.
<!--
Screenshots of bugs visible in-game can also be attached.
--->
Done. They are above.
### Server Information
* **Server version/platform:** This server is running Paper version git-Paper-443 (MC: 1.16.5) (Implementing API version 1.16.5-R0.1-SNAPSHOT)
[10:47:37 INFO]: Checking version, please wait...
[10:47:38 INFO]: Previous version: git-Paper-439 (MC: 1.16.5)
[10:47:38 INFO]: You are running the latest version
* **Skript version:** Skript version 2.5.1. Tested also on 2.5.3
### Additional Context
I'm using these plugins:
Plugins (20): Chunky, ChunkyMap*, Citizens, dynmap*, Essentials, EssentialsChat, HolographicDisplays, LuckPerms, Multiverse-Core, Multiverse-Inventories, PixelPrinter, ServerRestorer, Shopkeepers, Skent*, SkQuery, Skript, TuSKe*, Vault, WorldEdit, WorldGuard. The WorldEdit and WorldGuard are in beta versions. | priority | issue with region enter description hi there i have a issue with enter region with skript when i enter region test the blocks are set incorrectly and are created outside a region steps to reproduce create a code vb on enter region test set all blocks in region kapitulakaczynskiego to air on leave region test set all blocks in region kapitulakaczynskiego to grass block kapitulakaczynskiego is a region which remove all blocks under player when he enters the region test images expected behavior should work like in shanebee has fixed it and send me a fixed version but this it still does not work on my own server but on other works errors screenshots this one is bugged using and shanebee s one this one is correct using skript if you have console errors copy them to a paste service which won t delete them don t use hastebin none screenshots of bugs visible in game can also be attached done they are above server information server version platform this server is running paper version git paper mc implementing api version snapshot checking version please wait previous version git paper mc you are running the latest version skript version skript version tested also on additional context i m using these plugins plugins chunky chunkymap citizens dynmap essentials essentialschat holographicdisplays luckperms multiverse core multiverse inventories pixelprinter serverrestorer shopkeepers skent skquery skript tuske vault worldedit worldguard the worldedit and worldguard are in beta versions | 1 |
321,209 | 9,794,978,577 | IssuesEvent | 2019-06-11 01:33:46 | bradnoble/msc-vuejs | https://api.github.com/repos/bradnoble/msc-vuejs | closed | add staging env with staging db | Component: General Priority: Medium Status: Verified Type: Change | We should be able to QA in a staging environment, and also QA against a db that is not our production environment. | 1.0 | add staging env with staging db - We should be able to QA in a staging environment, and also QA against a db that is not our production environment. | priority | add staging env with staging db we should be able to qa in a staging environment and also qa against a db that is not our production environment | 1 |
338,913 | 10,239,372,318 | IssuesEvent | 2019-08-19 18:05:03 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio] Default site blueprints should enforce postfixes by default and have all content type fields with postfixes | priority: medium task | Change all our current blueprints to have the `Site Configuration` property set to true:
```
<form-engine>
<field-name-postfix>true</field-name-postfix>
</form-engine>
```
All content type fields of the blueprints should also have postfixes.
For more info refer to https://github.com/craftercms/craftercms/issues/3198 | 1.0 | [studio] Default site blueprints should enforce postfixes by default and have all content type fields with postfixes - Change all our current blueprints to have the `Site Configuration` property set to true:
```
<form-engine>
<field-name-postfix>true</field-name-postfix>
</form-engine>
```
All content type fields of the blueprints should also have postfixes.
For more info refer to https://github.com/craftercms/craftercms/issues/3198 | priority | default site blueprints should enforce postfixes by default and have all content type fields with postfixes change all our current blueprints to have the site configuration property set to true true all content type fields of the blueprints should also have postfixes for more info refer to | 1 |
250,242 | 7,973,839,462 | IssuesEvent | 2018-07-17 01:39:24 | utra-robosoccer/soccer-embedded | https://api.github.com/repos/utra-robosoccer/soccer-embedded | opened | FreeRTOS: Cube generates FreeRTOS application code stubs, which limits program modularity | Module: FreeRTOS Priority: Medium Type: Maintenance | Cube places all the task stubs, queues, etc in freertos.c, which reduces modularity. A better approach is using Cube to generate the peripheral drivers, and adding the FreeRTOS application code ourselves. | 1.0 | FreeRTOS: Cube generates FreeRTOS application code stubs, which limits program modularity - Cube places all the task stubs, queues, etc in freertos.c, which reduces modularity. A better approach is using Cube to generate the peripheral drivers, and adding the FreeRTOS application code ourselves. | priority | freertos cube generates freertos application code stubs which limits program modularity cube places all the task stubs queues etc in freertos c which reduces modularity a better approach is using cube to generate the peripheral drivers and adding the freertos application code ourselves | 1 |
346,193 | 10,409,206,016 | IssuesEvent | 2019-09-13 08:08:54 | AY1920S1-CS2103T-T11-1/main | https://api.github.com/repos/AY1920S1-CS2103T-T11-1/main | opened | Add the feature to display periodic statements | priority.Medium type.Story | As a user, I can request for periodic statements so that I can revise and reflect on my past expenses. | 1.0 | Add the feature to display periodic statements - As a user, I can request for periodic statements so that I can revise and reflect on my past expenses. | priority | add the feature to display periodic statements as a user i can request for periodic statements so that i can revise and reflect on my past expenses | 1 |
76,198 | 3,482,976,335 | IssuesEvent | 2015-12-30 06:44:26 | antialiasis/serebii-fanfic-awards | https://api.github.com/repos/antialiasis/serebii-fanfic-awards | closed | Add the ability to override eligibility for Yuletide fics | enhancement nominations priority: medium | Could also apply to e.g. fics entered in a one-shot contest that don't get threads until after the beginning of the next year. Basically, we want to be able to tell the system that some fics that weren't actually posted until after the end of the awards year are still eligible. This should make them automatically ineligible for any later awards, too.
We *could* do this with shenanigans involving manipulating posted dates. This has the advantage of working without any new infrastructure to support it... just barely; we'd either need to be able to do a lookup without validating them or manually create the Fic objects in the admin in order to have a post date to manipulate in the first place. Presumably then the post date would be set to Yuletide reveal day. However, this is a bit hackish, and could cause problems if we ever for any reason refetch the fic and its posted date (which would mean we need additional infrastructure anyway).
It would also be possible to simply manually add these fics to the eligibility cache. This is also a bit hackish, but technically the eligibility cache isn't even called a cache in the code, so who is to say we can't just add entries to it? This is pretty convenient; we would need to add two entries for each late-posted Yuletide fic (one to make it eligible for this year, one to make it ineligible for the next), but would need absolutely no additional infrastructure other than adding an admin interface to edit FicEligibility objects, and even has room for the possibility that a Yuletide fic could grow into a chapterfic that *should* be eligible for the next awards (all we'd need to do is remove the entry that says it's ineligible for next year). The error message when someone would try to nominate one of them next year would be a bit misleading, since it would only reference when the fic is posted or updated, but this may be uncommon enough to not really be a problem. It's also not super-convenient for admins to do, but this *is* an exceptional case, and on that front it has the advantage of not requiring the Fic objects to be created in the database first.
The third option would be to add some actual special infrastructure, but after typing all this, that option is looking less attractive by the minute. It would mean a better admin interface, and a better error message, but also an annoying increase in complexity. Yeah, no, let's just go with option two. | 1.0 | Add the ability to override eligibility for Yuletide fics - Could also apply to e.g. fics entered in a one-shot contest that don't get threads until after the beginning of the next year. Basically, we want to be able to tell the system that some fics that weren't actually posted until after the end of the awards year are still eligible. This should make them automatically ineligible for any later awards, too.
We *could* do this with shenanigans involving manipulating posted dates. This has the advantage of working without any new infrastructure to support it... just barely; we'd either need to be able to do a lookup without validating them or manually create the Fic objects in the admin in order to have a post date to manipulate in the first place. Presumably then the post date would be set to Yuletide reveal day. However, this is a bit hackish, and could cause problems if we ever for any reason refetch the fic and its posted date (which would mean we need additional infrastructure anyway).
It would also be possible to simply manually add these fics to the eligibility cache. This is also a bit hackish, but technically the eligibility cache isn't even called a cache in the code, so who is to say we can't just add entries to it? This is pretty convenient; we would need to add two entries for each late-posted Yuletide fic (one to make it eligible for this year, one to make it ineligible for the next), but would need absolutely no additional infrastructure other than adding an admin interface to edit FicEligibility objects, and even has room for the possibility that a Yuletide fic could grow into a chapterfic that *should* be eligible for the next awards (all we'd need to do is remove the entry that says it's ineligible for next year). The error message when someone would try to nominate one of them next year would be a bit misleading, since it would only reference when the fic is posted or updated, but this may be uncommon enough to not really be a problem. It's also not super-convenient for admins to do, but this *is* an exceptional case, and on that front it has the advantage of not requiring the Fic objects to be created in the database first.
The third option would be to add some actual special infrastructure, but after typing all this, that option is looking less attractive by the minute. It would mean a better admin interface, and a better error message, but also an annoying increase in complexity. Yeah, no, let's just go with option two. | priority | add the ability to override eligibility for yuletide fics could also apply to e g fics entered in a one shot contest that don t get threads until after the beginning of the next year basically we want to be able to tell the system that some fics that weren t actually posted until after the end of the awards year are still eligible this should make them automatically ineligible for any later awards too we could do this with shenanigans involving manipulating posted dates this has the advantage of working without any new infrastructure to support it just barely we d either need to be able to do a lookup without validating them or manually create the fic objects in the admin in order to have a post date to manipulate in the first place presumably then the post date would be set to yuletide reveal day however this is a bit hackish and could cause problems if we ever for any reason refetch the fic and its posted date which would mean we need additional infrastructure anyway it would also be possible to simply manually add these fics to the eligibility cache this is also a bit hackish but technically the eligibility cache isn t even called a cache in the code so who is to say we can t just add entries to it this is pretty convenient we would need to add two entries for each late posted yuletide fic one to make it eligible for this year one to make it ineligible for the next but would need absolutely no additional infrastructure other than adding an admin interface to edit ficeligibility objects and even has room for the possibility that a yuletide fic could grow into a chapterfic that should be eligible for the next awards all we d need to do is remove the 
entry that says it s ineligible for next year the error message when someone would try to nominate one of them next year would be a bit misleading since it would only reference when the fic is posted or updated but this may be uncommon enough to not really be a problem it s also not super convenient for admins to do but this is an exceptional case and on that front it has the advantage of not requiring the fic objects to be created in the database first the third option would be to add some actual special infrastructure but after typing all this that option is looking less attractive by the minute it would mean a better admin interface and a better error message but also an annoying increase in complexity yeah no let s just go with option two | 1 |