| Unnamed: 0 (int64) | id (float64) | type (string, 1 class) | created_at (string) | repo (string) | repo_url (string) | action (string, 3 classes) | title (string) | labels (string) | body (string) | index (string, 12 classes) | text_combine (string) | label (string, 2 classes) | text (string) | binary_label (int64, 0/1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
141,569 | 5,438,276,235 | IssuesEvent | 2017-03-06 10:00:00 | pybel/pybel-tools | https://api.github.com/repos/pybel/pybel-tools | closed | Collapse based on orthology | medium medium priority | Make a function that collapses nodes based on their orthology connections
```python
def collapse_by_orthology(graph, priority_list=None):
    """Collapse a graph based on the orthology between nodes."""
    priority_list = ['HGNC', 'MGI', 'RGD'] if priority_list is None else priority_list
    ...
``` | 1.0 | Collapse based on orthology - Make a function that collapses nodes based on their orthology connections
```python
def collapse_by_orthology(graph, priority_list=None):
    """Collapse a graph based on the orthology between nodes."""
    priority_list = ['HGNC', 'MGI', 'RGD'] if priority_list is None else priority_list
    ...
``` | priority | collapse based on orthology make a function that collapses nodes based on their orthology connections python def collapse by orthology graph priority list none collapses a graph based on the orthology between nodes priority list if priority list is none else priority list | 1 |
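The collapse requested in the row above can be sketched in plain Python. This is a hypothetical illustration only, not pybel-tools' actual API: the graph is assumed to be a dict mapping each node to its set of orthologous nodes, and nodes are `(namespace, name)` tuples.

```python
def collapse_by_orthology(graph, priority_list=None):
    """Map every node onto the member of its orthology group whose
    namespace ranks highest in priority_list (illustrative sketch)."""
    priority_list = ['HGNC', 'MGI', 'RGD'] if priority_list is None else priority_list
    rank = {ns: i for i, ns in enumerate(priority_list)}
    collapsed = {}
    for node, orthologs in graph.items():
        group = {node} | orthologs
        # Pick the group member with the best (lowest) namespace rank;
        # unknown namespaces sort last.
        collapsed[node] = min(group, key=lambda n: rank.get(n[0], len(rank)))
    return collapsed

mapping = collapse_by_orthology({
    ('MGI', 'Mapk1'): {('HGNC', 'MAPK1')},
    ('HGNC', 'MAPK1'): {('MGI', 'Mapk1')},
})
# Both nodes of the group map onto the HGNC member, the top-priority namespace.
```

The priority list plays the same role as in the issue text: the first namespace that appears in a group wins the collapse.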
162,006 | 6,145,463,706 | IssuesEvent | 2017-06-27 11:35:20 | ressec/thot | https://api.github.com/repos/ressec/thot | opened | Create a command coordinator entity | Domain: Actor Priority: Medium Type: New Feature | ## Description
The purpose of the command coordinator is to have the possibility to have some common commands to be pre-implemented and automatically registered by a command coordinator.
When the terminal (see #20 ) is notified on issued commands, it can pass to the command coordinator some pre-defined commands to be automatically executed. | 1.0 | Create a command coordinator entity - ## Description
The purpose of the command coordinator is to have the possibility to have some common commands to be pre-implemented and automatically registered by a command coordinator.
When the terminal (see #20 ) is notified on issued commands, it can pass to the command coordinator some pre-defined commands to be automatically executed. | priority | create a command coordinator entity description the purpose of the command coordinator is to have the possibility to have some common commands to be pre implemented and automatically registered by a command coordinator when the terminal see is notified on issued commands it can pass to the command coordinator some pre defined commands to be automatically executed | 1 |
681,100 | 23,297,266,556 | IssuesEvent | 2022-08-06 19:31:40 | ansible-collections/azure | https://api.github.com/repos/ansible-collections/azure | closed | Azure Ansible - Doesn't fetch IPConfiguration Public IP Method | medium_priority work in |
##### SUMMARY
When I try to fetch the IP configuration of the network interface, I see that public_ip_allocation_method is set to NULL even though it is dynamic in Azure portal. I even tested by creating a new IP yet it shows NULL.
As a result I'm unable to modify network interface settings like Security Group, I get the error value of public_ip_allocation_method must be one of: Dynamic, Static, got: None found in ip_configurations
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible [core 2.13.2]
config file = /root/azure_ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.1.2
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```
# /root/.ansible/collections/ansible_collections
Collection Version
------------------ -------
azure.azcollection 1.13.0
# /usr/local/lib/python3.8/site-packages/ansible_collections
Collection Version
------------------ -------
azure.azcollection 1.13.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
sh-4.4# ansible-config dump --only-changed
DEFAULT_HOST_LIST(/root/azure_ansible/ansible.cfg) = ['/root/azure_ansible/inventory/test.azure_rm.yml']
DEFAULT_LOAD_CALLBACK_PLUGINS(/root/azure_ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/root/azure_ansible/ansible.cfg) = json
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Centos 7 Docker container
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Get facts for one network interface
azure_rm_networkinterface_info:
resource_group: "{{ resource_group }}"
name: "{{ azure_vm_network_interface }}"
register: azure_network_interface_info
- name: Applying NSG to target NIC
azure_rm_networkinterface:
name: "{{ azure_vm_network_interface }}"
resource_group: "{{ resource_group }}"
subnet_name: "{{ azure_network_interface_info.networkinterfaces[0].subnet }}"
virtual_network: "{{ azure_network_interface_info.networkinterfaces[0].virtual_network.name }}"
ip_configurations: "{{ azure_network_interface_info.networkinterfaces[0].ip_configurations }}"
security_group: "/subscriptions/123456/resourceGroups/test-resource-group/providers/Microsoft.Network/networkSecurityGroups/testing_temp_8"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Fetch `public_ip_allocation_method` instead of returning NULL
Portal shows that the IP is Dynamic (I tested by creating a new IP but still ansible returns NULL for `public_ip_allocation_method`)

##### ACTUAL RESULTS
"ip_configurations": [
{
"application_gateway_backend_address_pools": null,
"application_security_groups": null,
"load_balancer_backend_address_pools": null,
"name": "Ubuntu915",
"primary": true,
"private_ip_address": "10.0.0.5",
"private_ip_address_version": "IPv4",
"private_ip_allocation_method": "Dynamic",
"public_ip_address": "/subscriptions/123456789/resourceGroups/test-resource-group/providers/Microsoft.Network/publicIPAddresses/Ubuntu-915-test",
"public_ip_address_name": "/subscriptions/123456789/resourceGroups/test-resource-group/providers/Microsoft.Network/publicIPAddresses/Ubuntu-915-test",
"public_ip_allocation_method": null
}
],
When trying to change security group, I get the following error
```
"msg": "value of public_ip_allocation_method must be one of: Dynamic, Static, got: None found in ip_configurations"
```
<!--- Paste verbatim command output between quotes -->
```paste below
```
| 1.0 | Azure Ansible - Doesn't fetch IPConfiguration Public IP Method -
##### SUMMARY
When I try to fetch the IP configuration of the network interface, I see that public_ip_allocation_method is set to NULL even though it is dynamic in Azure portal. I even tested by creating a new IP yet it shows NULL.
As a result I'm unable to modify network interface settings like Security Group, I get the error value of public_ip_allocation_method must be one of: Dynamic, Static, got: None found in ip_configurations
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```
ansible [core 2.13.2]
config file = /root/azure_ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.1.2
libyaml = True
```
##### COLLECTION VERSION
<!--- Paste verbatim output from "ansible-galaxy collection list <namespace>.<collection>" between the quotes
for example: ansible-galaxy collection list community.general
-->
```
# /root/.ansible/collections/ansible_collections
Collection Version
------------------ -------
azure.azcollection 1.13.0
# /usr/local/lib/python3.8/site-packages/ansible_collections
Collection Version
------------------ -------
azure.azcollection 1.13.0
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```
sh-4.4# ansible-config dump --only-changed
DEFAULT_HOST_LIST(/root/azure_ansible/ansible.cfg) = ['/root/azure_ansible/inventory/test.azure_rm.yml']
DEFAULT_LOAD_CALLBACK_PLUGINS(/root/azure_ansible/ansible.cfg) = True
DEFAULT_STDOUT_CALLBACK(/root/azure_ansible/ansible.cfg) = json
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Centos 7 Docker container
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Get facts for one network interface
azure_rm_networkinterface_info:
resource_group: "{{ resource_group }}"
name: "{{ azure_vm_network_interface }}"
register: azure_network_interface_info
- name: Applying NSG to target NIC
azure_rm_networkinterface:
name: "{{ azure_vm_network_interface }}"
resource_group: "{{ resource_group }}"
subnet_name: "{{ azure_network_interface_info.networkinterfaces[0].subnet }}"
virtual_network: "{{ azure_network_interface_info.networkinterfaces[0].virtual_network.name }}"
ip_configurations: "{{ azure_network_interface_info.networkinterfaces[0].ip_configurations }}"
security_group: "/subscriptions/123456/resourceGroups/test-resource-group/providers/Microsoft.Network/networkSecurityGroups/testing_temp_8"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Fetch `public_ip_allocation_method` instead of returning NULL
Portal shows that the IP is Dynamic (I tested by creating a new IP but still ansible returns NULL for `public_ip_allocation_method`)

##### ACTUAL RESULTS
"ip_configurations": [
{
"application_gateway_backend_address_pools": null,
"application_security_groups": null,
"load_balancer_backend_address_pools": null,
"name": "Ubuntu915",
"primary": true,
"private_ip_address": "10.0.0.5",
"private_ip_address_version": "IPv4",
"private_ip_allocation_method": "Dynamic",
"public_ip_address": "/subscriptions/123456789/resourceGroups/test-resource-group/providers/Microsoft.Network/publicIPAddresses/Ubuntu-915-test",
"public_ip_address_name": "/subscriptions/123456789/resourceGroups/test-resource-group/providers/Microsoft.Network/publicIPAddresses/Ubuntu-915-test",
"public_ip_allocation_method": null
}
],
When trying to change security group, I get the following error
```
"msg": "value of public_ip_allocation_method must be one of: Dynamic, Static, got: None found in ip_configurations"
```
<!--- Paste verbatim command output between quotes -->
```paste below
```
| priority | azure ansible doesn t fetch ipconfiguration public ip method summary when i try to fetch the ip configuration of the network interface i see that public ip allocation method is set to null even though it is dynamic in azure portal i even tested by creating a new ip yet it shows null as a result i m unable to modify network interface settings like security group i get the error value of public ip allocation method must be one of dynamic static got none found in ip configurations issue type bug report component name ansible version ansible config file root azure ansible ansible cfg configured module search path ansible python module location usr local lib site packages ansible ansible collection location root ansible collections usr share ansible collections executable location usr local bin ansible python version default sep jinja version libyaml true collection version between the quotes for example ansible galaxy collection list community general root ansible collections ansible collections collection version azure azcollection usr local lib site packages ansible collections collection version azure azcollection configuration sh ansible config dump only changed default host list root azure ansible ansible cfg default load callback plugins root azure ansible ansible cfg true default stdout callback root azure ansible ansible cfg json os environment centos docker container steps to reproduce name get facts for one network interface azure rm networkinterface info resource group resource group name azure vm network interface register azure network interface info name applying nsg to target nic azure rm networkinterface name azure vm network interface resource group resource group subnet name azure network interface info networkinterfaces subnet virtual network azure network interface info networkinterfaces virtual network name ip configurations azure network interface info networkinterfaces ip configurations security group subscriptions resourcegroups 
test resource group providers microsoft network networksecuritygroups testing temp expected results fetch public ip allocation method method instead of return null portal shows that the ip is dynamic i tested by creating a new ip but still ansible returns null for public ip allocation method actual results ip configurations application gateway backend address pools null application security groups null load balancer backend address pools null name primary true private ip address private ip address version private ip allocation method dynamic public ip address subscriptions resourcegroups test resource group providers microsoft network publicipaddresses ubuntu test public ip address name subscriptions resourcegroups test resource group providers microsoft network publicipaddresses ubuntu test public ip allocation method null when trying to change security group i get the following error msg value of public ip allocation method must be one of dynamic static got none found in ip configurations paste below | 1 |
475,131 | 13,687,543,255 | IssuesEvent | 2020-09-30 10:16:11 | ooni/probe-engine | https://api.github.com/repos/ooni/probe-engine | closed | Routine releases in Sprint 23 | effort/M priority/medium | This is about preparing a new stable release of ooni/probe-engine as well as a stable release of ooni/probe-cli. We are going to use such releases as the basic building blocks for upcoming probe-desktop and probe-{ios,android} releases.
- [x] Update dependencies
- [x] Update internal/httpheader/useragent.go
- [x] Update version/version.go
- [x] Update internal/resources/assets.go
- [x] Run go generate ./...
- [x] Tag a new version of ooni/probe-engine
- [x] Update again version.go to be alpha
- [x] Create release at GitHub
- [x] Update ooni/probe-engine mobile-staging branch
- [x] Pin ooni/probe-cli to ooni/probe-engine
- [x] Pin ooni/probe-android to latest mobile-staging
- [x] Pin ooni/probe-ios to latest mobile-staging
| 1.0 | Routine releases in Sprint 23 - This is about preparing a new stable release of ooni/probe-engine as well as a stable release of ooni/probe-cli. We are going to use such releases as the basic building blocks for upcoming probe-desktop and probe-{ios,android} releases.
- [x] Update dependencies
- [x] Update internal/httpheader/useragent.go
- [x] Update version/version.go
- [x] Update internal/resources/assets.go
- [x] Run go generate ./...
- [x] Tag a new version of ooni/probe-engine
- [x] Update again version.go to be alpha
- [x] Create release at GitHub
- [x] Update ooni/probe-engine mobile-staging branch
- [x] Pin ooni/probe-cli to ooni/probe-engine
- [x] Pin ooni/probe-android to latest mobile-staging
- [x] Pin ooni/probe-ios to latest mobile-staging
| priority | routine releases in sprint this is about preparing a new stable release of ooni probe engine as well as a stable release of ooni probe cli we are going to use such releases as the basic building blocks for upcoming probe desktop and probe ios android releases update dependencies update internal httpheader useragent go update version version go update internal resources assets go run go generate tag a new version of ooni probe engine update again version go to be alpha create release at github update ooni probe engine mobile staging branch pin ooni probe cli to ooni probe engine pin ooni probe android to latest mobile staging pin ooni probe ios to latest mobile staging | 1 |
769,547 | 27,011,159,793 | IssuesEvent | 2023-02-10 15:30:21 | authzed/spicedb | https://api.github.com/repos/authzed/spicedb | closed | Some mySQL URIs unable to be parsed by url.Parse | kind/bug priority/2 medium | The changes introduced by https://github.com/authzed/spicedb/pull/1129 cause mySQL URIs generated with https://pkg.go.dev/github.com/go-sql-driver/mysql#Config.FormatDSN to not work. The DSNs generated by the library are not able to be parsed by `url.Parse`. Can this be updated to use https://pkg.go.dev/github.com/go-sql-driver/mysql#ParseDSN instead of `url.Parse` or is the intention that SpiceDB only accepts URI connection strings? | 1.0 | Some mySQL URIs unable to be parsed by url.Parse - The changes introduced by https://github.com/authzed/spicedb/pull/1129 cause mySQL URIs generated with https://pkg.go.dev/github.com/go-sql-driver/mysql#Config.FormatDSN to not work. The DSNs generated by the library are not able to be parsed by `url.Parse`. Can this be updated to use https://pkg.go.dev/github.com/go-sql-driver/mysql#ParseDSN instead of `url.Parse` or is the intention that SpiceDB only accepts URI connection strings? | priority | some mysql uris unable to be parsed by url parse the changes introduced by cause mysql uris generated with to not work the dsns generated by the library are not able to be parsed by url parse can this be updated to use instead of url parse or is the intention that spicedb only accepts uri connection strings | 1 |
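The incompatibility described in this row comes down to Go's mysql DSN format not being a URL at all. A small Python illustration (using `urllib.parse.urlparse` as a stand-in for Go's `url.Parse`; both connection strings below are made up) shows why a generic URL parser cannot recover the host from a DSN:

```python
from urllib.parse import urlparse

# Hypothetical strings: a Go-style mysql DSN (the shape FormatDSN emits)
# and the URI form that a generic URL parser expects.
dsn = "user:secret@tcp(db.example.com:3306)/mydb"
uri = "mysql://user:secret@db.example.com:3306/mydb"

parsed_dsn = urlparse(dsn)
parsed_uri = urlparse(uri)

# The DSN has no "//" authority section, so the host never lands in .hostname:
# the parser reads "user" as a scheme and the rest as an opaque path.
print(parsed_dsn.hostname)  # None
print(parsed_uri.hostname)  # db.example.com
```

This is why a DSN-aware parser (such as the driver's own `ParseDSN`) is needed when the input is not guaranteed to be in URI form.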
68,320 | 3,286,189,932 | IssuesEvent | 2015-10-29 00:35:27 | oshoukry/openpojo | https://api.github.com/repos/oshoukry/openpojo | closed | Adding support for inherited PojoFields similar to PojoClass.getPojoFields()? | auto-migrated Priority-Medium Type-Enhancement | ```
From Luke on wiki FAQ page - Dec 8th:
Is there any support for inheritance?
'PojoFieldFactory?.getPojoFields()' uses 'Class.getDeclaredFields()' , which
does not include inherited fields, which would be nice to have when testing
equals/hashCode and even toString in some cases...
Is there any plans for something along the lines of apache commons
'EqualsBuilder?.appendSuper()' etc?
```
Original issue reported on code.google.com by `oshou...@gmail.com` on 17 Jan 2012 at 6:18 | 1.0 | Adding support for inherited PojoFields similar to PojoClass.getPojoFields()? - ```
From Luke on wiki FAQ page - Dec 8th:
Is there any support for inheritance?
'PojoFieldFactory?.getPojoFields()' uses 'Class.getDeclaredFields()' , which
does not include inherited fields, which would be nice to have when testing
equals/hashCode and even toString in some cases...
Is there any plans for something along the lines of apache commons
'EqualsBuilder?.appendSuper()' etc?
```
Original issue reported on code.google.com by `oshou...@gmail.com` on 17 Jan 2012 at 6:18 | priority | adding support for inherited pojofields similar to pojoclass getpojofields from luke on wiki faq page dec is there any support for inheritance pojofieldfactory getpojofields uses class getdeclaredfields which does not include inherited fields which would be nice to have when testing equals hashcode and even tostring in some cases is there any plans for something along the lines of apache commons equalsbuilder appendsuper etc original issue reported on code google com by oshou gmail com on jan at | 1 |
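The inheritance gap quoted above has a direct analogue in Python, sketched below: per-class attribute lookup (like Java's `Class.getDeclaredFields()`) misses inherited fields until you walk the class hierarchy (the MRO here, superclasses in Java). Function names are illustrative, not OpenPojo's API:

```python
def declared_fields(cls):
    """Fields declared directly on cls only, like Class.getDeclaredFields()."""
    return {k for k in vars(cls) if not k.startswith('__')}

def all_fields(cls):
    """Declared plus inherited fields, merged across the class hierarchy."""
    fields = set()
    for klass in cls.__mro__[:-1]:  # walk the hierarchy, skipping `object`
        fields |= declared_fields(klass)
    return fields

class Base:
    base_id = 1

class Child(Base):
    child_name = "x"

assert declared_fields(Child) == {"child_name"}        # inherited field missing
assert all_fields(Child) == {"child_name", "base_id"}  # hierarchy walk finds it
```

The same superclass walk is what an equals/hashCode test would need in Java to see fields defined above the class under test.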
663,360 | 22,191,603,961 | IssuesEvent | 2022-06-07 00:01:42 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] PITR: RESTORE_SYS_CATALOG operation returns a segmentation fault when replaying during tablet bootstrap | kind/bug area/docdb priority/medium | Jira Link: [DB-565](https://yugabyte.atlassian.net/browse/DB-565)
### Description
During tablet bootstrap if a committed RESTORE_SYS_CATALOG operation replays, then it gives a segmentation fault with the following stack trace:
```
[m-2] PC: @ 0x0 (unknown)
[m-2] *** SIGSEGV (@0x0) received by PID 776151 (TID 0xffff9b867740) from PID 0; stack trace: ***
[m-2] @ 0xffffac4507a0 ([vdso]+0x79f)
[m-2] @ 0xffffab7bbbd7 yb::tablet::Tablet::UnregisterOperationFilter()
[m-2] @ 0xffffab83684b yb::tablet::Operation::Replicated()
[m-2] @ 0xffffab7de763 yb::tablet::TabletBootstrap::PlayTabletSnapshotRequest()
[m-2] @ 0xffffab7dd07f yb::tablet::TabletBootstrap::PlayAnyRequest()
[m-2] @ 0xffffab7dc9c7 yb::tablet::TabletBootstrap::MaybeReplayCommittedEntry()
[m-2] @ 0xffffab7da5b3 yb::tablet::TabletBootstrap::ApplyCommittedPendingReplicates()
[m-2] @ 0xffffab7d562b yb::tablet::TabletBootstrap::PlaySegments()
[m-2] @ 0xffffab7d197f yb::tablet::TabletBootstrap::Bootstrap()
[m-2] @ 0xffffab7d0ef3 yb::tablet::BootstrapTabletImpl()
[m-2] @ 0xffffab7e4783 yb::tablet::BootstrapTablet()
[m-2] @ 0xffffac0d2937 yb::master::SysCatalogTable::OpenTablet()
[m-2] @ 0xffffac0cec3f yb::master::SysCatalogTable::Load()
```
```
yb-master: /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220506040442-4cda3bf56c-almalinux8-aarch64-clang12/installed/uninstrumented/include/boost/intrusive/list.hpp:1309: boost::intrusive::list_impl::iterator boost::intrusive::list_impl<boost::intrusive::bhtraits<yb::tablet::OperationFilter, boost::intrusive::list_node_traits<void *>, boost::intrusive::safe_link, boost::intrusive::dft_tag, 1>, unsigned long, true, void>::iterator_to(boost::intrusive::list_impl::reference) [ValueTraits = boost::intrusive::bhtraits<yb::tablet::OperationFilter, boost::intrusive::list_node_traits<void *>, boost::intrusive::safe_link, boost::intrusive::dft_tag, 1>, SizeType = unsigned long, ConstantTimeSize = true, HeaderHolder = void]: Assertion `!node_algorithms::inited(this->priv_value_traits().to_node_ptr(value))' failed.
[m-2] *** Aborted at 1652974512 (unix time) try "date -d @1652974512" if you are using GNU date ***
```
This is due to a null operation filter. We should guard against such a situation. | 1.0 | [DocDB] PITR: RESTORE_SYS_CATALOG operation returns a segmentation fault when replaying during tablet bootstrap - Jira Link: [DB-565](https://yugabyte.atlassian.net/browse/DB-565)
### Description
During tablet bootstrap if a committed RESTORE_SYS_CATALOG operation replays, then it gives a segmentation fault with the following stack trace:
```
[m-2] PC: @ 0x0 (unknown)
[m-2] *** SIGSEGV (@0x0) received by PID 776151 (TID 0xffff9b867740) from PID 0; stack trace: ***
[m-2] @ 0xffffac4507a0 ([vdso]+0x79f)
[m-2] @ 0xffffab7bbbd7 yb::tablet::Tablet::UnregisterOperationFilter()
[m-2] @ 0xffffab83684b yb::tablet::Operation::Replicated()
[m-2] @ 0xffffab7de763 yb::tablet::TabletBootstrap::PlayTabletSnapshotRequest()
[m-2] @ 0xffffab7dd07f yb::tablet::TabletBootstrap::PlayAnyRequest()
[m-2] @ 0xffffab7dc9c7 yb::tablet::TabletBootstrap::MaybeReplayCommittedEntry()
[m-2] @ 0xffffab7da5b3 yb::tablet::TabletBootstrap::ApplyCommittedPendingReplicates()
[m-2] @ 0xffffab7d562b yb::tablet::TabletBootstrap::PlaySegments()
[m-2] @ 0xffffab7d197f yb::tablet::TabletBootstrap::Bootstrap()
[m-2] @ 0xffffab7d0ef3 yb::tablet::BootstrapTabletImpl()
[m-2] @ 0xffffab7e4783 yb::tablet::BootstrapTablet()
[m-2] @ 0xffffac0d2937 yb::master::SysCatalogTable::OpenTablet()
[m-2] @ 0xffffac0cec3f yb::master::SysCatalogTable::Load()
```
```
yb-master: /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20220506040442-4cda3bf56c-almalinux8-aarch64-clang12/installed/uninstrumented/include/boost/intrusive/list.hpp:1309: boost::intrusive::list_impl::iterator boost::intrusive::list_impl<boost::intrusive::bhtraits<yb::tablet::OperationFilter, boost::intrusive::list_node_traits<void *>, boost::intrusive::safe_link, boost::intrusive::dft_tag, 1>, unsigned long, true, void>::iterator_to(boost::intrusive::list_impl::reference) [ValueTraits = boost::intrusive::bhtraits<yb::tablet::OperationFilter, boost::intrusive::list_node_traits<void *>, boost::intrusive::safe_link, boost::intrusive::dft_tag, 1>, SizeType = unsigned long, ConstantTimeSize = true, HeaderHolder = void]: Assertion `!node_algorithms::inited(this->priv_value_traits().to_node_ptr(value))' failed.
[m-2] *** Aborted at 1652974512 (unix time) try "date -d @1652974512" if you are using GNU date ***
```
This is due to a null operation filter. We should guard against such a situation. | priority | pitr restore sys catalog operation returns a segmentation fault when replaying during tablet bootstrap jira link description during tablet bootstrap if a committed restore sys catalog operation replays then it gives a segmentation fault with the following stack trace pc unknown sigsegv received by pid tid from pid stack trace yb tablet tablet unregisteroperationfilter yb tablet operation replicated yb tablet tabletbootstrap playtabletsnapshotrequest yb tablet tabletbootstrap playanyrequest yb tablet tabletbootstrap maybereplaycommittedentry yb tablet tabletbootstrap applycommittedpendingreplicates yb tablet tabletbootstrap playsegments yb tablet tabletbootstrap bootstrap yb tablet bootstraptabletimpl yb tablet bootstraptablet yb master syscatalogtable opentablet yb master syscatalogtable load yb master opt yb build thirdparty yugabyte db thirdparty installed uninstrumented include boost intrusive list hpp boost intrusive list impl iterator boost intrusive list impl boost intrusive safe link boost intrusive dft tag unsigned long true void iterator to boost intrusive list impl reference assertion node algorithms inited this priv value traits to node ptr value failed aborted at unix time try date d if you are using gnu date this is due to a null operation filter we should guard against such a situation | 1 |
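The suggested guard can be sketched language-neutrally. This hypothetical Python model (not the actual `yb::tablet` C++ code) shows the safe no-op path for a null or never-registered filter:

```python
class Tablet:
    """Toy stand-in for a tablet holding registered operation filters."""

    def __init__(self):
        self.filters = []

    def register_filter(self, f):
        self.filters.append(f)

    def unregister_filter(self, f):
        # Guard: during bootstrap replay the filter may never have been
        # registered, so unregistering must tolerate None / unknown filters.
        if f is None or f not in self.filters:
            return
        self.filters.remove(f)

t = Tablet()
t.unregister_filter(None)  # previously the crash path; now a safe no-op
```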
56,563 | 3,080,254,261 | IssuesEvent | 2015-08-21 20:59:17 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | Context menu of actions for magnet links (an alternative to the dialog shown on link click). | Component-UI enhancement imported Priority-Medium | _From [sc0rpi0n...@gmail.com](https://code.google.com/u/100092996917054333852/) on May 29, 2012 00:26:33_
Would it be possible, from the right-click context menu on a magnet link in chat of the form magnet:?xt=urn:tree:tiger:MMNPMRPVVMWULKISXBX6HYO57WEWRG56XLZXDBY&xl=1848125&dn=MediaInfo_GUI_0.7.21_Windows_i386.exe, to offer actions such as search and add to download queue, in short the same options as on a regular click?
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=761_ | 1.0 | Context menu of actions for magnet links (an alternative to the dialog shown on link click). - _From [sc0rpi0n...@gmail.com](https://code.google.com/u/100092996917054333852/) on May 29, 2012 00:26:33_
Would it be possible, from the right-click context menu on a magnet link in chat of the form magnet:?xt=urn:tree:tiger:MMNPMRPVVMWULKISXBX6HYO57WEWRG56XLZXDBY&xl=1848125&dn=MediaInfo_GUI_0.7.21_Windows_i386.exe, to offer actions such as search and add to download queue, in short the same options as on a regular click?
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=761_ | priority | context menu of actions for magnet links alternative to the dialog on link click from on may is it possible to offer from the right click context menu on a magnet link in chat of the form magnet xt urn tree tiger xl dn mediainfo gui windows exe actions such as search add to download queue in general the same as on a regular click original issue | 1 |
714,013 | 24,547,611,625 | IssuesEvent | 2022-10-12 09:59:45 | IAmTamal/Milan | https://api.github.com/repos/IAmTamal/Milan | closed | [Automations]: Linter using husky pre-commit/pre-push | ✨ goal: improvement 🟨 priority: medium 🤖 aspect: dx 🛠 status : under development hacktoberfest | ### What would you like to share?
Currently eslint is set up but the linting is not automated.
I'd like to make use of `husky` and `lint-staged` to create a `pre-commit` or `pre-push` hooks to run linter and fix auto-fixable issues before a PR can be created
### Additional information
_No response_
### 🥦 Browser
Microsoft Edge
### 👀 Have you checked if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/IAmTamal/Milan/blob/main/CONTRIBUTING.md)
### Are you willing to work on this issue ?
Yes I am willing to submit a PR! | 1.0 | [Automations]: Linter using husky pre-commit/pre-push - ### What would you like to share?
Currently eslint is set up but the linting is not automated.
I'd like to make use of `husky` and `lint-staged` to create a `pre-commit` or `pre-push` hooks to run linter and fix auto-fixable issues before a PR can be created
### Additional information
_No response_
### 🥦 Browser
Microsoft Edge
### 👀 Have you checked if this issue has been raised before?
- [X] I checked and didn't find similar issue
### 🏢 Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/IAmTamal/Milan/blob/main/CONTRIBUTING.md)
### Are you willing to work on this issue ?
Yes I am willing to submit a PR! | priority | linter using husky pre commit pre push what would you like to share currently eslint is set up but the linting is not automated i d like to make use of husky and lint staged to create a pre commit or pre push hooks to run linter and fix auto fixable issues before a pr can be created additional information no response 🥦 browser microsoft edge 👀 have you checked if this issue has been raised before i checked and didn t find similar issue 🏢 have you read the contributing guidelines i have read the are you willing to work on this issue yes i am willing to submit a pr | 1 |
800,352 | 28,362,570,621 | IssuesEvent | 2023-04-12 11:51:50 | uhh-cms/columnflow | https://api.github.com/repos/uhh-cms/columnflow | opened | Error due to `combine_uncs` in yields task | bug medium-priority low-priority | There seems to be an issue related to the `combine_uncs` option when transforming the yields into a string representation:
https://github.com/uhh-cms/columnflow/blob/master/columnflow/tasks/yields.py#L225 | 2.0 | Error due to `combine_uncs` in yields task - There seems to be an issue related to the `combine_uncs` option when transforming the yields into a string representation
https://github.com/uhh-cms/columnflow/blob/master/columnflow/tasks/yields.py#L225 | priority | error due to combine uncs in yields task there seems to be an issue related to the combine uncs option when transforming the yields into a string representation | 1 |
102,108 | 4,150,882,928 | IssuesEvent | 2016-06-15 18:46:09 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | Can CDash report per-command build time? | priority: medium team: kitware type: feature request | Build time is a pain point. It would be useful to have a prioritized list of targets to improve. | 1.0 | Can CDash report per-command build time? - Build time is a pain point. It would be useful to have a prioritized list of targets to improve. | priority | can cdash report per command build time build time is a pain point it would be useful to have a prioritized list of targets to improve | 1 |
231,800 | 7,643,550,496 | IssuesEvent | 2018-05-08 13:04:27 | learnweb/moodle-mod_ratingallocate | https://api.github.com/repos/learnweb/moodle-mod_ratingallocate | closed | Rank Choices: Wrong description for number of choices | Backlog Effort: Very Low Priority: Medium bug | Within the settings dialog, there is the description for number of ranks:
> Number of fields the user is presented to vote on (smaller than number of choices!)
However, the number has to be smaller than or equal to the number of choices. | 1.0 | Rank Choices: Wrong description for number of choices - Within the settings dialog, there is the description for number of ranks:
> Number of fields the user is presented to vote on (smaller than number of choices!)
However, the number has to be smaller or equal to the number of choices. | priority | rank choices wrong description for number of choices within the settings dialog there is the description for number of ranks number of fields the user is presented to vote on smaller than number of choices however the number has to be smaller or equal to the number of choices | 1 |
829,944 | 31,931,339,699 | IssuesEvent | 2023-09-19 07:38:55 | tooget/tooget.github.io | https://api.github.com/repos/tooget/tooget.github.io | opened | Contacts | Reorder and simplify the Resume/CV components | Priority:Medium Task:Enhancement ETR:1W- Domain:UX | ## Goal to achieve
> Briefly describe what this issue aims to achieve and what the end state should be.
- When the tooget.github.io QR code is scanned, show the "Resume" section right away
## Current state
> Briefly describe the current problem, or the possibility of future problems, that led to this issue.
- Feedback gathered while handing out the QR-code business card shows that people are interested in the "Resume" and less interested in the "Coffee chat".
## Suggested assignee
> Mention **only one** assignee with @.
@tooget
## Detailed sub-problems to solve
> List the detailed items (issue-closing conditions) as a checklist.
- [ ] TBD
## Possible references
> Add related issue numbers, documents, wiki pages, screenshots, personal opinions, etc. that could help solve the problem.
> If this issue is related to other issues, **be sure to include the issue numbers**
- Related issues: TBD
- Notes: TBD
## Estimated time to resolve
> Select only one estimated duration.
> (If it is not 1W+, change the label.)
- Estimated duration: **1W-**
## Related details
> Select **only one** Reporter, and **one each** of Domain, Priority, and Task.
> (If they are not UX, Medium, and Enhancement, change the labels.)
- Reporter: @tooget
- Domain : **UX**
- Priority: **Medium**
- Task : **Enhancement**
## Expected financial impact
> If resolving this issue is expected to cause a company-wide meaningful change in revenue/cost, enter the figures.
- Expected revenue: 0 KRW/month
- Expected cost: 0 KRW/month
| 1.0 | Contacts | Reorder and simplify the Resume/CV components - ## Goal to achieve
> Briefly describe what this issue aims to achieve and what the end state should be.
- When the tooget.github.io QR code is scanned, show the "Resume" section right away
## Current state
> Briefly describe the current problem, or the possibility of future problems, that led to this issue.
- Feedback gathered while handing out the QR-code business card shows that people are interested in the "Resume" and less interested in the "Coffee chat".
## Suggested assignee
> Mention **only one** assignee with @.
@tooget
## Detailed sub-problems to solve
> List the detailed items (issue-closing conditions) as a checklist.
- [ ] TBD
## Possible references
> Add related issue numbers, documents, wiki pages, screenshots, personal opinions, etc. that could help solve the problem.
> If this issue is related to other issues, **be sure to include the issue numbers**
- Related issues: TBD
- Notes: TBD
## Estimated time to resolve
> Select only one estimated duration.
> (If it is not 1W+, change the label.)
- Estimated duration: **1W-**
## Related details
> Select **only one** Reporter, and **one each** of Domain, Priority, and Task.
> (If they are not UX, Medium, and Enhancement, change the labels.)
- Reporter: @tooget
- Domain : **UX**
- Priority: **Medium**
- Task : **Enhancement**
## Expected financial impact
> If resolving this issue is expected to cause a company-wide meaningful change in revenue/cost, enter the figures.
- Expected revenue: 0 KRW/month
- Expected cost: 0 KRW/month
| priority | contacts reorder and simplify the resume cv components goal to achieve briefly describe what this issue aims to achieve and what the end state should be when the tooget github io qr code is scanned show the resume section right away current state briefly describe the current problem or the possibility of future problems that led to this issue feedback gathered while handing out the qr code business card shows that people are interested in the resume and less interested in the coffee chat suggested assignee mention only one assignee with tooget detailed sub problems to solve list the detailed items issue closing conditions as a checklist tbd possible references add related issue numbers documents wiki pages screenshots personal opinions etc that could help solve the problem if this issue is related to other issues be sure to include the issue numbers related issues tbd notes tbd estimated time to resolve select only one estimated duration if it is not change the label estimated duration related details select only one reporter and one each of domain priority and task if they are not ux medium and enhancement change the labels reporter tooget domain ux priority medium task enhancement expected financial impact if resolving this issue is expected to cause a company wide meaningful change in revenue cost enter the figures expected revenue krw month expected cost krw month | 1 |
534,849 | 15,650,419,786 | IssuesEvent | 2021-03-23 08:58:07 | AY2021S2-CS2113T-T09-4/tp | https://api.github.com/repos/AY2021S2-CS2113T-T09-4/tp | closed | As a user, I can create, save data and load existing data | priority.Medium type.Story | - So that I can work on another device with the saved data.
- I can edit the data file directly if I am an expert user. | 1.0 | As a user, I can create, save data and load existing data - - So that I can work on another device with the saved data.
- I can edit the data file directly if I am an expert user. | priority | as a user i can create save data and load existing data so that i can work on another device with the saved data i can edit the data file directly if i am an expert user | 1 |
111,567 | 4,478,843,468 | IssuesEvent | 2016-08-27 07:45:18 | classilla/tenfourfox | https://api.github.com/repos/classilla/tenfourfox | reopened | Localized versions | auto-migrated Priority-Medium Type-Enhancement | ```
* I read everything above and have demonstrated this bug only occurs on
10.4Fx by testing against this official version of Firefox 4 (not
applicable for startup failure) - specify:
* This is a startup crash or failure to start (Y/N): N
* What steps are necessary to reproduce the bug? These must be reasonably
reliable.
Start tenfourfox on a non english system
* Describe your processor, computer, operating system and any special
things about your environment.
IMac Power PC G4 1,25Ghz
No localization is included into the package. I Try to replace en-en.jar by the
official fr.jar in chrome directory but didn't work. I manage to change search
engines, and dictionaries...
How could we localize tenfourfox ?
```
Original issue reported on code.google.com by `narcoti...@gmail.com` on 22 Mar 2011 at 10:58
* Blocked on: #61 | 1.0 | Localized versions - ```
* I read everything above and have demonstrated this bug only occurs on
10.4Fx by testing against this official version of Firefox 4 (not
applicable for startup failure) - specify:
* This is a startup crash or failure to start (Y/N): N
* What steps are necessary to reproduce the bug? These must be reasonably
reliable.
Start tenfourfox on a non english system
* Describe your processor, computer, operating system and any special
things about your environment.
IMac Power PC G4 1,25Ghz
No localization is included into the package. I Try to replace en-en.jar by the
official fr.jar in chrome directory but didn't work. I manage to change search
engines, and dictionaries...
How could we localize tenfourfox ?
```
Original issue reported on code.google.com by `narcoti...@gmail.com` on 22 Mar 2011 at 10:58
* Blocked on: #61 | priority | localized versions i read everything above and have demonstrated this bug only occurs on by testing against this official version of firefox not applicable for startup failure specify this is a startup crash or failure to start y n n what steps are necessary to reproduce the bug these must be reasonably reliable start tenfourfox on a non english system describe your processor computer operating system and any special things about your environment imac power pc no localization is included into the package i try to replace en en jar by the official fr jar in chrome directory but didn t work i manage to change search engines and dictionaries how could we localize tenfourfox original issue reported on code google com by narcoti gmail com on mar at blocked on | 1 |
431,999 | 12,487,485,244 | IssuesEvent | 2020-05-31 09:20:27 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | Using apostrophe(') in search displays backslash(\) on the result page | Has-PR bug priority: medium | **Describe the bug**
Using apostrophe(') in search displays backslash(\) on the result page
**To Reproduce**
Steps to reproduce the behavior:
1. Go to front page
2. Search any keyword with apostrophe(')
3. Notice in the result page that there are backslashes added before the apostrophe(').
**Expected behavior**
There should be no backslashes
**Screencast**
https://drive.google.com/file/d/1hv9u05rXvzQcpwrXSUAQ5odSStGFT9_G/view
**Screenshots**


**Support ticket links**
https://secure.helpscout.net/conversation/1163576561/73270?folderId=3701263
| 1.0 | Using apostrophe(') in search displays backslash(\) on the result page - **Describe the bug**
Using apostrophe(') in search displays backslash(\) on the result page
**To Reproduce**
Steps to reproduce the behavior:
1. Go to front page
2. Search any keyword with apostrophe(')
3. Notice in the result page that there are backslashes added before the apostrophe(').
**Expected behavior**
There should be no backslashes
**Screencast**
https://drive.google.com/file/d/1hv9u05rXvzQcpwrXSUAQ5odSStGFT9_G/view
**Screenshots**


**Support ticket links**
https://secure.helpscout.net/conversation/1163576561/73270?folderId=3701263
| priority | using apostrophe in search displays backslash on the result page describe the bug using apostrophe in search displays backslash on the result page to reproduce steps to reproduce the behavior go to front page search any keyword with apostrophe notice in the result page that there are backslashes added before the apostrophe expected behavior there should be no backslashes screencast screenshots support ticket links | 1 |
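The stray backslashes in the record above are classic magic-quotes residue: the PHP side runs the search term through `addslashes()`-style escaping and then echoes it without unescaping, so the usual fix is to pass the raw query through `stripslashes()` (or WordPress's `wp_unslash()`) before display. A small Python sketch of the same un-escaping step, purely to illustrate the transformation (the real fix belongs in the plugin's PHP templates):

```python
import re

def strip_added_slashes(text):
    # Undo addslashes()-style escaping: \' -> ', \" -> ", \\ -> \
    return re.sub(r"\\(['\"\\])", r"\1", text)

print(strip_added_slashes("women\\'s health"))  # -> women's health
```

The regex only touches backslashes that precede a quote or another backslash, so unescaped text passes through unchanged.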
332,631 | 10,102,053,346 | IssuesEvent | 2019-07-29 10:10:49 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | conan remote remove fails with editable packages | complex: low priority: medium stage: review type: bug | Trying to update the metadata, it fails because the `PackageEditableLayout` has no `update_metadata()`. The main questions are:
- Should the editable packages still have the metadata methods available?
or
- Should we force always to retrieve the `PackageCacheLayout` sometimes, e.g from the `remote_registry.py`?
```
Traceback (most recent call last):
File "/home/luism/workspace/conan_sources/conans/client/command.py", line 1832, in run
method(args[0][1:])
File "/home/luism/workspace/conan_sources/conans/client/command.py", line 1423, in remote
return self._conan.remote_remove(remote_name)
File "/home/luism/workspace/conan_sources/conans/client/conan_api.py", line 77, in wrapper
return f(*args, **kwargs)
File "/home/luism/workspace/conan_sources/conans/client/conan_api.py", line 922, in remote_remove
return self._cache.registry.remove(remote_name)
File "/home/luism/workspace/conan_sources/conans/client/cache/remote_registry.py", line 301, in remove
with self._cache.package_layout(ref).update_metadata() as metadata:
AttributeError: 'PackageEditableLayout' object has no attribute 'update_metadata'
``` | 1.0 | conan remote remove fails with editable packages - Trying to update the metadata, it fails because the `PackageEditableLayout` has no `update_metadata()`. The main questions are:
- Should the editable packages still have the metadata methods available?
or
- Should we force always to retrieve the `PackageCacheLayout` sometimes, e.g from the `remote_registry.py`?
```
Traceback (most recent call last):
File "/home/luism/workspace/conan_sources/conans/client/command.py", line 1832, in run
method(args[0][1:])
File "/home/luism/workspace/conan_sources/conans/client/command.py", line 1423, in remote
return self._conan.remote_remove(remote_name)
File "/home/luism/workspace/conan_sources/conans/client/conan_api.py", line 77, in wrapper
return f(*args, **kwargs)
File "/home/luism/workspace/conan_sources/conans/client/conan_api.py", line 922, in remote_remove
return self._cache.registry.remove(remote_name)
File "/home/luism/workspace/conan_sources/conans/client/cache/remote_registry.py", line 301, in remove
with self._cache.package_layout(ref).update_metadata() as metadata:
AttributeError: 'PackageEditableLayout' object has no attribute 'update_metadata'
``` | priority | conan remote remove fails with editable packages trying to update the metadata it fails because the packageeditablelayout has no update metadata the main questions are should the editable packages still have the metadata methods available or should we force always to retrieve the packagecachelayout sometimes e g from the remote registry py traceback most recent call last file home luism workspace conan sources conans client command py line in run method args file home luism workspace conan sources conans client command py line in remote return self conan remote remove remote name file home luism workspace conan sources conans client conan api py line in wrapper return f args kwargs file home luism workspace conan sources conans client conan api py line in remote remove return self cache registry remove remote name file home luism workspace conan sources conans client cache remote registry py line in remove with self cache package layout ref update metadata as metadata attributeerror packageeditablelayout object has no attribute update metadata | 1 |
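The traceback above shows `RemoteRegistry.remove()` unconditionally calling `update_metadata()` on whatever layout `package_layout(ref)` returns, which blows up for editable packages. One possible shape for the fix is to skip layouts that carry no metadata. The sketch below models the two layout classes as toy stand-ins; the real Conan classes and the eventual patch may differ (for instance, Conan's `update_metadata()` is actually a context manager):

```python
class PackageCacheLayout:
    # Normal cache layout: owns metadata that can be updated
    def __init__(self):
        self.metadata = {"remote": "myremote"}

    def update_metadata(self):
        return self.metadata


class PackageEditableLayout:
    # Editable package: lives in the user's workspace, no cached metadata
    pass


def clear_remote_in_metadata(layout, remote_name):
    # Guard against editable layouts instead of crashing on the missing method
    if not hasattr(layout, "update_metadata"):
        return False
    metadata = layout.update_metadata()
    if metadata.get("remote") == remote_name:
        metadata["remote"] = None
    return True


print(clear_remote_in_metadata(PackageCacheLayout(), "myremote"))    # -> True
print(clear_remote_in_metadata(PackageEditableLayout(), "myremote"))  # -> False
```

The duck-typing guard answers the issue's first question in the negative: editable packages need not expose the metadata methods at all if callers check before using them.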
278,908 | 8,652,422,404 | IssuesEvent | 2018-11-27 07:59:39 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | samples/subsys/usb/hid/ test hangs on quark_se_c1000_devboard | area: USB bug priority: medium | samples/subsys/usb/hid/ test hangs on quark_se_c1000_devboard
Arch: x86
Board: quark_se_c1000_devboard
Error Console log:
***** Booting Zephyr OS v1.13.0-rc3 *****
[main] [DBG] main: Starting application
[main] [DBG] set_idle_cb: Set Idle callback
[main] [DBG] main: Wrote 2 bytes with ret 0
Steps to reproduce:
cd samples/subsys/usb/hid
rm -rf build && mkdir build && cd build
cmake -D BOARD=quark_se_c1000_devboard ../
make BOARD=quark_se_c1000_devboard flash
| 1.0 | samples/subsys/usb/hid/ test hangs on quark_se_c1000_devboard - samples/subsys/usb/hid/ test hangs on quark_se_c1000_devboard
Arch: x86
Board: quark_se_c1000_devboard
Error Console log:
***** Booting Zephyr OS v1.13.0-rc3 *****
[main] [DBG] main: Starting application
[main] [DBG] set_idle_cb: Set Idle callback
[main] [DBG] main: Wrote 2 bytes with ret 0
Steps to reproduce:
cd samples/subsys/usb/hid
rm -rf build && mkdir build && cd build
cmake -D BOARD=quark_se_c1000_devboard ../
make BOARD=quark_se_c1000_devboard flash
| priority | samples subsys usb hid test hangs on quark se devboard samples subsys usb hid test hangs on quark se devboard arch board quark se devboard error console log booting zephyr os main starting application set idle cb set idle callback main wrote bytes with ret steps to reproduce cd samples subsys usb hid rm rf build mkdir build cd build cmake d board quark se devboard make board quark se devboard flash | 1 |
20,450 | 2,622,849,124 | IssuesEvent | 2015-03-04 08:04:12 | max99x/pagemon-chrome-ext | https://api.github.com/repos/max99x/pagemon-chrome-ext | closed | French translation | auto-migrated Priority-Medium Type-Enhancement | ```
see the attached file.
possible abbreviations if needed :
- "Voir la page d'origine" -> Voir l'original
- "Page mise à jour" -> "Page MàJ"
- "unknown" key : ("inconnu") literal translation but depends on what it's used
for, a more accurate translation can be done if you give me more details.
also is the "test_no_matches" key actually used somewhere ? I can't see it with
a wrong regex or a wrong selector...
some entries are too long to fit in the display, be sure to correct that before
integrate it to the new version.
Please, contact me if you need to change, adapt or reduce anything related to
this translation
```
Original issue reported on code.google.com by `djac...@gmail.com` on 19 Jun 2011 at 1:13
Attachments:
* [messages.json](https://storage.googleapis.com/google-code-attachments/pagemon-chrome-ext/issue-129/comment-0/messages.json)
| 1.0 | French translation - ```
see the attached file.
possible abbreviations if needed :
- "Voir la page d'origine" -> Voir l'original
- "Page mise à jour" -> "Page MàJ"
- "unknown" key : ("inconnu") literal translation but depends on what it's used
for, a more accurate translation can be done if you give me more details.
also is the "test_no_matches" key actually used somewhere ? I can't see it with
a wrong regex or a wrong selector...
some entries are too long to fit in the display, be sure to correct that before
integrate it to the new version.
Please, contact me if you need to change, adapt or reduce anything related to
this translation
```
Original issue reported on code.google.com by `djac...@gmail.com` on 19 Jun 2011 at 1:13
Attachments:
* [messages.json](https://storage.googleapis.com/google-code-attachments/pagemon-chrome-ext/issue-129/comment-0/messages.json)
| priority | french translation see the attached file possible abbreviations if needed voir la page d origine voir l original page mise à jour page màj unknown key inconnu literal translation but depends on what it s used for a more accurate translation can be done if you give me more details also is the test no matches key actually used somewhere i can t see it with a wrong regex or a wrong selector some entries are too long to fit in the display be sure to correct that before integrate it to the new version please contact me if you need to change adapt or reduce anything related to this translation original issue reported on code google com by djac gmail com on jun at attachments | 1 |
467,815 | 13,455,673,104 | IssuesEvent | 2020-09-09 06:38:56 | teamforus/forus | https://api.github.com/repos/teamforus/forus | closed | Provider bug: after email confirmation the signup flow tells me the access token is invalid | Difficulty: Medium Priority: Must have Scope: Medium bug | ## Main assignee: @
**Start from branch** Release v0.14.0
## Context/goal:
1. Was logged out and opened signup flow
2. created profile and verified mail
3. Tried to create an organization and confirmed. The signup flow was not redirecting me to the next page because of an invalid access token (see image below)
<img width="437" alt="Screen Shot 2020-08-20 at 11 20 44" src="https://user-images.githubusercontent.com/38419514/90752151-56c7c480-e2d7-11ea-94a4-5174c09b993d.png">
| 1.0 | Provider bug: after email confirmation the signup flow tells me the access token is invalid - ## Main assignee: @
**Start from branch** Release v0.14.0
## Context/goal:
1. Was logged out and opened signup flow
2. created profile and verified mail
3. Tried to create an organization and confirmed. The signup flow was not redirecting me to the next page because of an invalid access token (see image below)
<img width="437" alt="Screen Shot 2020-08-20 at 11 20 44" src="https://user-images.githubusercontent.com/38419514/90752151-56c7c480-e2d7-11ea-94a4-5174c09b993d.png">
| priority | provider bug after email confirmation the signup flow tells me the access token is invalid main assignee start from branch release context goal was logged out and opened signup flow created profile and verified mail tried to create an organization and confirmed the signup flow was not redirecting me to the next page because of an invalid access token see image below img width alt screen shot at src | 1 |
589,776 | 17,761,185,843 | IssuesEvent | 2021-08-29 18:24:05 | ClinGen/clincoded | https://api.github.com/repos/ClinGen/clincoded | closed | UI changes for Evidence Summary linking | GCI EP request priority: medium | Re-arrange UI to make it more obvious how to link to view the Provisional and Approved evidence summaries in the GCI.


| 1.0 | UI changes for Evidence Summary linking - Re-arrange UI to make it more obvious how to link to view the Provisional and Approved evidence summaries in the GCI.


| priority | ui changes for evidence summary linking re arrange ui to make it more obvious how to link to view the provisional and approved evidence summaries in the gci | 1 |
225,705 | 7,494,391,698 | IssuesEvent | 2018-04-07 09:05:28 | andgein/SIStema | https://api.github.com/repos/andgein/SIStema | closed | Move the T-shirt size question to the profile | priority:2:medium type:feature | This information rarely changes from year to year, and we might also want to collect it for the instructors. | 1.0 | Move the T-shirt size question to the profile - This information rarely changes from year to year, and we might also want to collect it for the instructors. | priority | move the t shirt size question to the profile this information rarely changes from year to year and we might also want to collect it for the instructors | 1 |
353,501 | 10,553,315,319 | IssuesEvent | 2019-10-03 16:56:39 | emory-libraries/ezpaarse-platforms | https://api.github.com/repos/emory-libraries/ezpaarse-platforms | closed | Foundation Center (Candid) | Add Parser Medium Priority | ### Example:star::star: :
http://foundationcenter.org.proxy.library.emory.edu/
### Priority:
Medium
### Subscriber (Library):
Woodruff
### ezPAARSE
Analysis: None | 1.0 | Foundation Center (Candid) - ### Example:star::star: :
http://foundationcenter.org.proxy.library.emory.edu/
### Priority:
Medium
### Subscriber (Library):
Woodruff
### ezPAARSE
Analysis: None | priority | foundation center candid example star star priority medium subscriber library woodruff ezpaarse analysis none | 1 |
68,512 | 3,288,950,612 | IssuesEvent | 2015-10-29 17:01:13 | patrickomni/omnimobileserver | https://api.github.com/repos/patrickomni/omnimobileserver | closed | Alerts - low battery alert not being sent | bug Priority MEDIUM | Pearl's Sendum device (id 05671343) delivered a LOWBATTERYCAP alarm to Classic Omni Listener on 2015-09-20 11:42:04 GMT (see below). When looking at the portal on 2015-09-21 @ 11:20 PDT (18:20 GMT), filter was set to "last 3 days" to ensure the alert would be included and there are no active alerts displayed.
I don't know whether/how Sendum alarm messages are processed - do they result in Omni alerts or do we ignore them and look at our own rules? | 1.0 | Alerts - low battery alert not being sent - Pearl's Sendum device (id 05671343) delivered a LOWBATTERYCAP alarm to Classic Omni Listener on 2015-09-20 11:42:04 GMT (see below). When looking at the portal on 2015-09-21 @ 11:20 PDT (18:20 GMT), filter was set to "last 3 days" to ensure the alert would be included and there are no active alerts displayed.
I don't know whether/how Sendum alarm messages are processed - do they result in Omni alerts or do we ignore them and look at our own rules? | priority | alerts low battery alert not being sent pearl s sendum device id delivered a lowbatterycap alarm to classic omni listener on gmt see below when looking at the portal on pdt gmt filter was set to last days to ensure the alert would be included and there are no active alerts displayed i don t know whether how sendum alarm messages are processed do they result in omni alerts or do we ignore them and look at our own rules | 1 |
25,888 | 2,684,026,527 | IssuesEvent | 2015-03-28 15:47:17 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Resizing tabs with mouse in "panel view" (thumbs/tiles) does not work | 1 star bug imported Priority-Medium | _From [mickem](https://code.google.com/u/mickem/) on October 28, 2011 22:58:55_
Resizing tabs with mouse in "panel view" (thumbs/tiles) does not work
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=449_ | 1.0 | Resizing tabs with mouse in "panel view" (thumbs/tiles) does not work - _From [mickem](https://code.google.com/u/mickem/) on October 28, 2011 22:58:55_
Resizing tabs with mouse in "panel view" (thumbs/tiles) does not work
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=449_ | priority | resizing tabs with mouse in panel view thumbs tiles does not work from on october resizing tabs with mouse in panel view thumbs tiles does not work original issue | 1 |
484,660 | 13,943,412,860 | IssuesEvent | 2020-10-22 23:03:21 | jcr7467/UCLAbookstack | https://api.github.com/repos/jcr7467/UCLAbookstack | opened | Login/signup page still accessible once signed in. | Priority - Medium bug | **Describe the bug**
When we login, we expect the user to not be able to access '/signin' but it still does.
**To Reproduce**
Steps to reproduce the behavior:
1. Sign in
2. manually enter into the url 'localhost:8000/signin' or alternatively, 'uclabookstack.com/signin'
**Expected behavior**
This is an easy issue to fix. We just need to place a middleware function that does not allow access to certain pages if a user is already signed in. (our /signin and /signup routes)
**Screenshots**
In this image, we can see that we are still on the login page, but on the upper right you can also see that I am already logged in. This is also true with the signup page
<img width="1265" alt="Screen Shot 2020-10-22 at 4 01 57 PM" src="https://user-images.githubusercontent.com/25701682/96938692-ec223900-147f-11eb-82ba-4d92ff50cdc9.png">
| 1.0 | Login/signup page still accessible once signed in. - **Describe the bug**
When we login, we expect the user to not be able to access '/signin' but it still does.
**To Reproduce**
Steps to reproduce the behavior:
1. Sign in
2. manually enter into the url 'localhost:8000/signin' or alternatively, 'uclabookstack.com/signin'
**Expected behavior**
This is an easy issue to fix. We just need to place a middleware function that does not allow access to certain pages if a user is already signed in. (our /signin and /signup routes)
**Screenshots**
In this image, we can see that we are still on the login page, but on the upper right you can also see that I am already logged in. This is also true with the signup page
<img width="1265" alt="Screen Shot 2020-10-22 at 4 01 57 PM" src="https://user-images.githubusercontent.com/25701682/96938692-ec223900-147f-11eb-82ba-4d92ff50cdc9.png">
| priority | login signup page still accessible once signed in describe the bug when we login we expect the user to not be able to access signin but it still does to reproduce steps to reproduce the behavior sign in manually enter into the url localhost signin or alternatively uclabookstack com signin expected behavior this is an easy issue to fix we just need to place a middleware function that does not allow access to certain pages if a user is already signed in our signin and signup routes screenshots in this image we can see that we are still on the login page but on the upper right you can also see that i am already logged in this is also true with the signup page img width alt screen shot at pm src | 1 |
597,436 | 18,163,456,243 | IssuesEvent | 2021-09-27 12:20:05 | robbinjanssen/home-assistant-omnik-inverter | https://api.github.com/repos/robbinjanssen/home-assistant-omnik-inverter | closed | Status.html support? | enhancement new-feature priority-medium | i use an omnik 2500tl. this inverter does not have a json or js page but it just posts it in a status.html. is it possible to support this? as it is webdata is directly visible in the html code? any suggestions? | 1.0 | Status.html support? - i use an omnik 2500tl. this inverter does not have a json or js page but it just posts it in a status.html. is it possible to support this? as it is webdata is directly visible in the html code? any suggestions? | priority | status html support i use an omnik this inverter does not have a json or js page but it just posts it in a status html is it possible to support this as it is webdata is directly visible in the html code any suggestions | 1 |
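For firmwares that only serve `status.html`, a common workaround is to fetch the page and extract the readings from the embedded `webData` JavaScript variable with a regular expression. The sketch below shows only the parsing step; the sample string and the field positions are assumptions (real Omnik field order varies by firmware), so treat the indices as placeholders:

```python
import re

# Made-up sample of an Omnik-style status.html payload
SAMPLE_HTML = '<script>var webData="NLBN1234567890,omnik2500tl,V5.07,V4.12,1850,12345,678901";</script>'

def parse_status_html(html):
    # Pull the comma-separated webData string out of the page
    match = re.search(r'var webData\s*=\s*"([^"]*)"', html)
    if match is None:
        return None
    fields = match.group(1).split(",")
    return {
        "serial": fields[0],
        "model": fields[1],
        "current_power_w": int(fields[4]),  # position assumed
        "yield_today": int(fields[5]),      # position assumed
        "yield_total": int(fields[6]),      # position assumed
    }

print(parse_status_html(SAMPLE_HTML)["current_power_w"])  # -> 1850
```

Returning `None` when the variable is missing lets the caller distinguish an unsupported firmware from a transient fetch error.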
577,183 | 17,104,912,458 | IssuesEvent | 2021-07-09 16:11:48 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | [feature] New AutotoolsDeps customization | complex: low priority: medium stage: queue type: feature | The `AutotoolsDeps` generator should precalculate things in the constructor to allow the user to modify them before calling the `generate()` method. | 1.0 | [feature] New AutotoolsDeps customization - The `AutotoolsDeps` generator should precalculate things in the constructor to allow the user to modify them before calling the `generate()` method. | priority | new autotoolsdeps customization the autotoolsdeps generator should precalculate things in the constructor to allow the user to modify them before calling the generate method | 1 |
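The feature request above boils down to: compute the generator's environment eagerly in the constructor, let the user edit it, and only render files when `generate()` is called. A toy model of that behavior (this is not Conan's real `AutotoolsDeps` API; the class and attribute names are illustrative):

```python
class AutotoolsDepsSketch:
    # Toy generator: flags are precalculated in __init__, editable by the
    # caller, and only rendered when generate() is invoked.
    def __init__(self, dep_info):
        # Precalculate everything up front instead of inside generate()
        self.environment = {
            "CPPFLAGS": " ".join(f"-I{d}" for d in dep_info["includedirs"]),
            "LIBS": " ".join(f"-l{lib}" for lib in dep_info["libs"]),
        }

    def generate(self):
        # Render the (possibly user-modified) environment
        return "\n".join(f'export {k}="{v}"' for k, v in sorted(self.environment.items()))


deps = AutotoolsDepsSketch({"includedirs": ["/inc"], "libs": ["z"]})
deps.environment["LIBS"] += " -lm"  # user customization before generate()
print(deps.generate())
```

Splitting "compute" from "write" is what makes the customization window possible: anything the user changes on `environment` between construction and `generate()` ends up in the rendered output.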
142,422 | 5,475,177,584 | IssuesEvent | 2017-03-11 08:06:01 | Rsl1122/Plan-PlayerAnalytics | https://api.github.com/repos/Rsl1122/Plan-PlayerAnalytics | closed | ConcurrentModificationException | Bug Priority: MEDIUM | Spigot 1.11.2, Plan 2.8.2, 3:00 after server start:
> [12:03:30 INFO]: [Plan] Analysis | Starting Boot Analysis..
[12:03:30 WARN]: [Plan] Plugin Plan v2.8.2 generated an exception while executing task 842
java.util.ConcurrentModificationException
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1380) ~[?:1.8.0_121]
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) ~[?:1.8.0_121]
at main.java.com.djrapitops.plan.utilities.AnalysisUtils.transformSessionDataToLengths(AnalysisUtils.java:89) ~[?:?]
at main.java.com.djrapitops.plan.utilities.Analysis$1.run(Analysis.java:150) ~[?:?]
at org.bukkit.craftbukkit.v1_11_R1.scheduler.CraftTask.run(CraftTask.java:71) ~[spigot-1.11.2.jar-2017-03-10-0649:git-Spigot-283de8b-eac8591]
at org.bukkit.craftbukkit.v1_11_R1.scheduler.CraftAsyncTask.run(CraftAsyncTask.java:52) [spigot-1.11.2.jar-2017-03-10-0649:git-Spigot-283de8b-eac8591]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[12:04:35 INFO]: [StreetlightsAdvanced] Activated streetlights in world_blackdog
[12:04:35 INFO]: [StreetlightsAdvanced] Activated streetlights in world_city | 1.0 | ConcurrentModificationException - Spigot 1.11.2, Plan 2.8.2, 3:00 after server start:
> [12:03:30 INFO]: [Plan] Analysis | Starting Boot Analysis..
[12:03:30 WARN]: [Plan] Plugin Plan v2.8.2 generated an exception while executing task 842
java.util.ConcurrentModificationException
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1380) ~[?:1.8.0_121]
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580) ~[?:1.8.0_121]
at main.java.com.djrapitops.plan.utilities.AnalysisUtils.transformSessionDataToLengths(AnalysisUtils.java:89) ~[?:?]
at main.java.com.djrapitops.plan.utilities.Analysis$1.run(Analysis.java:150) ~[?:?]
at org.bukkit.craftbukkit.v1_11_R1.scheduler.CraftTask.run(CraftTask.java:71) ~[spigot-1.11.2.jar-2017-03-10-0649:git-Spigot-283de8b-eac8591]
at org.bukkit.craftbukkit.v1_11_R1.scheduler.CraftAsyncTask.run(CraftAsyncTask.java:52) [spigot-1.11.2.jar-2017-03-10-0649:git-Spigot-283de8b-eac8591]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[12:04:35 INFO]: [StreetlightsAdvanced] Activated streetlights in world_blackdog
[12:04:35 INFO]: [StreetlightsAdvanced] Activated streetlights in world_city | priority | concurrentmodificationexception spigot plan after server start analysis starting boot analysis plugin plan generated an exception while executing task java util concurrentmodificationexception at java util arraylist arraylistspliterator foreachremaining arraylist java at java util stream referencepipeline head foreach referencepipeline java at main java com djrapitops plan utilities analysisutils transformsessiondatatolengths analysisutils java at main java com djrapitops plan utilities analysis run analysis java at org bukkit craftbukkit scheduler crafttask run crafttask java at org bukkit craftbukkit scheduler craftasynctask run craftasynctask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java activated streetlights in world blackdog activated streetlights in world city | 1 |
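The exception in the record above comes from `AnalysisUtils.transformSessionDataToLengths` streaming a session list on the async analysis task while the main thread keeps mutating it. The standard Java fix is to snapshot the collection first (e.g. `new ArrayList<>(sessions)` under a lock) and stream the copy. The same snapshot-then-iterate pattern, illustrated in Python rather than the plugin's actual Java:

```python
import threading

sessions = []  # shared list of (login_ts, logout_ts) pairs
sessions_lock = threading.Lock()

def record_session(login_ts, logout_ts):
    with sessions_lock:
        sessions.append((login_ts, logout_ts))

def session_lengths():
    # Copy under the lock, then iterate the snapshot safely
    with sessions_lock:
        snapshot = list(sessions)
    return [logout - login for login, logout in snapshot]

record_session(100, 160)
record_session(200, 290)
print(session_lengths())  # -> [60, 90]
```

The copy is cheap relative to the analysis work, and it means the writer thread never races the reader over the same backing array.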
646,416 | 21,047,179,640 | IssuesEvent | 2022-03-31 17:07:07 | AY2122S2-TIC4002-F18-3/tp2 | https://api.github.com/repos/AY2122S2-TIC4002-F18-3/tp2 | closed | [PE-D] Rename Feature | priority.Medium severity.Low | Description: Tag number not available, showing Rename is completed

<!--session: 1648207698366-0d66add7-7474-4ccf-bbf8-13feb137c709-->
<!--Version: Web v3.4.2-->
-------------
Labels: `severity.Low` `type.FeatureFlaw`
original: jr-mojito/ped#3 | 1.0 | [PE-D] Rename Feature - Description: Tag number not available, showing Rename is completed

<!--session: 1648207698366-0d66add7-7474-4ccf-bbf8-13feb137c709-->
<!--Version: Web v3.4.2-->
-------------
Labels: `severity.Low` `type.FeatureFlaw`
original: jr-mojito/ped#3 | priority | rename feature description tag number not available showing rename is completed labels severity low type featureflaw original jr mojito ped | 1 |
670,099 | 22,673,929,374 | IssuesEvent | 2022-07-04 00:54:53 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | closed | You can disarm and pick up items from within disposal units, lockers and cloning tubes. | Issue: Bug Priority: 1-Urgent Difficulty: 2-Medium Bug: Replicated | ## Description
<!-- Explain your issue in detail, including the steps to reproduce it if applicable. Issues without proper explanation are liable to be closed by maintainers.-->
You can hide within disposals and be safe from retribution while disarming people and taking their weapons and hitting them with it. While being cloned, you can disarm people near your cloning pod and do the same. Picking up items from outside lockers seemed UNRELIABLE. I really couldn't work out when I could and when I couldn't.
To reproduce:
1) Climb into a disposals unit
2) Have someone with an item in their active hand stand outside
3) Toggle disarm intent on, and click on them.
4) you disarm them/shove them.
It makes long-term brigging people even more unsustainable.
**Screenshots**
I have a two hour long video from a round, but it should be easy to reproduce.
**Additional context**
This is a big problem for Security, and is mercilessly exploited by those who are aware of it.
| 1.0 | You can disarm and pick up items from within disposal units, lockers and cloning tubes. - ## Description
<!-- Explain your issue in detail, including the steps to reproduce it if applicable. Issues without proper explanation are liable to be closed by maintainers.-->
You can hide within disposals and be safe from retribution while disarming people and taking their weapons and hitting them with it. While being cloned, you can disarm people near your cloning pod and do the same. Picking up items from outside lockers seemed UNRELIABLE. I really couldn't work out when I could and when I couldn't.
To reproduce:
1) Climb into a disposals unit
2) Have someone with an item in their active hand stand outside
3) Toggle disarm intent on, and click on them.
4) you disarm them/shove them.
It makes long-term brigging people even more unsustainable.
**Screenshots**
I have a two hour long video from a round, but it should be easy to reproduce.
**Additional context**
This is a big problem for Security, and is mercilessly exploited by those who are aware of it.
| priority | you can disarm and pick up items from within disposal units lockers and cloning tubes description you can hide within disposals and be safe from retribution while disarming people and taking their weapons and hitting them with it while being cloned you can disarm people near your cloning pod and do the same picking up itemsf rom outside lockers seemed unreliable i really couldn t work out when i could and when i couldn t to reproduce climb into a disposals unit have someone with an item in their active hand stasnd outside toggle disarm intent on and click on them you disarm them shove them it makes long term brigging peopel even more unsustainable screenshots i have a two hour long video from a round but it should be easy to reproduce additional context this is a big problem for security and is mercilessly exploited by those who are aware of it | 1 |
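The bug above boils down to interaction checks that ignore whether the acting entity is itself stored inside a container. A hedged sketch of the kind of guard that would block it (entity and function names are invented for illustration, not Space Station 14's actual ECS code):

```python
class Entity:
    def __init__(self, name, container=None):
        self.name = name
        self.container = container  # entity this one is stored inside, if any

def can_interact(actor, target, max_range=1.5, distance=1.0):
    # Refuse interactions initiated from inside any container
    # (disposal unit, locker, cloning pod, ...).
    if actor.container is not None:
        return False
    return distance <= max_range

unit = Entity("disposal unit")
hidden = Entity("assistant", container=unit)
guard = Entity("security officer")

print(can_interact(hidden, guard))  # False: actor is inside a container
print(can_interact(guard, unit))    # True: normal in-range interaction
```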
40,682 | 2,868,935,955 | IssuesEvent | 2015-06-05 22:03:41 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Make http://pub.dartlang.org/authorized a real page | bug Fixed Priority-Medium | <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#7064_
----
After you authenticate with oauth, it bounces you to http://pub.dartlang.org/authorized. That's currently a 404.
I'm guessing there is a fix for this in progress, or maybe already done and just not uploaded. This is a tracking bug for that. :) | 1.0 | Make http://pub.dartlang.org/authorized a real page - <a href="https://github.com/munificent"><img src="https://avatars.githubusercontent.com/u/46275?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [munificent](https://github.com/munificent)**
_Originally opened as dart-lang/sdk#7064_
----
After you authenticate with oauth, it bounces you to http://pub.dartlang.org/authorized. That's currently a 404.
I'm guessing there is a fix for this in progress, or maybe already done and just not uploaded. This is a tracking bug for that. :) | priority | make a real page issue by originally opened as dart lang sdk after you authenticate with oauth it bounces you to that s currently a i m guessing there is a fix for this in progress or maybe already done and just not uploaded this is a tracking bug for that | 1 |
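The fix for a 404 on an OAuth redirect URI is simply to register a handler for that path. A minimal sketch of the idea as a WSGI app, exercised without a real server (the page text is invented; the real pub.dartlang.org fix lived in its own server code):

```python
def app(environ, start_response):
    """Minimal WSGI app: serve the OAuth landing page instead of a 404."""
    path = environ.get("PATH_INFO", "/")
    if path == "/authorized":
        status, body = "200 OK", b"Authorization complete. You can close this tab."
    else:
        status, body = "404 Not Found", b"Not found"
    start_response(status, [("Content-Type", "text/plain"),
                            ("Content-Length", str(len(body)))])
    return [body]

captured = {}
def fake_start_response(status, headers):
    captured["status"] = status   # record the status line for inspection

print(app({"PATH_INFO": "/authorized"}, fake_start_response)[0][:13])  # b'Authorization'
print(captured["status"])  # 200 OK
```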
592,918 | 17,934,027,263 | IssuesEvent | 2021-09-10 13:13:27 | ranking-agent/strider | https://api.github.com/repos/ranking-agent/strider | closed | Add Liveness Probe to Strider | Status: Review Needed Priority: Medium | **Issue:** When a pod is moved or restarted, it doesn’t always come back with the data it needs.
**Solution:** Add a liveness probe to Strider. | 1.0 | Add Liveness Probe to Strider - **Issue:** When a pod is moved or restarted, it doesn’t always come back with the data it needs.
**Solution:** Add a liveness probe to Strider. | priority | add liveness probe to strider issue when a pod is moved or restarted it doesn’t always come back with the data it needs solution add a liveness probe to strider | 1 |
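A liveness probe typically just polls an HTTP health endpoint and restarts the pod on repeated failure. A sketch of the application side of such a check (the endpoint shape and the "required data" keys are assumptions for illustration; Strider's actual probe lives in its Kubernetes manifests):

```python
import json

REQUIRED_KEYS = {"kp_registry", "node_norm"}  # hypothetical data the pod must have loaded

def healthz(loaded):
    """Return (status_code, body) for a liveness/readiness probe to poll."""
    missing = REQUIRED_KEYS - set(loaded)
    if missing:
        return 503, json.dumps({"status": "unhealthy", "missing": sorted(missing)})
    return 200, json.dumps({"status": "ok"})

print(healthz({"kp_registry": {}, "node_norm": {}})[0])  # 200
print(healthz({"kp_registry": {}})[0])                   # 503
```

Failing the probe (503) when required data is missing is what lets Kubernetes notice that a restarted pod "came back without the data it needs" and recycle it.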
30,421 | 2,723,808,877 | IssuesEvent | 2015-04-14 14:39:06 | CruxFramework/crux-widgets | https://api.github.com/repos/CruxFramework/crux-widgets | closed | Make Wizard widget generic to read and write context data | CruxWidgets enhancement imported Milestone-3.0.0 Priority-Medium | _From [tr_busta...@yahoo.com.br](https://code.google.com/u/115454294030253308352/) on July 18, 2010 13:20:04_
Purpose of enhancement Make Wizard widget generic to read and write context data
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=150_ | 1.0 | Make Wizard widget generic to read and write context data - _From [tr_busta...@yahoo.com.br](https://code.google.com/u/115454294030253308352/) on July 18, 2010 13:20:04_
Purpose of enhancement Make Wizard widget generic to read and write context data
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=150_ | priority | make wizard widget generic to read and write context data from on july purpose of enhancement make wizard widget generic to read and write context data original issue | 1 |
445,035 | 12,825,311,294 | IssuesEvent | 2020-07-06 14:47:05 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID :211039] Out-of-bounds access in drivers/gpio/gpio_nrfx.c | Coverity bug priority: medium |
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/8e2c4a475dc375da6691175dd1da87525053ed76/drivers/gpio/gpio_nrfx.c#L105
Category: Memory - corruptions
Function: `gpiote_pin_int_cfg`
Component: Drivers
CID: [211039](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=211039)
Details:
```
99 {
100 struct gpio_nrfx_data *data = get_port_data(port);
101 const struct gpio_nrfx_cfg *cfg = get_port_cfg(port);
102 uint32_t abs_pin = NRF_GPIO_PIN_MAP(cfg->port_num, pin);
103 int res = 0;
104
>>> CID 211039: Memory - corruptions (ARRAY_VS_SINGLETON)
>>> Passing "&gpiote_alloc_mask" to function "gpiote_pin_cleanup" which uses it as an array. This might corrupt or misinterpret adjacent memory locations.
105 gpiote_pin_cleanup(&gpiote_alloc_mask, abs_pin);
106 nrf_gpio_cfg_sense_set(abs_pin, NRF_GPIO_PIN_NOSENSE);
107
108 /* Pins trigger interrupts only if pin has been configured to do so */
109 if (data->pin_int_en & BIT(pin)) {
110 if (data->trig_edge & BIT(pin)) {
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v32951/p12996.
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| 1.0 | [Coverity CID :211039] Out-of-bounds access in drivers/gpio/gpio_nrfx.c -
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/8e2c4a475dc375da6691175dd1da87525053ed76/drivers/gpio/gpio_nrfx.c#L105
Category: Memory - corruptions
Function: `gpiote_pin_int_cfg`
Component: Drivers
CID: [211039](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=211039)
Details:
```
99 {
100 struct gpio_nrfx_data *data = get_port_data(port);
101 const struct gpio_nrfx_cfg *cfg = get_port_cfg(port);
102 uint32_t abs_pin = NRF_GPIO_PIN_MAP(cfg->port_num, pin);
103 int res = 0;
104
>>> CID 211039: Memory - corruptions (ARRAY_VS_SINGLETON)
>>> Passing "&gpiote_alloc_mask" to function "gpiote_pin_cleanup" which uses it as an array. This might corrupt or misinterpret adjacent memory locations.
105 gpiote_pin_cleanup(&gpiote_alloc_mask, abs_pin);
106 nrf_gpio_cfg_sense_set(abs_pin, NRF_GPIO_PIN_NOSENSE);
107
108 /* Pins trigger interrupts only if pin has been configured to do so */
109 if (data->pin_int_en & BIT(pin)) {
110 if (data->trig_edge & BIT(pin)) {
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v32951/p12996.
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| priority | out of bounds access in drivers gpio gpio nrfx c static code scan issues found in file category memory corruptions function gpiote pin int cfg component drivers cid details struct gpio nrfx data data get port data port const struct gpio nrfx cfg cfg get port cfg port t abs pin nrf gpio pin map cfg port num pin int res cid memory corruptions array vs singleton passing gpiote alloc mask to function gpiote pin cleanup which uses it as an array this might corrupt or misinterpret adjacent memory locations gpiote pin cleanup gpiote alloc mask abs pin nrf gpio cfg sense set abs pin nrf gpio pin nosense pins trigger interrupts only if pin has been configured to do so if data pin int en bit pin if data trig edge bit pin please fix or provide comments in coverity using the link note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file | 1 |
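The defect class Coverity flags here (ARRAY_VS_SINGLETON) is passing the address of a single variable to a function that indexes it as an array; C performs no bounds check, so element 1 and beyond silently touch adjacent memory. The shape of the bug, sketched in Python where the equivalent mistake fails loudly instead of corrupting memory (function and variable names are illustrative, not the Zephyr driver's API):

```python
def pin_cleanup(alloc_masks, abs_pin):
    """Expects one 32-bit allocation mask per GPIO port, indexed by port."""
    port, bit = divmod(abs_pin, 32)
    alloc_masks[port] &= ~(1 << bit)   # IndexError here if a lone mask was passed
    return alloc_masks

single_mask = [0xFFFFFFFF]        # like passing &scalar in C: an "array" of one element
try:
    pin_cleanup(single_mask, 40)  # abs pin 40 lives on port 1 -> out of bounds
    failed = False
except IndexError:
    failed = True
print(failed)                     # True; in C this would silently corrupt memory

per_port_masks = [0xFFFFFFFF, 0xFFFFFFFF]       # the shape the callee expects
print(hex(pin_cleanup(per_port_masks, 40)[1]))  # 0xfffffeff (bit 8 of port 1 cleared)
```

In C the remedy is either to hand the callee a real per-port array or to change its signature to take a single mask; the Python version only makes the defect class visible.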
301,369 | 9,219,457,325 | IssuesEvent | 2019-03-11 15:27:25 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | Documentation is not support array method | priority: medium type: enhancement 💅 | **Informations**
- **Node.js version**: 10.15.0
- **NPM version**: 6.4.1
- **Strapi version**: 3.0.0-alpha.24.1
- **Database**: SQLite
- **Operating system**: Linux
**What is the current behavior?**
Error thrown, documentation is not generated
```
UnhandledPromiseRejectionWarning: TypeError: current.method.toLowerCase is not a function
at routes.reduce (src/backend/plugins/documentation/services/Documentation.js:433:35)
```
**Steps to reproduce the problem**
1. Create routes.json with next content
```
{
"routes": [
{
"method": ["GET", "POST"],
"path": "/example",
"handler": "Example.example",
"config": {
"policies": []
}
}
]
}
```
2. Start strapi and observe result
**What is the expected behavior?**
Documentation should be generated
| 1.0 | Documentation is not support array method - **Informations**
- **Node.js version**: 10.15.0
- **NPM version**: 6.4.1
- **Strapi version**: 3.0.0-alpha.24.1
- **Database**: SQLite
- **Operating system**: Linux
**What is the current behavior?**
Error thrown, documentation is not generated
```
UnhandledPromiseRejectionWarning: TypeError: current.method.toLowerCase is not a function
at routes.reduce (src/backend/plugins/documentation/services/Documentation.js:433:35)
```
**Steps to reproduce the problem**
1. Create routes.json with next content
```
{
"routes": [
{
"method": ["GET", "POST"],
"path": "/example",
"handler": "Example.example",
"config": {
"policies": []
}
}
]
}
```
2. Start strapi and observe result
**What is the expected behavior?**
Documentation should be generated
| priority | documentation is not support array method informations node js version npm version strapi version alpha database sqlite operating system linux what is the current behavior error throwed documentation is not generated unhandledpromiserejectionwarning typeerror current method tolowercase is not a function at routes reduce src backend plugins documentation services documentation js steps to reproduce the problem create routes json with next content routes method path example handler example example config policies start strapi and observe result what is the expected behavior documentation should be generated | 1 |
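The crash (`current.method.toLowerCase is not a function`) is the classic "string or array of strings" config shape: the code assumed `method` was always a string. A sketch of the normalization that fixes it, written in plain Python rather than Strapi's actual JavaScript:

```python
def normalize_methods(method):
    """Accept 'GET' or ['GET', 'POST'] and return a lowercased list."""
    methods = method if isinstance(method, list) else [method]
    return [m.lower() for m in methods]

route_a = {"method": ["GET", "POST"], "path": "/example"}
route_b = {"method": "GET", "path": "/simple"}

print(normalize_methods(route_a["method"]))  # ['get', 'post']
print(normalize_methods(route_b["method"]))  # ['get']
```

Normalizing at the point where routes are read means every downstream consumer can safely assume a list.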
516,255 | 14,978,208,679 | IssuesEvent | 2021-01-28 10:31:22 | sButtons/sbuttons | https://api.github.com/repos/sButtons/sbuttons | closed | Disable horizontal scrolling of website on small devices | Priority: Medium bug good first issue help wanted up-for-grabs website | On small devices, the website can be scrolled horizontally. This issue is not seen on resizing the window to a smaller width, but on using mobile devices and trying to scroll horizontally. | 1.0 | Disable horizontal scrolling of website on small devices - On small devices, the website can be scrolled horizontally. This issue is not seen on resizing the window to a smaller width, but on using mobile devices and trying to scroll horizontally. | priority | disable horizontal scrolling of website on small devices on small devices the website can be scrolled horizontally this issue is not seen on resizing the window to a smaller width but on using mobile devices and trying to scroll horizontally | 1 |
732,238 | 25,250,828,877 | IssuesEvent | 2022-11-15 14:34:43 | testomatio/app | https://api.github.com/repos/testomatio/app | opened | Option to define a default branch for project | enhancement branches priority medium | - There should be an ability to define main or stable branch for the project
- Such "main" branch should be outlined always in brach list (if no other branches are used)
- "Main" branch label should visible all the time so you know what brach is used right now | 1.0 | Option to define a default branch for project - - There should be an ability to define main or stable branch for the project
- Such "main" branch should be outlined always in brach list (if no other branches are used)
- "Main" branch label should visible all the time so you know what brach is used right now | priority | option to define a default branch for project there should be an ability to define main or stable branch for the project such main branch should be outlined always in brach list if no other branches are used main branch label should visible all the time so you know what brach is used right now | 1 |
731,411 | 25,214,977,936 | IssuesEvent | 2022-11-14 08:26:06 | NIAEFEUP/uporto-schedule-scrapper | https://api.github.com/repos/NIAEFEUP/uporto-schedule-scrapper | closed | No error message on shutdown with auth failure | medium priority | At the moment if the auth fails, the script shuts down as expected. However, it does not print the error message for some reason. I have even set the flush flag to True, to no effect. @miguelpduarte, do you have any idea as to how this behaviour would emerge?
_Originally posted by @joaonmatos in https://github.com/NIAEFEUP/uporto-timetable-scrapper/pull/46#issuecomment-688945552_ | 1.0 | No error message on shutdown with auth failure - At the moment if the auth fails, the script shuts down as expected. However, it does not print the error message for some reason. I have even set the flush flag to True, to no effect. @miguelpduarte, do you have any idea as to how this behaviour would emerge?
_Originally posted by @joaonmatos in https://github.com/NIAEFEUP/uporto-timetable-scrapper/pull/46#issuecomment-688945552_ | priority | no error message on shutdown with auth failure at the moment if the auth fails the script shuts down as expected however it does not print the error message for some reason i have even set the flush flag to true to no effect miguelpduarte do you have any idea as to how this behaviour would emerge originally posted by joaonmatos in | 1 |
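A common remedy for "the script exits but the message never appears" is to write the failure message to stderr rather than stdout: stderr is unbuffered or line-buffered by default, so it survives even an abrupt exit. A checkable sketch (the scenario is hypothetical; the scrapper's exact code path may differ, e.g. if the message is printed in another process):

```python
import io
import sys

def fail_auth(reason, out=sys.stderr):
    # stderr survives an immediate hard exit, where buffered stdout may not.
    print(f"auth failed: {reason}", file=out, flush=True)
    return 1   # exit code for the caller to hand to sys.exit()

buf = io.StringIO()            # stand-in for stderr so the sketch is checkable
code = fail_auth("bad credentials", out=buf)
print(code)                    # 1
print(buf.getvalue().strip())  # auth failed: bad credentials
```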
251,599 | 8,017,839,934 | IssuesEvent | 2018-07-25 17:09:56 | mozilla-services/fx-sig-verify | https://api.github.com/repos/mozilla-services/fx-sig-verify | opened | Lambda function allocated memory may need updating | Medium Priority | Now that we're processing larger files (#46), the memory allocated to the lambda function may also need to be updated.
From today's report after 24 hours of activity, we ran out of memory a number of times (which is a hard fail):
```sh
$ analyze_cloudwatch --report --summarize `lf`
24,089 runs
5,275 seconds execution time
7,577 seconds billed
68,447,689 GBi seconds (AWS Billing Unit)
219 average milliseconds per run
4,122 times we used all memory
17% of runs maxing out memory
384 MBi max memory used
0 times run aborted for excessive time
0% of runs exceeding time limit
0 times retry did not succeed
```
The failed runs should only be on the new nightly builds. | 1.0 | Lambda function allocated memory may need updating - Now that we're processing larger files (#46), the memory allocated to the lambda function may also need to be updated.
From today's report after 24 hours of activity, we ran out of memory a number of times (which is a hard fail):
```sh
$ analyze_cloudwatch --report --summarize `lf`
24,089 runs
5,275 seconds execution time
7,577 seconds billed
68,447,689 GBi seconds (AWS Billing Unit)
219 average milliseconds per run
4,122 times we used all memory
17% of runs maxing out memory
384 MBi max memory used
0 times run aborted for excessive time
0% of runs exceeding time limit
0 times retry did not succeed
```
The failed runs should only be on the new nightly builds. | priority | lambda function allocated memory may need updating now that we re processing larger files the memory allocated to the lambda function may also need to be updated from today s report after hours of activity we ran out of memory a number of times which is a hard fail sh analyze cloudwatch report summarize lf runs seconds execution time seconds billed gbi seconds aws billing unit average milliseconds per run times we used all memory of runs maxing out memory mbi max memory used times run aborted for excessive time of runs exceeding time limit times retry did not succeed the failed runs should only be on the new nightly builds | 1 |
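The percentages in the report above come straight from the counters: 4,122 max-memory runs out of 24,089 total is about 17%. A small sketch of that aggregation (field names are assumptions about the report, not fx-sig-verify's real code):

```python
def summarize(runs, max_memory_runs, timeout_runs):
    pct = lambda n: round(100 * n / runs)   # whole-percent, as in the report
    return {
        "pct_max_memory": pct(max_memory_runs),
        "pct_timeout": pct(timeout_runs),
    }

stats = summarize(runs=24_089, max_memory_runs=4_122, timeout_runs=0)
print(stats)  # {'pct_max_memory': 17, 'pct_timeout': 0}
```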
535,000 | 15,680,238,631 | IssuesEvent | 2021-03-25 02:24:54 | kubesphere/console | https://api.github.com/repos/kubesphere/console | closed | Pipeline log improvement | area/devops kind/feature priority/medium | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
There're a few things about the Pipeline log that we can improve:
* Select the stage and step which error happened
* User can see the whole log output instead of download it every time
* User should be able to follow the log output. It'd be better to offer a switch button to do that
* Be aware of the very big log data. In some cases, the log output might be huge. For example, the whole log output might be more than 1M. Don't let it consume too much memory of the browser.
* In some cases, the critical part of the log error output cannot be found from the steps log.
* The log file download feature is useful only when you want to send it to someone instead of regular checking.

**Why is this needed**:
The log output is the most important part when there're some errors happened. In my view, this is the reason why I think the priority of this improvement request is medium.
/area devops
/priority medium | 1.0 | Pipeline log improvement - <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
There're a few things about the Pipeline log that we can improve:
* Select the stage and step which error happened
* User can see the whole log output instead of download it every time
* User should be able to follow the log output. It'd be better to offer a switch button to do that
* Be aware of the very big log data. In some cases, the log output might be huge. For example, the whole log output might be more than 1M. Don't let it consume too much memory of the browser.
* In some cases, the critical part of the log error output cannot be found from the steps log.
* The log file download feature is useful only when you want to send it to someone instead of regular checking.

**Why is this needed**:
The log output is the most important part when there're some errors happened. In my view, this is the reason why I think the priority of this improvement request is medium.
/area devops
/priority medium | priority | pipeline log improvement what would you like to be added there re a few things about the pipeline log that we can improve select the stage and step which error happened user can see the whole log output instead of download it every time user should be able to follow the log output it d be better to offer a switch button to do that be aware of the very big log data in some cases the log output might be huge for example the whole log output might be more than don t let it consumes too much memory of the browser in some cases the critical part of the log error output cannot be found from the steps log the log file download feature is useful only when you want to send it to someone instead of regular checking why is this needed the log output is the most important part when there re some errors happened in my view this is the reason why i think the priority of this improvement request is medium area devops priority medium | 1 |
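Two of the requested behaviors above, seeing the whole log without downloading it and staying careful with very large logs, are usually reconciled by serving only the tail of the file and paging backwards on demand. A generic sketch of tail-reading without loading the whole file (not KubeSphere's implementation):

```python
import io
import os

def tail_bytes(f, n):
    """Return the last n bytes of a seekable binary stream."""
    f.seek(0, os.SEEK_END)
    size = f.tell()
    f.seek(max(0, size - n))   # clamp so short files are returned whole
    return f.read()

log = io.BytesIO(b"line1\n" * 1000 + b"ERROR: build failed\n")
print(tail_bytes(log, 32).decode())  # line1\nline1\nERROR: build failed\n
```

The browser then only ever holds the requested window of the log, regardless of how large the full output grows.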
488,420 | 14,077,156,389 | IssuesEvent | 2020-11-04 11:34:11 | dmwm/CRABServer | https://api.github.com/repos/dmwm/CRABServer | closed | improve PublisherMaster log | Area: StandalonePublish/ASOless Priority: Medium Status: Done | should print a message when it starts the sleep for the polling loop, since polling time is usually longish (60 min) | 1.0 | improve PublisherMaster log - should print a message when it starts the sleep for the polling loop, since polling time is usually longish (60 min) | priority | improve publishermaster log should print a message when it starts the sleep for the polling loop since polling time is usually longish min | 1 |
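The ask above is just a log line before the long sleep, so an operator can tell a 60-minute poll wait from a hang. A sketch (logger name and interval are illustrative, not CRABServer's actual code):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("PublisherMaster")

def end_of_cycle(sleep_minutes, sleep_fn):
    # Announce the sleep so a long poll interval is distinguishable from a hang.
    logger.info("polling cycle finished; sleeping %d minutes until the next one",
                sleep_minutes)
    sleep_fn(sleep_minutes * 60)

slept = []
end_of_cycle(60, sleep_fn=slept.append)  # sleep injected so the sketch returns instantly
print(slept)  # [3600]
```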
778,487 | 27,318,474,125 | IssuesEvent | 2023-02-24 17:35:12 | AY2223S2-CS2103T-W11-2/tp | https://api.github.com/repos/AY2223S2-CS2103T-W11-2/tp | opened | Have a calendar view of all application deadlines | type.Story priority.Medium | as an Intermediate user so that I can schedule a new interview | 1.0 | Have a calendar view of all application deadlines - as an Intermediate user so that I can schedule a new interview | priority | have a calendar view of all application deadlines as an intermediate user so that i can schedule a new interview | 1 |
43,538 | 2,889,840,888 | IssuesEvent | 2015-06-13 20:18:49 | damonkohler/sl4a | https://api.github.com/repos/damonkohler/sl4a | opened | Add image handling package(s) to Tcl | auto-migrated Priority-Medium Type-Enhancement | _From @GoogleCodeExporter on May 31, 2015 11:25_
```
How would I go about adding image handling for Tcl?
I am personally not familiar with the structure of the interpreters for
this environment, and don't have the know-how to patch them myself... Is
this an exceptionally difficult request?
```
Original issue reported on code.google.com by `iheartmy...@gmail.com` on 1 Apr 2010 at 7:25
_Copied from original issue: damonkohler/android-scripting#275_ | 1.0 | Add image handling package(s) to Tcl - _From @GoogleCodeExporter on May 31, 2015 11:25_
```
How would I go about adding image handling for Tcl?
I am personally not familiar with the structure of the interpreters for
this environment, and don't have the know-how to patch them myself... Is
this an exceptionally difficult request?
```
Original issue reported on code.google.com by `iheartmy...@gmail.com` on 1 Apr 2010 at 7:25
_Copied from original issue: damonkohler/android-scripting#275_ | priority | add image handling package s to tcl from googlecodeexporter on may how would i go about adding image handling for tcl i am personally not familiar with the structure of the interpreters for this environment and don t have the know how to patch them myself is this an exceptionally difficult request original issue reported on code google com by iheartmy gmail com on apr at copied from original issue damonkohler android scripting | 1 |
728,318 | 25,074,946,868 | IssuesEvent | 2022-11-07 14:52:41 | OpenTabletDriver/OpenTabletDriver | https://api.github.com/repos/OpenTabletDriver/OpenTabletDriver | closed | Plugin Manager increasingly takes longer to load on each refresh | bug gui priority:medium | ## Description
<!-- Describe the issue below -->
Root cause appears to be a result of the App.Settings property being changed, causing all event hooks to be rehooked, which eventually causes huge slowdowns and/or hangups.
https://github.com/OpenTabletDriver/OpenTabletDriver/blob/a91c751d2ee57a8edf30225d34ab12fd81ba3ad5/OpenTabletDriver.UX/Windows/PluginManagerWindow.cs#L114-L126
## System Information:
<!-- Please fill out this information -->
| Name | Value |
| ---------------- | ----- |
| Operating System | NixOS Linux
| Software Version | a91c751d2ee57a8edf30225d34ab12fd81ba3ad5 | 1.0 | Plugin Manager increasingly takes longer to load on each refresh - ## Description
<!-- Describe the issue below -->
Root cause appears to be a result of the App.Settings property being changed, causing all event hooks to be rehooked, which eventually causes huge slowdowns and/or hangups.
https://github.com/OpenTabletDriver/OpenTabletDriver/blob/a91c751d2ee57a8edf30225d34ab12fd81ba3ad5/OpenTabletDriver.UX/Windows/PluginManagerWindow.cs#L114-L126
## System Information:
<!-- Please fill out this information -->
| Name | Value |
| ---------------- | ----- |
| Operating System | NixOS Linux
| Software Version | a91c751d2ee57a8edf30225d34ab12fd81ba3ad5 | priority | plugin manager increasingly takes longer to load on each refresh description root cause appears to be as result of the app settings property being changed causing all event hooks to be rehooked and eventually causes huge slowdowns and or hangups system information name value operating system nixos linux software version | 1 |
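The slowdown pattern described above, where every refresh reassigns a settings object and every assignment rehooks handlers so they accumulate, is duplicate subscription. A sketch of an idempotent event hook that avoids it (a generic observer, not Eto.Forms/OpenTabletDriver code):

```python
class Event:
    def __init__(self):
        self._handlers = []

    def subscribe(self, fn):
        if fn not in self._handlers:  # idempotent: re-hooking the same fn is a no-op
            self._handlers.append(fn)

    def fire(self):
        for fn in self._handlers:
            fn()

calls = []
def on_settings_changed():
    calls.append(1)

ev = Event()
for _ in range(5):                 # five "refreshes", each re-subscribing
    ev.subscribe(on_settings_changed)
ev.fire()
print(len(calls))  # 1 -- without the guard this would be 5
```

Note the guard only helps when the same function object is re-subscribed; hooking a freshly created lambda on every refresh defeats the check, which is often exactly how this kind of leak happens. Unsubscribing the old handler before reassigning the settings object works in that case.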
567,859 | 16,903,344,108 | IssuesEvent | 2021-06-24 02:09:42 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.4 Staging-2030]Butchery table issues with Housing Skill value | Category: Gameplay Priority: Medium Regression Type: Bug | Build: 0.9.4 Staging-2030
## Issues
The butchery table seems to have some kind of issue with housing value. By default, it has a skill value of 2 for Kitchen rooms. So basically, if you place a butchery table inside an empty room, that room should be counted as a Kitchen, just like with any other furniture. However, that's not the case.
### Butchery Table

### Room with 1 Butchery Table
Placing a butchery table in an empty room:

### Room with 2 Butchery Table
Here's the behavior of the housing skill value if you place down 2 butchery tables.

### Room with 3 Butchery Table
Now, here's the behavior of the room if you place down 3 butchery tables.

As you can see, one of the butchery tables is still ignored and disabled for some reason.
### 3 Butchery Table with 1 Kitchen Furniture
It doesn't help if you try to place more furniture. One of the tables is still disabled.

### 2 Butchery Tables are Disabled
Here's what happens if you try to place down 3 butchery tables and then pick up the first table you placed down and place it down again.

Two butchery tables become disabled. If you try to do the same thing on the second butchery table, the same issue persists, causing 3 butchery tables to become disabled.

## Reproduction Steps
The steps indicated here are steps to emphasize the issue greatly. These are not specific or exclusive steps for the issue to get triggered.
1. Obtain at least 3 butchery tables.
2. Place down all of them in an empty room.
3. Pick up the first table you placed down and place it down again.
4. Do the same thing on the second table.
5. Observe the housing value.
### Recorded Video
https://user-images.githubusercontent.com/77248866/123191623-255ded80-d4d4-11eb-9f31-caf5b6ed2a2c.mp4
| 1.0 | [0.9.4 Staging-2030]Butchery table issues with Housing Skill value - Build: 0.9.4 Staging-2030
## Issues
The butchery table seems to have some kind of issue with housing value. By default, it has a skill value of 2 for Kitchen rooms. So basically, if you place a butchery table inside an empty room, that room should be counted as a Kitchen, just like with any other furniture. However, that's not the case.
### Butchery Table

### Room with 1 Butchery Table
Placing a butchery table in an empty room:

### Room with 2 Butchery Table
Here's the behavior of the housing skill value if you place down 2 butchery tables.

### Room with 3 Butchery Table
Now, here's the behavior of the room if you place down 3 butchery tables.

As you can see, one of the butchery tables is still ignored and disabled for some reason.
### 3 Butchery Table with 1 Kitchen Furniture
It doesn't help if you try to place more furniture. One of the tables is still disabled.

### 2 Butchery Tables are Disabled
Here's what happens if you try to place down 3 butchery tables and then pick up the first table you placed down and place it down again.

Two butchery tables become disabled. If you try to do the same thing on the second butchery table, the same issue persists, causing 3 butchery tables to become disabled.

## Reproduction Steps
The steps indicated here are steps to emphasize the issue greatly. These are not specific or exclusive steps for the issue to get triggered.
1. Obtain at least 3 butchery tables.
2. Place down all of them in an empty room.
3. Pick up the first table you placed down and place it down again.
4. Do the same thing on the second table.
5. Observe the housing value.
### Recorded Video
https://user-images.githubusercontent.com/77248866/123191623-255ded80-d4d4-11eb-9f31-caf5b6ed2a2c.mp4
| priority | butchery table issues with housing skill value build staging issues the butchery table seems to have some kind of issues with housing value by default it has a skill value of for kitchen rooms so basically if you place a butchery table inside an empty room that room should be counted as a kitchen just like every furniture however that s not the case butchery table room with butchery table placing a butchery table in an empty room room with butchery table here s the behavior of the housing skill value if you place down butchery tables room with butchery table now here s the behavior of the room if you place down butchery tables as you can see one of the butchery tables is still ignored and disabled for some reason butchery table with kitchen furniture it doesn t help if you try to place more furniture one of the tables is still disabled butchery tables are disabled here s what happens if you try to place down butchery table and then pickup the first table you placed down and place it down again two butchery table becomes disabled if you try to do the same thing on the second butchery table the same issue persists causing butchery tables to become disabled reproduction steps the steps indicated here are steps to emphasize the issue greatly these are not specific or exclusive steps for the issue to get triggered obtain at least butchery tables place down all of them in an empty room pick up the first table you placed down and place it down again do the same thing on the second table observe the housing value recorded video | 1 |
636,898 | 20,612,413,275 | IssuesEvent | 2022-03-07 09:56:38 | Soulcialize/souldragonknight | https://api.github.com/repos/Soulcialize/souldragonknight | opened | Implement puzzle minigame | type.Enhancement priority.Medium | The puzzle will be placed at the end of the level and will unlock a gate/barrier for the players to access the victory condition.
Basic idea for puzzle:
Two sets of combination locks which are unlocked by passwords.
Each password is made of a set of runes, which are revealed to each player as they progress through the level and defeat enemies.
Each player will only be able to see the other player's password. They will have to relay the passwords to each other in order to successfully unlock the two locks.
| 1.0 | Implement puzzle minigame - The puzzle will be placed at the end of the level and will unlock a gate/barrier for the players to access the victory condition.
Basic idea for puzzle:
Two sets of combination locks which are unlocked by passwords.
Each password is made of a set of runes, which are revealed to each player as they progress through the level and defeat enemies.
Each player will only be able to see the other player's password. They will have to relay the passwords to each other in order to successfully unlock the two locks.
| priority | implement puzzle minigame the puzzle will be placed at the end of the level and will unlock a gate barrier for the players to access the victory condition basic idea for puzzle two set of combination locks which are unlocked by passwords each password is made of a set of runes which are revealed to each player as they progress through the level and defeat enemies each player will only be able to see each other s password they will have to relay the password to each other in order to successfully unlock the two locks | 1 |
431,018 | 12,473,909,369 | IssuesEvent | 2020-05-29 08:42:31 | ansible/ansible-lint | https://api.github.com/repos/ansible/ansible-lint | closed | 'skip_ansible_lint' tag does not work with E204 (over 160 char) rules. | priority/medium status/new type/bug | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and master branch are affected too -->
##### Summary
<!--- Explain the problem briefly below -->
'skip_ansible_lint' tag does not work with E204 (over 160 char) rules.
##### Issue Type
- Bug Report
##### Ansible and Ansible Lint details
<!--- Paste verbatim output between triple backticks -->
```console (paste below)
$ ansible --version
ansible 2.9.9
config file = None
configured module search path = ['/Users/myname/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/myname/.pyenv/versions/3.8.2/lib/python3.8/site-packages/ansible
executable location = /Users/myname/.pyenv/versions/3.8.2/bin/ansible
python version = 3.8.2 (default, May 20 2020, 17:00:00) [Clang 10.0.1 (clang-1001.0.46.4)]
$ ansible-lint --version
ansible-lint 4.2.0
```
- ansible installation method: one of source, pip, OS package
pyenv/pip
- ansible-lint installation method: one of source, pip, OS package
pyenv/pip
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
macOS Mojave
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between triple backticks below -->
You can test with this playbook.
```yaml
- name: over 160 char (only this error is not skipped)
set_fact:
test: "randomstringover160char.anmtgxxp7t25mumrrkz9en42zjr3kuhkhut68ddtuya7mktagjwxzn7mts3idz2id3uyubg6ef9wxzy2g5byh2rpmj8uh8nxd949s8nwj5e323cmk5dhbyimi87b4p8k5xpihfnfa6pxj9zassx2gxei2nrextht66esgn4b"
tags:
- skip_ansible_lint
- set_fact: # This error will be skipped
test2: "randomstring"
tags:
- skip_ansible_lint
- name: skip test
command: whoami
tags:
- skip_ansible_lint
```
```console
$ ansible-lint -p *.yml
roles/test_role/tasks/main.yml:3: [E204] Lines should be no longer than 160 chars
```
##### Actual Behaviour
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
skip_ansible_lint should be available for all rules. | 1.0 | 'skip_ansible_lint' tag does not work with E204 (over 160 char) rules. - <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and master branch are affected too -->
##### Summary
<!--- Explain the problem briefly below -->
'skip_ansible_lint' tag does not work with E204 (over 160 char) rules.
##### Issue Type
- Bug Report
##### Ansible and Ansible Lint details
<!--- Paste verbatim output between triple backticks -->
```console (paste below)
$ ansible --version
ansible 2.9.9
config file = None
configured module search path = ['/Users/myname/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/myname/.pyenv/versions/3.8.2/lib/python3.8/site-packages/ansible
executable location = /Users/myname/.pyenv/versions/3.8.2/bin/ansible
python version = 3.8.2 (default, May 20 2020, 17:00:00) [Clang 10.0.1 (clang-1001.0.46.4)]
$ ansible-lint --version
ansible-lint 4.2.0
```
- ansible installation method: one of source, pip, OS package
pyenv/pip
- ansible-lint installation method: one of source, pip, OS package
pyenv/pip
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
macOS Mojave
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between triple backticks below -->
You can test with this playbook.
```yaml
- name: over 160 char (only this error is not skipped)
set_fact:
test: "randomstringover160char.anmtgxxp7t25mumrrkz9en42zjr3kuhkhut68ddtuya7mktagjwxzn7mts3idz2id3uyubg6ef9wxzy2g5byh2rpmj8uh8nxd949s8nwj5e323cmk5dhbyimi87b4p8k5xpihfnfa6pxj9zassx2gxei2nrextht66esgn4b"
tags:
- skip_ansible_lint
- set_fact: # This error will be skipped
test2: "randomstring"
tags:
- skip_ansible_lint
- name: skip test
command: whoami
tags:
- skip_ansible_lint
```
```console
$ ansible-lint -p *.yml
roles/test_role/tasks/main.yml:3: [E204] Lines should be no longer than 160 chars
```
##### Actual Behaviour
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
skip_ansible_lint should be available for all rules. | priority | skip ansible lint tag does not work with over char rules summary skip ansible lint tag does not work with over char rules issue type bug report ansible and ansible lint details console paste below ansible version ansible config file none configured module search path ansible python module location users myname pyenv versions lib site packages ansible executable location users myname pyenv versions bin ansible python version default may ansible lint version ansible lint ansible installation method one of source pip os package pyenv pip ansible lint installation method one of source pip os package pyenv pip os environment macos mojava steps to reproduce you can test with this playbook yaml name over char only this error is not skipped set fact test tags skip ansible lint set fact this error will be skipped randomstring tags skip ansible lint name skip test command whoami tags skip ansible lint console ansible lint p yml roles test role tasks main yml lines should be no longer than chars actual behaviour skip ansible lint should be available for all rules | 1 |
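The mismatch reported above can be illustrated with a toy model (hypothetical code, not ansible-lint's actual internals): a task-scoped rule receives the parsed task and can honour its tags, while a raw line-length rule like E204 only ever sees text lines, so a `skip_ansible_lint` tag is never consulted for it. The names `check_line_length` and `check_task` are made up for this sketch.

```python
# Hypothetical sketch of why a tag-based skip misses line-based rules.
# These are illustrative functions, not ansible-lint APIs.

MAX_LINE = 160

def check_line_length(lines):
    """Line-based rule: flags lines over MAX_LINE chars; tags are invisible here."""
    return [i for i, line in enumerate(lines, start=1) if len(line) > MAX_LINE]

def check_task(task):
    """Task-based rule: honours the skip tag because it sees the task dict."""
    if "skip_ansible_lint" in task.get("tags", []):
        return []
    return ["E999"] if task.get("command") else []  # E999 is a placeholder rule id

task = {"command": "whoami", "tags": ["skip_ansible_lint"]}
lines = ['test: "' + "x" * 200 + '"']  # a line well over 160 chars

print(check_task(task))          # task-level skip works: []
print(check_line_length(lines))  # line rule still fires: [1]
```

In this toy model, routing the tag information down to line-based rules (or mapping flagged line numbers back to their enclosing task) would be what makes `skip_ansible_lint` effective for E204 as well.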
409,170 | 11,957,995,390 | IssuesEvent | 2020-04-04 16:21:36 | kkkmail/ClmFSharp | https://api.github.com/repos/kkkmail/ClmFSharp | closed | Refactor WorkerNodeService <-> SolverRunner communication | 5.0.0.3 Priority: Medium Type: Enhancement | It appears that under some unknown conditions the communication between `WorkerNodeService` and `SolverRunner` breaks down. Unfortunately it is also possible that some of the messages get lost.
To remedy the situation it seems reasonable to switch from spawning external processes to spawning threads. That should significantly simplify internal communication.
`WorkerNodeService` monitor should be improved at the same time as it will no longer be possible to observe what's going on just by looking at the running processes. | 1.0 | Refactor WorkerNodeService <-> SolverRunner communication - It appears that under some unknown conditions the communication between `WorkerNodeService` and `SolverRunner` breaks down. Unfortunately it is also possible that some of the messages get lost.
To remedy the situation it seems reasonable to switch from spawning external processes to spawning threads. That should significantly simplify internal communication.
`WorkerNodeService` monitor should be improved at the same time as it will no longer be possible to observe what's going on just by looking at the running processes. | priority | refactor workernodeservice solverrunner communication it appears that under some unknown conditions the communication between workernodeservice and solverrunner breaks down unfortunately it is also possible that some of the messages get lost to remedy the situation it seems reasonable to switch from spawning external processing to spawning threads that should significantly simplify internal communication workernodeservice monitor should be improved at the same time as it will be no longer be possible to observe what s going on just by looking at the running processes | 1
525,166 | 15,239,376,340 | IssuesEvent | 2021-02-19 04:19:05 | actually-colab/editor | https://api.github.com/repos/actually-colab/editor | opened | Share notebooks with non-registered users | REST bug difficulty: medium priority: low server | Currently, notebooks can only be shared with users stored inside the db. Users who have not yet registered should be able to get an email to access a notebook even if they do not yet have a uid. | 1.0 | Share notebooks with non-registered users - Currently, notebooks can only be shared with users stored inside the db. Users who have not yet registered should be able to get an email to access a notebook even if they do not yet have a uid. | priority | share notebooks with non registered users currently notebooks can only be shared with users stored inside the db users who have not yet registered should be able to get an email to access a notebook even if they do not yet have a uid | 1 |
57,392 | 3,081,917,004 | IssuesEvent | 2015-08-23 07:14:47 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | Show "HD" in the Video column for certain resolutions | enhancement imported Priority-Medium | _From [bobrikov](https://code.google.com/u/bobrikov/) on January 30, 2013 12:01:26_
If the video frame width is greater than 1280, display "HD" after the resolution and make the text bold. An icon could also be added after the text.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=905_ | 1.0 | Show "HD" in the Video column for certain resolutions - _From [bobrikov](https://code.google.com/u/bobrikov/) on January 30, 2013 12:01:26_
If the video frame width is greater than 1280, display "HD" after the resolution and make the text bold. An icon could also be added after the text.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=905_ | priority | show hd in the video column for certain resolutions from on january if the video frame width is greater than display hd after the resolution and make the text bold an icon could also be added after the text original issue | 1
790,668 | 27,832,331,851 | IssuesEvent | 2023-03-20 06:31:51 | FinalProject-8/BackFinalProject | https://api.github.com/repos/FinalProject-8/BackFinalProject | opened | Feat: Past-exam explanation and learning-strategy services - develop view features | Type: Feature Status: In Progress Priority: Medium For: API | ## Description
- Past-exam explanation service - develop page view feature
- Learning-strategy service - develop page view feature
## Tasks(Process)
- [ ] View past-exam explanation page
- [ ] View learning-strategy page
## References | 1.0 | Feat: Past-exam explanation and learning-strategy services - develop view features - ## Description
- Past-exam explanation service - develop page view feature
- Learning-strategy service - develop page view feature
## Tasks(Process)
- [ ] View past-exam explanation page
- [ ] View learning-strategy page
## References | priority | feat past exam explanation and learning strategy services develop view features description past exam explanation service develop page view feature learning strategy service develop page view feature tasks process view past exam explanation page view learning strategy page references | 1
291,919 | 8,951,461,820 | IssuesEvent | 2019-01-25 14:01:59 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | Show progress bar when fetching messages from mailserver | 099-confidence bounty bounty-s chat enhancement medium-priority to-discuss | Blocked by: https://github.com/status-im/status-go/issues/1036
### User Story
As a user, I want to know when the app is fetching messages from the mailserver so I'm not wondering if the app is doing anything, or if my push notification was bogus.
cc @andmironov for design input. My suggestion would be to have an indeterminate progress indicator near the top of the mobile app, like this (the indicator pictured is not indeterminate, but you get the gist):

### Description
[comment]: # (Feature or Bug? i.e Type: Bug)
*Type*: Feature
[comment]: # (Describe the feature you would like, or briefly summarise the bug and what you did, what you expected to happen, and what actually happens. Sections below)
*Summary*: It is confusing to receive a push notification telling me I have a new message, clicking on it and then seeing no new messages. Since the app only fetches messages after being brought to the foreground, we should show an indeterminate progress bar to inform the user that the app is working on something.
#### Expected behavior
[comment]: # (Describe what you expected to happen.)
1. I receive a Push Notification
1. I tap the PN
1. The app opens on the main screen, and a progress bar indicates it is fetching messages
1. After a few seconds, new unread messages appear
1. After a predefined time limit, or if a few more seconds go by without any new messages appearing, the progress bar is removed
#### Actual behavior
[comment]: # (Describe what actually happened.)
1. I receive a Push Notification
1. I tap the PN
1. The app opens on the main screen, and no new unread messages are visible
1. After a few seconds, new unread messages appear
### Solution
[comment]: # (Please summarise the solution and provide a task list on what needs to be fixed.)
*Summary*:
Right now it is not possible to know whether messages which are arriving come from the mailserver, nor if/when they have finished arriving. Therefore we have to do a best effort solution for now. For instance, we could follow these heuristics:
- Keep progress bar visible if `shhext_requestMessages` was called less than 10 seconds ago
- Keep progress bar visible if a new unread message was added less than 5 seconds ago
- Hide progress bar if `shhext_requestMessages` was called more than 30 seconds ago | 1.0 | Show progress bar when fetching messages from mailserver - Blocked by: https://github.com/status-im/status-go/issues/1036
### User Story
As a user, I want to know when the app is fetching messages from the mailserver so I'm not wondering if the app is doing anything, or if my push notification was bogus.
cc @andmironov for design input. My suggestion would be to have an indeterminate progress indicator near the top of the mobile app, like this (the indicator pictured is not indeterminate, but you get the gist):

### Description
[comment]: # (Feature or Bug? i.e Type: Bug)
*Type*: Feature
[comment]: # (Describe the feature you would like, or briefly summarise the bug and what you did, what you expected to happen, and what actually happens. Sections below)
*Summary*: It is confusing to receive a push notification telling me I have a new message, clicking on it and then seeing no new messages. Since the app only fetches messages after being brought to the foreground, we should show an indeterminate progress bar to inform the user that the app is working on something.
#### Expected behavior
[comment]: # (Describe what you expected to happen.)
1. I receive a Push Notification
1. I tap the PN
1. The app opens on the main screen, and a progress bar indicates it is fetching messages
1. After a few seconds, new unread messages appear
1. After a predefined time limit, or if a few more seconds go by without any new messages appearing, the progress bar is removed
#### Actual behavior
[comment]: # (Describe what actually happened.)
1. I receive a Push Notification
1. I tap the PN
1. The app opens on the main screen, and no new unread messages are visible
1. After a few seconds, new unread messages appear
### Solution
[comment]: # (Please summarise the solution and provide a task list on what needs to be fixed.)
*Summary*:
Right now it is not possible to know whether messages which are arriving come from the mailserver, nor if/when they have finished arriving. Therefore we have to do a best effort solution for now. For instance, we could follow these heuristics:
- Keep progress bar visible if `shhext_requestMessages` was called less than 10 seconds ago
- Keep progress bar visible if a new unread message was added less than 5 seconds ago
- Hide progress bar if `shhext_requestMessages` was called more than 30 seconds ago | priority | show progress bar when fetching messages from mailserver blocked by user story as a user i want to know when the app is fetching messages from the mailserver so i m not wondering if the app is doing anything or if my push notification was bogus cc andmironov for design input my suggestion would be to have an indeterminate progress indicator near the top of the mobile app like this the indicator pictured is not indeterminate but you get the gist description feature or bug i e type bug type feature describe the feature you would like or briefly summarise the bug and what you did what you expected to happen and what actually happens sections below summary it is confusing to receive a push notification telling me i have a new message clicking on it and then seeing no new messages since the app only fetches messages after being brought to the foreground we should show an indeterminate progress bar to inform the user that the app is working on something expected behavior describe what you expected to happen i receive a push notification i tap the pn the app opens on the main screen and a progress bar indicates it is fetching messages after a few seconds new unread messages appear after a predefined time limit or if a few more seconds have gone by without no new messages appearing the progress bar is removed actual behavior describe what actually happened i receive a push notification i tap the pn the app opens on the main screen and no new unread messages are visible after a few seconds new unread messages appear solution please summarise the solution and provide a task list on what needs to be fixed summary right now it is not possible to know whether messages which are arriving come from the mailserver nor if when they have finished arriving therefore we have to do a best effort solution for now for instance we could follow these heuristics keep progress bar visible if 
shhext requestmessages was called less than seconds ago keep progress bar visible if a new unread message was added less than seconds ago hide progress bar if shhext requestmessages was called more than seconds ago | 1 |
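The heuristics proposed above could be sketched roughly like this. This is a minimal Python sketch: the thresholds come from the issue text, while the function name and signature are made up for illustration (status-react itself is ClojureScript).

```python
# Rough sketch of the proposed progress-bar visibility heuristics.
# Not status-react code; timestamps are plain seconds for simplicity.

def progress_bar_visible(now, last_request_at, last_unread_at=None):
    """Return True if the fetch indicator should stay on screen."""
    since_request = now - last_request_at
    if since_request > 30:
        return False            # hard timeout: hide after 30 s regardless
    if since_request < 10:
        return True             # shhext_requestMessages fired recently
    if last_unread_at is not None and now - last_unread_at < 5:
        return True             # unread messages are still trickling in
    return False

print(progress_bar_visible(now=5, last_request_at=0))                      # True
print(progress_bar_visible(now=40, last_request_at=0))                     # False
print(progress_bar_visible(now=15, last_request_at=0, last_unread_at=12))  # True
```

The 30-second timeout is checked first so the indicator can never get stuck visible when the mailserver is slow or unreachable.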
486,684 | 14,012,629,212 | IssuesEvent | 2020-10-29 09:18:47 | canonical-web-and-design/juju.is | https://api.github.com/repos/canonical-web-and-design/juju.is | closed | Add the Operator framework call out in the footer | Priority: Medium | Charmhub has a call out for Operator framework in the footer which could be adopted by Juju.is. | 1.0 | Add the Operator framework call out in the footer - Charmhub has a call out for Operator framework in the footer which could be adopted by Juju.is. | priority | add the operator framework call out in the footer charmhub has a call out for operator framework in the footer which could be adopted by juju is | 1 |
751,190 | 26,232,601,369 | IssuesEvent | 2023-01-05 02:30:55 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth-2 | opened | WC Longbow Competitions Tradition Adaptation | lore :books: priority medium :grey_exclamation: 2D graphics :paintbrush: cultural :mortar_board: | <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
### **Branched from #936 ; to be based on #880 after #936 is merged.**
Adapt/Design `tradition_longbow_competitions` for the Warcraft universe. Currently, vanilla assigns it the inactivated tradition exclusive maa `unlock_maa_longbowmen = yes`. I suggest either adapting maa_longbowmen or replacing it with the base maa `bowmen` to avoid having to create a unique tradition/wc maa for each subculture in the darnassian and highborne heritages. | 1.0 | WC Longbow Competitions Tradition Adaptation - <!--
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
### **Branched from #936 ; to be based on #880 after #936 is merged.**
Adapt/Design `tradition_longbow_competitions` for the Warcraft universe. Currently, vanilla assigns it the inactivated tradition exclusive maa `unlock_maa_longbowmen = yes`. I suggest either adapting maa_longbowmen or replacing it with the base maa `bowmen` to avoid having to create a unique tradition/wc maa for each subculture in the darnassian and highborne heritages. | priority | wc longbow competitions tradition adaptation do not remove pre existing lines branched from to be based on after is merged adapt design tradition longbow competitions for the warcraft universe currently vanilla assigns it the inactivated tradition exclusive maa unlock maa longbowmen yes i suggest either adapting maa longbowmen or replacing it with the base maa bowmen to avoid having to create a unique tradition wc maa for each subculture in the darnassian and highborne heritages | 1 |
523,603 | 15,186,215,183 | IssuesEvent | 2021-02-15 12:03:15 | ooni/probe-engine | https://api.github.com/repos/ooni/probe-engine | closed | probeservices: sync client and server API spec | priority/medium | Here we want to be sure that the client's definition of the API matches the server's definition. We could for example use OpenAPI. The server is already using OpenAPI (more precisely, Swagger 2.0). We want a test that fails if server and client are not in sync. | 1.0 | probeservices: sync client and server API spec - Here we want to be sure that the client's definition of the API matches the server's definition. We could for example use OpenAPI. The server is already using OpenAPI (more precisely, Swagger 2.0). We want a test that fails if server and client are not in sync. | priority | probeservices sync client and server api spec here we want to be sure that the client s definition of the api matches the server s definition we could for example use openapi the server is already using openapi more precisely swagger we want a test that fails if server and client are not in sync | 1 |
402,058 | 11,801,566,889 | IssuesEvent | 2020-03-18 19:43:12 | LBNL-ETA/BEDES-Manager | https://api.github.com/repos/LBNL-ETA/BEDES-Manager | opened | application terms export | bug medium priority | There are some problems with the .csv file that I get when I export my application terms:
1. There shouldn't be a newline before the header.
2. The header row should match the example import file.
3. The application and BEDES data types are missing.
4. In the term mapping column and the list mapping column, there should be spaces around the equals sign, e.g., "Construction Status = [value]" instead of "Construction Status=[value]".
5. In the two mapping columns, mappings should be separated by newlines, not by "|".
6. Quotation marks should be removed.
7. Overall, I think that if I import a .csv file into an application, and then I download it back to a .csv, the two .csv files should match. This is currently not the case.
| 1.0 | application terms export - There are some problems with the .csv file that I get when I export my application terms:
1. There shouldn't be a newline before the header.
2. The header row should match the example import file.
3. The application and BEDES data types are missing.
4. In the term mapping column and the list mapping column, there should be spaces around the equals sign, e.g., "Construction Status = [value]" instead of "Construction Status=[value]".
5. In the two mapping columns, mappings should be separated by newlines, not by "|".
6. Quotation marks should be removed.
7. Overall, I think that if I import a .csv file into an application, and then I download it back to a .csv, the two .csv files should match. This is currently not the case.
| priority | application terms export there are some problems with the csv file that i get when i export my application terms there shouldn t be a newline before the header the header row should match the example import file the application and bedes data types are missing in the term mapping column and the list mapping column there should be spaces around the equals sign e g construction status instead of construction status in the two mapping columns mappings should be separated by newlines not by quotation mark should be removed overall i think that if i import a csv file into an application and then i download it back to a csv the two csv files should match this is currently not the case | 1 |
55,936 | 3,075,539,048 | IssuesEvent | 2015-08-20 14:11:06 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | Implement null-aware operators | Area-Multi meta Priority-Medium Triaged Type-Enhancement | The [null-aware operators proposal](https://github.com/dart-lang/dart_enhancement_proposals/issues/9) has been accepted by the DEP committee and is in the Dart language spec. Implementation should be underway, and it doesn't need to be behind any sort of flag.
This is the main tracking issue. Individual issues are below:
- [x] VM: #23455, first released in [1.12.0-dev.0.0](/dart-lang/sdk/releases/tag/1.12.0-dev.0.0)
- [x] support `Class?.x` in the VM: #23794
- [x] implement behind a flag in Analyzer: #23456, first released in [1.10.0](/dart-lang/sdk/releases/tag/1.10.0)
- [x] don't emit an error for `Class?.x` in Analyzer: #23464
- [x] enable by default in Analyzer: #23793
- [x] implement behind a flag in dart2js: #23457, first released in [1.11.0](/dart-lang/sdk/releases/tag/1.11.0)
- [ ] support `Class?.x` in dart2js: #23795
- [x] enable by default in dart2js: #23791, to be released in 1.12.0-dev.3.0
- [ ] Documentation (dart-lang/dartlang.org#1364)
*Edited by @nex3 to match [other similar issues](https://github.com/dart-lang/sdk/labels/meta)* | 1.0 | Implement null-aware operators - The [null-aware operators proposal](https://github.com/dart-lang/dart_enhancement_proposals/issues/9) has been accepted by the DEP committee and is in the Dart language spec. Implementation should be underway, and it doesn't need to be behind any sort of flag.
This is the main tracking issue. Individual issues are below:
- [x] VM: #23455, first released in [1.12.0-dev.0.0](/dart-lang/sdk/releases/tag/1.12.0-dev.0.0)
- [x] support `Class?.x` in the VM: #23794
- [x] implement behind a flag in Analyzer: #23456, first released in [1.10.0](/dart-lang/sdk/releases/tag/1.10.0)
- [x] don't emit an error for `Class?.x` in Analyzer: #23464
- [x] enable by default in Analyzer: #23793
- [x] implement behind a flag in dart2js: #23457, first released in [1.11.0](/dart-lang/sdk/releases/tag/1.11.0)
- [ ] support `Class?.x` in dart2js: #23795
- [x] enable by default in dart2js: #23791, to be released in 1.12.0-dev.3.0
- [ ] Documentation (dart-lang/dartlang.org#1364)
*Edited by @nex3 to match [other similar issues](https://github.com/dart-lang/sdk/labels/meta)* | priority | implement null aware operators the has been accepted by the dep committee and is in the dart language spec implementation should be underway and it doesn t need to be behind any sort of flag this is the main tracking issue individual issues are below vm first released in dart lang sdk releases tag dev support class x in the vm implement behind a flag in analyzer first released in dart lang sdk releases tag don t emit an error for class x in analyzer enable by default in analyzer implement behind a flag in first released in dart lang sdk releases tag support class x in enable by default in to be released in dev documentation dart lang dartlang org edited by to match | 1 |
537,934 | 15,757,599,053 | IssuesEvent | 2021-03-31 05:33:17 | AY2021S2-CS2113T-T09-4/tp | https://api.github.com/repos/AY2021S2-CS2113T-T09-4/tp | closed | As a user, I can get tips regarding items sold | priority.Medium type.Story | The most popular item(s) will be identified accordring to most number of item sales made and the user is reminded to stock up if not enough. | 1.0 | As a user, I can get tips regarding items sold - The most popular item(s) will be identified accordring to most number of item sales made and the user is reminded to stock up if not enough. | priority | as a user i can get tips regarding items sold the most popular item s will be identified accordring to most number of item sales made and the user is reminded to stock up if not enough | 1 |
200,834 | 7,017,164,110 | IssuesEvent | 2017-12-21 08:30:38 | minio/minio | https://api.github.com/repos/minio/minio | opened | CoreDNS integration for Minio server | priority: medium | Ignoring the template on purpose.
With https://github.com/minio/minio/pull/5095 , Minio server added support for virtual style requests where requests to `bucketname.minio.domain` would be honoured by the Minio server. Here `bucketname` is a bucket created on the Minio server and `minio.domain` is the domain name Minio server is running on. Note that this needs Minio server to be configured with environment variable `MINIO_DOMAIN`.
However, this still needs the user to
- either make DNS entries manually as new buckets are being made on Minio server
- create a wildcard entry like `*.minio.domain`
With CoreDNS integration, Minio server can make DNS entries and maintain it (i.e. remove the entries in case a server goes down or a bucket is deleted). Note that CoreDNS would still need to be run as a separate server and _not_ bundled with Minio server in any way.
Here are the potential steps involved:
- CoreDNS server details are provided in Minio config file, `MINIO_DOMAIN` is set, and DNS Zone in CoreDNS is set accordingly.
- Whenever a new bucket is created, Minio server makes A record entries to CoreDNS server under the given Zone.
- Whenever a bucket is deleted, Minio server removes the corresponding A record entries from CoreDNS server under the given Zone.
- In case if any Minio server instances down (in Distributed Minio setup), corresponding entries should be removed from CoreDNS server. | 1.0 | CoreDNS integration for Minio server - Ignoring the template on purpose.
With https://github.com/minio/minio/pull/5095 , Minio server added support for virtual style requests where requests to `bucketname.minio.domain` would be honoured by the Minio server. Here `bucketname` is a bucket created on the Minio server and `minio.domain` is the domain name Minio server is running on. Note that this needs Minio server to be configured with environment variable `MINIO_DOMAIN`.
However, this still needs the user to
- either make DNS entries manually as new buckets are being made on Minio server
- create a wildcard entry like `*.minio.domain`
With CoreDNS integration, Minio server can make DNS entries and maintain it (i.e. remove the entries in case a server goes down or a bucket is deleted). Note that CoreDNS would still need to be run as a separate server and _not_ bundled with Minio server in any way.
Here are the potential steps involved:
- CoreDNS server details are provided in Minio config file, `MINIO_DOMAIN` is set, and DNS Zone in CoreDNS is set accordingly.
- Whenever a new bucket is created, Minio server makes A record entries to CoreDNS server under the given Zone.
- Whenever a bucket is deleted, Minio server removes the corresponding A record entries from CoreDNS server under the given Zone.
- In case if any Minio server instances down (in Distributed Minio setup), corresponding entries should be removed from CoreDNS server. | priority | coredns integration for minio server ignoring the template on purpose with minio server added support for virtual style requests where requests to bucketname minio domain would be honoured by the minio server here bucketname is a bucket created on the minio server and minio domain is the domain name minio server is running on note that this needs minio server to be configured with environment variable minio domain however this still needs the user to either make dns entries manually as new buckets are being made on minio server create a wildcard entry like minio domain with coredns integration minio server can make dns entries and maintain it i e remove the entries in case a server goes down or a bucket is deleted note that coredns would still need to be run as a separate server and not bundled with minio server in any way here are the potential steps involved coredns server details are provided in minio config file minio domain is set and dns zone in coredns is set accordingly whenever a new bucket is created minio server makes a record entries to coredns server under the given zone whenever a bucket is deleted minio server removes the corresponding a record entries from coredns server under the given zone in case if any minio server instances down in distributed minio setup corresponding entries should be removed from coredns server | 1 |
581,290 | 17,290,478,205 | IssuesEvent | 2021-07-24 16:37:21 | CryptoBlades/cryptoblades | https://api.github.com/repos/CryptoBlades/cryptoblades | closed | sell history interface | priority-medium type-frontend | a button, when clicked, opens a modal with a log of all the weapons you sold, their id, name, and price sold for. | 1.0 | sell history interface - a button, when clicked, opens a modal with a log of all the weapons you sold, their id, name, and price sold for. | priority | sell history interface a button when clicked opens a modal with a log of all the weapons you sold their id name and price sold for | 1 |
283,668 | 8,721,822,908 | IssuesEvent | 2018-12-09 04:32:58 | bounswe/bounswe2018group8 | https://api.github.com/repos/bounswe/bounswe2018group8 | closed | Configuring homepage according to authentication | Frontend-Web Priority: Medium Status: In Progress | Even non-authenticated users can see the details of homepage, we should make some fields usable for only authenticated users.Therefore we should define some boolean conditions, functions etc. | 1.0 | Configuring homepage according to authentication - Even non-authenticated users can see the details of homepage, we should make some fields usable for only authenticated users.Therefore we should define some boolean conditions, functions etc. | priority | configuring homepage according to authentication even non authenticated users can see the details of homepage we should make some fields usable for only authenticated users therefore we should define some boolean conditions functions etc | 1 |
229,064 | 7,571,067,110 | IssuesEvent | 2018-04-23 10:59:50 | Caleydo/malevo | https://api.github.com/repos/Caleydo/malevo | opened | Add selected epoch to image detail view | priority: medium type: aesthetics | 
Add the selected single epoch to the label to indicate which images are shown. | 1.0 | Add selected epoch to image detail view - 
Add the selected single epoch to the label to indicate which images are shown. | priority | add selected epoch to image detail view add the selected single epoch to the label to indicate which images are shown | 1 |
710,013 | 24,400,781,019 | IssuesEvent | 2022-10-05 01:01:15 | AY2223S1-CS2103T-W13-2/tp | https://api.github.com/repos/AY2223S1-CS2103T-W13-2/tp | closed | As a student financial advisor, I want to be able to store the different policies that I am currently pitching to my clients. | type.Story priority.Medium | This allows me to keep easily reference key details of specific policies when going through them with a client. | 1.0 | As a student financial advisor, I want to be able to store the different policies that I am currently pitching to my clients. - This allows me to keep easily reference key details of specific policies when going through them with a client. | priority | as a student financial advisor i want to be able to store the different policies that i am currently pitching to my clients this allows me to keep easily reference key details of specific policies when going through them with a client | 1 |
417,084 | 12,155,727,105 | IssuesEvent | 2020-04-25 14:24:29 | AngelGuerra/regazo-fotografia | https://api.github.com/repos/AngelGuerra/regazo-fotografia | closed | Modify the Portfolio section so that it accepts a cloud of images | Priority: Medium Type: Enhancement | Since these are photographic works, a carousel of photographs is needed; if an alternative design is found, something like a cloud of images could also be a good option | 1.0 | Modify the Portfolio section so that it accepts a cloud of images - Since these are photographic works, a carousel of photographs is needed; if an alternative design is found, something like a cloud of images could also be a good option | priority | modify the portfolio section so that it accepts a cloud of images since these are photographic works a carousel of photographs is needed if an alternative design is found something like a cloud of images could also be a good option | 1 |
196,750 | 6,942,674,394 | IssuesEvent | 2017-12-05 01:16:12 | intel-analytics/BigDL | https://api.github.com/repos/intel-analytics/BigDL | closed | Add new wrapper Table in python | medium priority python | ` def getHiddenStates(rec: Recurrent[T]): JList[JList[JTensor]] = { `
This is too complex; instead of doing this, maybe we can have automatic mapping of Table to python list?
Not sure if we need to introduce a Table concept into Python. Maybe nested list is what we are looking for.
related pr: https://github.com/intel-analytics/BigDL/pull/1591#pullrequestreview-66305975
We can compare with the other framework and then think of this seriously.
| 1.0 | Add new wrapper Table in python - ` def getHiddenStates(rec: Recurrent[T]): JList[JList[JTensor]] = { `
This is too complex; instead of doing this, maybe we can have automatic mapping of Table to python list?
Not sure if we need to introduce a Table concept into Python. Maybe nested list is what we are looking for.
related pr: https://github.com/intel-analytics/BigDL/pull/1591#pullrequestreview-66305975
We can compare with the other framework and then think of this seriously.
| priority | add new wrapper table in python def gethiddenstates rec recurrent jlist this is too complex instead of doing this maybe we can have automatic mapping of table to python list not sure if we need to introduce a table concept into python maybe nested list is what we are looking for related pr we can compare with the other framework and then think of this seriously | 1 |
586,182 | 17,572,138,341 | IssuesEvent | 2021-08-14 23:11:05 | MyMICDS/MyMICDS-v2-Angular | https://api.github.com/repos/MyMICDS/MyMICDS-v2-Angular | closed | Possible Migration to D3 graphs | effort: medium priority: it can wait enhancement work length: medium ui / ux | While chart js has been very useful, it's being pushed to its limits with our current uses, possibly switching chart.js for [D3](https://d3js.org/) could be useful for animation and over all polish. | 1.0 | Possible Migration to D3 graphs - While chart js has been very useful, it's being pushed to its limits with our current uses, possibly switching chart.js for [D3](https://d3js.org/) could be useful for animation and over all polish. | priority | possible migration to graphs while chart js has been very useful it s being pushed to its limits with our current uses possibly switching chart js for could be useful for animation and over all polish | 1 |
269,645 | 8,441,454,518 | IssuesEvent | 2018-10-18 10:15:50 | neuropoly/qMRLab | https://api.github.com/repos/neuropoly/qMRLab | closed | Red warning of mismatching data and protocol does not go away after correcting the issue | interface priority:medium | When I load the data the warning pops up. I change the protocol to match the data but the warning does not update and is still there. I can still go through with the fitting and it works no problem, but still makes it confusing initially as to whether there is a problem or not. See screenshot.

| 1.0 | Red warning of mismatching data and protocol does not go away after correcting the issue - When I load the data the warning pops up. I change the protocol to match the data but the warning does not update and is still there. I can still go through with the fitting and it works no problem, but still makes it confusing initially as to whether there is a problem or not. See screenshot.

| priority | red warning of mismatching data and protocol does not go away after correcting the issue when i load the data the warning pops up i change the protocol to match the data but the warning does not update and is still there i can still go through with the fitting and it works no problem but still makes it confusing initially as to whether there is a problem or not see screenshot | 1 |
407,849 | 11,938,113,985 | IssuesEvent | 2020-04-02 13:18:33 | teamforus/me | https://api.github.com/repos/teamforus/me | closed | Remove creating passcode on creation Me identity | Difficulty: Medium Priority: Must have Scope: Small Topic: Android | ## Main assignee: @
## Context/goal:
On startup, me-android lets the user set up a passcode. The passcode should be entirely optional. We should not force a user to set any passcode. Let's remove this on first login.
## Task
- [ ] remove setting up passcode when first login instead user needs to set it up manually on profile page.
| 1.0 | Remove creating passcode on creation Me identity - ## Main assignee: @
## Context/goal:
On startup, me-android lets the user set up a passcode. The passcode should be entirely optional. We should not force a user to set any passcode. Let's remove this on first login.
## Task
- [ ] remove setting up passcode when first login instead user needs to set it up manually on profile page.
| priority | remove creating passcode on creation me identity main asssignee context goal on startup me android lets user set up a passcode the passcode should be entirely optionally we should not force a user to set any passcode lets remove this on first login task remove setting up passcode when first login instead user needs to set it up manually on profile page | 1 |
828,604 | 31,835,946,095 | IssuesEvent | 2023-09-14 13:32:06 | Marin-MK/RPG-Studio-MK | https://api.github.com/repos/Marin-MK/RPG-Studio-MK | opened | Event Templates | medium-priority to-do | Allow users to create events from a template, such as a Nurse Joy event, a Hidden Item, PC, Trainer, or other generic event type. | 1.0 | Event Templates - Allow users to create events from a template, such as a Nurse Joy event, a Hidden Item, PC, Trainer, or other generic event type. | priority | event templates allow users to create events from a template such as a nurse joy event a hidden item pc trainer or other generic event type | 1 |
55,999 | 3,075,622,175 | IssuesEvent | 2015-08-20 14:33:54 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | Cannot click on Button while a Toast is visible | bug imported Priority-Medium wontfix | _From [juanmou...@gmail.com](https://code.google.com/u/115620822732880041921/) on September 19, 2014 09:15:30_
What steps will reproduce the problem? To reproduce this issue, an EditText and a Button are needed (like a Login form, for example). Also, a Toast should be shown at a given moment.
The Robotium test should:
1. Open the Activity
2. Enter some text in the EditText
3. Wait for the Toast to appear
4. While the Toast is visible, call solo.clickOnButton(BUTTON) or solo.clickOnView(myButton);
What is the expected output? What do you see instead? The Button is never clicked, and the test times out.
What version of the product are you using? On what operating system? robotium-solo:5.2.1, Android 4.2.2, Win 7 64, Android Studio 0.8.6
Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=633_ | 1.0 | Cannot click on Button while a Toast is visible - _From [juanmou...@gmail.com](https://code.google.com/u/115620822732880041921/) on September 19, 2014 09:15:30_
What steps will reproduce the problem? To reproduce this issue, an EditText and a Button are needed (like a Login form, for example). Also, a Toast should be shown at a given moment.
The Robotium test should:
1. Open the Activity
2. Enter some text in the EditText
3. Wait for the Toast to appear
4. While the Toast is visible, call solo.clickOnButton(BUTTON) or solo.clickOnView(myButton);
What is the expected output? What do you see instead? The Button is never clicked, and the test times out.
What version of the product are you using? On what operating system? robotium-solo:5.2.1, Android 4.2.2, Win 7 64, Android Studio 0.8.6
Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=633_ | priority | cannot click on button while a toast is visible from on september what steps will reproduce the problem to reproduce this issue an edittext and a button are needed like a login form for example also a toast should be shown at a given moment the robotium test should open the activity enter some text in the edittext wait for the toast to appear while the toast is vissible call solo clickonbutton button or solo clickonview mybutton what is the expected output what do you see instead the button is never clicked and the test times out what version of the product are you using on what operating system robotium solo android win android studio please provide any additional information below original issue | 1 |
496,954 | 14,359,445,359 | IssuesEvent | 2020-11-30 15:38:06 | MikeVedsted/JoinMe | https://api.github.com/repos/MikeVedsted/JoinMe | closed | Add user creation | Priority: Medium :zap: Status: Awaiting review :hourglass: Type: Enhancement :rocket: | 💡 I would really like to solve or include
It should be possible to create an account on the site.
👶 How would a user describe this?
I want to have my data persisting on the site.
🏆 My dream solution would be
User is able to create new account with Google. | 1.0 | Add user creation - 💡 I would really like to solve or include
It should be possible to create an account on the site.
👶 How would a user describe this?
I want to have my data persisting on the site.
🏆 My dream solution would be
User is able to create new account with Google. | priority | add user creation 💡 i would really like to solve or include it should be possible to create an account on the site 👶 how would a user describe this i want to have my data persisting on the site 🏆 my dream solution would be user is able to create new account with google | 1 |
485,495 | 13,979,398,277 | IssuesEvent | 2020-10-27 00:02:05 | openforcefield/openforcefield | https://api.github.com/repos/openforcefield/openforcefield | closed | Have toolkit wrappers know version of the toolkit they're wrapping | api extension effort:low priority:medium | **Is your feature request related to a problem? Please describe.**
As far as I can tell, `Toolkitwrapper` and therefore `ToolkitRegistry` objects don't know the version of the toolkit that they're wrapping. Would these be useful features?
**Describe the solution you'd like**
* When instantiating a wrapper class, the version detected from the wrapped toolkit is saved
* Each wrapper knows the class, and registries can also query that for each wrapper
Something like
```python3
In [1]: from openforcefield.utils.toolkits import *
In [2]: reg = ToolkitRegistry(toolkit_precedence=[OpenEyeToolkitWrapper, RDKitToolkitWrapper, AmberToolsToolkitWrapper])
In [3]: # OpenEyeToolkitWrapper().wrapped_version() would return '2020.0.4'
In [4]: # RDKitToolkitWrapper().wrapped_version() would return '2020.03.4'
In [5]: # reg.wrapped_versions() would return something like ['2020.0.4', '2020.03.04', 'whatever_ambertools_says']
```
**Describe alternatives you've considered**
It's easy enough to just import each of them and check their versions, but this would provide a single interface to them. It may also help debugging weird behavior, i.e. if the wrapper finds a different version than they think they have installed through `conda list`; this would directly store what the wrapper finds.
**Additional context**
This is motivated by writing a CLI, in which I think it would be useful to be able to access the versions of installed toolkits by only passing around a `ToolkitRegistry` object | 1.0 | Have toolkit wrappers know version of the toolkit they're wrapping - **Is your feature request related to a problem? Please describe.**
As far as I can tell, `Toolkitwrapper` and therefore `ToolkitRegistry` objects don't know the version of the toolkit that they're wrapping. Would these be useful features?
**Describe the solution you'd like**
* When instantiating a wrapper class, the version detected from the wrapped toolkit is saved
* Each wrapper knows the class, and registries can also query that for each wrapper
Something like
```python3
In [1]: from openforcefield.utils.toolkits import *
In [2]: reg = ToolkitRegistry(toolkit_precedence=[OpenEyeToolkitWrapper, RDKitToolkitWrapper, AmberToolsToolkitWrapper])
In [3]: # OpenEyeToolkitWrapper().wrapped_version() would return '2020.0.4'
In [4]: # RDKitToolkitWrapper().wrapped_version() would return '2020.03.4'
In [5]: # reg.wrapped_versions() would return something like ['2020.0.4', '2020.03.04', 'whatever_ambertools_says']
```
**Describe alternatives you've considered**
It's easy enough to just import each of them and check their versions, but this would provide a single interface to them. It may also help debugging weird behavior, i.e. if the wrapper finds a different version than they think they have installed through `conda list`; this would directly store what the wrapper finds.
**Additional context**
This is motivated by writing a CLI, in which I think it would be useful to be able to access the versions of installed toolkits by only passing around a `ToolkitRegistry` object | priority | have toolkit wrappers know version of the toolkit they re wrapping is your feature request related to a problem please describe as far as i can tell toolkitwrapper and therefore toolkitregistry objects don t know the version of the toolkit that they re wrapping would these be useful features describe the solution you d like when instantiating a wrapper class the version detected from the wrapped toolkit is saved each wrapper knows the class and registries can also query that for each wrapper something like in from openforcefield utils toolkits import in reg toolkitregistry toolkit precedence in openeyetoolkitwrapper wrapped version would return in rdkittoolkitwrapper wrapped version would return in reg wrapped versions would return something like describe alternatives you ve considered it s easy to enough to just import each of them and check their versions but this would provide a single interface to them it may also help debugging weird behavior i e if the wrapper finds a different version than they think they have installed through conda list this would directly store what the wrapper finds additional context this is motivated by writing a cli in which i think it would be useful to be able to access the versions of installed toolkits by only passing around a toolkitregistry object | 1 |
530,284 | 15,420,020,385 | IssuesEvent | 2021-03-05 10:57:39 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | opened | Review of layer Identify behavior from Search tool | Internal Priority: Medium enhancement | ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
Currently, if you perform an Identify request from the Search tool, 'application/json' ('PROPERTIES') is always used as the GetFeatureInfo output format so that MS is able to filter GFI results and show in the Identify panel only the one selected in Search tool results (this means the filtering is made client side). However, the Identify response should be provided in the format chosen by the user for the layer (through Layer Settings or in General Options). It is anyway useful, at the same time, to show in the Identify panel only the information related to the selected item in the Search tool. We should improve the current functionality to:
- Consider to make the current behavior configurable (active or not with a new true/false property). To discuss.
- If the current behavior is disabled, always use the Identify format configured for the layer and not always 'application/json'
- Include the support for [GeoServer vendor params](https://docs.geoserver.org/stable/en/user/services/wms/reference.html#getfeatureinfo) so that it is possible to filter the Identify result directly server side instead of client side (WMS servers that don't support the GeoServer vendor param will simply ignore it)
**What kind of improvement do you want to add?** (check one with "x", remove the others)
- [X] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
| 1.0 | Review of layer Identify behavior from Search tool - ## Description
<!-- A few sentences describing new feature -->
<!-- screenshot, video, or link to mockup/prototype are welcome -->
Currently, if you perform an Identify request from the Search tool, 'application/json' ('PROPERTIES') is always used as the GetFeatureInfo output format so that MS is able to filter GFI results and show in the Identify panel only the one selected in Search tool results (this means the filtering is made client side). However, the Identify response should be provided in the format chosen by the user for the layer (through Layer Settings or in General Options). It is anyway useful, at the same time, to show in the Identify panel only the information related to the selected item in the Search tool. We should improve the current functionality to:
- Consider to make the current behavior configurable (active or not with a new true/false property). To discuss.
- If the current behavior is disabled, always use the Identify format configured for the layer and not always 'application/json'
- Include the support for [GeoServer vendor params](https://docs.geoserver.org/stable/en/user/services/wms/reference.html#getfeatureinfo) so that it is possible to filter the Identify result directly server side instead of client side (WMS servers that don't support the GeoServer vendor param will simply ignore it)
**What kind of improvement do you want to add?** (check one with "x", remove the others)
- [X] Minor changes to existing features
- [ ] Code style update (formatting, local variables)
- [ ] Refactoring (no functional changes, no api changes)
- [ ] Build related changes
- [ ] CI related changes
- [ ] Other... Please describe:
## Other useful information
| priority | review of layer identify behavior from search tool description currently if you perform an indentify request from the search tool application json properties is always used as getfeatureinfo output format so that ms is able to filter gfi results and show in the identify panel only the one selected in seach tool results this means the filtering is made client side however the identify response should be provided in the format choosen by the user for the layer through layer settings or in general options it is anyway useful at the same time to show in identify panel only the information related to the selected item in search tool we should improve the current functionality to consider to make the current behavior configurable active or not with a new true false property to discuss if the current behavior is disabled always use the identify format configured for the layer and not always application json include the support for so that it is possible to filter the identify result directly server side instead of client side wms servers that don t support the geoserver vendor param will simply ignore it what kind of improvement you want to add check one with x remove the others minor changes to existing features code style update formatting local variables refactoring no functional changes no api changes build related changes ci related changes other please describe other useful information | 1 |
55,692 | 3,074,253,946 | IssuesEvent | 2015-08-20 05:31:25 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Make it possible to create and use links to folders. | bug duplicate imported Priority-Medium | _From [mohn.m...@gmail.com](https://code.google.com/u/100863733192899356718/) on January 12, 2011 18:33:56_
Sometimes a whole TV series is posted and there are very many episodes; it would be convenient to place one link to the entire folder with the series (for example), and other users could then download that folder in full or in part.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=293_ | 1.0 | Make it possible to create and use links to folders. - _From [mohn.m...@gmail.com](https://code.google.com/u/100863733192899356718/) on January 12, 2011 18:33:56_
Sometimes a whole TV series is posted and there are very many episodes; it would be convenient to place one link to the entire folder with the series (for example), and other users could then download that folder in full or in part.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=293_ | priority | make it possible to create and use links to folders from on january sometimes a whole tv series is posted and there are very many episodes it would be convenient to place one link to the entire folder with the series for example and other users could then download that folder in full or in part original issue | 1 |
421,695 | 12,260,305,011 | IssuesEvent | 2020-05-06 18:04:06 | metrumresearchgroup/babylon | https://api.github.com/repos/metrumresearchgroup/babylon | closed | panic given mis parsing of names given multiple thetas on same line | bug ctx: fix priority: medium risk: low | Given a control stream where multiple thetas are on the same line, the bbi summary command panics, and when looking at the underlying structs, it is clear the parameter names do not match the dimensions of the param values.
The offending control stream had essentially the following structure:
```
$PK
...
M1EFF = THETA(3)*LOG(WT/50) + THETA(4)*LOG(PMO/18)
M2EFF = (WT/50)**THETA(5) * (PMO/18)**THETA(6)
...
$THETA
1
10
(0, 0.1, 10) (0, 0.1, 10)
(0, 0.1, 10) (0, 0.1, 10)
```
so the scientist could keep track of the "coupled" thetas. This is valid syntax.
| 1.0 | panic given mis parsing of names given multiple thetas on same line - Given a control stream where multiple thetas are on the same line, the bbi summary command panics, and when looking at the underlying structs, it is clear the parameter names do not match the dimensions of the param values.
The offending control stream had essentially the following structure:
```
$PK
...
M1EFF = THETA(3)*LOG(WT/50) + THETA(4)*LOG(PMO/18)
M2EFF = (WT/50)**THETA(5) * (PMO/18)**THETA(6)
...
$THETA
1
10
(0, 0.1, 10) (0, 0.1, 10)
(0, 0.1, 10) (0, 0.1, 10)
```
so the scientist could keep track of the "coupled" thetas. This is valid syntax.
| priority | panic given mis parsing of names given multiple thetas on same line given a control stream where multiple thetas are on the same line the bbi summary command panics and when looking at the underlying structs it is clear the parameter names does not match the dimensions of the param values the offending control stream had essentially the following structure pk theta log wt theta log pmo wt theta pmo theta theta so the scientist could keep track of the coupled thetas this is valid syntax | 1 |
453,748 | 13,089,691,249 | IssuesEvent | 2020-08-03 00:21:09 | SkriptLang/Skript | https://api.github.com/repos/SkriptLang/Skript | opened | Experience classinfo can't be parsed | bug good first issue priority: medium | ### Description
The experience classinfo can't be parsed; it does not have a `user`.
### Steps to Reproduce
`set {_test} to "5 exp" parsed as experience points`
### Expected Behavior
The classinfo should have a user so it can be used within scripts.
### Server Information
* **Server version/platform:** unrelated
* **Skript version:** unrelated
### Additional Context
The classinfo's user should be `experience ?(points?)?` | 1.0 | Experience classinfo can't be parsed - ### Description
The experience classinfo can't be parsed; it does not have a `user`.
### Steps to Reproduce
`set {_test} to "5 exp" parsed as experience points`
### Expected Behavior
The classinfo should have a user so it can be used within scripts.
### Server Information
* **Server version/platform:** unrelated
* **Skript version:** unrelated
### Additional Context
The classinfo's user should be `experience ?(points?)?` | priority | experience classinfo can t be parsed description the experience classinfo can t be parsed it does not have an user steps to reproduce set test to exp parsed as experience points expected behavior the classinfo to have an user so it can be used within scripts server information server version platform unrelated skript version unrelated additional context the classinfo s user should be experience points | 1 |
456,285 | 13,148,677,304 | IssuesEvent | 2020-08-08 23:15:49 | kiudee/bayes-skopt | https://api.github.com/repos/kiudee/bayes-skopt | closed | Unify the parameters of the tell and run method | Priority: Medium enhancement | **Is your feature request related to a problem? Please describe.**
The `Optimizer.run` method does not offer all of the parameters the `tell` method offers.
**Describe the solution you'd like**
The parameters and their default values should be consistent. | 1.0 | Unify the parameters of the tell and run method - **Is your feature request related to a problem? Please describe.**
The `Optimizer.run` method does not offer all of the parameters the `tell` method offers.
**Describe the solution you'd like**
The parameters and their default values should be consistent. | priority | unify the parameters of the tell and run method is your feature request related to a problem please describe the optimizer run method does not offer all of the parameters the tell method offers describe the solution you d like the parameters and their default values should be consistent | 1 |
606,837 | 18,769,187,197 | IssuesEvent | 2021-11-06 14:11:09 | code4romania/monitorizare-vot | https://api.github.com/repos/code4romania/monitorizare-vot | opened | Update county related endpoints | BE enhancement help wanted medium-priority good first issue counties | Some updates are needed to the CRUD endpoints for counties.
The current endpoints are:

Needed updates:
- the post county endpoint should only be used for adding new counties and not for editing existing counties. It should not need an id.
- a put by id endpoint for editing counties (this logic is probably now handled in the post endpoint)
- a delete county by id endpoint | 1.0 | Update county related endpoints - Some updates are needed to the CRUD endpoints for counties.
The current endpoints are:

Needed updates:
- the post county endpoint should only be used for adding new counties and not for editing existing counties. It should not need an id.
- a put by id endpoint for editing counties (this logic is probably now handled in the post endpoint)
- a delete county by id endpoint | priority | update county related endpoints some updates are needed to the crud endpoints for counties the current endpoints are needed updates the post county endpoint should only be used for adding new counties and not for editing existing counties it should not need an id a put by id endpoint for editing counties this logic is probably now handled in the post endpoint a delete county by id endpoint | 1 |
73,832 | 3,421,786,550 | IssuesEvent | 2015-12-08 20:12:49 | urbit/urbit | https://api.github.com/repos/urbit/urbit | closed | |mv seems to be broken | %clay bug dojo priority medium | ```hoon
> *foo/bar/hook 'test'
+ /~zod/home/207/foo/bar/hook
moved
exit
[%swim-take-vane %c %made ~]
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[2.754 3].[2.916 5]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[2.797 9].[2.797 46]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[700 5].[716 32]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[706 9].[710 13]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[668 5].[695 51]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[691 9].[691 33]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.435 7].[1.455 9]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.437 12].[1.451 25]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.474 7].[1.479 12]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.477 15].[1.477 55]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.344 7].[1.394 9]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.376 57].[1.376 59]>
```
Something about a delete failing. Now of course if you split the operations:
```hoon
> |cp %/foo/bar/hook %/baz/hook
+ /~zod/home/208/baz/hook
>=
removed
> |rm %/foo/bar/hook
- /~zod/home/209/foo/bar/hook
>=
```
everything works fine. | 1.0 | |mv seems to be broken - ```hoon
> *foo/bar/hook 'test'
+ /~zod/home/207/foo/bar/hook
moved
exit
[%swim-take-vane %c %made ~]
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[2.754 3].[2.916 5]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[2.797 9].[2.797 46]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[700 5].[716 32]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[706 9].[710 13]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[668 5].[695 51]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[691 9].[691 33]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.435 7].[1.455 9]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.437 12].[1.451 25]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.474 7].[1.479 12]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.477 15].[1.477 55]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.344 7].[1.394 9]>
/~zod/home/~2015.12.2..20.45.40..fb59/arvo/clay:<[1.376 57].[1.376 59]>
```
Something about a delete failing. Now of course if you split the operations:
```hoon
> |cp %/foo/bar/hook %/baz/hook
+ /~zod/home/208/baz/hook
>=
removed
> |rm %/foo/bar/hook
- /~zod/home/209/foo/bar/hook
>=
```
everything works fine. | priority | mv seems to be broken hoon foo bar hook test zod home foo bar hook moved exit zod home arvo clay zod home arvo clay zod home arvo clay zod home arvo clay zod home arvo clay zod home arvo clay zod home arvo clay zod home arvo clay zod home arvo clay zod home arvo clay zod home arvo clay zod home arvo clay something about a delete failing now of course if you split the operations hoon cp foo bar hook baz hook zod home baz hook removed rm foo bar hook zod home foo bar hook everything works fine | 1 |
374,271 | 11,082,992,338 | IssuesEvent | 2019-12-13 13:30:31 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Bluetooth: l2cap do not recover when faced with long packets and run out of buffers | area: Bluetooth bug priority: medium | **Describe the bug**
l2cap do not recover when faced with long packets and runs out of buffers. Discovered while testing hci_usb with IPSP sample. After ping with long payload or short intervals the IPSP device remains unresponsive (does not recover / not respond anymore).
If the [l2cap_chan_le_send_sdu](https://github.com/zephyrproject-rtos/zephyr/blob/master/subsys/bluetooth/host/l2cap.c#L1891) returns EAGAIN and the buffer is queued and then in [l2cap_chan_le_send_resume](https://github.com/zephyrproject-rtos/zephyr/blob/master/subsys/bluetooth/host/l2cap.c#L1304) the same situation happens again because the l2cap_chan_le_send_sdu() and subsequent calls of
l2cap_chan_le_send()->
l2cap_chan_create_seg()->
l2cap_alloc_seg()->
bt_l2cap_create_pdu() **fail** (with _bt_conn: Unable to allocate buffer_), the device remains unresponsive or not recover even if the host stops sending data. l2cap also do not try l2cap_chan_le_send_resume() again.
**To Reproduce**
Build, flash, connect to IPSP sample, try `ping -i 0,3 -s 512 2001:db8::1` or `ping -i 1 -s 1024 2001:db8::1`
**Expected behavior**
The device should recover after ping flood, even if it does not have enough buffer for the data.
**console output**
```
[00:01:26.327,941] <dbg> bt_l2cap.l2cap_chan_le_send_resume: buf 0x20009f90 sent 113
[00:01:26.327,972] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009f58 len 10
[00:01:26.328,002] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 10 credits 50
[00:01:26.328,033] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 10
[00:01:26.328,063] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009f90 len 23
[00:01:26.328,094] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 49
[00:01:26.328,125] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,155] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009fe4 len 23
[00:01:26.328,186] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 48
[00:01:26.328,216] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,247] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a070 len 23
[00:01:26.328,277] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 47
[00:01:26.328,308] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,338] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a08c len 23
[00:01:26.328,369] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 46
[00:01:26.328,369] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,430] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a0a8 len 23
[00:01:26.328,460] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 45
[00:01:26.328,460] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,521] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a0c4 len 13
[00:01:26.328,521] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 13 credits 44
[00:01:26.328,552] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 13
[00:01:26.328,613] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a054 len 23
[00:01:26.328,643] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 43
[00:01:26.328,643] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,704] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009f20 len 23
[00:01:26.328,704] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 42
[00:01:26.328,735] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,796] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009ecc len 23
[00:01:26.328,796] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 41
[00:01:26.328,826] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,857] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009654 len 23
[00:01:26.328,887] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 40
[00:01:26.328,918] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,948] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000961c len 23
[00:01:26.328,979] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 39
[00:01:26.328,979] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.329,040] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009638 len 13
[00:01:26.329,071] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 13 credits 38
[00:01:26.329,071] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 13
[00:01:26.329,132] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009f74 len 23
[00:01:26.329,132] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 37
[00:01:26.329,162] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.329,193] <wrn> bt_conn: Unable to allocate buffer with K_NO_WAIT
[00:01:26.329,193] <wrn> bt_conn: Unable to allocate buffer: timeout -1
[00:01:27.971,221] <dbg> bt_l2cap.bt_l2cap_recv: Packet for CID 64 len 22
[00:01:27.971,282] <dbg> bt_l2cap.l2cap_rx_process: chan 0x2000d18c buf 0x20009590
[00:01:27.971,313] <dbg> bt_l2cap.l2cap_chan_le_recv: chan 0x2000d18c len 20 sdu_len 20
[00:01:27.971,313] <dbg> net_bt.ipsp_alloc_buf: (0x20002de0): Channel 0x2000d18c requires buffer
[00:01:27.971,343] <dbg> bt_l2cap.l2cap_chan_le_recv_seg: chan 0x2000d18c seg 1 len 20
[00:01:27.971,374] <dbg> bt_l2cap.l2cap_chan_le_recv_sdu: chan 0x2000d18c len 20
``` | 1.0 | Bluetooth: l2cap do not recover when faced with long packets and run out of buffers - **Describe the bug**
l2cap do not recover when faced with long packets and runs out of buffers. Discovered while testing hci_usb with IPSP sample. After ping with long payload or short intervals the IPSP device remains unresponsive (does not recover / not respond anymore).
If the [l2cap_chan_le_send_sdu](https://github.com/zephyrproject-rtos/zephyr/blob/master/subsys/bluetooth/host/l2cap.c#L1891) returns EAGAIN and the buffer is queued and then in [l2cap_chan_le_send_resume](https://github.com/zephyrproject-rtos/zephyr/blob/master/subsys/bluetooth/host/l2cap.c#L1304) the same situation happens again because the l2cap_chan_le_send_sdu() and subsequent calls of
l2cap_chan_le_send()->
l2cap_chan_create_seg()->
l2cap_alloc_seg()->
bt_l2cap_create_pdu() **fail** (with _bt_conn: Unable to allocate buffer_), the device remains unresponsive or not recover even if the host stops sending data. l2cap also do not try l2cap_chan_le_send_resume() again.
**To Reproduce**
Build, flash, connect to IPSP sample, try `ping -i 0,3 -s 512 2001:db8::1` or `ping -i 1 -s 1024 2001:db8::1`
**Expected behavior**
The device should recover after ping flood, even if it does not have enough buffer for the data.
**console output**
```
[00:01:26.327,941] <dbg> bt_l2cap.l2cap_chan_le_send_resume: buf 0x20009f90 sent 113
[00:01:26.327,972] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009f58 len 10
[00:01:26.328,002] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 10 credits 50
[00:01:26.328,033] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 10
[00:01:26.328,063] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009f90 len 23
[00:01:26.328,094] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 49
[00:01:26.328,125] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,155] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009fe4 len 23
[00:01:26.328,186] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 48
[00:01:26.328,216] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,247] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a070 len 23
[00:01:26.328,277] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 47
[00:01:26.328,308] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,338] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a08c len 23
[00:01:26.328,369] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 46
[00:01:26.328,369] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,430] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a0a8 len 23
[00:01:26.328,460] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 45
[00:01:26.328,460] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,521] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a0c4 len 13
[00:01:26.328,521] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 13 credits 44
[00:01:26.328,552] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 13
[00:01:26.328,613] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000a054 len 23
[00:01:26.328,643] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 43
[00:01:26.328,643] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,704] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009f20 len 23
[00:01:26.328,704] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 42
[00:01:26.328,735] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,796] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009ecc len 23
[00:01:26.328,796] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 41
[00:01:26.328,826] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,857] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009654 len 23
[00:01:26.328,887] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 40
[00:01:26.328,918] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.328,948] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x2000961c len 23
[00:01:26.328,979] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 39
[00:01:26.328,979] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.329,040] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009638 len 13
[00:01:26.329,071] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 13 credits 38
[00:01:26.329,071] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 13
[00:01:26.329,132] <dbg> bt_l2cap.l2cap_chan_create_seg: ch 0x2000d18c seg 0x20009f74 len 23
[00:01:26.329,132] <dbg> bt_l2cap.l2cap_chan_le_send: ch 0x2000d18c cid 0x0040 len 23 credits 37
[00:01:26.329,162] <dbg> bt_l2cap.bt_l2cap_send_cb: conn 0x20000698 cid 64 len 23
[00:01:26.329,193] <wrn> bt_conn: Unable to allocate buffer with K_NO_WAIT
[00:01:26.329,193] <wrn> bt_conn: Unable to allocate buffer: timeout -1
[00:01:27.971,221] <dbg> bt_l2cap.bt_l2cap_recv: Packet for CID 64 len 22
[00:01:27.971,282] <dbg> bt_l2cap.l2cap_rx_process: chan 0x2000d18c buf 0x20009590
[00:01:27.971,313] <dbg> bt_l2cap.l2cap_chan_le_recv: chan 0x2000d18c len 20 sdu_len 20
[00:01:27.971,313] <dbg> net_bt.ipsp_alloc_buf: (0x20002de0): Channel 0x2000d18c requires buffer
[00:01:27.971,343] <dbg> bt_l2cap.l2cap_chan_le_recv_seg: chan 0x2000d18c seg 1 len 20
[00:01:27.971,374] <dbg> bt_l2cap.l2cap_chan_le_recv_sdu: chan 0x2000d18c len 20
``` | priority | bluetooth do not recover when faced with long packets and run out of buffers describe the bug do not recover when faced with long packets and runs out of buffers discovered while testing hci usb with ipsp sample after ping with long payload or short intervals the ipsp device remains unresponsive does not recover not respond anymore if the returns eagain and the buffer is queued and then in the same situation happens again because the chan le send sdu and subsequent calls of chan le send chan create seg alloc seg bt create pdu fail with bt conn unable to allocate buffer the device remains unresponsive or not recover even if the host stops sending data also do not try chan le send resume again to reproduce build flash connect to ipsp sample try ping i s or ping i s expected behavior the device should recover after ping flood even if it does not have enough buffer for the data console output bt chan le send resume buf sent bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt 
chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt chan create seg ch seg len bt chan le send ch cid len credits bt bt send cb conn cid len bt conn unable to allocate buffer with k no wait bt conn unable to allocate buffer timeout bt bt recv packet for cid len bt rx process chan buf bt chan le recv chan len sdu len net bt ipsp alloc buf channel requires buffer bt chan le recv seg chan seg len bt chan le recv sdu chan len | 1 |
235,020 | 7,733,878,918 | IssuesEvent | 2018-05-26 17:05:48 | vinitkumar/googlecl | https://api.github.com/repos/vinitkumar/googlecl | closed | Adding google calendar entries uses timezone of calendar web app not of submitting host | Priority-Medium bug imported | _From [apittman@gmail.com](https://code.google.com/u/apittman@gmail.com/) on June 19, 2010 03:59:34_
What steps will reproduce the problem? 1. Use calendar app in Paris converting timezone of web app to France.
2. Travel back to England, converting timezone of laptop to England
3. Add event at a specific time. What is the expected output? What do you see instead? I expect to see the event added at the time at which I specified it. Instead it gets added an hour earlier. What version of the product are you using? On what operating system? google 0.9.5/Snow Leopard Please provide any additional information below. It would probably work for other time zones as well.
_Original issue: http://code.google.com/p/googlecl/issues/detail?id=50_
| 1.0 | Adding google calendar entries uses timezone of calendar web app not of submitting host - _From [apittman@gmail.com](https://code.google.com/u/apittman@gmail.com/) on June 19, 2010 03:59:34_
What steps will reproduce the problem? 1. Use calendar app in Paris converting timezone of web app to France.
2. Travel back to England, converting timezone of laptop to England
3. Add event at a specific time. What is the expected output? What do you see instead? I expect to see the event added at the time at which I specified it. Instead it gets added an hour earlier. What version of the product are you using? On what operating system? google 0.9.5/Snow Leopard Please provide any additional information below. It would probably work for other time zones as well.
_Original issue: http://code.google.com/p/googlecl/issues/detail?id=50_
| priority | adding google calendar entries uses timezone of calendar web app not of submitting host from on june what steps will reproduce the problem use calendar app in paris converting timezone of web app to france travel back to england converting timezone of laptop to england add event at a specific time what is the expected output what do you see instead i expect to see the event added at the time at which i specified it instead it gets added an hour earlier what version of the product are you using on what operating system google snow leopard please provide any additional information below it would probably work for other time zones as well original issue | 1 |
158,656 | 6,033,224,990 | IssuesEvent | 2017-06-09 07:38:30 | moosetechnology/Moose | https://api.github.com/repos/moosetechnology/Moose | closed | DSM in MoosePanel without the name of the entities | Component-DSM Component-MooseTools Priority-Medium Type-Enhancement | Originally reported on Google Code with ID 1057
```
In the MoosePanel, when right clicking on Package or Namespace, there is the possibility
to get various DSM. However, we can't do nothing with it, since the name of the involved
entities are not present. It could be nice that at least when mouse over, the name
of the involved entities appears.
Please fill in the labels with the following information:
* Type-Defect, Type-Enhancement, Type-Engineering, Type-Review, Type-Other
* Component-XXX
```
Reported by `anne.etien` on 2014-03-20 09:53:04
| 1.0 | DSM in MoosePanel without the name of the entities - Originally reported on Google Code with ID 1057
```
In the MoosePanel, when right clicking on Package or Namespace, there is the possibility
to get various DSM. However, we can't do nothing with it, since the name of the involved
entities are not present. It could be nice that at least when mouse over, the name
of the involved entities appears.
Please fill in the labels with the following information:
* Type-Defect, Type-Enhancement, Type-Engineering, Type-Review, Type-Other
* Component-XXX
```
Reported by `anne.etien` on 2014-03-20 09:53:04
| priority | dsm in moosepanel without the name of the entities originally reported on google code with id in the moosepanel when right clicking on package or namespace there is the possibility to get various dsm however we can t do nothing with it since the name of the involved entities are not present it could be nice that at least when mouse over the name of the involved entities appears please fill in the labels with the following information type defect type enhancement type engineering type review type other component xxx reported by anne etien on | 1 |
378,990 | 11,211,820,758 | IssuesEvent | 2020-01-06 16:13:42 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | publicCreateOrders does not ensure dynamic arrays are identical length | Priority: Medium V2 Audit | https://github.com/AugurProject/augur/blob/6b856caae657a83fa3cdfe9e8532dc2ecf7a5ecf/packages/augur-core/source/contracts/trading/CreateOrder.sol#L95
The impact here is partially limited by Order.create checks for `_prices` and `_attoshareAmounts` being > 0, but if the _outcomes or _types array length is less than _types, the orders will accidentally be placed for invalid or `BID`. | 1.0 | publicCreateOrders does not ensure dynamic arrays are identical length - https://github.com/AugurProject/augur/blob/6b856caae657a83fa3cdfe9e8532dc2ecf7a5ecf/packages/augur-core/source/contracts/trading/CreateOrder.sol#L95
The impact here is partially limited by Order.create checks for `_prices` and `_attoshareAmounts` being > 0, but if the _outcomes or _types array length is less than _types, the orders will accidentally be placed for invalid or `BID`. | priority | publiccreateorders does not ensure dynamic arrays are identical length the impact here is partially limited by order create checks for prices and attoshareamounts being but if the outcomes or types array length is less than types the orders will accidentally be placed for invalid or bid | 1 |
437,758 | 12,602,053,490 | IssuesEvent | 2020-06-11 10:59:20 | eclipse/dirigible | https://api.github.com/repos/eclipse/dirigible | closed | Runtime-only or light version of the SAP All release | efforts-low priority-medium usability wontfix | A light version of dirigible without any unneeded functionalities would greatly improve performance on an instance where applications no longer require complex development.
| 1.0 | Runtime-only or light version of the SAP All release - A light version of dirigible without any unneeded functionalities would greatly improve performance on an instance where applications no longer require complex development.
| priority | runtime only or light version of the sap all release a light version of dirigible without any unneeded functionalities would greatly improve performance on an instance where applications no longer require complex development | 1 |
79,134 | 3,520,837,689 | IssuesEvent | 2016-01-12 22:32:09 | tschoppi/starsystem-gen | https://api.github.com/repos/tschoppi/starsystem-gen | opened | Improve label positioning in stellar orbit graphic | enhancement medium priority | Currently the label offset is calculated based on which quadrant the planet is in. This was fine when the stellar orbit graphic was static, but leads to unacceptable jumping around, now that the graphic is animated.
This should be improved.
It might be, however, that solving #45 resolves this issue too. | 1.0 | Improve label positioning in stellar orbit graphic - Currently the label offset is calculated based on which quadrant the planet is in. This was fine when the stellar orbit graphic was static, but leads to unacceptable jumping around, now that the graphic is animated.
This should be improved.
It might be, however, that solving #45 resolves this issue too. | priority | improve label positioning in stellar orbit graphic currently the label offset is calculated based on which quadrant the planet is in this was fine when the stellar orbit graphic was static but leads to unacceptable jumping around now that the graphic is animated this should be improved it might be however that solving resolves this issue too | 1 |
106,627 | 4,281,184,309 | IssuesEvent | 2016-07-15 01:06:31 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | probably lost lcm backwards compatibility? | priority: medium team: kitware type: installation and distribution | it occurred to me that if the new build logic can no longer find and use lcm from pkg-config, then we have almost certainly broken the open-humanoids build again. At the very least, OH will have to upgrade to the latest LCM to make things work. (Is there a minimum version number?)
@wxmerkt -- perhaps you can confirm?
| 1.0 | probably lost lcm backwards compatibility? - it occurred to me that if the new build logic can no longer find and use lcm from pkg-config, then we have almost certainly broken the open-humanoids build again. At the very least, OH will have to upgrade to the latest LCM to make things work. (Is there a minimum version number?)
@wxmerkt -- perhaps you can confirm?
| priority | probably lost lcm backwards compatibility it occurred to me that if the new build logic can no longer find and use lcm from pkg config then we have almost certainly broken the open humanoids build again at the very least oh will have to upgrade to the latest lcm to make things work is there a minimum version number wxmerkt perhaps you can confirm | 1 |
795,922 | 28,092,330,766 | IssuesEvent | 2023-03-30 13:48:34 | CodeYourFuture/Module-JS3 | https://api.github.com/repos/CodeYourFuture/Module-JS3 | opened | JavaScript Exercises | 🏕 Priority Mandatory 🐂 Size Medium | ### Link to the coursework
https://github.com/CodeYourFuture/JavaScript-Core-3-Coursework-Week1
### Why are we doing this?
This set of exercise will help you to solidify your knowledge of the concepts in JavaScript Module 3.
### Maximum time in hours (Tech has max 16 per week total)
4
### How to get help
https://syllabus.codeyourfuture.io/guides/asking-questions
### How to submit
1. Fork the repo to your Github account
2. When you are ready, open a PR to the CYF repo
### How to review
1. Complete your PR template
2. Ask for review from a classmate or mentor
3. Make changes based on their feedback
4. Review and refactor again once the coursework solutions are released.
### Anything else?
_No response_ | 1.0 | JavaScript Exercises - ### Link to the coursework
https://github.com/CodeYourFuture/JavaScript-Core-3-Coursework-Week1
### Why are we doing this?
This set of exercise will help you to solidify your knowledge of the concepts in JavaScript Module 3.
### Maximum time in hours (Tech has max 16 per week total)
4
### How to get help
https://syllabus.codeyourfuture.io/guides/asking-questions
### How to submit
1. Fork the repo to your Github account
2. When you are ready, open a PR to the CYF repo
### How to review
1. Complete your PR template
2. Ask for review from a classmate or mentor
3. Make changes based on their feedback
4. Review and refactor again once the coursework solutions are released.
### Anything else?
_No response_ | priority | javascript exercises link to the coursework why are we doing this this set of exercise will help you to solidify your knowledge of the concepts in javascript module maximum time in hours tech has max per week total how to get help how to submit fork the repo to your github account when you are ready open a pr to the cyf repo how to review complete your pr template ask for review from a classmate or mentor make changes based on their feedback review and refactor again once the coursework solutions are released anything else no response | 1 |
640,233 | 20,777,408,512 | IssuesEvent | 2022-03-16 11:49:11 | vincetiu8/zombie-game | https://api.github.com/repos/vincetiu8/zombie-game | closed | Add objects to farm map | type/enhancement area/map size/s priority/medium | Currently, the farm map doesn't have object. We should lay out where we want things to be placed (to also get an idea for gameplay of the map). | 1.0 | Add objects to farm map - Currently, the farm map doesn't have object. We should lay out where we want things to be placed (to also get an idea for gameplay of the map). | priority | add objects to farm map currently the farm map doesn t have object we should lay out where we want things to be placed to also get an idea for gameplay of the map | 1 |
628,033 | 19,960,006,300 | IssuesEvent | 2022-01-28 07:04:43 | chaotic-aur/packages | https://api.github.com/repos/chaotic-aur/packages | closed | [Request] Delete glow | request:remove-pkg priority:medium | ### Link to the package in the AUR
[AUR (Deleted)](https://aur.archlinux.org/packages/glow/)
[Community](https://archlinux.org/packages/community/x86_64/glow/)
### More information
This package is now in Community, and the associated AUR package is deleted, so it can be removed from Chaotic. Here is the [relevant Twitter post](https://twitter.com/charmcli/status/1486413603892867075). Please set the label to `request:remove-pkg`. | 1.0 | [Request] Delete glow - ### Link to the package in the AUR
[AUR (Deleted)](https://aur.archlinux.org/packages/glow/)
[Community](https://archlinux.org/packages/community/x86_64/glow/)
### More information
This package is now in Community, and the associated AUR package is deleted, so it can be removed from Chaotic. Here is the [relevant Twitter post](https://twitter.com/charmcli/status/1486413603892867075). Please set the label to `request:remove-pkg`. | priority | delete glow link to the package in the aur more information this package is now in community and the associated aur package is deleted so it can be removed from chaotic here is the please set the label to request remove pkg | 1 |
56,506 | 3,080,060,433 | IssuesEvent | 2015-08-21 19:48:58 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Synchronization of provider hubs (ISP_Favorites.xml), part 2 | bug imported Priority-Medium | _From [bobrikov](https://code.google.com/u/bobrikov/) on March 04, 2012 22:01:21_
уйдём от тега коннект и станет всё проще на 90%
читаем удалённый фэйворитс
Первое действие. Что делаем с тегом Connect, найденном на локальном компе
Если хаб найден на сервере и совпал, и:
- не имеет группы = переносим его в группу ISP и Connect=1
- имеет группу ISP Recycled = переносим его в группу ISP и Connect=1
- имеет группу, отличную от ISP или от ISP Recycled = не переносим и не включаем / не выключаем
Если хаб удален (не найден) на сервере, и:
- не имеет группы = переносить в ISP Recycled и Connect=0
- имеет группу ISP = переносить в ISP Recycled и Connect=0
- имеет группу, отличную от ISP или от ISP Recycled = не переносим и не выключаем
Второе действие. Что делаем с тегами Description, Name и т.д., найденными на локальном компе
Если хаб:
- имеет группу ISP или ISP recycled = заменять/добавлять
- имеет группу, отличную от ISP или от ISP Recycled = не заменять/добавлять
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=705_ | 1.0 | Синхронизация хабов провайдера (ISP_Favorites.xml) Часть 2 - _From [bobrikov](https://code.google.com/u/bobrikov/) on March 04, 2012 22:01:21_
уйдём от тега коннект и станет всё проще на 90%
читаем удалённый фэйворитс
Первое действие. Что делаем с тегом Connect, найденном на локальном компе
Если хаб найден на сервере и совпал, и:
- не имеет группы = переносим его в группу ISP и Connect=1
- имеет группу ISP Recycled = переносим его в группу ISP и Connect=1
- имеет группу, отличную от ISP или от ISP Recycled = не переносим и не включаем / не выключаем
Если хаб удален (не найден) на сервере, и:
- не имеет группы = переносить в ISP Recycled и Connect=0
- имеет группу ISP = переносить в ISP Recycled и Connect=0
- имеет группу, отличную от ISP или от ISP Recycled = не переносим и не выключаем
Второе действие. Что делаем с тегами Description, Name и т.д., найденными на локальном компе
Если хаб:
- имеет группу ISP или ISP recycled = заменять/добавлять
- имеет группу, отличную от ISP или от ISP Recycled = не заменять/добавлять
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=705_ | priority | синхронизация хабов провайдера isp favorites xml часть from on march уйдём от тега коннект и станет всё проще на читаем удалённый фэйворитс первое действие что делаем с тегом connect найденном на локальном компе если хаб найден на сервере и совпал и не имеет группы переносим его в группу isp и connect имеет группу isp recycled переносим его в группу isp и connect имеет группу отличную от isp или от isp recycled не переносим и не включаем не выключаем если хаб удален не найден на сервере и не имеет группы переносить в isp recycled и connect имеет группу isp переносить в isp recycled и connect имеет группу отличную от isp или от isp recycled не переносим и не выключаем второе действие что делаем с тегами description name и т д найденными на локальном компе если хаб имеет группу isp или isp recycled заменять добавлять имеет группу отличную от isp или от isp recycled не заменять добавлять original issue | 1 |
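Read as data, the FlylinkDC favorites-sync row above encodes a small decision table: a local hub's group and Connect flag are updated depending on whether the hub still appears in the remote ISP list, while hubs in user-defined groups are left untouched. As a hedged sketch of that first action (the function and field names here are invented for illustration; this is not FlylinkDC's actual C++ code), in Python:

```python
def sync_hub(group, found_on_server):
    """Return the (new_group, connect) decision for one local hub.

    group: the hub's local group name, or None if it has no group
    found_on_server: True if the hub appears in the remote ISP favorites
    Hubs in user-defined groups (anything other than ISP / ISP Recycled)
    are left untouched, mirroring the rules in the issue body.
    """
    managed = group in (None, 'ISP', 'ISP Recycled')
    if not managed:
        return group, None        # do not move, do not enable/disable
    if found_on_server:
        return 'ISP', 1           # move into the ISP group and connect
    return 'ISP Recycled', 0      # recycle the stale entry and disconnect
```

For example, a grouped hub such as `sync_hub('MyHubs', False)` comes back unchanged, while an ungrouped hub found on the server is moved into ISP with Connect=1.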
411,956 | 12,033,966,278 | IssuesEvent | 2020-04-13 15:12:27 | uutils/coreutils | https://api.github.com/repos/uutils/coreutils | closed | problem building some programs on musl | B - medium priority P - Linux U - All (uucore) | cabulertion.
i had problem while building uutils on musl
namely i am getting errors :
error[E0432]: unresolved import `libc::utmpx`
--> src/uucore/utmpx.rs:44:5
|
44 | use libc::utmpx;
| ^^^^^^^^^^^ no `utmpx` in the root
error[E0432]: unresolved import `libc::getutxent`
--> src/uucore/utmpx.rs:48:9
|
48 | pub use libc::getutxent;
| ^^^^^^---------
| | |
| | help: a similar name exists in the module: `getgrent`
| no `getutxent` in the root
error[E0432]: unresolved import `libc::setutxent`
--> src/uucore/utmpx.rs:49:9
|
49 | pub use libc::setutxent;
| ^^^^^^---------
| | |
| | help: a similar name exists in the module: `setgrent`
| no `setutxent` in the root
error[E0432]: unresolved import `libc::endutxent`
--> src/uucore/utmpx.rs:50:9
|
50 | pub use libc::endutxent;
| ^^^^^^---------
| | |
| | help: a similar name exists in the module: `endmntent`
| no `endutxent` in the root
error[E0432]: unresolved import `libc::utmpxname`
--> src/uucore/utmpx.rs:52:9
|
52 | pub use libc::utmpxname;
| ^^^^^^---------
| | |
| | help: a similar name exists in the module: `tmpnam`
| no `utmpxname` in the root
error[E0432]: unresolved import `libc::__UT_LINESIZE`
--> src/uucore/utmpx.rs:69:13
|
69 | pub use libc::__UT_LINESIZE as UT_LINESIZE;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `__UT_LINESIZE` in the root
error[E0432]: unresolved import `libc::__UT_NAMESIZE` | 1.0 | problem building some programs on musl - cabulertion.
i had problem while building uutils on musl
namely i am getting errors :
error[E0432]: unresolved import `libc::utmpx`
--> src/uucore/utmpx.rs:44:5
|
44 | use libc::utmpx;
| ^^^^^^^^^^^ no `utmpx` in the root
error[E0432]: unresolved import `libc::getutxent`
--> src/uucore/utmpx.rs:48:9
|
48 | pub use libc::getutxent;
| ^^^^^^---------
| | |
| | help: a similar name exists in the module: `getgrent`
| no `getutxent` in the root
error[E0432]: unresolved import `libc::setutxent`
--> src/uucore/utmpx.rs:49:9
|
49 | pub use libc::setutxent;
| ^^^^^^---------
| | |
| | help: a similar name exists in the module: `setgrent`
| no `setutxent` in the root
error[E0432]: unresolved import `libc::endutxent`
--> src/uucore/utmpx.rs:50:9
|
50 | pub use libc::endutxent;
| ^^^^^^---------
| | |
| | help: a similar name exists in the module: `endmntent`
| no `endutxent` in the root
error[E0432]: unresolved import `libc::utmpxname`
--> src/uucore/utmpx.rs:52:9
|
52 | pub use libc::utmpxname;
| ^^^^^^---------
| | |
| | help: a similar name exists in the module: `tmpnam`
| no `utmpxname` in the root
error[E0432]: unresolved import `libc::__UT_LINESIZE`
--> src/uucore/utmpx.rs:69:13
|
69 | pub use libc::__UT_LINESIZE as UT_LINESIZE;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `__UT_LINESIZE` in the root
error[E0432]: unresolved import `libc::__UT_NAMESIZE` | priority | problem building some programs on musl cabulertion i had problem while building uutils on musl namely i am getting errors error unresolved import libc utmpx src uucore utmpx rs use libc utmpx no utmpx in the root error unresolved import libc getutxent src uucore utmpx rs pub use libc getutxent help a similar name exists in the module getgrent no getutxent in the root error unresolved import libc setutxent src uucore utmpx rs pub use libc setutxent help a similar name exists in the module setgrent no setutxent in the root error unresolved import libc endutxent src uucore utmpx rs pub use libc endutxent help a similar name exists in the module endmntent no endutxent in the root error unresolved import libc utmpxname src uucore utmpx rs pub use libc utmpxname help a similar name exists in the module tmpnam no utmpxname in the root error unresolved import libc ut linesize src uucore utmpx rs pub use libc ut linesize as ut linesize no ut linesize in the root error unresolved import libc ut namesize | 1 |
538,711 | 15,776,117,041 | IssuesEvent | 2021-04-01 04:05:27 | AY2021S2-CS2103T-W15-3/tp | https://api.github.com/repos/AY2021S2-CS2103T-W15-3/tp | closed | As a restaurant owner I wan to edit dishes on the menu | priority.Medium type.Story | ... so that I can rectify a typos on the menu | 1.0 | As a restaurant owner I wan to edit dishes on the menu - ... so that I can rectify a typos on the menu | priority | as a restaurant owner i wan to edit dishes on the menu so that i can rectify a typos on the menu | 1 |
11,388 | 2,610,117,335 | IssuesEvent | 2015-02-26 18:36:25 | chrsmith/scribefire-chrome | https://api.github.com/repos/chrsmith/scribefire-chrome | opened | Save revisions of posts in progress | auto-migrated Priority-Medium Type-Enhancement | ```
I lost an entire post due to this. I'm pissed. Enough said.
```
-----
Original issue reported on code.google.com by `paulana....@gmail.com` on 3 Aug 2010 at 5:18 | 1.0 | Save revisions of posts in progress - ```
I lost an entire post due to this. I'm pissed. Enough said.
```
-----
Original issue reported on code.google.com by `paulana....@gmail.com` on 3 Aug 2010 at 5:18 | priority | save revisions of posts in progress i lost an entire post due to this i m pissed enough said original issue reported on code google com by paulana gmail com on aug at | 1 |
170,706 | 6,469,704,431 | IssuesEvent | 2017-08-17 06:58:24 | tendermint/ethermint | https://api.github.com/repos/tendermint/ethermint | closed | Move all user facing documentation into docs folder to be build and hosted by readthedocs.io | Difficulty: Medium Priority: Medium Status: In Progress Type: Enhancement | All our documentation should be build and hosted by readthedocs.io . That documentation should contain everything that isn't comments in the code. | 1.0 | Move all user facing documentation into docs folder to be build and hosted by readthedocs.io - All our documentation should be build and hosted by readthedocs.io . That documentation should contain everything that isn't comments in the code. | priority | move all user facing documentation into docs folder to be build and hosted by readthedocs io all our documentation should be build and hosted by readthedocs io that documentation should contain everything that isn t comments in the code | 1 |
118,037 | 4,731,296,067 | IssuesEvent | 2016-10-19 01:16:28 | loomio/loomio | https://api.github.com/repos/loomio/loomio | closed | Intercom messenger fails to open when I click Contact Us | Bug Priority: Medium | <img width="462" alt="screen shot 2016-10-19 at 1 25 49 pm" src="https://cloud.githubusercontent.com/assets/970124/19501143/a118bd92-95ff-11e6-9d57-25d6e01df5e8.png">
Sometimes it fails with this error, sometimes it takes me to the rails contact form. | 1.0 | Intercom messenger fails to open when I click Contact Us - <img width="462" alt="screen shot 2016-10-19 at 1 25 49 pm" src="https://cloud.githubusercontent.com/assets/970124/19501143/a118bd92-95ff-11e6-9d57-25d6e01df5e8.png">
Sometimes it fails with this error, sometimes it takes me to the rails contact form. | priority | intercom messenger fails to open when i click contact us img width alt screen shot at pm src sometimes it fails with this error sometimes it takes me to the rails contact form | 1 |
594,210 | 18,040,476,921 | IssuesEvent | 2021-09-18 01:19:59 | monoai/GDAPDEV_MP | https://api.github.com/repos/monoai/GDAPDEV_MP | closed | Settings and preferences | medium priority | Our settings should actually store the player's preferences for either that session or any future session onwards, plus the settings they choose should apply to the game itself.
Probably one way to do this is to either use PlayerPrefs or our DataManager, but let's aim for elegance yeah?
## Remaining Tasks
- [ ] #13 (More Values)
- [x] #8 (Movement Speed)
- [x] Control types
- [x] Programming the game to actually recognize the settings | 1.0 | Settings and preferences - Our settings should actually store the player's preferences for either that session or any future session onwards, plus the settings they choose should apply to the game itself.
Probably one way to do this is to either use PlayerPrefs or our DataManager, but let's aim for elegance yeah?
## Remaining Tasks
- [ ] #13 (More Values)
- [x] #8 (Movement Speed)
- [x] Control types
- [x] Programming the game to actually recognize the settings | priority | settings and preferences our settings should actually store the player s preferences for either that session or any future session onwards plus the settings they choose should apply to the game itself probably one way to do this is to either use playerprefs or our datamanager but let s aim for elegance yeah remaining tasks more values movement speed control types programming the game to actually recognize the settings | 1 |
189,765 | 6,801,611,155 | IssuesEvent | 2017-11-02 17:21:17 | SmartlyDressedGames/Unturned-4.x-Community | https://api.github.com/repos/SmartlyDressedGames/Unturned-4.x-Community | closed | Useable Bind Inputs | Priority: Medium Status: Complete Type: Feature | Allow useable to bind actions rather than just the primary/secondary action. | 1.0 | Useable Bind Inputs - Allow useable to bind actions rather than just the primary/secondary action. | priority | useable bind inputs allow useable to bind actions rather than just the primary secondary action | 1 |
620,786 | 19,569,895,086 | IssuesEvent | 2022-01-04 08:35:26 | bounswe/2021SpringGroup9 | https://api.github.com/repos/bounswe/2021SpringGroup9 | closed | Implement Unit Tests | priority: critical difficulty: medium backend postory | ### Task:
Unit tests will be implemented for API functionalities.
### Type of task (new feature, writing tests, refactoring):
Writing Tests
**Deadline: 31.12.2021*
| 1.0 | Implement Unit Tests - ### Task:
Unit tests will be implemented for API functionalities.
### Type of task (new feature, writing tests, refactoring):
Writing Tests
**Deadline: 31.12.2021*
| priority | implement unit tests task unit tests will be implemented for api functionalities type of task new feature writing tests refactoring writing tests deadline | 1 |
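The row above asks for unit tests over API functionality. As a generic, hedged illustration of the kind of test meant (the `create_story` handler and its payload shape are invented here, not the project's actual API), a self-contained Python example:

```python
import unittest

def create_story(payload):
    """Toy API handler: validate a story payload and return (status, body)."""
    if not payload.get('title'):
        return 400, {'error': 'title is required'}
    return 201, {'id': 1, 'title': payload['title']}

class CreateStoryTests(unittest.TestCase):
    def test_valid_payload_is_created(self):
        status, body = create_story({'title': 'My trip'})
        self.assertEqual(status, 201)
        self.assertEqual(body['title'], 'My trip')

    def test_missing_title_is_rejected(self):
        status, body = create_story({})
        self.assertEqual(status, 400)
        self.assertIn('error', body)
```

Run with `python -m unittest` against the file containing these cases; each test exercises one success or failure path of the handler.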
121,928 | 4,823,045,935 | IssuesEvent | 2016-11-06 05:33:42 | CS2103AUG2016-W09-C4/main | https://api.github.com/repos/CS2103AUG2016-W09-C4/main | closed | Create new set of images for User Guide | priority.medium status.complete type.enhancement | The current markdown version of it has "labels" but need to be used separately for CS2101 documentation and the CS2103T version.
| 1.0 | Create new set of images for User Guide - The current markdown version of it has "labels" but need to be used separately for CS2101 documentation and the CS2103T version.
| priority | create new set of images for user guide the current markdown version of it has labels but need to be used separately for documentation and the version | 1 |