Dataset schema (as reported by the dataset viewer):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | length 5 to 112 |
| repo_url | string | length 34 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 855 |
| labels | string | length 4 to 721 |
| body | string | length 1 to 261k |
| index | string | 13 classes |
| text_combine | string | length 96 to 261k |
| label | string | 2 classes |
| text | string | length 96 to 240k |
| binary_label | int64 | 0 to 1 |
27,914
| 2,697,429,021
|
IssuesEvent
|
2015-04-02 19:54:50
|
ropensci/elastic
|
https://api.github.com/repos/ropensci/elastic
|
closed
|
Fix auth workflow so that if user/pwd or other auth is used, it is passed to the http request
|
highpriority
|
e.g.,
* `connect(es_user="scott", es_pwd="foobar")` then we use `httr::authenticate(...)` and pass to e.g., `GET()`
* etc. for other auth methods,
|
1.0
|
Fix auth workflow so that if user/pwd or other auth is used, it is passed to the http request - e.g.,
* `connect(es_user="scott", es_pwd="foobar")` then we use `httr::authenticate(...)` and pass to e.g., `GET()`
* etc. for other auth methods,
|
priority
|
fix auth workflow so that if use pwd or other auth used that passed to http request e g connect es user scott es pwd foobar then we use httr authenticate and pass to e g get etc for other auth methods
| 1
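The requested workflow, capturing credentials once in `connect()` and attaching them to every subsequent HTTP request, can be sketched in Python (the issue itself targets R's httr; `connect` and `auth_headers` here are hypothetical illustrative names, not elastic's actual API):

```python
import base64

def connect(es_user=None, es_pwd=None, base_url="http://localhost:9200"):
    # Stash credentials once at connection time.
    auth = (es_user, es_pwd) if es_user and es_pwd else None
    return {"base_url": base_url, "auth": auth}

def auth_headers(conn):
    # Turn the stored credentials into a Basic auth header that every
    # request attaches -- the analogue of forwarding the result of
    # httr::authenticate(...) into each GET() call.
    if conn["auth"] is None:
        return {}
    user, pwd = conn["auth"]
    token = base64.b64encode(f"{user}:{pwd}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```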
|
121,849
| 4,822,236,255
|
IssuesEvent
|
2016-11-05 19:03:27
|
sr320/LabDocs
|
https://api.github.com/repos/sr320/LabDocs
|
closed
|
Take action on genomes..
|
high priority
|
We need to move on these genomes...
We should discuss how to approach BGI
Also do we have any more material (tissue / DNA) from either of animals sent off for sequencing?
|
1.0
|
Take action on genomes.. - We need to move on these genomes...
We should discuss how to approach BGI
Also do we have any more material (tissue / DNA) from either of animals sent off for sequencing?
|
priority
|
take action on genomes we need to move on these genomes we should discuss how to approach bgi also do we have any more material tissue dna from either of animals sent off for sequencing
| 1
|
665,101
| 22,299,562,087
|
IssuesEvent
|
2022-06-13 07:24:04
|
kubermatic/kubermatic
|
https://api.github.com/repos/kubermatic/kubermatic
|
closed
|
"Upgrade system on first boot" is not working with vSphere Cloud Provider
|
kind/bug priority/high customer-request
|
### What happened?
<!-- Try to provide as much information as possible.
If you're reporting a security issue, please check the guidelines for reporting security issues:
https://github.com/kubermatic/kubermatic/blob/master/CONTRIBUTING.md#reporting-a-security-vulnerability -->
During the cluster creation process using the vSphere provider, when the "Upgrade system on first boot" option is selected, the MachineSets do not work as expected. When this option is turned off, it works fine.
### Expected behavior
<!-- What did you expect to happen? -->
When you select "Upgrade system on first boot", cluster creation should work without any issues, similar to how it works when this option is not selected.
### How to reproduce the issue?
<!-- Please provide as much information as possible, so we can reproduce the issue on our own. -->
From KKP dashboard, navigate to the project and select the "Create Cluster" under "Clusters" menu.
Then select "vSphere" on the "Provider" tab, fill out the information on the "Cluster" tab, and on the "Settings" tab under "Basic Settings" select the "Upgrade system on first boot" option; proceed with the other details to create the cluster.
You will observe that the MachineSets are not working properly (their status does not turn to Running).
### How is your environment configured?
- KKP version: 2.19.2
- Shared or separate master/seed clusters?: Separate Master and Seed Cluster.
### Provide your KKP manifest here (if applicable)
<!-- Providing an applicable manifest (KubermaticConfiguration, Seed, Cluster or other resources) will help us to reproduce the issue.
Please make sure to redact all secrets (e.g. passwords, URLs...)! -->
<details>
```yaml
# paste manifest here
```
</details>
### What cloud provider are you running on?
<!-- AWS, Azure, DigitalOcean, GCP, Hetzner Cloud, Nutanix, OpenStack, Equinix Metal (Packet), VMware vSphere, Other (e.g. baremetal or non-natively supported provider) -->
Azure for KKP Master setup and vSphere for Seed and User Clusters setup
### What operating system are you running in your user cluster?
<!-- Ubuntu 20.04, CentOS 7, Rocky Linux 8, Flatcar Linux, ... (optional, bug might not be related to user cluster) -->
Operating System: Ubuntu 20.04
### Additional information
<!-- Additional information about the bug you're reporting (optional). -->
|
1.0
|
"Upgrade system on first boot" is not working with vSphere Cloud Provider - ### What happened?
<!-- Try to provide as much information as possible.
If you're reporting a security issue, please check the guidelines for reporting security issues:
https://github.com/kubermatic/kubermatic/blob/master/CONTRIBUTING.md#reporting-a-security-vulnerability -->
During the cluster creation process using the vSphere provider, when the "Upgrade system on first boot" option is selected, the MachineSets do not work as expected. When this option is turned off, it works fine.
### Expected behavior
<!-- What did you expect to happen? -->
When you select "Upgrade system on first boot", cluster creation should work without any issues, similar to how it works when this option is not selected.
### How to reproduce the issue?
<!-- Please provide as much information as possible, so we can reproduce the issue on our own. -->
From KKP dashboard, navigate to the project and select the "Create Cluster" under "Clusters" menu.
Then select "vSphere" on the "Provider" tab, fill out the information on the "Cluster" tab, and on the "Settings" tab under "Basic Settings" select the "Upgrade system on first boot" option; proceed with the other details to create the cluster.
You will observe that the MachineSets are not working properly (their status does not turn to Running).
### How is your environment configured?
- KKP version: 2.19.2
- Shared or separate master/seed clusters?: Separate Master and Seed Cluster.
### Provide your KKP manifest here (if applicable)
<!-- Providing an applicable manifest (KubermaticConfiguration, Seed, Cluster or other resources) will help us to reproduce the issue.
Please make sure to redact all secrets (e.g. passwords, URLs...)! -->
<details>
```yaml
# paste manifest here
```
</details>
### What cloud provider are you running on?
<!-- AWS, Azure, DigitalOcean, GCP, Hetzner Cloud, Nutanix, OpenStack, Equinix Metal (Packet), VMware vSphere, Other (e.g. baremetal or non-natively supported provider) -->
Azure for KKP Master setup and vSphere for Seed and User Clusters setup
### What operating system are you running in your user cluster?
<!-- Ubuntu 20.04, CentOS 7, Rocky Linux 8, Flatcar Linux, ... (optional, bug might not be related to user cluster) -->
Operating System: Ubuntu 20.04
### Additional information
<!-- Additional information about the bug you're reporting (optional). -->
|
priority
|
upgrade system on first boot is not working with vsphere cloud provider what happened try to provide as much information as possible if you re reporting a security issue please check the guidelines for reporting security issues during cluster creation process using vsphere provider when upgrade system on first boot option is selected then the machinesets are not working as expected when this option is turned off it works fine expected behavior when you select the upgrade system on first boot the cluster creation should work fine as well without any issues similar to how it works when this option is not selected how to reproduce the issue from kkp dashboard navigate to the project and select the create cluster under clusters menu then select vsphere at provider tab next fill out the information at cluster tab next at the settings tab under basic settings select the upgrade system on first boot option and proceed with other details to create cluster you will observe that the machinesets is not working properly status does not turns to running how is your environment configured kkp version shared or separate master seed clusters separate master and seed cluster provide your kkp manifest here if applicable providing an applicable manifest kubermaticconfiguration seed cluster or other resources will help us to reproduce the issue please make sure to redact all secrets e g passwords urls yaml paste manifest here what cloud provider are you running on azure for kkp master setup and vsphere for seed and user clusters setup what operating system are you running in your user cluster operating system ubuntu additional information
| 1
|
311,917
| 9,540,304,700
|
IssuesEvent
|
2019-04-30 19:10:27
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Yet Another Server Crash
|
Fixed High Priority
|
> --BEGIN DUMP--
> Dump Time
> 04/21/2019 18:44:04
>
> Exception
> Exception: KeyNotFoundException
> Message:Der angegebene Schlüssel war nicht im Wörterbuch angegeben.
> Source:mscorlib
>
> System.Collections.Generic.KeyNotFoundException: Der angegebene Schlüssel war nicht im Wörterbuch angegeben.
> at System.ThrowHelper.ThrowKeyNotFoundException()
> at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
> at Eco.Simulation.RouteProbing.RouteManager.AllWalkablesY(WorldPosition3i pos, Int32 searchRange, Int16 matchRegion)
> at Eco.Simulation.Agents.AI.AIUtilities.FindRoute(Vector3 position, Single minRadius, Single maxRadius, Vector2 direction, Single minDirectionOffsetDegrees, Single maxDirectionOffsetDegrees, Int32 tryCount)
> at Eco.Mods.Organisms.Behaviors.MovementBehaviors.LandMovement(Animal agent, Vector2 direction, Single speed, AnimalAnimationState state, Single minDistance, Single maxDistance, Single minDirectionOffsetDegrees, Single maxDirectionOffsetDegrees, Int32 tryCount)
> at Eco.Mods.Organisms.Behaviors.GroupBehaviors.FollowLeader(Animal agent, Single maxDistance)
> at Eco.Simulation.Agents.AI.FunctionalBehaviorTree`1.Selector(T context, Func`2[] children)
> at Eco.Simulation.Agents.Animal.Tick()
> at Eco.Simulation.Simulation.TickAll()
> at Eco.Core.Plugins.TickTimeUtil.TimeSubprocess(Action func)
> at Eco.Simulation.EcoSim.DoTick(TickSample tick)
> at Eco.Simulation.EcoSim.Run()
> at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
> at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
> at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
> at System.Threading.ThreadHelper.ThreadStart()
>
> --END DUMP--
>
|
1.0
|
Yet Another Server Crash - > --BEGIN DUMP--
> Dump Time
> 04/21/2019 18:44:04
>
> Exception
> Exception: KeyNotFoundException
> Message:Der angegebene Schlüssel war nicht im Wörterbuch angegeben.
> Source:mscorlib
>
> System.Collections.Generic.KeyNotFoundException: Der angegebene Schlüssel war nicht im Wörterbuch angegeben.
> at System.ThrowHelper.ThrowKeyNotFoundException()
> at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
> at Eco.Simulation.RouteProbing.RouteManager.AllWalkablesY(WorldPosition3i pos, Int32 searchRange, Int16 matchRegion)
> at Eco.Simulation.Agents.AI.AIUtilities.FindRoute(Vector3 position, Single minRadius, Single maxRadius, Vector2 direction, Single minDirectionOffsetDegrees, Single maxDirectionOffsetDegrees, Int32 tryCount)
> at Eco.Mods.Organisms.Behaviors.MovementBehaviors.LandMovement(Animal agent, Vector2 direction, Single speed, AnimalAnimationState state, Single minDistance, Single maxDistance, Single minDirectionOffsetDegrees, Single maxDirectionOffsetDegrees, Int32 tryCount)
> at Eco.Mods.Organisms.Behaviors.GroupBehaviors.FollowLeader(Animal agent, Single maxDistance)
> at Eco.Simulation.Agents.AI.FunctionalBehaviorTree`1.Selector(T context, Func`2[] children)
> at Eco.Simulation.Agents.Animal.Tick()
> at Eco.Simulation.Simulation.TickAll()
> at Eco.Core.Plugins.TickTimeUtil.TimeSubprocess(Action func)
> at Eco.Simulation.EcoSim.DoTick(TickSample tick)
> at Eco.Simulation.EcoSim.Run()
> at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
> at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
> at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
> at System.Threading.ThreadHelper.ThreadStart()
>
> --END DUMP--
>
|
priority
|
yet another sever crash begin dump dump time exception exception keynotfoundexception message der angegebene schlüssel war nicht im wörterbuch angegeben source mscorlib system collections generic keynotfoundexception der angegebene schlüssel war nicht im wörterbuch angegeben at system throwhelper throwkeynotfoundexception at system collections generic dictionary get item tkey key at eco simulation routeprobing routemanager allwalkablesy pos searchrange matchregion at eco simulation agents ai aiutilities findroute position single minradius single maxradius direction single mindirectionoffsetdegrees single maxdirectionoffsetdegrees trycount at eco mods organisms behaviors movementbehaviors landmovement animal agent direction single speed animalanimationstate state single mindistance single maxdistance single mindirectionoffsetdegrees single maxdirectionoffsetdegrees trycount at eco mods organisms behaviors groupbehaviors followleader animal agent single maxdistance at eco simulation agents ai functionalbehaviortree selector t context func children at eco simulation agents animal tick at eco simulation simulation tickall at eco core plugins ticktimeutil timesubprocess action func at eco simulation ecosim dotick ticksample tick at eco simulation ecosim run at system threading executioncontext runinternal executioncontext executioncontext contextcallback callback object state boolean preservesyncctx at system threading executioncontext run executioncontext executioncontext contextcallback callback object state boolean preservesyncctx at system threading executioncontext run executioncontext executioncontext contextcallback callback object state at system threading threadhelper threadstart end dump
| 1
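The dump above is a .NET `KeyNotFoundException` thrown by `Dictionary`2.get_Item` inside `RouteManager.AllWalkablesY` (the German message is .NET's standard "The given key was not present in the dictionary."). A minimal Python sketch of the failure mode and the usual guard; the names are illustrative stand-ins, not Eco's actual code:

```python
def all_walkables(route_cache, pos):
    # Unguarded indexing -- raises KeyError (Python's analogue of .NET's
    # KeyNotFoundException) when the position was never added to the cache.
    return route_cache[pos]

def all_walkables_safe(route_cache, pos):
    # Guarded lookup: a missing position yields an empty result instead
    # of crashing the whole simulation tick.
    return route_cache.get(pos, ())
```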
|
578,651
| 17,149,614,460
|
IssuesEvent
|
2021-07-13 18:40:37
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Incorrect step is resolved from AuthenticationContext.sequenceConfig when building JSGraph for adaptive script
|
Component/Adaptive Auth Component/Auth Framework Priority/Highest bug
|
**Describe the issue:**
`stepMap` in `AuthenticationContext.sequenceConfig` is in the executed order of the steps, not the order defined in the `authenticationSequence`. Here [1], `stepConfig` is retrieved assuming that the `stepMap` in `authenticationSequence` and the one in `AuthenticationContext.sequenceConfig` are in the same order.
[1] https://github.com/wso2/carbon-identity-framework/blob/master/components/authentication-framework/org.wso2.carbon.identity.application.authentication.framework/src/main/java/org/wso2/carbon/identity/application/authentication/framework/config/model/graph/JsGraphBuilder.java#L565
|
1.0
|
Incorrect step is resolved from AuthenticationContext.sequenceConfig when building JSGraph for adaptive script - **Describe the issue:**
`stepMap` in `AuthenticationContext.sequenceConfig` is in the executed order of the steps, not the order defined in the `authenticationSequence`. Here [1], `stepConfig` is retrieved assuming that the `stepMap` in `authenticationSequence` and the one in `AuthenticationContext.sequenceConfig` are in the same order.
[1] https://github.com/wso2/carbon-identity-framework/blob/master/components/authentication-framework/org.wso2.carbon.identity.application.authentication.framework/src/main/java/org/wso2/carbon/identity/application/authentication/framework/config/model/graph/JsGraphBuilder.java#L565
|
priority
|
incorrect step is resolved from authenticationcontext sequenceconfig when building jsgraph for adaptive script describe the issue stepmap in authenticationcontext sequenceconfig is in the executed order of the steps not the order defined in the authenticationsequence in here stepconfig is retrieved considering the stepmap in authenticationsequence and authenticationcontext sequenceconfig are in same order
| 1
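The bug pattern — two maps holding the same entries in different insertion orders (defined order vs. executed order), read as if their positions were aligned — can be reproduced in a few lines of Python. `defined_steps` and `executed_steps` are simplified stand-ins for the two `stepMap`s:

```python
# Steps keyed by step id, in the order the sequence defines them.
defined_steps = {1: "basic", 2: "totp", 3: "fido"}

# The same steps in the order they actually executed (step 2 ran first).
executed_steps = {2: "totp", 1: "basic", 3: "fido"}

def step_by_position(step_map, position):
    # Buggy lookup: trusts insertion order, so "position 0" names
    # different steps in the two maps.
    return list(step_map.values())[position]

def step_by_id(step_map, step_id):
    # Correct lookup: key on the step id, independent of execution order.
    return step_map[step_id]
```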
|
350,110
| 10,478,463,275
|
IssuesEvent
|
2019-09-24 00:08:32
|
BCcampus/edehr
|
https://api.github.com/repos/BCcampus/edehr
|
closed
|
User can navigate between different sections of a page with horizontal tabs/links
|
Effort - Medium Epic - Navigation Priority - High ~Feature
|
Pages that use these tabs include:
Vital signs
|
1.0
|
User can navigate between different sections of a page with horizontal tabs/links - Pages that use these tabs include:
Vital signs
|
priority
|
user can navigate between different sections of a page with horizontal tabs links pages that use these tabs include vital signs
| 1
|
618,501
| 19,472,008,729
|
IssuesEvent
|
2021-12-24 03:52:23
|
Quizeal/Frontend
|
https://api.github.com/repos/Quizeal/Frontend
|
closed
|
Quiz Test: Timer indicating quiz duration
|
bug enhancement frontend Priority: High
|
Currently the timer is shown in seconds.
Handle the case where the quiz duration is in minutes, and so on.
|
1.0
|
Quiz Test: Timer indicating quiz duration - Currently the timer is shown in seconds.
Handle the case where the quiz duration is in minutes, and so on.
|
priority
|
quiz test timer indicating quiz duration currently timmer showing is in seconds handle it if quiz duration is in minutes and so on
| 1
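One way to handle durations that may arrive in minutes rather than seconds is to normalize to seconds and render once. A small sketch; the function name and unit convention are assumptions (the Quizeal frontend is JavaScript, this is a Python illustration):

```python
def format_timer(duration, unit="seconds"):
    # Normalize the configured quiz duration to seconds, then render as
    # H:MM:SS (or M:SS when under an hour) so minute-based durations
    # display correctly instead of as a raw seconds count.
    total = int(duration * 60) if unit == "minutes" else int(duration)
    hours, rem = divmod(total, 3600)
    minutes, seconds = divmod(rem, 60)
    if hours:
        return f"{hours}:{minutes:02d}:{seconds:02d}"
    return f"{minutes}:{seconds:02d}"
```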
|
770,829
| 27,058,458,423
|
IssuesEvent
|
2023-02-13 17:48:54
|
union-platform/union-mobile-app
|
https://api.github.com/repos/union-platform/union-mobile-app
|
opened
|
User wants to tell about himself in order to get access to the platform's features
|
priority: high type: feature
|
**Scope of action:** Profile Creation Screen
**Precondition:** The system did not detect the profile after entering the data in the authorization and registration scenarios
**Design:** https://www.figma.com/file/2St3zSul4fHnLffqy3WK7P/union-mobile?node-id=1160%3A1027
## Use cases:
1. The user enters the name
2. The system checks the name for unacceptable characters (only letters and spaces, from 4 to 32 symbols)
(a):
1. The system informs the user about incorrect characters in the name
2. The user deletes invalid characters, then the script continues from step 3
3. The user clicks continue
4. The system redirects the user to the interests selection screen
5. The user enters their interests in the form of tags
(a):
1. The user deletes the added interest
6. The system shows hints from the tag cloud
7. The user selects tags from the suggestions
(a):
1. The user creates a new tag, if there is no such tag yet
8. The user clicks continue
9. The system creates a new profile and redirects the user to the search screen
|
1.0
|
User wants to tell about himself in order to get access to the platform's features - **Scope of action:** Profile Creation Screen
**Precondition:** The system did not detect the profile after entering the data in the authorization and registration scenarios
**Design:** https://www.figma.com/file/2St3zSul4fHnLffqy3WK7P/union-mobile?node-id=1160%3A1027
## Use cases:
1. The user enters the name
2. The system checks the name for unacceptable characters (only letters and spaces, from 4 to 32 symbols)
(a):
1. The system informs the user about incorrect characters in the name
2. The user deletes invalid characters, then the script continues from step 3
3. The user clicks continue
4. The system redirects the user to the interests selection screen
5. The user enters their interests in the form of tags
(a):
1. The user deletes the added interest
6. The system shows hints from the tag cloud
7. The user selects tags from the suggestions
(a):
1. The user creates a new tag, if there is no such tag yet
8. The user clicks continue
9. The system creates a new profile and redirects the user to the search screen
|
priority
|
user wants to tell about himself in order to get access to the platform s features scope of action profile creation screen precondition the system did not detect the profile after entering the data in the authorization and registration scenarios design use cases the user enters the name the system checks the name for unacceptable characters only letters and spaces from to symbols a the system informs the user about incorrect characters in the name the user deletes invalid characters then the script continues from step the user clicks continue the system redirects the user to the interests selection screen the user enters their interests in the form of tags a the user deletes the added interest the system shows hints from the tag cloud the user selects tags from the suggestions a the user creates a new tag if there is no such tag yet the user clicks continue the system creates a new profile and redirects the user to the search screen область действия экран создания профиля предусловие система не обнаружила профиля после ввода данных в сценарии авторизации и регистрации дизайн сценарий пользователь вводит имя система проверяет имя на наличие неприемлемых символов разрешены только буквы и пробельный символ от до символов а система сообщает пользователю о некорректных символах в имени пользователь удаляет недопустимые символы далее сценарий продолжается с шага пользователь нажимает продолжить система перенаправляет пользователя на экран выбора интересов пользователь вводит свои интересы в виде тегов а пользователь удаляет добавленный интерес система показывает подсказки из облака тегов пользователь выбирает теги из подсказок а пользователь создаёт новый тег если такого тега ещё нет пользователь нажимает продолжить система создаёт новый профиль и перенаправляет пользователя на экран поиска
| 1
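The name rule in step 2 (only letters and spaces, 4 to 32 symbols) can be sketched as a validator. This is one reading of the spec, not the app's implementation; it deliberately uses `str.isalpha()` so non-Latin letters pass too:

```python
def validate_name(name):
    # 4-32 characters, each either a letter (any alphabet) or a space,
    # per the "only letters and spaces, from 4 to 32 symbols" rule.
    if not 4 <= len(name) <= 32:
        return False
    return all(ch.isalpha() or ch == " " for ch in name)
```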
|
174,146
| 6,537,145,549
|
IssuesEvent
|
2017-08-31 21:04:15
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
closed
|
update conformance config with banned methods
|
Category: Tooling P1: High Priority Type: Feature Request
|
per @jridgewell
Object.values, Object.entries, Object.getOwnPropertyDescriptors, String#padStart, String#padEnd
#banemall
|
1.0
|
update conformance config with banned methods - per @jridgewell
Object.values, Object.entries, Object.getOwnPropertyDescriptors, String#padStart, String#padEnd
#banemall
|
priority
|
update conformance config with banned methods per jridgewell object values object entries object getownpropertydescriptors string padstart string padend banemall
| 1
|
578,123
| 17,144,776,727
|
IssuesEvent
|
2021-07-13 13:35:13
|
perfectsense/gyro
|
https://api.github.com/repos/perfectsense/gyro
|
closed
|
`@for` directives not setting variable if it has been set earlier in config
|
bug priority:high
|
**Describe the bug**
If a variable is reused inside a @for directive, after it's been set to some value, then it will not be set to the values in the array that was passed in.
**To Reproduce**
You can run the following gyro config to reproduce the issue
```
some_var: "item"
a_list: ["hi", "hello", "how", "are"]
@for some_var -in $(a_list)
# The expected output is "hi" "hello" "how" "are"
# but we get "item1" instead.
@print: $(some_var)
@end
```
**Expected behavior**
The variable that was reused to capture items in a collection should be set to each of those items, or gyro should print an error if variable shadowing is not allowed.
**Additional context**
The following will also fail, even though the variable "some_var" should be out of scope by the time it gets to the second for loop.
```
@for thing -in ["item"]
some_var: $(thing)
@print: "Here is the first value: $(some_var)"
@end
a_list: ["hi", "hello", "how", "are"]
@for some_var -in $(a_list)
@print: "A value from the array: $(some_var)"
@end
```
|
1.0
|
`@for` directives not setting variable if it has been set earlier in config - **Describe the bug**
If a variable is reused inside a @for directive, after it's been set to some value, then it will not be set to the values in the array that was passed in.
**To Reproduce**
You can run the following gyro config to reproduce the issue
```
some_var: "item"
a_list: ["hi", "hello", "how", "are"]
@for some_var -in $(a_list)
# The expected output is "hi" "hello" "how" "are"
# but we get "item1" instead.
@print: $(some_var)
@end
```
**Expected behavior**
The variable that was reused to capture items in a collection should be set to each of those items, or gyro should print an error if variable shadowing is not allowed.
**Additional context**
The following will also fail, even though the variable "some_var" should be out of scope by the time it gets to the second for loop.
```
@for thing -in ["item"]
some_var: $(thing)
@print: "Here is the first value: $(some_var)"
@end
a_list: ["hi", "hello", "how", "are"]
@for some_var -in $(a_list)
@print: "A value from the array: $(some_var)"
@end
```
|
priority
|
for directives not setting variable if it has been used set earlier in config describe the bug if a variable is reused inside a for directive after it s been set to some value then it will not be set to the values in the array that was passed in to reproduce you can run the following gyro config to reproduce the issue some var item a list for some var in a list the excepted output is hi hello how are but we get instead print some var end expected behavior the variable that was reused to capture items in a collection should be set to each of those items or gyro should print an error if variable shadowing is not allowed additional context the following will also fail even though the variable some var should be out of scope by the time it gets to the second for loop for thing in some var thing print here is the first value some var end a list for some var in a list print a value from the array some var end
| 1
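The expected `@for` semantics — the loop variable shadows any earlier binding for the duration of the loop, then the outer value is restored — can be modeled in Python. `run_for` is a hypothetical interpreter helper, not gyro's actual evaluator:

```python
def run_for(scope, var, values, body):
    # Rebind `var` for each item, shadowing any earlier value in `scope`,
    # and restore (or remove) the outer binding when the loop ends.
    sentinel = object()
    prior = scope.get(var, sentinel)
    results = []
    for item in values:
        scope[var] = item
        results.append(body(scope))
    if prior is sentinel:
        scope.pop(var, None)
    else:
        scope[var] = prior
    return results
```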
|
618,961
| 19,501,068,659
|
IssuesEvent
|
2021-12-28 03:20:52
|
0auBSQ/OpenTaiko
|
https://api.github.com/repos/0auBSQ/OpenTaiko
|
opened
|
[Improvement] Make balloon/roll counts be processed in real time instead of after the roll/balloon pops
|
enhancement priority : medium-to-high
|
Would be more convenient for branch judgement or dan charts
|
1.0
|
[Improvement] Make balloon/roll counts be processed in real time instead of after the roll/balloon pops - Would be more convenient for branch judgement or dan charts
|
priority
|
make balloons rolls counts being processed in real time instead of after the roll balloon pop would be more convenient for branches judgement or dan charts
| 1
|
526,190
| 15,283,181,928
|
IssuesEvent
|
2021-02-23 10:31:26
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
[0.9.3 staging-1930] Transaction window doesn't work
|
Category: Gameplay Priority: High Regression Squad: Mountain Goat Status: Fixed Type: Bug
|
Steps to reproduce:
- make some money transfers, buy/sell something, or just create currency in the mint.
- open Transaction window by clicking on account name.
- transaction window is empty:

|
1.0
|
[0.9.3 staging-1930] Transaction window doesn't work - Steps to reproduce:
- make some money transfers, buy/sell something, or just create currency in the mint.
- open Transaction window by clicking on account name.
- transaction window is empty:

|
priority
|
transaction window doesn t work step to reproduce make some money transfers buy sell something or just create currnecy in mint open transaction window by clicking on account name transaction window is empty
| 1
|
825,940
| 31,479,293,694
|
IssuesEvent
|
2023-08-30 12:54:53
|
gamefreedomgit/Maelstrom
|
https://api.github.com/repos/gamefreedomgit/Maelstrom
|
closed
|
Baleroc - shards spawning
|
Priority: High Raid: Firelands
|
**Description:**
During the Baleroc encounter, shards are supposed to spawn randomly on any ranged or melee player except the tank. Currently they always spawn on melee players.
**How to reproduce:**
Pull Baleroc and observe that shards always spawn on melee players.
|
1.0
|
Baleroc - shards spawning - **Description:**
During the Baleroc encounter, shards are supposed to spawn randomly on any ranged or melee player except the tank. Currently they always spawn on melee players.
**How to reproduce:**
Pull Baleroc and observe that shards always spawn on melee players.
|
priority
|
baleroc shards spawning description during baleroc enounter shards are supposed to spawn randomly on any range or melee except tank currently its a always spawning on melee players how to reproduce pull baleroc and see it s always spawning on melee
| 1
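The intended targeting — a uniform pick among all ranged and melee players, excluding the tank — is a one-line filtered choice. A sketch with hypothetical player records, not the server's actual code:

```python
import random

def pick_shard_target(players, rng=random):
    # Eligible pool: everyone who is not the tank. The reported bug
    # effectively shrank this pool to melee players only.
    eligible = [p for p in players if p["role"] != "tank"]
    return rng.choice(eligible)
```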
|
781,412
| 27,436,465,878
|
IssuesEvent
|
2023-03-02 07:52:38
|
fkie-cad/dewolf
|
https://api.github.com/repos/fkie-cad/dewolf
|
closed
|
[Frontend] AttributeError: 'NoneType' object has no attribute 'requirements' during CFG creation
|
bug priority-high
|
### What happened?
possibly related to #150
```python
Traceback (most recent call last):
File "/home/user/dewolf/decompile.py", line 82, in <module>
main(Decompiler)
File "/home/user/dewolf/decompiler/util/commandline.py", line 81, in main
undecorated_code = decompiler.decompile_all(options)
File "/home/user/dewolf/decompile.py", line 71, in decompile_all
task = self._frontend.create_task(function, task_options)
File "/home/user/dewolf/decompiler/frontend/binaryninja/frontend.py", line 137, in create_task
raise e
File "/home/user/dewolf/decompiler/frontend/binaryninja/frontend.py", line 126, in create_task
cfg = self._extract_cfg(function.function, options)
File "/home/user/dewolf/decompiler/frontend/binaryninja/frontend.py", line 156, in _extract_cfg
return parser.parse(function)
File "/home/user/dewolf/decompiler/frontend/binaryninja/parser.py", line 34, in parse
index_to_BasicBlock[basic_block.index] = BasicBlock(basic_block.index, instructions=list(self._lift_instructions(basic_block)))
File "/home/user/dewolf/decompiler/structures/graphs/basicblock.py", line 38, in __init__
self._update()
File "/home/user/dewolf/decompiler/structures/graphs/basicblock.py", line 183, in _update
for dependency in instruction.requirements:
File "/home/user/dewolf/decompiler/structures/pseudo/instructions.py", line 142, in requirements
return self._destination.requirements + self._value.requirements
File "/home/user/dewolf/decompiler/structures/pseudo/operations.py", line 226, in requirements
return self._collect_required_variables(self.operands)
File "/home/user/dewolf/decompiler/structures/pseudo/operations.py", line 246, in _collect_required_variables
return list(InsertionOrderedSet(chain(*[op.requirements for op in operands])))
File "/home/user/dewolf/decompiler/structures/pseudo/operations.py", line 246, in <listcomp>
return list(InsertionOrderedSet(chain(*[op.requirements for op in operands])))
```
### How to reproduce?
```bash
python decompile.py 7923f949e7422ac02c2dc5148950861f8102c83859f00b7d09857b95f16f7caf sub_40321f --pipeline.debug
```
### Affected Binary Ninja Version(s)
3.3.3996 (Build ID e34a955e)
|
1.0
|
[Frontend] AttributeError: 'NoneType' object has no attribute 'requirements' during CFG creation - ### What happened?
possibly related to #150
```python
Traceback (most recent call last):
File "/home/user/dewolf/decompile.py", line 82, in <module>
main(Decompiler)
File "/home/user/dewolf/decompiler/util/commandline.py", line 81, in main
undecorated_code = decompiler.decompile_all(options)
File "/home/user/dewolf/decompile.py", line 71, in decompile_all
task = self._frontend.create_task(function, task_options)
File "/home/user/dewolf/decompiler/frontend/binaryninja/frontend.py", line 137, in create_task
raise e
File "/home/user/dewolf/decompiler/frontend/binaryninja/frontend.py", line 126, in create_task
cfg = self._extract_cfg(function.function, options)
File "/home/user/dewolf/decompiler/frontend/binaryninja/frontend.py", line 156, in _extract_cfg
return parser.parse(function)
File "/home/user/dewolf/decompiler/frontend/binaryninja/parser.py", line 34, in parse
index_to_BasicBlock[basic_block.index] = BasicBlock(basic_block.index, instructions=list(self._lift_instructions(basic_block)))
File "/home/user/dewolf/decompiler/structures/graphs/basicblock.py", line 38, in __init__
self._update()
File "/home/user/dewolf/decompiler/structures/graphs/basicblock.py", line 183, in _update
for dependency in instruction.requirements:
File "/home/user/dewolf/decompiler/structures/pseudo/instructions.py", line 142, in requirements
return self._destination.requirements + self._value.requirements
File "/home/user/dewolf/decompiler/structures/pseudo/operations.py", line 226, in requirements
return self._collect_required_variables(self.operands)
File "/home/user/dewolf/decompiler/structures/pseudo/operations.py", line 246, in _collect_required_variables
return list(InsertionOrderedSet(chain(*[op.requirements for op in operands])))
File "/home/user/dewolf/decompiler/structures/pseudo/operations.py", line 246, in <listcomp>
return list(InsertionOrderedSet(chain(*[op.requirements for op in operands])))
```
### How to reproduce?
```bash
python decompile.py 7923f949e7422ac02c2dc5148950861f8102c83859f00b7d09857b95f16f7caf sub_40321f --pipeline.debug
```
### Affected Binary Ninja Version(s)
3.3.3996 (Build ID e34a955e)
|
priority
|
attributeerror nonetype object has no attribute requirements during cfg creation what happened possibly related to python traceback most recent call last file home user dewolf decompile py line in main decompiler file home user dewolf decompiler util commandline py line in main undecorated code decompiler decompile all options file home user dewolf decompile py line in decompile all task self frontend create task function task options file home user dewolf decompiler frontend binaryninja frontend py line in create task raise e file home user dewolf decompiler frontend binaryninja frontend py line in create task cfg self extract cfg function function options file home user dewolf decompiler frontend binaryninja frontend py line in extract cfg return parser parse function file home user dewolf decompiler frontend binaryninja parser py line in parse index to basicblock basicblock basic block index instructions list self lift instructions basic block file home user dewolf decompiler structures graphs basicblock py line in init self update file home user dewolf decompiler structures graphs basicblock py line in update for dependency in instruction requirements file home user dewolf decompiler structures pseudo instructions py line in requirements return self destination requirements self value requirements file home user dewolf decompiler structures pseudo operations py line in requirements return self collect required variables self operands file home user dewolf decompiler structures pseudo operations py line in collect required variables return list insertionorderedset chain file home user dewolf decompiler structures pseudo operations py line in return list insertionorderedset chain how to reproduce bash python decompile py sub pipeline debug affected binary ninja version s build id
| 1
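The dewolf record above ends in an `AttributeError` because a `None` value flows out of the lifter and into a pass that touches `.requirements`. As a hedged illustration of that failure mode (all names below are hypothetical, not dewolf's actual API), filtering `None` results at the lifting boundary is one common way to keep such crashes out of later passes:

```python
# Illustrative sketch only: a lifter that may return None for unsupported
# instructions. Letting None flow onward reproduces the reported
# AttributeError; filtering at the boundary avoids it.

class Instruction:
    def __init__(self, reqs):
        self.requirements = reqs


def lift(raw):
    # Pretend some raw instructions cannot be lifted and yield None.
    if raw.startswith("unsupported"):
        return None
    return Instruction(reqs=[raw])


def collect_requirements(raw_instructions):
    lifted = [lift(r) for r in raw_instructions]
    # Guard: drop (or log) instructions the lifter could not handle,
    # instead of letting None reach attribute accesses downstream.
    lifted = [i for i in lifted if i is not None]
    reqs = []
    for instr in lifted:
        reqs.extend(instr.requirements)
    return reqs
```

Whether dropping, logging, or raising a descriptive error is the right policy depends on the decompiler's invariants; the point is that the `None` check happens once, at the producer/consumer boundary.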
|
26,922
| 2,688,818,863
|
IssuesEvent
|
2015-03-31 04:27:35
|
david415/HoneyBadger
|
https://api.github.com/repos/david415/HoneyBadger
|
closed
|
fix bug in detectInjection
|
bug highest priority
|
panic: getRingSlice: sliceStart 7240 >= head len 1900
goroutine 1388 [running]:
github.com/david415/HoneyBadger.getRingSlice(0xc2095b5a40, 0xc2095b5ac0, 0x1c48, 0x0, 0x0, 0x0, 0x0)
/home/human/golang/gopkg/src/github.com/david415/HoneyBadger/retrospective.go:132 +0x293
github.com/david415/HoneyBadger.(*Connection).getOverlapBytes(0xc2095a5b80, 0xc2095b5a40, 0xc2095b5ac0, 0x3063618f, 0x30636cde, 0x0, 0x0, 0x0, 0x0, 0x5d1db96a)
/home/human/golang/gopkg/src/github.com/david415/HoneyBadger/state_machine.go:262 +0x3f0
github.com/david415/HoneyBadger.(*Connection).detectInjection(0xc2095a5b80, 0xecc7cbed8, 0xc205c86860, 0x9d3a40, 0x1, 0x4, 0x4, 0x2a61eb9, 0x0, 0x5d1db96a, ...)
|
1.0
|
fix bug in detectInjection - panic: getRingSlice: sliceStart 7240 >= head len 1900
goroutine 1388 [running]:
github.com/david415/HoneyBadger.getRingSlice(0xc2095b5a40, 0xc2095b5ac0, 0x1c48, 0x0, 0x0, 0x0, 0x0)
/home/human/golang/gopkg/src/github.com/david415/HoneyBadger/retrospective.go:132 +0x293
github.com/david415/HoneyBadger.(*Connection).getOverlapBytes(0xc2095a5b80, 0xc2095b5a40, 0xc2095b5ac0, 0x3063618f, 0x30636cde, 0x0, 0x0, 0x0, 0x0, 0x5d1db96a)
/home/human/golang/gopkg/src/github.com/david415/HoneyBadger/state_machine.go:262 +0x3f0
github.com/david415/HoneyBadger.(*Connection).detectInjection(0xc2095a5b80, 0xecc7cbed8, 0xc205c86860, 0x9d3a40, 0x1, 0x4, 0x4, 0x2a61eb9, 0x0, 0x5d1db96a, ...)
|
priority
|
fix bug in detectinjection panic getringslice slicestart head len goroutine github com honeybadger getringslice home human golang gopkg src github com honeybadger retrospective go github com honeybadger connection getoverlapbytes home human golang gopkg src github com honeybadger state machine go github com honeybadger connection detectinjection
| 1
|
738,716
| 25,573,051,425
|
IssuesEvent
|
2022-11-30 19:24:15
|
WordPress/openverse-frontend
|
https://api.github.com/repos/WordPress/openverse-frontend
|
closed
|
Cannot select search type on mobile
|
🟧 priority: high 🛠 goal: fix 🕹 aspect: interface
|
## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
When you change the search type on mobile, the modal does not close. Locally in dev, you get redirected to a 404 page.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
1. <!-- Step 1 ... -->Go to https://wordpress.org/openverse/search/image?q=cat on a narrow screen.
2. <!-- Step 2 ... -->Try to change the search type using the content switcher in the header.
3. <!-- Step 3 ... -->When you click on the content switcher, the modal opens.
4. When you click on 'Audio', the search type changes to audio, but the modal isn't closed, so it's unclear to the user that the content changed. If you try the same locally, using `pnpm:dev`, you will get a 404 page.
## Screenshots
<!-- Add screenshots to show the problem; or delete the section entirely. -->

## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [x] 🙋 I would be interested in resolving this bug.
|
1.0
|
Cannot select search type on mobile - ## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
When you change the search type on mobile, the modal does not close. Locally in dev, you get redirected to a 404 page.
## Reproduction
<!-- Provide detailed steps to reproduce the bug. -->
1. <!-- Step 1 ... -->Go to https://wordpress.org/openverse/search/image?q=cat on a narrow screen.
2. <!-- Step 2 ... -->Try to change the search type using the content switcher in the header.
3. <!-- Step 3 ... -->When you click on the content switcher, the modal opens.
4. When you click on 'Audio', the search type changes to audio, but the modal isn't closed, so it's unclear to the user that the content changed. If you try the same locally, using `pnpm:dev`, you will get a 404 page.
## Screenshots
<!-- Add screenshots to show the problem; or delete the section entirely. -->

## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [x] 🙋 I would be interested in resolving this bug.
|
priority
|
cannot select search type on mobile description when you change the search type on mobile the modal does not close locally in dev you get redirected to a page reproduction go to on a narrow screen try to change the search type using the content switcher in the header when you click on the content switcher the modal opens when you click on audio the search type changes to audio but the modal isn t closed so it s unclear for the user that the content changed if you try the save locally using pnpm dev you will get a page screenshots resolution 🙋 i would be interested in resolving this bug
| 1
|
502,627
| 14,563,203,213
|
IssuesEvent
|
2020-12-17 01:55:54
|
ctm/mb2-doc
|
https://api.github.com/repos/ctm/mb2-doc
|
opened
|
command box doesn't work in shrink mode in Safari
|
bug easy high priority
|
If the window is in shrink mode in Safari, enter in the command box tries to call time in NLHE. I think it toggles the show number of hands remaining button in a mix.
I believe this is because I put the icon row inside the form for layout reasons and then didn't test it in Safari.
Since I can reproduce this trivially, I can fix it trivially.
FWIW, this is probably the bug that was preventing someone from redeeming lammers on an iPad during vATLARGE. I don't remember who reported it.
|
1.0
|
command box doesn't work in shrink mode in Safari - If the window is in shrink mode in Safari, enter in the command box tries to call time in NLHE. I think it toggles the show number of hands remaining button in a mix.
I believe this is because I put the icon row inside the form for layout reasons and then didn't test it in Safari.
Since I can reproduce this trivially, I can fix it trivially.
FWIW, this is probably the bug that was preventing someone from redeeming lammers on an iPad during vATLARGE. I don't remember who reported it.
|
priority
|
command box doesn t work in shrink mode in safari if the window is in shrink mode in safari enter in the command box tries to call time in nlhe i think it toggles the show number of hands remaining button in a mix i believe this is because i put the icon row inside the form for layout reasons and then didn t test it in safari since i can reproduce this trivially i can fix it trivially fwiw this is probably the bug that was preventing someone from redeeming lammers on an ipad during vatlarge i don t remember who reported it
| 1
|
524,942
| 15,225,797,632
|
IssuesEvent
|
2021-02-18 07:56:37
|
bcgov/platform-services-registry
|
https://api.github.com/repos/bcgov/platform-services-registry
|
closed
|
Provisioner Container Image Failed to Build (SPIKE)
|
bug component/bot high priority
|
**Describe the issue**
We're currently unable to build the container image for the project provisioner.
**Additional context**
Cryptography, a python package, is a dependency of Ansible and has changed the way it is built to require the Rust toolchain in version 3.4.4+.
**Definition of done**
- [x] Container Image can build successfully
- [x] PR Created and Merged
- [x] New Container Image tested with provisioner and registry
|
1.0
|
Provisioner Container Image Failed to Build (SPIKE) - **Describe the issue**
We're currently unable to build the container image for the project provisioner.
**Additional context**
Cryptography, a python package, is a dependency of Ansible and has changed the way it is built to require the Rust toolchain in version 3.4.4+.
**Definition of done**
- [x] Container Image can build successfully
- [x] PR Created and Merged
- [x] New Container Image tested with provisioner and registry
|
priority
|
provisioner container image failed to build spike describe the issue we re currently unable to build the container image for the project provisioner additional context cryptography a python package is a dependency of ansible and has changed the way it is built to require the rust toolchain in version definition of done container image can build successfully pr created and merged new container image tested with provisioner and registry
| 1
|
351,216
| 10,514,299,404
|
IssuesEvent
|
2019-09-27 23:49:32
|
radical-cybertools/radical.analytics
|
https://api.github.com/repos/radical-cybertools/radical.analytics
|
closed
|
Radical.analytics example fails with "local variable 'ext' referenced before assignment"
|
priority:high type:bug
|
```
Traceback (most recent call last):
File "test.py", line 24, in <module>
session = ra.Session(src=src, sid = sid, stype='radical.entk')
File "/home/aymen/miniconda2/envs/conda_env2/lib/python2.7/site-packages/radical/analytics/session.py", line 38, in __init__
sid, src, tgt, ext = self._get_sid(sid, src)
File "/home/aymen/miniconda2/envs/conda_env2/lib/python2.7/site-packages/radical/analytics/session.py", line 205, in _get_sid
return sid, src, tgt, ext
UnboundLocalError: local variable 'ext' referenced before assignment
```
Radical Stack :
```
python : 2.7.15
pythonpath :
virtualenv : conda_env1
radical.analytics : 0.72.0
radical.entk : 0.72.1
radical.pilot : 0.72.0
radical.saga : 0.72.1
radical.utils : 0.72.0
```
|
1.0
|
Radical.analytics example fails with "local variable 'ext' referenced before assignment" - ```
Traceback (most recent call last):
File "test.py", line 24, in <module>
session = ra.Session(src=src, sid = sid, stype='radical.entk')
File "/home/aymen/miniconda2/envs/conda_env2/lib/python2.7/site-packages/radical/analytics/session.py", line 38, in __init__
sid, src, tgt, ext = self._get_sid(sid, src)
File "/home/aymen/miniconda2/envs/conda_env2/lib/python2.7/site-packages/radical/analytics/session.py", line 205, in _get_sid
return sid, src, tgt, ext
UnboundLocalError: local variable 'ext' referenced before assignment
```
Radical Stack :
```
python : 2.7.15
pythonpath :
virtualenv : conda_env1
radical.analytics : 0.72.0
radical.entk : 0.72.1
radical.pilot : 0.72.0
radical.saga : 0.72.1
radical.utils : 0.72.0
```
|
priority
|
radical analytics example fails with local variable ext referenced before assignment traceback most recent call last file test py line in session ra session src src sid sid stype radical entk file home aymen envs conda lib site packages radical analytics session py line in init sid src tgt ext self get sid sid src file home aymen envs conda lib site packages radical analytics session py line in get sid return sid src tgt ext unboundlocalerror local variable ext referenced before assignment radical stack python pythonpath virtualenv conda radical analytics radical entk radical pilot radical saga radical utils
| 1
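The radical.analytics record above is a textbook `UnboundLocalError`: `ext` is only assigned on some branches of `_get_sid`, so the final `return` can reference it before assignment. A minimal sketch of the bug and the usual fix (names are illustrative, not the library's real code):

```python
# Minimal reproduction of the reported failure mode: a local that is only
# assigned inside a conditional, then referenced unconditionally.

def get_sid_buggy(src):
    if src.endswith(".json"):
        ext = "json"
    return ext  # UnboundLocalError when the branch above is skipped


def get_sid_fixed(src):
    ext = None  # initialize on every path before use
    if src.endswith(".json"):
        ext = "json"
    return ext
```

Initializing the variable up front (or raising a clear error in the uncovered branch) makes every code path explicit instead of crashing at the `return`.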
|
749,597
| 26,171,058,889
|
IssuesEvent
|
2023-01-01 23:07:05
|
Progrunning/BoardGamesCompanion
|
https://api.github.com/repos/Progrunning/BoardGamesCompanion
|
closed
|
Game stats improvements
|
enhancement feedback high priority
|
# User feedback:
> Especially great with the recent Improvement of entering scores. Only features I'm missing are ways to look up total time spent playing games (either a specific game or all games) or score data by game beyond high score and last game score (average score, for example).
# ACs
- Add information about total time spent on a specific game
- See how/where to add total time spent on all games played
- Expand the game stats with:
- average score
- average number of players
- [idea] pie charts illustrating number of players and amount of game
- [idea] personal bests
- [idea] percentage of wins per player
- [idea] biggest/smallest difference of scores
IDEA/NOTE: Maybe it would be worth creating a separate overall stats page?
|
1.0
|
Game stats improvements - # User feedback:
> Especially great with the recent Improvement of entering scores. Only features I'm missing are ways to look up total time spent playing games (either a specific game or all games) or score data by game beyond high score and last game score (average score, for example).
# ACs
- Add information about total time spent on a specific game
- See how/where to add total time spent on all games played
- Expand the game stats with:
- average score
- average number of players
- [idea] pie charts illustrating number of players and amount of game
- [idea] personal bests
- [idea] percentage of wins per player
- [idea] biggest/smallest difference of scores
IDEA/NOTE: Maybe it would be worth creating a separate overall stats page?
|
priority
|
game stats improvements user feedback especially great with the recent improvement of entering scores only features i m missing are ways to look up total time spent playing games either a specific game or all games or score data by game beyond high score and last game score average score for example acs add information about total time spent on a specific game see how where to add total time spent on all games played expand the game stats with average score average number of players pie charts illustrating number of players and amount of game personal bests percentage of wins per player biggest smallest difference of scores idea note maybe it would be worth creating a separate overall stats page
| 1
|
267,051
| 8,378,663,636
|
IssuesEvent
|
2018-10-06 16:35:15
|
Templarian/MaterialDesign
|
https://api.github.com/repos/Templarian/MaterialDesign
|
closed
|
Waze Icon
|
Brand Icon :bookmark: Contribution :heavy_check_mark: High Priority :grey_exclamation: Icon Request :pencil2:
|
I've found no icon for the waze navigation app but I think one would really be of use!
|
1.0
|
Waze Icon - I've found no icon for the waze navigation app but I think one would really be of use!
|
priority
|
waze icon i ve found no icon for the waze navigation app but i think one would really be of use
| 1
|
617,748
| 19,403,735,520
|
IssuesEvent
|
2021-12-19 16:39:25
|
WarEmu/WarBugs
|
https://api.github.com/repos/WarEmu/WarBugs
|
closed
|
Squigarmor and +power seem to not reset
|
Ability Fix Pending High Priority
|
Hello,
Expected behavior:
When you, as a SH, enter Squigarmor and equip a bunch of +ranged power gear, such as the Vanq belt or the Warlord 5-piece bonus, the +power is added to your Squigarmor stats. But if you take the gear off, zone-swap, or even die in PvP, the stats are still remembered and you can keep them, giving a high bonus damage on top of running, for example, a Boost 5 proc from Offensive sove while double dipping into both a 99 ranged power bonus from one set and Boost 5 from another.
With sove gear on:
https://media.discordapp.net/attachments/915654064539303986/921231092927848478/unknown.png?width=391&height=505
With Warlord & vanq belt boots on for big ranged power boost:
https://media.discordapp.net/attachments/915654064539303986/921231259043258428/unknown.png?width=376&height=505
With sove back on, after having entered Squigarmor and not left it while it's being boosted by the +ranged set:
https://media.discordapp.net/attachments/915654064539303986/921231741119774770/unknown.png?width=382&height=505
At first I was thinking it was related to swapping preset builds at the trainer from a Ballistic build to a Str build, going from a ranged to a melee build, but it does seem to be replicable by following these steps:
1 Enter Squig armor. (questionable if needed) check bonus dmg and stats.
2 Equip all the +ranged power you can find (unsure if works with mainstats too)
3 Reenter Squig armor. Compare the stats and bonusdmg.
4 take the +power gear off and equip your other armor (used offensive main set sove for my example to get boost 5 proc)
5 compare your stats and bonus dmg without the +power gear equipped while still having the effects.
|
1.0
|
Squigarmor and +power seem to not reset - Hello,
Expected behavior:
When you, as a SH, enter Squigarmor and equip a bunch of +ranged power gear, such as the Vanq belt or the Warlord 5-piece bonus, the +power is added to your Squigarmor stats. But if you take the gear off, zone-swap, or even die in PvP, the stats are still remembered and you can keep them, giving a high bonus damage on top of running, for example, a Boost 5 proc from Offensive sove while double dipping into both a 99 ranged power bonus from one set and Boost 5 from another.
With sove gear on:
https://media.discordapp.net/attachments/915654064539303986/921231092927848478/unknown.png?width=391&height=505
With Warlord & vanq belt boots on for big ranged power boost:
https://media.discordapp.net/attachments/915654064539303986/921231259043258428/unknown.png?width=376&height=505
With sove back on, after having entered Squigarmor and not left it while it's being boosted by the +ranged set:
https://media.discordapp.net/attachments/915654064539303986/921231741119774770/unknown.png?width=382&height=505
At first I was thinking it was related to swapping preset builds at the trainer from a Ballistic build to a Str build, going from a ranged to a melee build, but it does seem to be replicable by following these steps:
1 Enter Squig armor. (questionable if needed) check bonus dmg and stats.
2 Equip all the +ranged power you can find (unsure if works with mainstats too)
3 Reenter Squig armor. Compare the stats and bonusdmg.
4 take the +power gear off and equip your other armor (used offensive main set sove for my example to get boost 5 proc)
5 compare your stats and bonus dmg without the +power gear equipped while still having the effects.
|
priority
|
squigarmor and power seem to not reset hello expected behavior when you as a sh enter squigarmor and equip a bunch of ranged power gear such as vanq belt warlord bonus the power is added to your squigarmor stats but if you take the gear off zoneswap or even pvp die the stats are still remembered and you can keep them to have a high bonusdmg on top of running for example a boost proc from offensive sove while douple dipping into both a power bonus from one set and boost from an other with soeve gear on with warlord vanq belt boots on for big ranged power boost with sove back on after having entered squigarmor and not left it while its beeing boosted by the ranged set at first i was thinking it was related to swapping preset builds at the trainer from a balistic build to str build going from ranged to melee build but it does seem to be replicatable entering the following steps enter squig armor questionable if needed check bonus dmg and stats equip all the ranged power you can find unsure if works with mainstats too reenter squig armor compare the stats and bonusdmg take the power gear off and equip your other armor used offensive main set sove for my example to get boost proc compare your stats and bonus dmg without the power gear equiped while still having the effects
| 1
|
683,460
| 23,383,157,347
|
IssuesEvent
|
2022-08-11 11:26:56
|
lifebit-ai/cloudos-cli
|
https://api.github.com/repos/lifebit-ai/cloudos-cli
|
closed
|
💡 As a CloudOS cli user, I want to pass flag-value pairs from params eg -p "count=5" -p "genome=hg19"
|
enhancement jobs priority:high
|
```
nextflow run --input s3://lifebit/this.txt --count 5
cloudos job run -p "input=s3://lifebit/this.txt" -p "count=5"
```
Reference implementation in [`nteract/papermill#cli.py#L50-L52`](https://github.com/nteract/papermill/blob/98013f00c9dc610bcdaf06176e34a1d479e51582/papermill/cli.py#L50-L52)
```
@click.option(
'--parameters', '-p', nargs=2, multiple=True, help='Parameters to pass to the parameters cell.'
)
```
|
1.0
|
💡 As a CloudOS cli user, I want to pass flag-value pairs from params eg -p "count=5" -p "genome=hg19" - ```
nextflow run --input s3://lifebit/this.txt --count 5
cloudos job run -p "input=s3://lifebit/this.txt" -p "count=5"
```
Reference implementation in [`nteract/papermill#cli.py#L50-L52`](https://github.com/nteract/papermill/blob/98013f00c9dc610bcdaf06176e34a1d479e51582/papermill/cli.py#L50-L52)
```
@click.option(
'--parameters', '-p', nargs=2, multiple=True, help='Parameters to pass to the parameters cell.'
)
```
|
priority
|
💡 as a cloudos cli user i want to pass flag value pairs from params eg p count p genome nextflow run input lifebit this txt count cloudos job run p input lifebit this txt p count reference implementation in click option parameters p nargs multiple true help parameters to pass to the parameters cell
| 1
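The cloudos-cli record above asks for repeatable `-p "key=value"` flags, citing papermill's click-based implementation. A hedged sketch of the same idea using only the standard library (`argparse` here; the flag names are assumptions for illustration, not cloudos-cli's actual interface):

```python
# Sketch of parsing repeated `-p key=value` parameters into a dict,
# similar in spirit to papermill's `--parameters` option.
import argparse


def parse_params(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--parameter", action="append", default=[],
                        help='Repeatable "key=value" pipeline parameter.')
    args = parser.parse_args(argv)
    params = {}
    for pair in args.parameter:
        key, sep, value = pair.partition("=")
        if not key or not sep:
            raise ValueError(f"expected key=value, got {pair!r}")
        params[key] = value
    return params
```

`str.partition` splits on the first `=` only, so values containing `=` (or full S3 URIs) survive intact, e.g. `parse_params(["-p", "input=s3://lifebit/this.txt", "-p", "count=5"])`.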
|
541,178
| 15,822,858,567
|
IssuesEvent
|
2021-04-05 23:15:31
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[docdb] Make WritableFileWriter buffer gflag controllable
|
area/docdb priority/high
|
@rahuldesirazu noticed this as part of the investigation in #6676
> buf_.AllocateNewBuffer(65536);
We should turn this into a gflag. Maybe we can also increase this exponentially up to a certain size, as needed? cc @kmuthukk
|
1.0
|
[docdb] Make WritableFileWriter buffer gflag controllable - @rahuldesirazu noticed this as part of the investigation in #6676
> buf_.AllocateNewBuffer(65536);
We should turn this into a gflag. Maybe we can also increase this exponentially up to a certain size, as needed? cc @kmuthukk
|
priority
|
make writablefilewriter buffer gflag controllable rahuldesirazu noticed this as part of the investigation in buf allocatenewbuffer we should turn this into a gflag maybe we can also increase this exponentially up to a certain size as needed cc kmuthukk
| 1
|
209,520
| 7,176,819,682
|
IssuesEvent
|
2018-01-31 11:21:18
|
kuzzleio/kuzzle
|
https://api.github.com/repos/kuzzleio/kuzzle
|
opened
|
PluginExecute `callback is not a function`
|
bug priority-high
|
It appears this error occurs when Kuzzle is under pressure and exceeds its max concurrent requests.
As a workaround, increasing it to 500 makes it work perfectly well.
|
1.0
|
PluginExecute `callback is not a function` - It appears this error occurs when Kuzzle is under pressure and exceeds its max concurrent requests.
As a workaround, increasing it to 500 makes it work perfectly well.
|
priority
|
pluginexecute callback is not a function it appears this error is coming when kuzzle is under pressure exceeding its max concurrent requests as a work around increasing it to makes it work perfectly well
| 1
|
672,501
| 22,828,158,056
|
IssuesEvent
|
2022-07-12 10:26:30
|
COS301-SE-2022/Twitter-Summariser
|
https://api.github.com/repos/COS301-SE-2022/Twitter-Summariser
|
opened
|
(API) Add endpoint to reorder tweets
|
priority:high status:not-ready type:enhance role:backend-engineer scope:backend
|
The logic of this feature is a bit shaky as it is not yet known if new endpoints are needed or existing endpoints will be used.
|
1.0
|
(API) Add endpoint to reorder tweets - The logic of this feature is a bit shaky as it is not yet known if new endpoints are needed or existing endpoints will be used.
|
priority
|
api add endpoint to reorder tweets the logic of this feature is a bit shaky as it is not yet known if new endpoints are needed or existing endpoints will be used
| 1
|
645,646
| 21,011,616,595
|
IssuesEvent
|
2022-03-30 07:14:08
|
FalloutStudios/Axis
|
https://api.github.com/repos/FalloutStudios/Axis
|
closed
|
Logger issue on Windows
|
Priority: High Status: In Progress
|
Logging logs to file causes error when renaming file because of an illegal character for Windows.
|
1.0
|
Logger issue on Windows - Logging logs to file causes error when renaming file because of an illegal character for Windows.
|
priority
|
logger issue on windows logging logs to file causes error when renaming file because of an illegal character for windows
| 1
|
113,959
| 4,586,746,769
|
IssuesEvent
|
2016-09-20 00:22:54
|
TheTyee/design-article.thetyee.ca
|
https://api.github.com/repos/TheTyee/design-article.thetyee.ca
|
reopened
|
ad-blocker js broken in github repo
|
Priority: High Status: Bug
|
The ad-blocker test in aritcle_custom.js doesn't work when run locally and on github.io:
```
// add .ad-blocker if ad blocker present
if(typeof canRunAds == "undefined") {
$("body").addClass("ad-blocker");
}
```
The ad-blocker class is added even when no ad-blocker is active.
See: http://thetyee.github.io/design-article.thetyee.ca/library
This makes it impossible to test the placement and appearance of ads.
Can the conditional be rewritten so that it functions when run locally and from github?
|
1.0
|
ad-blocker js broken in github repo - The ad-blocker test in aritcle_custom.js doesn't work when run locally and on github.io:
```
// add .ad-blocker if ad blocker present
if(typeof canRunAds == "undefined") {
$("body").addClass("ad-blocker");
}
```
The ad-blocker class is added even when no ad-blocker is active.
See: http://thetyee.github.io/design-article.thetyee.ca/library
This makes it impossible to test the placement and appearance of ads.
Can the conditional be rewritten so that it functions when run locally and from github?
|
priority
|
ad blocker js broken in github repo the ad blocker test in aritcle custom js doesn t work when run locally and on github io add ad blocker if ad blocker present if typeof canrunads undefined body addclass ad blocker the ad blocker class is added even when no ad blocker is active see this makes it impossible to test the placement and appearance of ads can the conditional be rewritten so that it functions when run locally and from github
| 1
|
821,386
| 30,820,888,813
|
IssuesEvent
|
2023-08-01 16:15:07
|
Signbank/Global-signbank
|
https://api.github.com/repos/Signbank/Global-signbank
|
opened
|
Error uploading recorded video
|
bug high priority
|
I'm playing around with the senses/translations, finding that Safari & DuckDuckGo on Mac won't upload a video at all (nothing new), but now Chrome on Mac also gives an error:
*TransactionManagementError at /video/upload/
*An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
|
1.0
|
Error uploading recorded video - I'm playing around with the senses/translations, finding that Safari & DuckDuckGo on Mac won't upload a video at all (nothing new), but now Chrome on Mac also gives an error:
*TransactionManagementError at /video/upload/
*An error occurred in the current transaction. You can't execute queries until the end of the 'atomic' block.
|
priority
|
error uploading recorded video i m playing around with the senses translations finding that safari duckduckgo on mac won t upload a video at all nothing knew but now chrome on mac also gives an error transactionmanagementerror at video upload an error occurred in the current transaction you can t execute queries until the end of the atomic block
| 1
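The `TransactionManagementError` in this record is Django's standard symptom of a query failing (or an exception being swallowed) inside `transaction.atomic()` without a savepoint: the connection is marked as needing rollback, and every further query raises until the block ends. A Django-free sketch of that mechanism and the usual fix — wrapping the risky query in its own nested `atomic()` — is below; the class and method names here are illustrative stand-ins, not Django internals.

```python
from contextlib import contextmanager

class TransactionManagementError(Exception):
    pass

class Connection:
    """Toy stand-in for Django's per-connection transaction state."""
    def __init__(self):
        self.needs_rollback = False

    def query(self, fn):
        if self.needs_rollback:
            raise TransactionManagementError(
                "You can't execute queries until the end of the 'atomic' block.")
        try:
            return fn()
        except Exception:
            # Django marks the open atomic block as broken on any query error.
            self.needs_rollback = True
            raise

    @contextmanager
    def atomic(self):
        # A *nested* atomic acts like a savepoint: if its body fails, the
        # rollback is scoped to the savepoint and the outer block stays usable.
        try:
            yield
        except Exception:
            self.needs_rollback = False  # rolled back to the savepoint
            raise

def failing_insert():
    raise ValueError("duplicate key")

# Anti-pattern: swallow the error with no savepoint -> connection is poisoned.
conn = Connection()
try:
    conn.query(failing_insert)
except ValueError:
    pass
try:
    conn.query(lambda: "ok")
    outcome = "ok"
except TransactionManagementError:
    outcome = "poisoned"

# Fix: run the risky query inside its own atomic (savepoint) block.
conn2 = Connection()
try:
    with conn2.atomic():
        conn2.query(failing_insert)
except ValueError:
    pass
recovered = conn2.query(lambda: "ok")
```

In real Django code the fix has the same shape: put `with transaction.atomic():` immediately around the statement that may fail, then catch the exception outside that inner block.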
|
496,111
| 14,332,602,409
|
IssuesEvent
|
2020-11-27 03:01:53
|
elementary/switchboard
|
https://api.github.com/repos/elementary/switchboard
|
opened
|
Opening some pane from wingpanel indicators makes the search entry sensitive
|
Priority: High
|
<!--
* Please read and follow these tips: https://elementary.io/docs/code/reference#be-prepared-to-provide-more-information
* Be sure to search open and closed issues for duplicates
* A detailed report will help us address your issue more quickly. Do your best!
-->
## What Happened
<!--Describe the issue in detail-->
If you open System Settings and move to some pane, the search entry becomes insensitive. However, opening a pane directly from a wingpanel indicator leaves the search entry sensitive, and any text typed is passed into it. This causes a problem: if you mean to type into a text entry in the view, you can't:

# Behavior
<!--Explain how what happened is different from what you wanted to happen-->
Opening a pane directly from a wingpanel indicator should make the search entry insensitive, just as opening it through System Settings does. Also, typed text should be inserted where the cursor is.
## Steps to Reproduce
<!--Explain the exact steps one would take to experience the issue. If applicable, add screenshots or screen recordings.-->
1. Open System Settings from the dock/application indicator and go to some pane. The search entry gets insensitive.
2. Close System Settings and click "Keyboard Settings…" from keyboard indicator to open keyboard pane directly
3. See the search entry is sensitive.
4. Also, click the "Type to test your layout" and try to type some text. Any inputs are given to the search entry
## Platform Information
<!--
* The version of elementary OS you are using, or other operating system
* The version of the software you are using such as "1.0", "Compiled from git", or "Latest release" if you're not sure but you have run updates
* Relevant hardware information such as graphics drivers, unconventional setups, etc.
* If you're unsure, copy or screenshot information at System Settings -> About
-->
- elementary OS Odin Daily
- `switchboard-2.4.0~r1075+pkg73~daily~ubuntu6.0.1`
<!--Please be sure to preview your issue before saving. Thanks!-->
|
1.0
|
Opening some pane from wingpanel indicators makes the search entry sensitive - <!--
* Please read and follow these tips: https://elementary.io/docs/code/reference#be-prepared-to-provide-more-information
* Be sure to search open and closed issues for duplicates
* A detailed report will help us address your issue more quickly. Do your best!
-->
## What Happened
<!--Describe the issue in detail-->
If you open System Settings and move to some pane, the search entry becomes insensitive. However, opening a pane directly from a wingpanel indicator leaves the search entry sensitive, and any text typed is passed into it. This causes a problem: if you mean to type into a text entry in the view, you can't:

# Behavior
<!--Explain how what happened is different from what you wanted to happen-->
Opening a pane directly from a wingpanel indicator should make the search entry insensitive, just as opening it through System Settings does. Also, typed text should be inserted where the cursor is.
## Steps to Reproduce
<!--Explain the exact steps one would take to experience the issue. If applicable, add screenshots or screen recordings.-->
1. Open System Settings from the dock/application indicator and go to some pane. The search entry gets insensitive.
2. Close System Settings and click "Keyboard Settings…" from keyboard indicator to open keyboard pane directly
3. See the search entry is sensitive.
4. Also, click the "Type to test your layout" and try to type some text. Any inputs are given to the search entry
## Platform Information
<!--
* The version of elementary OS you are using, or other operating system
* The version of the software you are using such as "1.0", "Compiled from git", or "Latest release" if you're not sure but you have run updates
* Relevant hardware information such as graphics drivers, unconventional setups, etc.
* If you're unsure, copy or screenshot information at System Settings -> About
-->
- elementary OS Odin Daily
- `switchboard-2.4.0~r1075+pkg73~daily~ubuntu6.0.1`
<!--Please be sure to preview your issue before saving. Thanks!-->
|
priority
|
opening some pane from wingpanel indicators makes the search entry sensitive please read and follow these tips be sure to search open and closed issues for duplicates a detailed report will help us address your issue more quickly do your best what happened if you open system settings and move to some pane the search entry get insensitive however opening some pane from wingpanel indicators directly makes the search entry sensitive and any text typed are passed into it this causes a problem that if you mean to input some text entry in the view you can t do behavior opening some pane from wingpanel indicators directly should make the search entry sensitive also typed text should be inserted where the cursor is steps to reproduce open system settings from dock application indicator and go to some pane the search entry get insensitive close system settings and click keyboard settings… from keyboard indicator to open keyboard pane directly see the search entry is sensitive also click the type to test your layout and try to type some text any inputs are given to the search entry platform information the version of elementary os you are using or other operating system the version of the software you are using such as compiled from git or latest release if you re not sure but you have run updates relevant hardware information such as graphics drivers unconventional setups etc if you re unsure copy or screenshot information at system settings about elementary os odin daily switchboard daily
| 1
|
120,050
| 4,779,668,124
|
IssuesEvent
|
2016-10-27 23:27:24
|
shelljs/shx
|
https://api.github.com/repos/shelljs/shx
|
opened
|
chore: add node v7 to CI
|
chore high priority
|
v7 is available on `nvm` now, so we should make sure it's added to CI when possible:
- [ ] Travis
- [ ] Appveyor (not sure if it's available yet)
@levithomason can take this, otherwise I'll get to it in a few days.
|
1.0
|
chore: add node v7 to CI - v7 is available on `nvm` now, so we should make sure it's added to CI when possible:
- [ ] Travis
- [ ] Appveyor (not sure if it's available yet)
@levithomason can take this, otherwise I'll get to it in a few days.
|
priority
|
chore add node to ci is available on nvm now so we should make sure it s added to ci when possible travis appveyor not sure if it s available yet levithomason can take this otherwise i ll get to it in a few days
| 1
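For the Travis half of that checklist, adding a new Node release is typically a one-line change to the `node_js` matrix. The sketch below assumes a conventional layout; the actual versions already listed in shx's `.travis.yml` are not quoted here.

```yaml
# .travis.yml (sketch) -- add "7" alongside the versions already tested
language: node_js
node_js:
  - "4"
  - "6"
  - "7"   # new: Node v7, now installable via nvm
```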
|
592,198
| 17,872,182,591
|
IssuesEvent
|
2021-09-06 17:30:10
|
turbot/steampipe-plugin-azure
|
https://api.github.com/repos/turbot/steampipe-plugin-azure
|
opened
|
Add table azure_api_management_services
|
enhancement priority:high new table
|
**References**
https://docs.microsoft.com/en-us/rest/api/apimanagement/2021-01-01-preview/api-management-service/list
Note: This detail is required for NIST SP 800 53 Rev 5 controls.
|
1.0
|
Add table azure_api_management_services - **References**
https://docs.microsoft.com/en-us/rest/api/apimanagement/2021-01-01-preview/api-management-service/list
Note: This detail is required for NIST SP 800 53 Rev 5 controls.
|
priority
|
add table azure api management services references note this detail is required for nist sp rev controls
| 1
|
344,466
| 10,345,159,393
|
IssuesEvent
|
2019-09-04 12:55:21
|
aimalz/proclam
|
https://api.github.com/repos/aimalz/proclam
|
closed
|
Minor writing issues for paper resubmission
|
good first issue help wanted high priority
|
The referee requests that we address a handful of small issues that should be relatively easy to resolve.
- [x] ~~Update the language throughout the paper to acknowledge that PLAsTiCC has concluded.~~ Add a footnote to acknowledge that PLAsTiCC has concluded. @rbiswas4
- [x] Use the proper AASTeX `\software{}` tag to cite `proclam`'s code dependencies.
- [x] ~~Explain the confusing w\_n (as opposed to w\_m) in Equation 5; the weights were defined on a per object basis but we only considered schemes where weights were shared within each class, so we should either change the notation or clearly present our weights as a special case.~~ Update the equation to match both [Kaggle's implementation](https://www.kaggle.com/c/PLAsTiCC-2018/overview/evaluation) and earlier equations in the paper. @aimalz
- [x] Fix typo: 'te Brier score' (should be 'the Brier score') in the caption of Table 1.
- [x] Revise/clarify the caption of Table 1. @rbiswas4
- [x] Consistency of subsections in Discussion. @rbiswas4
- [x] Separate out anomaly detection into its own subsection within the Discussion section.
- [x] In the Conclusion section, add a discussion about the weights including both how they can be (and were) gamed and their impact on interpretability of the metric we ended up choosing. @emilleishida
|
1.0
|
Minor writing issues for paper resubmission - The referee requests that we address a handful of small issues that should be relatively easy to resolve.
- [x] ~~Update the language throughout the paper to acknowledge that PLAsTiCC has concluded.~~ Add a footnote to acknowledge that PLAsTiCC has concluded. @rbiswas4
- [x] Use the proper AASTeX `\software{}` tag to cite `proclam`'s code dependencies.
- [x] ~~Explain the confusing w\_n (as opposed to w\_m) in Equation 5; the weights were defined on a per object basis but we only considered schemes where weights were shared within each class, so we should either change the notation or clearly present our weights as a special case.~~ Update the equation to match both [Kaggle's implementation](https://www.kaggle.com/c/PLAsTiCC-2018/overview/evaluation) and earlier equations in the paper. @aimalz
- [x] Fix typo: 'te Brier score' in the caption of Table 1.
- [x] Revise/clarify the caption of Table 1. @rbiswas4
- [x] Consistency of subsections in Discussion. @rbiswas4
- [x] Separate out anomaly detection into its own subsection within the Discussion section.
- [x] In the Conclusion section, add a discussion about the weights including both how they can be (and were) gamed and their impact on interpretability of the metric we ended up choosing. @emilleishida
|
priority
|
minor writing issues for paper resubmission the referee requests that we address a handful of small issues that should be relatively easy to resolve update the language throughout the paper to acknowledge that plasticc has concluded add a footnote to acknowledge that plasticc has concluded use the proper aastex software tag to cite proclam s code dependencies explain the confusing w n as opposed to w m in equation the weights were defined on a per object basis but we only considered schemes where weights were shared within each class so we should either change the notation or clearly present our weights as a special case update the equation to match both and earlier equations in the paper aimalz fix typo te brier score in the caption of table revise clarify the caption of table consistency of subsections in discussion separate out anomaly detection into its own subsection within the discussion section in the conclusion section add a discussion about the weights including both how they can be and were gamed and their impact on interpretability of the metric we ended up choosing emilleishida
| 1
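The weights discussion in this record concerns a per-class weighted log loss in the style of the PLAsTiCC metric: average the log loss within each class, then take a weighted mean over classes, so weights are shared within a class rather than set per object. A minimal sketch, with an illustrative function name and a dict-based probability representation (not the paper's Equation 5 verbatim), assuming every weighted class appears in `y_true`:

```python
import math

def weighted_log_loss(y_true, probs, weights, eps=1e-15):
    """Average the log loss within each class, then take a weighted mean
    over classes (weights shared within a class, not per object)."""
    per_class = {}
    for c in weights:
        members = [i for i, t in enumerate(y_true) if t == c]
        # Clamp probabilities away from zero so log() stays finite.
        per_class[c] = -sum(math.log(max(probs[i][c], eps))
                            for i in members) / len(members)
    return (sum(weights[c] * per_class[c] for c in weights)
            / sum(weights.values()))

# Perfect classifier -> loss 0; a 50/50 guess on one class -> ln 2.
perfect = weighted_log_loss([0, 1],
                            [{0: 1.0, 1: 0.0}, {0: 0.0, 1: 1.0}],
                            {0: 1.0, 1: 1.0})
guess = weighted_log_loss([0], [{0: 0.5, 1: 0.5}], {0: 1.0})
```

Because the weights enter only through the outer mean, setting them all equal recovers an unweighted macro-averaged log loss — which is why a per-object formulation with class-shared weights reduces to the same special case the issue describes.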
|
785,835
| 27,625,172,231
|
IssuesEvent
|
2023-03-10 05:53:03
|
wso2/api-manager
|
https://api.github.com/repos/wso2/api-manager
|
opened
|
Handle communication failures between CP and GW for proper policy enforcement.
|
Type/Task Priority/High Component/APIM
|
### Description
We need to handle communication failures between the Control plane and Gateway for proper policy enforcement.
### Affected Component
APIM
### Version
4.2.0
### Related Issues
_No response_
### Suggested Labels
_No response_
|
1.0
|
Handle communication failures between CP and GW for proper policy enforcement. - ### Description
We need to handle communication failures between the Control plane and Gateway for proper policy enforcement.
### Affected Component
APIM
### Version
4.2.0
### Related Issues
_No response_
### Suggested Labels
_No response_
|
priority
|
handle communication failures between cp and gw for proper policy enforcement description we need to handle communication failures between the control plane and gateway for proper policy enforcement affected component apim version related issues no response suggested labels no response
| 1
|
280,936
| 8,688,430,264
|
IssuesEvent
|
2018-12-03 16:06:22
|
kubeapps/kubeapps
|
https://api.github.com/repos/kubeapps/kubeapps
|
closed
|
Add some documentation in dockerHUB
|
kind/docs priority/high
|
Currently, our images on Docker Hub show no description. It might make sense to add a link to the getting started guide or something there.
https://hub.docker.com/r/kubeapps/dashboard/
|
1.0
|
Add some documentation in dockerHUB - Currently, our images on Docker Hub show no description. It might make sense to add a link to the getting started guide or something there.
https://hub.docker.com/r/kubeapps/dashboard/
|
priority
|
add some documentation in dockerhub currently the description of our images in dockerhub show no description it might make sense to add a link to the getting started guide or something there
| 1
|
214,782
| 7,276,699,791
|
IssuesEvent
|
2018-02-21 17:05:59
|
AyuntamientoMadrid/consul
|
https://api.github.com/repos/AyuntamientoMadrid/consul
|
opened
|
Displaying old Spending Proposal id in Budget Investment list
|
Bug High priority
|
Following `admin/budgets/[:budget_id]/budget_investments` there's the list of Investments of a Budget. When one of those investments has been previously migrated from an old Spending Proposal, the id taken to build references to other views and actions is the one in the `original_spending_proposal_id` attribute (see https://github.com/AyuntamientoMadrid/consul/blob/master/app/views/admin/budget_investments/_investments.html.erb#L45).
So the URL to create a new milestone for one of those investments has an incorrect id, and the database reference points to another Investment with (coincidentally, in the best case) the same id number as the old Spending Proposal, so the milestone is created in a different Investment in the same or another Budget.
This has to be fixed to show the Investment id instead of the old Spending Proposal one.
|
1.0
|
Displaying old Spending Proposal id in Budget Investment list - Following `admin/budgets/[:budget_id]/budget_investments` there's the list of Investments of a Budget. When one of those investments has been previously migrated from an old Spending Proposal, the id taken to build references to other views and actions is the one in the `original_spending_proposal_id` attribute (see https://github.com/AyuntamientoMadrid/consul/blob/master/app/views/admin/budget_investments/_investments.html.erb#L45).
So the URL to create a new milestone for one of those investments has an incorrect id, and the database reference points to another Investment with (coincidentally, in the best case) the same id number as the old Spending Proposal, so the milestone is created in a different Investment in the same or another Budget.
This has to be fixed to show the Investment id instead of the old Spending Proposal one.
|
priority
|
displaying old spending proposal id in budget investment list following admin budgets budget investments there s the list of investments of a budget when one of those investments has been previously migrated from an old spending proposal the id taken to build references to other views and actions is the one in the original spending proposal id attribute see so the url to create a new milestone for one of those investments has an incorrect id and the database reference points to other investment with casually in the best case the same id number as the old spending proposal so the milestone is created in other investment in the same or other budget this has to be fixed and show the investment id instead the old spending proposal one
| 1
|
107,622
| 4,312,040,941
|
IssuesEvent
|
2016-07-22 02:23:26
|
YsuSERESL/iTrace
|
https://api.github.com/repos/YsuSERESL/iTrace
|
closed
|
Display some identification of gaze events in status bar
|
enhancement high priority
|
Display some identification such as time in the status bar. This will help with syncing with screen recordings.
|
1.0
|
Display some identification of gaze events in status bar - Display some identification such as time in the status bar. This will help with syncing with screen recordings.
|
priority
|
display some identification of gaze events in status bar display some identification such as time in the status bar this will help with syncing with screen recordings
| 1
|
305,303
| 9,367,702,615
|
IssuesEvent
|
2019-04-03 06:39:05
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
GRPC stdlib should reuse KeyStore and TrustStore from crypto stdlib
|
Component/gRPC Component/stdlib Priority/High Type/Improvement
|
**Description:**
`crypto` stdlib provides `KeyStore` and `TrustStore` records usable for configuring cryptographic operations.
It is required to remove the records defined in `GRPC` module and re-use the records from `crypto` stdlib.
[1] https://github.com/ballerina-platform/ballerina-lang/blob/c995c53a465779b82dc7f81d6ee3a3bd6027cbda/stdlib/grpc/src/main/ballerina/grpc/grpc_commons.bal#L21
|
1.0
|
GRPC stdlib should reuse KeyStore and TrustStore from crypto stdlib - **Description:**
`crypto` stdlib provides `KeyStore` and `TrustStore` records usable for configuring cryptographic operations.
It is required to remove the records defined in `GRPC` module and re-use the records from `crypto` stdlib.
[1] https://github.com/ballerina-platform/ballerina-lang/blob/c995c53a465779b82dc7f81d6ee3a3bd6027cbda/stdlib/grpc/src/main/ballerina/grpc/grpc_commons.bal#L21
|
priority
|
grpc stdlib should reuse keystore and truststore from crypto stdlib description crypto stdlib provides keystore and truststore records usable for configuration cryptographic operations it is required to remove the records defined in grpc module and re use the records from crypto stdlib
| 1
|
457,679
| 13,160,049,865
|
IssuesEvent
|
2020-08-10 16:53:20
|
isovera/Tufts-Friedman
|
https://api.github.com/repos/isovera/Tufts-Friedman
|
opened
|
Error when trying to push/pull content across dev/stage/prod on this site: https://nutritioninnovationlab.org/
|
Priority - High
|
**Problem:**
An error occurs when trying to push/pull content across dev/stage/prod on this site:
https://nutritioninnovationlab.org/
Xia has described what she and the server admin at Tufts (Marv) have found as the issue, but the client would like us to take a look and advise.
Background from Xia: The current challenge with NIL is that none of the content is migrating across the different sites. So far we've assessed with our system admin guy, Marv, that NIL is a Drupal 8 site built from a tarball, while the new Drupal 8 sites are built as Composer Drupal projects. I have a feeling we might have to just rebuild it with Drupal 8 and Composer.
**Tasks**
- [ ] Identify questions we have for the client
- [ ] Discuss on a client call
|
1.0
|
Error when trying to push/pull content across dev/stage/prod on this site: https://nutritioninnovationlab.org/ - **Problem:**
An error occurs when trying to push/pull content across dev/stage/prod on this site:
https://nutritioninnovationlab.org/
Xia has described what she and the server admin at Tufts (Marv) have found as the issue, but the client would like us to take a look and advise.
Background from Xia: The current challenge with NIL is that none of the content is migrating across the different sites. So far we've assessed with our system admin guy, Marv, that NIL is a Drupal 8 site built from a tarball, while the new Drupal 8 sites are built as Composer Drupal projects. I have a feeling we might have to just rebuild it with Drupal 8 and Composer.
**Tasks**
- [ ] Identify questions we have for the client
- [ ] Discuss on a client call
|
priority
|
error when trying to push pull content across dev stage prod on this site problem an error occurs when trying to push pull content across dev stage prod on this site xia has described what she and the server admin at tufts marv have found as the issue but the client would like us to take a look and advise background from xia the current challenge with nil is that none of the content is migrating across the different sites so far we ve accessed with our system admin guy marv that the nil is a drupal site built with a tarball but the new drupal sites are built as composer drupal project i have a feeling we might have to just rebuilt it with drupal with composer tasks identify questions we have for the client discuss on a client call
| 1
|
287,899
| 8,822,462,038
|
IssuesEvent
|
2019-01-02 09:30:48
|
bolt/four
|
https://api.github.com/repos/bolt/four
|
closed
|
Two issues with `multiselect`
|
priority: high topic: Vue
|
The new "multiselect" items work, but there are some missing bits to make them fully useable:
```
<editor-select
:value="'{{ value }}'". <- There
:name="'{{ name }}'"
:id="'{{ id }}'"
:options="{{ options|json_encode() }}"
:form="'{{ form }}'"
></editor-select>
```
1) You can pass in a single value like `books` or `movies` and it works. It seems like you can't pass in an array like `["books", "movies"]` to select 2 initially.
2) Selected values don't persist when `POST`ing. You can select items, but these aren't included when the form is submitted
3) There's an odd empty initial selection:

@jackiboy Could you take a look at these?
|
1.0
|
Two issues with `multiselect` - The new "multiselect" items work, but there are some missing bits to make them fully usable:
```
<editor-select
:value="'{{ value }}'". <- There
:name="'{{ name }}'"
:id="'{{ id }}'"
:options="{{ options|json_encode() }}"
:form="'{{ form }}'"
></editor-select>
```
1) You can pass in a single value like `books` or `movies` and it works. It seems like you can't pass in an array like `["books", "movies"]` to select 2 initially.
2) Selected values don't persist when `POST`ing. You can select items, but these aren't included when the form is submitted
3) There's an odd empty initial selection:

@jackiboy Could you take a look at these?
|
priority
|
two issues with multiselect the new multiselect items work but there are some missing bits to make them fully useable editor select value value there name name id id options options json encode form form you can pass in a single value like books or movies and it works it seems like you can t pass in an array like to select initially selected values don t persist when post ing you can select items but these aren t included when the form is submitted there s an odd empty initial selection jackiboy could you take a look at these
| 1
|
1,580
| 2,515,784,287
|
IssuesEvent
|
2015-01-15 21:00:46
|
IQSS/geoconnect
|
https://api.github.com/repos/IQSS/geoconnect
|
closed
|
Dataverse: view map page for general users
|
Component: High-level Priority: High Status: Dev system: Dataverse Type: Feature
|
- [ ] Add "Explore" button on dataset.xhtml
- [ ] Make separate page. mapview.xhtml
- [ ] Remove large map being displayed in the data file listing
|
1.0
|
Dataverse: view map page for general users - - [ ] Add "Explore" button on dataset.xhtml
- [ ] Make separate page. mapview.xhtml
- [ ] Remove large map being displayed in the data file listing
|
priority
|
dataverse view map page for general users add explore button on dataset xhtml make separate page mapview xhtml remove large map being displayed in the data file listing
| 1
|
21,379
| 2,639,713,156
|
IssuesEvent
|
2015-03-11 05:19:01
|
cs2103jan2015-w15-2j/main
|
https://api.github.com/repos/cs2103jan2015-w15-2j/main
|
closed
|
Parser should be able to parse user input for correct command type and parameters
|
priority.high status.ongoing type.task
|
Parse user input string and call the command function with parameters (encapsulated as an object) as argument
|
1.0
|
Parser should be able to parse user input for correct command type and parameters - Parse user input string and call the command function with parameters (encapsulated as an object) as argument
|
priority
|
parser should able to parse user input for correct command type and parameters parse user input string and call the command function with parameters encapsulated as an object as argument
| 1
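The parser task described in this record is the classic parse-then-dispatch pattern: tokenize the input, wrap the result in a parameter object, and hand that object to the matching command function. A minimal sketch (the command names and handlers here are invented for illustration; the real project dispatches to its own functions):

```python
from dataclasses import dataclass

@dataclass
class Command:
    """Parameter object handed to the matching command function."""
    type: str
    args: list

def parse(user_input: str) -> Command:
    # First token picks the command type; the rest become its parameters.
    tokens = user_input.strip().split()
    if not tokens:
        raise ValueError("empty input")
    return Command(type=tokens[0].lower(), args=tokens[1:])

# Hypothetical handlers -- stand-ins for the project's command functions.
HANDLERS = {
    "add":    lambda cmd: "added " + " ".join(cmd.args),
    "delete": lambda cmd: "deleted " + " ".join(cmd.args),
}

def execute(user_input: str) -> str:
    cmd = parse(user_input)
    if cmd.type not in HANDLERS:
        raise ValueError("unknown command: " + cmd.type)
    return HANDLERS[cmd.type](cmd)
```

Keeping `parse` free of side effects makes the command-type and parameter extraction trivially unit-testable, which is usually the point of splitting it from dispatch.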
|
401,596
| 11,795,198,434
|
IssuesEvent
|
2020-03-18 08:28:08
|
thaliawww/concrexit
|
https://api.github.com/repos/thaliawww/concrexit
|
closed
|
KeyError: 'delete_selected'
|
bug easy and fun events priority: high
|
In GitLab by _thaliatechnicie on Dec 20, 2019, 19:08
Sentry Issue: [CONCREXIT-1S](https://sentry.io/organizations/thalia/issues/1399281021/?referrer=gitlab_integration)
```
KeyError: 'delete_selected'
(7 additional frame(s) were not displayed)
...
File "django/utils/decorators.py", line 45, in _wrapper
return bound_method(*args, **kwargs)
File "django/utils/decorators.py", line 142, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "django/contrib/admin/options.py", line 1685, in changelist_view
cl = self.get_changelist_instance(request)
File "django/contrib/admin/options.py", line 727, in get_changelist_instance
if self.get_actions(request):
File "events/admin.py", line 205, in get_actions
del actions['delete_selected']
```
|
1.0
|
KeyError: 'delete_selected' - In GitLab by _thaliatechnicie on Dec 20, 2019, 19:08
Sentry Issue: [CONCREXIT-1S](https://sentry.io/organizations/thalia/issues/1399281021/?referrer=gitlab_integration)
```
KeyError: 'delete_selected'
(7 additional frame(s) were not displayed)
...
File "django/utils/decorators.py", line 45, in _wrapper
return bound_method(*args, **kwargs)
File "django/utils/decorators.py", line 142, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "django/contrib/admin/options.py", line 1685, in changelist_view
cl = self.get_changelist_instance(request)
File "django/contrib/admin/options.py", line 727, in get_changelist_instance
if self.get_actions(request):
File "events/admin.py", line 205, in get_actions
del actions['delete_selected']
```
|
priority
|
keyerror delete selected in gitlab by thaliatechnicie on dec sentry issue keyerror delete selected additional frame s were not displayed file django utils decorators py line in wrapper return bound method args kwargs file django utils decorators py line in wrapped view response view func request args kwargs file django contrib admin options py line in changelist view cl self get changelist instance request file django contrib admin options py line in get changelist instance if self get actions request file events admin py line in get actions del actions
| 1
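The traceback in this record comes from `del actions['delete_selected']`, which raises `KeyError` whenever the key is absent — for instance, Django only adds `delete_selected` when the user has delete permission. The defensive form uses `dict.pop` with a default. Sketched below with a plain dict standing in for the result of `super().get_actions(request)`:

```python
# Stand-in for the dict returned by Django's ModelAdmin.get_actions();
# 'delete_selected' is only present when the user may delete objects.
def strip_delete_action(actions: dict) -> dict:
    # dict.pop with a default never raises, unlike `del actions[...]`.
    actions.pop("delete_selected", None)
    return actions

has_it = strip_delete_action({"delete_selected": 1, "export": 2})
lacks_it = strip_delete_action({"export": 2})  # `del` would KeyError here
```

In the real `get_actions` override, the one-line change is replacing `del actions['delete_selected']` with `actions.pop('delete_selected', None)`.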
|
665,418
| 22,318,558,120
|
IssuesEvent
|
2022-06-14 02:26:58
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
Data entry - collecting source required?
|
Priority-High (Needed for work) Function-DataEntry/Bulkloading Collection Type - Cultural Collections Collection Type - Geological Data Quality
|
My staff noticed yesterday while cataloging that collecting source is now a required field. The box is not yellow:
<img width="853" alt="Screen Shot 2022-06-07 at 12 49 44 PM" src="https://user-images.githubusercontent.com/17605945/172649523-84179c9f-2125-4178-a714-869f3431b06a.png">
But, when she tried to save with a NULL value she got this:

This field does not make sense for cultural collections or paleo/geology collections.
Can we please un-require this field for data entry? The two options are not appropriate and using 'unknown' makes our data seem unreliable, especially if someone looks at the code table documentation:
<img width="933" alt="Screen Shot 2022-06-08 at 7 05 41 AM" src="https://user-images.githubusercontent.com/17605945/172651103-de3e3ed6-c4ae-4f29-a0fb-0ced7cfb13f1.png">
I know I weighed in on this issue already but I can't find the actual github issue.
|
1.0
|
Data entry - collecting source required? - My staff noticed yesterday while cataloging that collecting source is now a required field. The box is not yellow:
<img width="853" alt="Screen Shot 2022-06-07 at 12 49 44 PM" src="https://user-images.githubusercontent.com/17605945/172649523-84179c9f-2125-4178-a714-869f3431b06a.png">
But, when she tried to save with a NULL value she got this:

This field does not make sense for cultural collections or paleo/geology collections.
Can we please un-require this field for data entry? The two options are not appropriate and using 'unknown' makes our data seem unreliable, especially if someone looks at the code table documentation:
<img width="933" alt="Screen Shot 2022-06-08 at 7 05 41 AM" src="https://user-images.githubusercontent.com/17605945/172651103-de3e3ed6-c4ae-4f29-a0fb-0ced7cfb13f1.png">
I know I weighed in on this issue already but I can't find the actual github issue.
|
priority
|
data entry collecting source required my staff noticed yesterday while cataloging that collecting source is now a required field the box is not yellow img width alt screen shot at pm src but when she tried to save with a null value she got this this field does not make sense for cultural collections or paleo geology collections can we please un require this field for data entry the two options are not appropriate and using unknown makes our data seem unreliable especially if someone looks at the code table documentation img width alt screen shot at am src i know i weighed in on this issue already but i can t find the actual github issue
| 1
|
46,282
| 2,955,366,202
|
IssuesEvent
|
2015-07-08 02:14:45
|
tokenly/swapbot
|
https://api.github.com/repos/tokenly/swapbot
|
closed
|
Add basic Swapbot Operator Specific Branding
|
enhancement high priority
|

Please add two fields to the Swapbot Admin interface
1. Upload and replace the background image (the boardwalk in all swapbots currently)
2. Change the color of the transparent color overlay (this would work either allowing any color or with a few primary presets that we know will look good)
3. Upload swapbot head size Logo (it should go in the highlighted area of the above image)
|
1.0
|
Add basic Swapbot Operator Specific Branding - 
Please add two fields to the Swapbot Admin interface
1. Upload and replace the background image (the boardwalk in all swapbots currently)
2. Change the color of the transparent color overlay (this would work either allowing any color or with a few primary presets that we know will look good)
3. Upload swapbot head size Logo (it should go in the highlighted area of the above image)
|
priority
|
add basic swapbot operator specific branding please add two fields to the swapbot admin interface upload and replace the background image the boardwalk in all swapbots currently change the color of the transparent color overlay this would work either allowing any color or with a few primary presets that we know will look good upload swapbot head size logo it should go in the highlighted area of the above image
| 1
|
60,772
| 3,134,177,684
|
IssuesEvent
|
2015-09-10 08:33:23
|
UnifiedViews/Core
|
https://api.github.com/repos/UnifiedViews/Core
|
closed
|
FileDataUnit - entry.getFileURIString() points to non-existing file due to using "//" instead of "/" in the file uri
|
priority: High resolution: invalid severity: bug status: resolved
|
When accessing entry.getFileURIString() in xls2csv DPU (but this problem should also appear in other DPUs), I got:
```
/Users/tomasknap/Documents/PROJECTS/ETL-SWProj/UnifiedView/Core/backend/working/exec_45/storage/dpu_35/0/%252Fdata%252F268381848687814613
```
Nevertheless, the file is on the file system under the URI:
```
/Users/tomasknap/Documents/PROJECTS/ETL-SWProj/UnifiedView/Core/backend/working/exec_45/storage/dpu_35/0/%2Fdata%2F268381848687814613
```
So there is a difference: in one case there is "//data//xxx" and in the second case there is "/data/xxxx" at the end of the file URI string.
To reproduce:
1) checkout and import the DPU:
https://github.com/mff-uk/DPUs/tree/master/dpu-domain-specific/cssz-xls2csv
2) add 2 e-filesDownload, which are interlink as below:
<img width="649" alt="screen shot 2015-07-12 at 14 47 16" src="https://cloud.githubusercontent.com/assets/3014917/8637815/f40613a2-28a4-11e5-9f4c-a5d86db254c2.png">
3) Each of these two e-filesDownload DPUs is loading one XLS file from the [zip](http://leteckaposta.cz/483335043)
4) Debug till xls2csv, you will see errors "Problem copying file" which are caused by trying to read the file from the wrong location (with // instead of / in the fileURI)
Tested on 2.0.2 release. Was able to reproduce on laptop and uv.opendata.cz/unifiedviews
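As an illustrative aside (not part of the original report): the `%252F` variant is exactly what you get when an already percent-encoded path segment is percent-encoded a second time, which is consistent with the reporter's diagnosis. A minimal Python sketch:

```python
from urllib.parse import quote

# A path segment whose slashes must be escaped inside a single URI component.
segment = "/data/268381848687814613"

once = quote(segment, safe="")   # '/' -> '%2F'  (the file that actually exists)
twice = quote(once, safe="")     # '%' -> '%25', so '%2F' becomes '%252F' (the broken URI)

print(once)   # %2Fdata%2F268381848687814613
print(twice)  # %252Fdata%252F268381848687814613
```

Encoding `once` a second time is the classic double-encoding bug: only the `%` characters change, turning every `%2F` into `%252F`.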
|
1.0
|
FileDataUnit - entry.getFileURIString() points to non-existing file due to using "//" instead of "/" in the file uri - When accessing entry.getFileURIString() in xls2csv DPU (but this problem should appear also in other DPUs), I got:
```
/Users/tomasknap/Documents/PROJECTS/ETL-SWProj/UnifiedView/Core/backend/working/exec_45/storage/dpu_35/0/%252Fdata%252F268381848687814613
```
Nevertheless, the file is on the file system under the URI:
```
/Users/tomasknap/Documents/PROJECTS/ETL-SWProj/UnifiedView/Core/backend/working/exec_45/storage/dpu_35/0/%2Fdata%2F268381848687814613
```
So there is a difference - in one case there is "//data//xxx" and in the second case there is "/data/xxxx" at the end of the file URI string.
To reproduce:
1) checkout and import the DPU:
https://github.com/mff-uk/DPUs/tree/master/dpu-domain-specific/cssz-xls2csv
2) add 2 e-filesDownload, which are interlink as below:
<img width="649" alt="screen shot 2015-07-12 at 14 47 16" src="https://cloud.githubusercontent.com/assets/3014917/8637815/f40613a2-28a4-11e5-9f4c-a5d86db254c2.png">
3) Each of these two e-filesDownload DPUs is loading one XLS file from the [zip](http://leteckaposta.cz/483335043)
4) Debug till xls2csv, you will see errors "Problem copying file" which are caused by trying to read the file from the wrong location (with // instead of / in the fileURI)
Tested on 2.0.2 release. Was able to reproduce on laptop and uv.opendata.cz/unifiedviews
|
priority
|
filedataunit entry getfileuristring points to non existing file due to using instead of in the file uri when accessing entry getfileuristring in dpu but this problem should appear also in other dpus i got users tomasknap documents projects etl swproj unifiedview core backend working exec storage dpu nevertheless the file is on the file system under the uri users tomasknap documents projects etl swproj unifiedview core backend working exec storage dpu so there is a difference in one case there data xxx and in second case there is data xxxx at the end of the file uri string to reproduce checkout and import the dpu add e filesdownload which are interlink as below img width alt screen shot at src each of these two e filesdownload dpus is loading one xls file from the debug till you will see errors problem copying file which are cause by trying to read file from wrong location with instead of in the fileuri tested on release was able to reproduce on laptop and uv opendata cz unifiedviews
| 1
|
88,670
| 3,783,730,769
|
IssuesEvent
|
2016-03-19 09:58:02
|
Baystation12/Baystation12
|
https://api.github.com/repos/Baystation12/Baystation12
|
closed
|
Dream Seeker crashing
|
could not reproduce priority: high
|
<!--
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them it's part of the formatting
-->
#### Brief description of the issue
Dream Seeker client crashes prior to fully downloading server resources.
#### What you expected to happen
Client not to crash.
#### What actually happened
Client crashed, saw error messages immediately prior to crash.

#### Steps to reproduce
1. Load game server byond://baystation12.net:8000
2. Wait.
#### Additional info:
- **Server Revision**: unknown, cannot obtain due to client crashing.
- **Game ID**: bHG-a3CA
|
1.0
|
Dream Seeker crashing - <!--
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them it's part of the formatting
-->
#### Brief description of the issue
Dream Seeker client crashes prior to fully downloading server resources.
#### What you expected to happen
Client not to crash.
#### What actually happened
Client crashed, saw error messages immediately prior to crash.

#### Steps to reproduce
1. Load game server byond://baystation12.net:8000
2. Wait.
#### Additional info:
- **Server Revision**: unknown, cannot obtain due to client crashing.
- **Game ID**: bHG-a3CA
|
priority
|
dream seeker crashing put your answers on the blank lines below the headers the lines with four s don t edit them or delete them it s part of the formatting brief description of the issue dream seeker client crashes prior to fully downloading server resources what you expected to happen client not to crash what actually happened client crashed saw error messages immediately prior to crash steps to reproduce load game server byond net wait additional info server revision unknown cannot obtain due to client crashing game id bhg
| 1
|
779,175
| 27,342,527,504
|
IssuesEvent
|
2023-02-26 23:35:22
|
LuanRT/YouTube.js
|
https://api.github.com/repos/LuanRT/YouTube.js
|
closed
|
3.0.0: Error when retrieving info of a video with several dub tracks via Android client.
|
bug good first issue priority: high
|
### Steps to reproduce
```js
import { Innertube } from 'youtubei.js';
const yt = await Innertube.create();
let info = await yt.getBasicInfo('TJ2ifmkGGus', 'ANDROID');
```
[Video in code sample](https://www.youtube.com/watch?v=TJ2ifmkGGus) has several audio tracks. Any such video causes this error.
### Failure Logs
```shell
node_modules/youtubei.js/dist/src/parser/youtube/VideoInfo.js:65
throw new InnertubeError('This video is unavailable', info.playability_status);
^
InnertubeError: This video is unavailable
at new VideoInfo (node_modules/youtubei.js/dist/src/parser/youtube/VideoInfo.js:65:19)
at Innertube.<anonymous> (node_modules/youtubei.js/dist/src/Innertube.js:73:20)
at Generator.next (<anonymous>)
at fulfilled (node_modules/youtubei.js/dist/src/Innertube.js:4:58)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
{
info: {
status: 'ERROR',
reason: 'This video is unavailable',
embeddable: false,
audio_only_playablility: null,
error_screen: PlayerErrorMessage {
type: 'PlayerErrorMessage',
subreason: Text {
runs: [
TextRun {
text: 'Watch on the latest version of YouTube.',
bold: false,
italics: false,
strikethrough: false,
endpoint: [NavigationEndpoint]
}
],
text: 'Watch on the latest version of YouTube.'
},
reason: Text {
runs: [
TextRun {
text: 'The following content is not available on this app.',
bold: false,
italics: false,
strikethrough: false,
endpoint: undefined
}
],
text: 'The following content is not available on this app.'
},
proceed_button: null,
thumbnails: [
Thumbnail {
url: '//s.ytimg.com/yts/img/meh7-vflGevej7.png',
width: 140,
height: 100
},
Thumbnail {
url: '//s.ytimg.com/yts/img/meh7-vflGevej7.png',
width: 140,
height: 100
}
],
icon_type: 'ERROR_OUTLINE'
}
},
date: 2023-02-25T11:09:05.107Z,
version: '3.0.0'
}
```
### Expected behavior
I expected to get video info along with all possible dubs, just like what `WEB` client returns.
### Current behavior
InnerTube throws an error and doesn't provide **any** video info if the video in question has several dub tracks.
### Version
Default
### Anything else?
Updating the Android app version [here](https://github.com/LuanRT/YouTube.js/blob/a0bfe164279ec27b0c49c6b0c32222c1a92df5c3/src/utils/Constants.ts#L50) would probably fix this issue, as error suggests.
### Checklist
- [x] I am running the latest version.
- [X] I checked the documentation and found no answer.
- [X] I have searched the existing issues and made sure this is not a duplicate.
- [X] I have provided sufficient information.
|
1.0
|
3.0.0: Error when retrieving info of a video with several dub tracks via Android client. - ### Steps to reproduce
```js
import { Innertube } from 'youtubei.js';
const yt = await Innertube.create();
let info = await yt.getBasicInfo('TJ2ifmkGGus', 'ANDROID');
```
[Video in code sample](https://www.youtube.com/watch?v=TJ2ifmkGGus) has several audio tracks. Any such video causes this error.
### Failure Logs
```shell
node_modules/youtubei.js/dist/src/parser/youtube/VideoInfo.js:65
throw new InnertubeError('This video is unavailable', info.playability_status);
^
InnertubeError: This video is unavailable
at new VideoInfo (node_modules/youtubei.js/dist/src/parser/youtube/VideoInfo.js:65:19)
at Innertube.<anonymous> (node_modules/youtubei.js/dist/src/Innertube.js:73:20)
at Generator.next (<anonymous>)
at fulfilled (node_modules/youtubei.js/dist/src/Innertube.js:4:58)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
{
info: {
status: 'ERROR',
reason: 'This video is unavailable',
embeddable: false,
audio_only_playablility: null,
error_screen: PlayerErrorMessage {
type: 'PlayerErrorMessage',
subreason: Text {
runs: [
TextRun {
text: 'Watch on the latest version of YouTube.',
bold: false,
italics: false,
strikethrough: false,
endpoint: [NavigationEndpoint]
}
],
text: 'Watch on the latest version of YouTube.'
},
reason: Text {
runs: [
TextRun {
text: 'The following content is not available on this app.',
bold: false,
italics: false,
strikethrough: false,
endpoint: undefined
}
],
text: 'The following content is not available on this app.'
},
proceed_button: null,
thumbnails: [
Thumbnail {
url: '//s.ytimg.com/yts/img/meh7-vflGevej7.png',
width: 140,
height: 100
},
Thumbnail {
url: '//s.ytimg.com/yts/img/meh7-vflGevej7.png',
width: 140,
height: 100
}
],
icon_type: 'ERROR_OUTLINE'
}
},
date: 2023-02-25T11:09:05.107Z,
version: '3.0.0'
}
```
### Expected behavior
I expected to get video info along with all possible dubs, just like what `WEB` client returns.
### Current behavior
InnerTube throws an error and doesn't provide **any** video info if the video in question has several dub tracks.
### Version
Default
### Anything else?
Updating the Android app version [here](https://github.com/LuanRT/YouTube.js/blob/a0bfe164279ec27b0c49c6b0c32222c1a92df5c3/src/utils/Constants.ts#L50) would probably fix this issue, as error suggests.
### Checklist
- [x] I am running the latest version.
- [X] I checked the documentation and found no answer.
- [X] I have searched the existing issues and made sure this is not a duplicate.
- [X] I have provided sufficient information.
|
priority
|
error when retrieving info of a video with several dub tracks via android client steps to reproduce js import innertube from youtubei js const yt await innertube create let info await yt getbasicinfo android has several audio tracks any such video causes this error failure logs shell node modules youtubei js dist src parser youtube videoinfo js throw new innertubeerror this video is unavailable info playability status innertubeerror this video is unavailable at new videoinfo node modules youtubei js dist src parser youtube videoinfo js at innertube node modules youtubei js dist src innertube js at generator next at fulfilled node modules youtubei js dist src innertube js at process processticksandrejections node internal process task queues info status error reason this video is unavailable embeddable false audio only playablility null error screen playererrormessage type playererrormessage subreason text runs textrun text watch on the latest version of youtube bold false italics false strikethrough false endpoint text watch on the latest version of youtube reason text runs textrun text the following content is not available on this app bold false italics false strikethrough false endpoint undefined text the following content is not available on this app proceed button null thumbnails thumbnail url s ytimg com yts img png width height thumbnail url s ytimg com yts img png width height icon type error outline date version expected behavior i expected to get video info along with all possible dubs just like what web client returns current behavior innertube throws an error and doesn t provide any video info if the video in question has several dub tracks version default anything else updating the android app version would probably fix this issue as error suggests checklist i am running the latest version i checked the documentation and found no answer i have searched the existing issues and made sure this is not a duplicate i have provided sufficient information
| 1
|
768,539
| 26,968,484,784
|
IssuesEvent
|
2023-02-09 01:22:58
|
ddnomad/printables
|
https://api.github.com/repos/ddnomad/printables
|
closed
|
Implement scripts for compiling models into STL files
|
Priority: High Type: Enhancement
|
Need support for compiling with specific flags / parameters
|
1.0
|
Implement scripts for compiling models into STL files - Need support for compiling with specific flags / parameters
|
priority
|
implement scripts for compiling models into stl files need support for compiling with specific flags parameters
| 1
|
422,504
| 12,279,524,860
|
IssuesEvent
|
2020-05-08 12:23:24
|
Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth
|
https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth
|
opened
|
Scorpid armor
|
:exclamation: priority high :question: suggestion - feature :star: :question: suggestion - lore :books: :question: suggestion :question:
|
<!--
DO NOT REMOVE PRE-EXISTING LINES
IF YOU WANT TO SUGGEST A FEW THINGS, OPEN A NEW ISSUE PER EVERY SUGGESTION
----------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
You must be able to wear scorpid as armor.
https://wow.gamepedia.com/Skitter_(scorpid)
|
1.0
|
Scorpid armor - <!--
DO NOT REMOVE PRE-EXISTING LINES
IF YOU WANT TO SUGGEST A FEW THINGS, OPEN A NEW ISSUE PER EVERY SUGGESTION
----------------------------------------------------------------------------------------------------------
-->
**Describe your suggestion in full detail below:**
You must be able to wear scorpid as armor.
https://wow.gamepedia.com/Skitter_(scorpid)
|
priority
|
scorpid armor do not remove pre existing lines if you want to suggest a few things open a new issue per every suggestion describe your suggestion in full detail below you must be able to wear scorpid as armor
| 1
|
187,072
| 6,744,678,729
|
IssuesEvent
|
2017-10-20 16:32:14
|
TerraFusion/basicFusion
|
https://api.github.com/repos/TerraFusion/basicFusion
|
closed
|
Fix Python module not being loaded in configureEnv.sh
|
bug High Priority
|
Python might not be loaded by the user when running the script. Need to be sure to load a Python 2.7 module before the script runs.
|
1.0
|
Fix Python module not being loaded in configureEnv.sh - Python might not be loaded by the user when running the script. Need to be sure to load a Python 2.7 module before the script runs.
|
priority
|
fix python module not being loaded in configureenv sh python might not be loaded by user when running script need to be sure to load a python module before script runs
| 1
|
435,185
| 12,532,470,417
|
IssuesEvent
|
2020-06-04 16:00:03
|
getkirby/kirby
|
https://api.github.com/repos/getkirby/kirby
|
closed
|
Queries fail silently when a page is missing
|
priority: high 🔥 type: bug 🐛
|
**Describe the bug**
When you query a page that doesn't exist, the rest of the instructions carry on using the previous result.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a pages field with the query `site.page("foo").children.flip`
2. Make sure a page `foo` doesn't exist
3. You will see the _site_ children flipped
**Expected behavior**
Since the page doesn't exist, it's expected that the query fails and throws an error. If I create the page `foo`, it works as expected. However, if that page doesn't exist, the query is interpreted as `site.children.flip`, which can lead to unexpected results.
**Kirby Version**
3.3.6
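For illustration only (hypothetical code, not Kirby's implementation): the reported behaviour matches a fluent query resolver that, when a lookup step fails, keeps the previous receiver instead of raising, so `site.page("foo").children` silently degrades to `site.children`:

```python
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def page(self, name):
        # Buggy lenient lookup: fall back to self when no child matches,
        # instead of failing loudly for the missing page.
        return next((c for c in self.children if c.name == name), self)

site = Node("site", [Node("a"), Node("b")])

# 'foo' does not exist, so the lookup silently returns `site` itself and
# the whole chain behaves like `site.children` -- mirroring the bug report.
result = site.page("foo").children
print([n.name for n in result])  # ['a', 'b']
```

The fix suggested by the expected behaviour is to make the failed step raise (or return an empty result) rather than returning the previous object.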
|
1.0
|
Queries fail silently when a page is missing - **Describe the bug**
When you query a page that doesn't exist, the rest of the instructions carry on using the previous result.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a pages field with the query `site.page("foo").children.flip`
2. Make sure a page `foo` doesn't exist
3. You will see the _site_ children flipped
**Expected behavior**
Since the page doesn't exist, it's expected that the query fails and throws an error. If I create the page `foo`, it works as expected. However, if that page doesn't exist, the query is interpreted as `site.children.flip`, which can lead to unexpected results.
**Kirby Version**
3.3.6
|
priority
|
queries fail silently when a page is missing describe the bug when you query a page that doesn t exist the rest of the instructions carry on using the previous result to reproduce steps to reproduce the behavior create a pages field with the query site page foo children flip make sure a page foo doesn t exist you will see the site children flipped expected behavior since the page doesn t exist it s expected that the query fails and throws an error if i create the page foo it works as expected however if that page doesn t exist the query is interpreted as site children flip which can lead to unexpected results kirby version
| 1
|
214,496
| 7,273,749,868
|
IssuesEvent
|
2018-02-21 07:04:11
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
opened
|
Provide a sample private_key_jwt issuer to run this sample.
|
Affected/5.5.0-Alpha Priority/Highest Type/Docs
|
Provide a sample private_key_jwt issuer [1] to run this sample, or at least point to a sample location.
[1] https://docs.wso2.com/display/IS550/Private+Key+JWT+Client+Authentication+for+OIDC
Below is the description:
The JWT must contain some REQUIRED claim values and may contain some OPTIONAL claim values. For more information on the required and optional claim values needed for the JWT for private_key_jwt authentication, click here.
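As background (an illustrative sketch, not taken from the WSO2 docs): a private_key_jwt client assertion is a JWT whose claim set includes `iss` and `sub` (both the client ID), `aud` (the token endpoint), an `exp` timestamp, and a unique `jti`. The client ID and endpoint below are hypothetical placeholders, and a real client must append an RS256/ES256 signature computed with its private key, which this stdlib-only sketch omits:

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    # Base64url without padding, as JWTs require.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

client_id = "my-client"                       # hypothetical client ID
token_endpoint = "https://idp.example/token"  # hypothetical audience

claims = {
    "iss": client_id,
    "sub": client_id,
    "aud": token_endpoint,
    "exp": int(time.time()) + 300,
    "jti": str(uuid.uuid4()),
}
header = {"alg": "RS256", "typ": "JWT"}

# Unsigned "header.payload" part; a real issuer signs this with the client's
# private key (e.g. via PyJWT or python-jose) and appends the signature.
signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
print(signing_input.count("."))  # 1
```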
|
1.0
|
Provide a sample private_key_jwt issuer to run this sample. - Provide a sample private_key_jwt issuer [1] to run this sample, or at least point to a sample location.
[1] https://docs.wso2.com/display/IS550/Private+Key+JWT+Client+Authentication+for+OIDC
Below is the description:
The JWT must contain some REQUIRED claim values and may contain some OPTIONAL claim values. For more information on the required and optional claim values needed for the JWT for private_key_jwt authentication, click here.
|
priority
|
provide a sample private key jwt issuer to run this sample provide a sample private key jwt issuer to run this sample at least point to a sample location below is the description the jwt must contain some required claim values and may contain some optional claim values for more information on the required and optional claim values needed for the jwt for private key jwt authentication click here
| 1
|
25,916
| 2,684,042,472
|
IssuesEvent
|
2015-03-28 16:05:42
|
oxyplot/oxyplot
|
https://api.github.com/repos/oxyplot/oxyplot
|
closed
|
Xamarin.Forms Android Crashes 100%
|
Android help-wanted high-priority unconfirmed-bug Xamarin.Forms you-take-it
|
First of all, awesome project!!!!
Running it on Android via Xamarin.Forms crashes (use the examples to test)
```
System.MissingMethodException: Method not found: 'Android.Views.View.set_Background'.
11-25 12:59:29.630 E/mono (12912): at
Xamarin.Forms.Platform.Android.VisualElementRenderer`1[OxyPlot.XamarinForms.PlotView].SetElement
(OxyPlot.XamarinForms.PlotView element) [0x00000] in <filename unknown>:0
11-25 12:59:29.630 E/mono (12912): at
Xamarin.Forms.Platform.Android.VisualElementRenderer`1[OxyPlot.XamarinForms.PlotView].Xamarin.Forms.Platform.Android.IVisualElementRenderer.SetElement
(Xamarin.Forms.VisualElement element) [0x00000] in <filename unknown>:0
11-25 12:59:29.630 E/mono (12912): at
Xamarin.Forms.Platform.Android.RendererFactory.GetRenderer
(Xamarin.Forms.VisualElement view) [0x00000] in <filename unknown>:0
The program 'Mono' has exited with code 0 (0x0).
```
|
1.0
|
Xamarin.Forms Android Crashes 100% - First of all, awesome project!!!!
Running it on Android via Xamarin.Forms crashes (use the examples to test)
```
System.MissingMethodException: Method not found: 'Android.Views.View.set_Background'.
11-25 12:59:29.630 E/mono (12912): at
Xamarin.Forms.Platform.Android.VisualElementRenderer`1[OxyPlot.XamarinForms.PlotView].SetElement
(OxyPlot.XamarinForms.PlotView element) [0x00000] in <filename unknown>:0
11-25 12:59:29.630 E/mono (12912): at
Xamarin.Forms.Platform.Android.VisualElementRenderer`1[OxyPlot.XamarinForms.PlotView].Xamarin.Forms.Platform.Android.IVisualElementRenderer.SetElement
(Xamarin.Forms.VisualElement element) [0x00000] in <filename unknown>:0
11-25 12:59:29.630 E/mono (12912): at
Xamarin.Forms.Platform.Android.RendererFactory.GetRenderer
(Xamarin.Forms.VisualElement view) [0x00000] in <filename unknown>:0
The program 'Mono' has exited with code 0 (0x0).
```
|
priority
|
xamarin forms android crashes first of all awesome project running it on android via xamarin forms crashes use the examples to test system missingmethodexception method not found android views view set background e mono at xamarin forms platform android visualelementrenderer setelement oxyplot xamarinforms plotview element in e mono at xamarin forms platform android visualelementrenderer xamarin forms platform android ivisualelementrenderer setelement xamarin forms visualelement element in e mono at xamarin forms platform android rendererfactory getrenderer xamarin forms visualelement view in the program mono has exited with code
| 1
|
39,513
| 2,856,059,251
|
IssuesEvent
|
2015-06-02 13:18:54
|
gapt/gapt
|
https://api.github.com/repos/gapt/gapt
|
closed
|
Multiset equals between sequents does not work properly
|
Bug High Priority Imported
|
_From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on December 15, 2010 18:42:00_
adding the following test to LKTest fails
"MultisetEquals" should {
"compute correctly in multiset equality of the empty sequent and a sequent without succedent" in {
(Sequent(f2::Nil,Nil) multisetEquals Sequent(Nil,Nil)) must beEqual (false)
}
}
_Original issue: http://code.google.com/p/gapt/issues/detail?id=95_
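An aside on the expected semantics (illustrative Python, unrelated to GAPT's own Scala code): multiset equality compares element counts on each side, so a sequent with a non-empty antecedent can never be multiset-equal to the empty sequent, which is exactly what the failing test asserts:

```python
from collections import Counter

def multiset_equals(seq_a, seq_b):
    # A sequent is modeled as (antecedent, succedent); both sides are
    # compared as multisets: order-insensitive but count-sensitive.
    return (Counter(seq_a[0]) == Counter(seq_b[0])
            and Counter(seq_a[1]) == Counter(seq_b[1]))

f2 = "f2"  # stand-in for the formula f2 in the test
print(multiset_equals(([f2], []), ([], [])))    # False, as the test expects
print(multiset_equals(([f2], []), ([f2], [])))  # True
```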
|
1.0
|
Multiset equals between sequents does not work properly - _From [shaoli...@gmail.com](https://code.google.com/u/113190107447576027220/) on December 15, 2010 18:42:00_
adding the following test to LKTest fails
"MultisetEquals" should {
"compute correctly in multiset equality of the empty sequent and a sequent without succedent" in {
(Sequent(f2::Nil,Nil) multisetEquals Sequent(Nil,Nil)) must beEqual (false)
}
}
_Original issue: http://code.google.com/p/gapt/issues/detail?id=95_
|
priority
|
multiset equals between sequents does not work properly from on december adding the following test to lktest fails multisetequals should compute correctly in multiset equality of the empty sequent and a sequent without succedent in sequent nil nil multisetequals sequent nil nil must beequal false original issue
| 1
|
291,802
| 8,949,759,534
|
IssuesEvent
|
2019-01-25 08:48:08
|
metasfresh/metasfresh
|
https://api.github.com/repos/metasfresh/metasfresh
|
closed
|
Errors in Tax Codes Report
|
branch:master priority:high topic:Accounting
|
### Is this a bug or feature request?
Bug
### What is the current behavior?
Seems that the report "Fibu Kontenblatt" is not considering all Tax Codes.
#### Which are the steps to reproduce?
### What is the expected or desired behavior?
|
1.0
|
Errors in Tax Codes Report - ### Is this a bug or feature request?
Bug
### What is the current behavior?
Seems that the report "Fibu Kontenblatt" is not considering all Tax Codes.
#### Which are the steps to reproduce?
### What is the expected or desired behavior?
|
priority
|
errors in tax codes report is this a bug or feature request bug what is the current behavior seems that the report fibu kontenblatt is not considering all tax codes which are the steps to reproduce what is the expected or desired behavior
| 1
|
550,385
| 16,110,841,948
|
IssuesEvent
|
2021-04-27 20:57:45
|
PREreview/prereview
|
https://api.github.com/repos/PREreview/prereview
|
opened
|
Add request workflow
|
bug high priority
|
When a user adds a request, it should lead to the Add request tab rather than the Read tab.
|
1.0
|
Add request workflow - When a user adds a request, it should lead to the Add request tab rather than the Read tab.
|
priority
|
add request workflow when user adds request it should lead to add request tab rather than on read tab
| 1
|
612,182
| 19,006,365,572
|
IssuesEvent
|
2021-11-23 00:44:31
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Potential strict aliasing rule violation in bitwise_binary_op (on ARM/NEON)
|
high priority triaged module: correctness (silent) module: arm
|
## 🐛 Bug
The following code snippet fails on a source build on ARM (Neoverse N1) when the vectorized code path is triggered (tensor length >=16) when compiling with `-O1 -fstrict-aliasing` (or any higher optimization):
```python
import torch
device = 'cpu'
size = 16
a = torch.full((size,), 6, dtype=torch.int32, device=device)
b = torch.full((size,), 3, dtype=torch.int32, device=device)
print(a & b)
tensor([ 40960, 0, 46473216, 43691, 201326912, 43691,
0, 0, 40960, 0, 46473216, 43691,
201326912, 43691, 0, 0], dtype=torch.int32)
```
Expected output:
```python
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], dtype=torch.int32)
```
## TL;DR
### Potential root cause
Strict aliasing rule violation in
https://github.com/pytorch/pytorch/blob/2a5116e1599be7d7fa1be9572f47c316716b74c3/aten/src/ATen/cpu/vec/vec_base.h#L742-L743
### Fix could be
```cpp
template<class T, typename Op>
static inline Vectorized<T> bitwise_binary_op(const Vectorized<T> &a, const Vectorized<T> &b, Op op) {
static constexpr uint32_t element_no = VECTOR_WIDTH / sizeof(intmax_t);
__at_align__ intmax_t buffer[element_no];
intmax_t a_ptr[element_no]; // !
intmax_t b_ptr[element_no]; // !
std::memcpy(&a_ptr, &a, sizeof(intmax_t) * element_no); // !
std::memcpy(&b_ptr, &b, sizeof(intmax_t) * element_no); // !
for (uint32_t i = 0U; i < element_no; ++ i) {
buffer[i] = op(a_ptr[i], b_ptr[i]);
}
return Vectorized<T>::loadu(buffer);
}
```
Reference: [What is the Strict Aliasing Rule and Why do we care?](https://gist.github.com/shafik/848ae25ee209f698763cffee272a58f8#how-do-we-type-pun-correctly) by @shafik
We've verified this fix locally on tensor shapes in `arange(1, 1280)`, but please add others as needed to check, if the proposed fix is valid or not.
## Details
### Vectorized code path
We've seen unit test failures on ARM nodes for `bitwise_and` operations and first narrowed it down to the vectorized code path, as all sizes <16 work as expected and to optimized code (`-O0` did not reproduce this issue and we needed to rebuild with `REL_WITH_DEB_INFO=1`).
### Heisenbug
While trying to further isolate the issue it turned out to be a [Heisenbug](https://en.wikipedia.org/wiki/Heisenbug), which "disappeared" once we looked too closely into the functions in question.
E.g. by adding a `std::cout` here:
https://github.com/pytorch/pytorch/blob/94d621584a8d2780252546aa787aab23203221b2/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp#L258
the method worked correctly again. Also, `gdb` refused to step into
https://github.com/pytorch/pytorch/blob/c5bee1ec4f261f3e250ea3ee974a0e13fb79de3b/aten/src/ATen/native/cpu/Loops.h#L216-L217
or break in `bitwise_binary_op`, but we were unsure, if this is due to the release + debug symbols build or not.
Adding a `raise(SIGTRAP)` in `bitwise_binary_op` also restored the functionality again.
As the next step we've used
```cpp
template<class T, typename Op>
static inline Vectorized<T> __attribute__((optimize("O0"))) bitwise_binary_op(const Vectorized<T> &a, const Vectorized<T> &b, Op op) {
```
to disable any optimizations in this method to be able to step into it via `gdb`, but this of course also fixed the issue (it also turns out that `__attribute((optimize("-fno-strict-aliasing")))` is also a valid workaround).
`valgrind --track-origins=yes` also confirmed the usage of uninitialized memory in `at::native::bitwise_and_kernel` in the failing cases and wasn't reporting issues otherwise.
### Bisecting compiler opimization flags
This lead us to the assumption, that one of the optimizations might trigger the failure and we bisected it to `-O1 -fstrict-aliasing`.
After reading @shafik's [great blog post](https://gist.github.com/shafik/848ae25ee209f698763cffee272a58f8#how-do-we-type-pun-correctly) (I would highly recommend reading it in case you are interested why the linked type punning is invalid) we realized that the vectorized code path might indeed violate the strict aliasing rule.
Replacing the (invalid) `reinterpret_cast` with the `memcpy` restores the behavior.
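To illustrate why operating on `intmax_t`-sized chunks is sound for lane-wise bitwise ops in the first place (an aside sketched in Python, not the C++ fix itself): reinterpreting eight packed `int32` lanes as four 64-bit integers, AND-ing those, and reinterpreting back yields the same result as element-wise `int32` AND, because bitwise operators act independently on each bit:

```python
import struct

a = [6] * 8  # eight int32 lanes, all 6 (mirrors the failing repro)
b = [3] * 8  # eight int32 lanes, all 3

# Reinterpret the packed int32 buffers as 64-bit chunks (the memcpy view
# that bitwise_binary_op takes via intmax_t).
a64 = struct.unpack("<4q", struct.pack("<8i", *a))
b64 = struct.unpack("<4q", struct.pack("<8i", *b))

# Bitwise AND on the wide chunks, then reinterpret back as int32 lanes.
out = struct.unpack("<8i", struct.pack("<4q", *(x & y for x, y in zip(a64, b64))))
print(list(out))  # [2, 2, 2, 2, 2, 2, 2, 2]
```

The garbage values in the bug report therefore come purely from the aliasing violation (the compiler reordering or discarding loads through the illegally cast pointer), not from the chunked-AND idea itself.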
### Why no failures on x86/AVX?
It's a bit unclear why this issue was only visible on this particular ARM node as [vec_base.h](https://github.com/pytorch/pytorch/blob/2a5116e1599be7d7fa1be9572f47c316716b74c3/aten/src/ATen/cpu/vec/vec_base.h#L691) seems to use the invalid `reinterpret_cast` in more places (in AVX code) so this issue should also have been visible on x86 (we might also want to check and fix it).
### NIT
It seems that `-Wno-strict-aliasing` is used
https://github.com/pytorch/pytorch/blob/2a5116e1599be7d7fa1be9572f47c316716b74c3/CMakeLists.txt#L751
in combination with `-O2`
https://github.com/pytorch/pytorch/blob/2a5116e1599be7d7fa1be9572f47c316716b74c3/CMakeLists.txt#L734
which seems dangerous, as it would disable the warnings while the compiler could still use this optimization.
## Environment
```
PyTorch version: 1.10.0a0+0aef44c
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (aarch64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.21.3
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Sep 29 2021, 20:54:09) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-84-generic-aarch64-with-glibc2.17
```
CC @zasdfgbnm @mcarilli (thanks both of you for the great help!) @malfet for visibility.
Please add others as needed to check, if the proposed fix is valid or not.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @malfet
|
1.0
|
Potential strict aliasing rule violation in bitwise_binary_op (on ARM/NEON) - ## 🐛 Bug
The following code snippet fails on a source build on ARM (Neoverse N1) when the vectorized code path is triggered (tensor length >=16) when compiling with `-O1 -fstrict-aliasing` (or any higher optimization):
```python
import torch
device = 'cpu'
size = 16
a = torch.full((size,), 6, dtype=torch.int32, device=device)
b = torch.full((size,), 3, dtype=torch.int32, device=device)
print(a & b)
tensor([ 40960, 0, 46473216, 43691, 201326912, 43691,
0, 0, 40960, 0, 46473216, 43691,
201326912, 43691, 0, 0], dtype=torch.int32)
```
Expected output:
```python
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2], dtype=torch.int32)
```
## TL;DR
### Potential root cause
Strict aliasing rule violation in
https://github.com/pytorch/pytorch/blob/2a5116e1599be7d7fa1be9572f47c316716b74c3/aten/src/ATen/cpu/vec/vec_base.h#L742-L743
### Fix could be
```cpp
template<class T, typename Op>
static inline Vectorized<T> bitwise_binary_op(const Vectorized<T> &a, const Vectorized<T> &b, Op op) {
static constexpr uint32_t element_no = VECTOR_WIDTH / sizeof(intmax_t);
__at_align__ intmax_t buffer[element_no];
intmax_t a_ptr[element_no]; // !
intmax_t b_ptr[element_no]; // !
std::memcpy(&a_ptr, &a, sizeof(intmax_t) * element_no); // !
std::memcpy(&b_ptr, &b, sizeof(intmax_t) * element_no); // !
for (uint32_t i = 0U; i < element_no; ++ i) {
buffer[i] = op(a_ptr[i], b_ptr[i]);
}
return Vectorized<T>::loadu(buffer);
}
```
Reference: [What is the Strict Aliasing Rule and Why do we care?](https://gist.github.com/shafik/848ae25ee209f698763cffee272a58f8#how-do-we-type-pun-correctly) by @shafik
We've verified this fix locally on tensor shapes in `arange(1, 1280)`, but please add others as needed to check whether the proposed fix is valid.
## Details
### Vectorized code path
We've seen unit test failures on ARM nodes for `bitwise_and` operations and first narrowed them down to the vectorized code path (all sizes <16 work as expected) and to optimized code (`-O0` did not reproduce this issue, and we needed to rebuild with `REL_WITH_DEB_INFO=1`).
### Heisenbug
While trying to further isolate the issue it turned out to be a [Heisenbug](https://en.wikipedia.org/wiki/Heisenbug), which "disappeared" once we looked too closely at the functions in question.
E.g. by adding a `std::cout` here:
https://github.com/pytorch/pytorch/blob/94d621584a8d2780252546aa787aab23203221b2/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp#L258
the method worked correctly again. Also, `gdb` refused to step into
https://github.com/pytorch/pytorch/blob/c5bee1ec4f261f3e250ea3ee974a0e13fb79de3b/aten/src/ATen/native/cpu/Loops.h#L216-L217
or break in `bitwise_binary_op`, but we were unsure, if this is due to the release + debug symbols build or not.
Adding a `raise(SIGTRAP)` in `bitwise_binary_op` also restored the functionality again.
As the next step we've used
```cpp
template<class T, typename Op>
static inline Vectorized<T> __attribute__((optimize("O0"))) bitwise_binary_op(const Vectorized<T> &a, const Vectorized<T> &b, Op op) {
```
to disable any optimizations in this method and be able to step into it via `gdb`, but this of course also fixed the issue (it turns out that `__attribute__((optimize("-fno-strict-aliasing")))` is a valid workaround as well).
`valgrind --track-origins=yes` also confirmed the usage of uninitialized memory in `at::native::bitwise_and_kernel` in the failing cases and wasn't reporting issues otherwise.
### Bisecting compiler optimization flags
This led us to the assumption that one of the optimizations might trigger the failure, and we bisected it to `-O1 -fstrict-aliasing`.
After reading @shafik's [great blog post](https://gist.github.com/shafik/848ae25ee209f698763cffee272a58f8#how-do-we-type-pun-correctly) (I would highly recommend reading it in case you are interested why the linked type punning is invalid) we realized that the vectorized code path might indeed violate the strict aliasing rule.
Replacing the (invalid) `reinterpret_cast` with the `memcpy` restores the behavior.
### Why no failures on x86/AVX?
It's a bit unclear why this issue was only visible on this particular ARM node, as [vec_base.h](https://github.com/pytorch/pytorch/blob/2a5116e1599be7d7fa1be9572f47c316716b74c3/aten/src/ATen/cpu/vec/vec_base.h#L691) seems to use the invalid `reinterpret_cast` in more places (in AVX code), so this issue should also have been visible on x86 (we might want to check and fix those as well).
### NIT
It seems that `-Wno-strict-aliasing` is used
https://github.com/pytorch/pytorch/blob/2a5116e1599be7d7fa1be9572f47c316716b74c3/CMakeLists.txt#L751
in combination with `-O2`
https://github.com/pytorch/pytorch/blob/2a5116e1599be7d7fa1be9572f47c316716b74c3/CMakeLists.txt#L734
which seems dangerous, as it would disable the warnings while the compiler could still use this optimization.
## Environment
```
PyTorch version: 1.10.0a0+0aef44c
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (aarch64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.21.3
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Sep 29 2021, 20:54:09) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-84-generic-aarch64-with-glibc2.17
```
CC @zasdfgbnm @mcarilli (thanks both of you for the great help!) @malfet for visibility.
Please add others as needed to check whether the proposed fix is valid.
cc @ezyang @gchanan @zou3519 @bdhirsh @jbschlosser @malfet
|
priority
|
potential strict aliasing rule violation in bitwise binary op on arm neon 🐛 bug the following code snippet fails on a source build on arm neoverse when the vectorized code path is triggered tensor length when compiling with fstrict aliasing or any higher optimization python import torch device cpu size a torch full size dtype torch device device b torch full size dtype torch device device print a b tensor dtype torch expected output python tensor dtype torch tl dr potential root cause strict aliasing rule violation in fix could be cpp template static inline vectorized bitwise binary op const vectorized a const vectorized b op op static constexpr t element no vector width sizeof intmax t at align intmax t buffer intmax t a ptr intmax t b ptr std memcpy a ptr a sizeof intmax t element no std memcpy b ptr b sizeof intmax t element no for t i i element no i buffer op a ptr b ptr return vectorized loadu buffer reference by shafik we ve verified this fix locally on tensor shapes in arange but please add others as needed to check if the proposed fix is valid or not details vectorized code path we ve seen unit test failures on arm nodes for bitwise and operations and first narrowed it down to the vectorized code path as all sizes work as expected and to optimized code did not reproduce this issue and we needed to rebuild with rel with deb info heisenbug while trying to further isolate the issue it turned out to be a which disappeared once we looked too close into the functions in question e g by adding a std cout here the method worked correctly again also gdb refused to step into or break in bitwise binary op but we were unsure if this is due to the release debug symbols build or not adding a raise sigtrap in bitwise binary op also restored the functionality again as the next step we ve used cpp template static inline vectorized attribute optimize bitwise binary op const vectorized a const vectorized b op op to disable any optimizations in this method to be able to step 
into it via gdb but this of course also fixed the issue it also turns out that attribute optimize fno strict aliasing is also a valid workaround valgrind track origins yes also confirmed the usage of uninitialized memory in at native bitwise and kernel in the failing cases and wasn t reporting issues otherwise bisecting compiler opimization flags this lead us to the assumption that one of the optimizations might trigger the failure and we bisected it to fstrict aliasing after reading shafik s i would highly recommend reading it in case you are interested why the linked type punning is invalid we realized that the vectorized code path might indeed violate the strict aliasing rule replacing the invalid reinterpret cast with the memcpy restores the behavior why no failures on avx it s a bit unclear why this issue was only visible on this particular arm node as seems to use the invalid reinterpret cast in more places in avx code so this issue should also have been visible on we might also want to check and fix it nit it seems that wno strict aliasing is used in combination with which seems dangerous as it would disable the warnings while the compiler could still use this optimization environment pytorch version is debug build false cuda used to build pytorch could not collect rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version could not collect cmake version version libc version glibc python version packaged by conda forge default sep bit runtime python platform linux generic with cc zasdfgbnm mcarilli thanks both of you for the great help malfet for visibility please add others as needed to check if the proposed fix is valid or not cc ezyang gchanan bdhirsh jbschlosser malfet
| 1
|
435,798
| 12,541,505,598
|
IssuesEvent
|
2020-06-05 12:28:45
|
tud-zih-energy/lo2s
|
https://api.github.com/repos/tud-zih-energy/lo2s
|
closed
|
Duplicate registry entries
|
bug high priority
|
It seems that #144 broke `lo2s` - maybe only directly visible without `NDEBUG`. I assume the registry is crudely complaining about duplicate entries.
I get the following error for the simplest use cases
```
./cmake-build-debug/lo2s ls
lo2s: lo2s/lib/otf2xx/include/otf2xx/registry.hpp:286: std::enable_if_t<otf2::lookup_definition_holder<Definition, KeyList>::has_type<Key, std::tuple<_Elements ...> >::value, Definition&> otf2::lookup_definition_holder<Definition, KeyList>::create(Key, Args&& ...) [with Key = lo2s::trace::SimpleKeyType<int, lo2s::trace::ByThreadTag>; Args = {otf2::definition::region&, otf2::definition::source_code_location}; Definition = otf2::definition::calling_context; KeyList = {lo2s::trace::SimpleKeyType<int, lo2s::trace::ByThreadTag>}; std::enable_if_t<otf2::lookup_definition_holder<Definition, KeyList>::has_type<Key, std::tuple<_Elements ...> >::value, Definition&> = otf2::definition::calling_context&]: Assertion `result.second' failed.
```
This is coming from `Trace::add_threads` which tries to add the tid map. Not sure why there's a duplicate.
```
./cmake-build-debug/lo2s -a
lo2s: lo2s/lib/otf2xx/include/otf2xx/registry.hpp:286: std::enable_if_t<otf2::lookup_definition_holder<Definition, KeyList>::has_type<Key, std::tuple<_Elements ...> >::value, Definition&> otf2::lookup_definition_holder<Definition, KeyList>::create(Key, Args&& ...) [with Key = lo2s::trace::SimpleKeyType<std::__cxx11::basic_string<char>, lo2s::trace::ByStringTag>; Args = {const otf2::definition::string&, otf2::common::paradigm_type, otf2::common::group_flag_type}; Definition = otf2::definition::group<otf2::definition::region, otf2::common::group_type::regions>; KeyList = {lo2s::trace::SimpleKeyType<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, lo2s::trace::ByStringTag>}; std::enable_if_t<otf2::lookup_definition_holder<Definition, KeyList>::has_type<Key, std::tuple<_Elements ...> >::value, Definition&> = otf2::definition::group<otf2::definition::region, otf2::common::group_type::regions>&]: Assertion `result.second' failed.
```
|
1.0
|
Duplicate registry entries - It seems that #144 broke `lo2s` - maybe only directly visible without `NDEBUG`. I assume the registry is crudely complaining about duplicate entries.
I get the following error for the simplest use cases
```
./cmake-build-debug/lo2s ls
lo2s: lo2s/lib/otf2xx/include/otf2xx/registry.hpp:286: std::enable_if_t<otf2::lookup_definition_holder<Definition, KeyList>::has_type<Key, std::tuple<_Elements ...> >::value, Definition&> otf2::lookup_definition_holder<Definition, KeyList>::create(Key, Args&& ...) [with Key = lo2s::trace::SimpleKeyType<int, lo2s::trace::ByThreadTag>; Args = {otf2::definition::region&, otf2::definition::source_code_location}; Definition = otf2::definition::calling_context; KeyList = {lo2s::trace::SimpleKeyType<int, lo2s::trace::ByThreadTag>}; std::enable_if_t<otf2::lookup_definition_holder<Definition, KeyList>::has_type<Key, std::tuple<_Elements ...> >::value, Definition&> = otf2::definition::calling_context&]: Assertion `result.second' failed.
```
This is coming from `Trace::add_threads` which tries to add the tid map. Not sure why there's a duplicate.
```
./cmake-build-debug/lo2s -a
lo2s: lo2s/lib/otf2xx/include/otf2xx/registry.hpp:286: std::enable_if_t<otf2::lookup_definition_holder<Definition, KeyList>::has_type<Key, std::tuple<_Elements ...> >::value, Definition&> otf2::lookup_definition_holder<Definition, KeyList>::create(Key, Args&& ...) [with Key = lo2s::trace::SimpleKeyType<std::__cxx11::basic_string<char>, lo2s::trace::ByStringTag>; Args = {const otf2::definition::string&, otf2::common::paradigm_type, otf2::common::group_flag_type}; Definition = otf2::definition::group<otf2::definition::region, otf2::common::group_type::regions>; KeyList = {lo2s::trace::SimpleKeyType<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, lo2s::trace::ByStringTag>}; std::enable_if_t<otf2::lookup_definition_holder<Definition, KeyList>::has_type<Key, std::tuple<_Elements ...> >::value, Definition&> = otf2::definition::group<otf2::definition::region, otf2::common::group_type::regions>&]: Assertion `result.second' failed.
```
|
priority
|
duplicate registry entries it seems that broke maybe only directly visible without ndebug i assume the registry is crudely complaining about duplicate entries i get the following error for the simplest use cases cmake build debug ls lib include registry hpp std enable if t has type value definition lookup definition holder create key args assertion result second failed this is coming from trace add threads which tries to add the tid map not sure why there s a duplicate cmake build debug a lib include registry hpp std enable if t has type value definition lookup definition holder create key args assertion result second failed
| 1
|
192,313
| 6,848,579,281
|
IssuesEvent
|
2017-11-13 19:00:44
|
USGCRP/gcis
|
https://api.github.com/repos/USGCRP/gcis
|
opened
|
Add Images for Figures with a Single Panel
|
context Content Management priority high type content
|
For the CSSR, we only populated Images for Figures with multiple subpanels in the TSU system. We should go through and populate the images for single-panel figures.
|
1.0
|
Add Images for Figures with a Single Panel - For the CSSR, we only populated Images for Figures with multiple subpanels in the TSU system. We should go through and populate the images for single-panel figures.
|
priority
|
add images for figures with a single panel for the cssr we only populated images for figures with multiple subpanels in the tsu system we should go through and populate the images for single panel figures
| 1
|
610,515
| 18,910,436,488
|
IssuesEvent
|
2021-11-16 13:37:07
|
Michael-J-Scofield/discord-anti-spam
|
https://api.github.com/repos/Michael-J-Scofield/discord-anti-spam
|
closed
|
DiscordAPIError: Unknown Message
|
priority: high status: confirmed type: bug
|
I tried using this package a long time ago and due to this problem I gave up on trying to use it. I saw that the issue had been "fixed," so I gave it a try again, and I still am getting this issue when there is spam. I spam, I get the warning to please stop spamming, and then crash.
throw new DiscordAPIError(request.path, data, request.method, res.status);
^
DiscordAPIError: Unknown Message
at RequestHandler.execute (C:\Users\.....\node_modules\discord.js\src\rest\RequestHandler.js:154:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async RequestHandler.push (C:\Users\.....\node_modules\discord.js\src\rest\RequestHandler.js:39:14)
at async MessageManager.delete (C:\Users\.....\node_modules\discord.js\src\managers\MessageManager.js:126:5) {
method: 'delete',
path: '/channels/839381329891164211/messages/890143110783463445',
code: 10008,
httpStatus: 404
}
I have no other bots deleting messages. So I do not know what is causing it.

As you can see, spam, warning message, all messages got deleted before the warning, then app crash.
|
1.0
|
DiscordAPIError: Unknown Message - I tried using this package a long time ago and due to this problem I gave up on trying to use it. I saw that the issue had been "fixed," so I gave it a try again, and I still am getting this issue when there is spam. I spam, I get the warning to please stop spamming, and then crash.
throw new DiscordAPIError(request.path, data, request.method, res.status);
^
DiscordAPIError: Unknown Message
at RequestHandler.execute (C:\Users\.....\node_modules\discord.js\src\rest\RequestHandler.js:154:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async RequestHandler.push (C:\Users\.....\node_modules\discord.js\src\rest\RequestHandler.js:39:14)
at async MessageManager.delete (C:\Users\.....\node_modules\discord.js\src\managers\MessageManager.js:126:5) {
method: 'delete',
path: '/channels/839381329891164211/messages/890143110783463445',
code: 10008,
httpStatus: 404
}
I have no other bots deleting messages. So I do not know what is causing it.

As you can see, spam, warning message, all messages got deleted before the warning, then app crash.
|
priority
|
discordapierror unknown message i tried using this package a long time ago and due to this problem i gave up on trying to use it i saw that the issue had been fixed so i gave it a try again and i still am getting this issue when there is spam i spam i get the warning to please stop spamming and then crash throw new discordapierror request path data request method res status discordapierror unknown message at requesthandler execute c users node modules discord js src rest requesthandler js at processticksandrejections node internal process task queues at async requesthandler push c users node modules discord js src rest requesthandler js at async messagemanager delete c users node modules discord js src managers messagemanager js method delete path channels messages code httpstatus i have no other bots deleting messages so i do not know what is causing it as you can see spam warning message all messages got deleted before the warning then app crash
| 1
|
96,535
| 3,969,294,155
|
IssuesEvent
|
2016-05-03 22:55:30
|
Baystation12/Baystation12
|
https://api.github.com/repos/Baystation12/Baystation12
|
closed
|
Forced antag spawn while ghosting in autotraitor
|
⚠ priority: high ⚠
|
<!--
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them it's part of the formatting
-->
#### Brief description of the issue
Game mode was autotraitor, I latejoined as an observing ghost, approximately 15-20 minutes later I was forcibly spawned as an antag, butt nude, on my last ghost coordinate. No warning or hints were observed prior to spawning.
#### What you expected to happen
Not to get spawned suddenly when ghosting.
#### What actually happened
I spawned in the middle of a crowd as antag, butt nude, while ghosting.
#### Steps to reproduce
1. Set game mode to autotraitor
2. Enable antag roles in character setup
3. Latejoin as ghost
4. Wait until the game spawns you (?)
#### Additional info:
- **Server Revision**: a235b5f8c426d33d7204b2259592445b03ad3803
- **Game ID**: bIl-bCkN
- **Logs**:


Admin logs (provided by Ravensdale)
> ADMIN LOG: AUTOTRAITOR: Attempting autospawn.
ADMIN LOG: AUTOTRAITOR: traitor selected for spawn attempt.
DEBUG: TRAITOR: Found 1/3 active Traitors.
DEBUG: Haswell had traitor enabled, so we are drafting them.
DEBUG: Haswell has been selected for Traitor by lottery.
ADMIN LOG: AUTOTRAITOR: Auto-added a new Traitor.
ADMIN LOG: There are now 2/3 active Traitors.
Request for Help:: Haswell/(Samuel Anderson)(?) (PP) (VV) (SM) (JMP) (CA): uh
Request for Help:: Haswell/(Samuel Anderson)(?) (PP) (VV) (SM) (JMP) (CA): what...
PM to Haswell/(Samuel Anderson)(?): ?
Request for Help:: Haswell/(Samuel Anderson)(?) (PP) (VV) (SM) (JMP) (CA): I suddenly spawned as a butt nude guy, while ghosting
PM to Haswell/(Samuel Anderson)(?): You're certain you hit observe, and did it say anything else to you?
[Player PM] Haswell/(Samuel Anderson)(?): yes, I was ghosted for about 20 minutes too
PM to Haswell/(Samuel Anderson)(?): And then your randomly... became person?
[Player PM] Haswell/(Samuel Anderson)(?): was watching R&D, then suddenly I got spawned in
[Player PM] Haswell/(Samuel Anderson)(?): exactly
|
1.0
|
Forced antag spawn while ghosting in autotraitor - <!--
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them it's part of the formatting
-->
#### Brief description of the issue
Game mode was autotraitor, I latejoined as an observing ghost, approximately 15-20 minutes later I was forcibly spawned as an antag, butt nude, on my last ghost coordinate. No warning or hints were observed prior to spawning.
#### What you expected to happen
Not to get spawned suddenly when ghosting.
#### What actually happened
I spawned in the middle of a crowd as antag, butt nude, while ghosting.
#### Steps to reproduce
1. Set game mode to autotraitor
2. Enable antag roles in character setup
3. Latejoin as ghost
4. Wait until the game spawns you (?)
#### Additional info:
- **Server Revision**: a235b5f8c426d33d7204b2259592445b03ad3803
- **Game ID**: bIl-bCkN
- **Logs**:


Admin logs (provided by Ravensdale)
> ADMIN LOG: AUTOTRAITOR: Attempting autospawn.
ADMIN LOG: AUTOTRAITOR: traitor selected for spawn attempt.
DEBUG: TRAITOR: Found 1/3 active Traitors.
DEBUG: Haswell had traitor enabled, so we are drafting them.
DEBUG: Haswell has been selected for Traitor by lottery.
ADMIN LOG: AUTOTRAITOR: Auto-added a new Traitor.
ADMIN LOG: There are now 2/3 active Traitors.
Request for Help:: Haswell/(Samuel Anderson)(?) (PP) (VV) (SM) (JMP) (CA): uh
Request for Help:: Haswell/(Samuel Anderson)(?) (PP) (VV) (SM) (JMP) (CA): what...
PM to Haswell/(Samuel Anderson)(?): ?
Request for Help:: Haswell/(Samuel Anderson)(?) (PP) (VV) (SM) (JMP) (CA): I suddenly spawned as a butt nude guy, while ghosting
PM to Haswell/(Samuel Anderson)(?): You're certain you hit observe, and did it say anything else to you?
[Player PM] Haswell/(Samuel Anderson)(?): yes, I was ghosted for about 20 minutes too
PM to Haswell/(Samuel Anderson)(?): And then your randomly... became person?
[Player PM] Haswell/(Samuel Anderson)(?): was watching R&D, then suddenly I got spawned in
[Player PM] Haswell/(Samuel Anderson)(?): exactly
|
priority
|
forced antag spawn while ghosting in autotraitor put your answers on the blank lines below the headers the lines with four s don t edit them or delete them it s part of the formatting brief description of the issue game mode was autotraitor i latejoined as an observing ghost approximately minutes later i was forcibly spawned as an antag butt nude on my last ghost coordinate no warning or hints were observed prior to spawning what you expected to happen not to get spawned suddenly when ghosting what actually happened i spawned in the middle of a crowd as antag butt nude while ghosting steps to reproduce set game mode to autotraitor enable antag roles in character setup latejoin as ghost wait until the game spawns you additional info server revision game id bil bckn logs admin logs provided by ravensdale admin log autotraitor attempting autospawn admin log autotraitor traitor selected for spawn attempt debug traitor found active traitors debug haswell had traitor enabled so we are drafting them debug haswell has been selected for traitor by lottery admin log autotraitor auto added a new traitor admin log there are now active traitors request for help haswell samuel anderson pp vv sm jmp ca uh request for help haswell samuel anderson pp vv sm jmp ca what pm to haswell samuel anderson request for help haswell samuel anderson pp vv sm jmp ca i suddenly spawned as a butt nude guy while ghosting pm to haswell samuel anderson you re certain you hit observe and did it say anything else to you haswell samuel anderson yes i was ghosted for about minutes too pm to haswell samuel anderson and then your randomly became person haswell samuel anderson was watching r d then suddenly i got spawned in haswell samuel anderson exactly
| 1
|
680,335
| 23,266,598,211
|
IssuesEvent
|
2022-08-04 18:01:36
|
chakra-ui/chakra-ui
|
https://api.github.com/repos/chakra-ui/chakra-ui
|
closed
|
Preview build of CodeSandbox is broken
|
Priority: High 🚨
|
### Description
When I submit a PR and open the CodeSandbox preview, an error occurs regardless of the content of the PR.
### Link to Reproduction
https://codesandbox.io/s/create-react-app-ts-x2jy8c
### Steps to reproduce
1. Submit PR
2. Create CodeSandbox with preview build hosted on `pkg.csb.dev`.
3. The error `React.createContext is not a function` occurs
### Chakra UI Version
v2.2.3
### Browser
_No response_
### Operating System
- [ ] macOS
- [ ] Windows
- [ ] Linux
### Additional Information
Maybe #6356 is the cause.
|
1.0
|
Preview build of CodeSandbox is broken - ### Description
When I submit a PR and open the CodeSandbox preview, an error occurs regardless of the content of the PR.
### Link to Reproduction
https://codesandbox.io/s/create-react-app-ts-x2jy8c
### Steps to reproduce
1. Submit PR
2. Create CodeSandbox with preview build hosted on `pkg.csb.dev`.
3. The error `React.createContext is not a function` occurs
### Chakra UI Version
v2.2.3
### Browser
_No response_
### Operating System
- [ ] macOS
- [ ] Windows
- [ ] Linux
### Additional Information
Maybe #6356 is the cause.
|
priority
|
preview build of codesandbox is broken description when i ve submitted a pr and open codesandbox preview an error has been occurred regardless of the content of pr link to reproduction steps to reproduce submit pr create codesandbox with preview build hosted on pkg csb dev the error react createcontext is not a function has been occurred chakra ui version browser no response operating system macos windows linux additional information maybe is the cause
| 1
|
794,042
| 28,020,249,086
|
IssuesEvent
|
2023-03-28 04:27:14
|
decompme/decomp.me
|
https://api.github.com/repos/decompme/decomp.me
|
closed
|
Codemirror state not being properly updated with latest scratch contents after saving and then navigating to and from the scratch page
|
bug help wanted frontend high priority
|
https://discord.com/channels/897066363951128586/897066363951128590/1083756322874458253
Video that reproduces it above.
I can confirm this happens both on local builds and on local `yarn dev`. I also noticed that the network request for the scratch data includes the correct, up-to-date source code, and the site even knows that what is in the codemirror box is out of date because it says "unsaved" as soon as you navigate to the page. So as I thought,
<img width="296" alt="image" src="https://user-images.githubusercontent.com/2985314/227715336-faac338c-afc3-4f61-b288-9028154c6722.png">
|
1.0
|
Codemirror state not being properly updated with latest scratch contents after saving and then navigating to and from the scratch page - https://discord.com/channels/897066363951128586/897066363951128590/1083756322874458253
Video that reproduces it above.
I can confirm this happens both on local builds and on local `yarn dev`. I also noticed that the network request for the scratch data includes the correct, up-to-date source code, and the site even knows that what is in the codemirror box is out of date because it says "unsaved" as soon as you navigate to the page. So as I thought,
<img width="296" alt="image" src="https://user-images.githubusercontent.com/2985314/227715336-faac338c-afc3-4f61-b288-9028154c6722.png">
|
priority
|
codemirror state not being properly updated with latest scratch contents after saving and then navigating to and from the scratch page video that reproduces it above i can confirm this happens both on local builds and on local yarn dev i also noticed that the network request for the scratch data includes the correct up to date source code and the site even knows that what is in the codemirror box is out of date because it says unsaved as soon as you navigate to the page so as i thought img width alt image src
| 1
|
130,663
| 5,119,236,517
|
IssuesEvent
|
2017-01-08 15:57:08
|
graphcool/console
|
https://api.github.com/repos/graphcool/console
|
opened
|
Improvements with DateTime in databrowser
|
area/models/databrowser enhancement priority/high
|
When I create a `DateTime` field in the playground or in a script, this is the behaviour:
```graphql
mutation {
createMovie(
releaseDate: "2016-02-02"
title: "Movie"
) {
releaseDate
}
}
```
```json
{
"data": {
"createMovie": {
"movie": {
"releaseDate": "2016-02-02T00:00:00.000Z"
}
}
}
}
```
This is really helpful when I want to create dates and don't care about the time, for example for birthdays or release dates as in this example.
Unfortunately this behaviour is currently not possible in the console, because the date time picker automatically takes my timezone into account and I cannot add custom strings.
So if I select the 2nd February 2016 in the date time picker, this is the resulting ISO String shown in the date time picker: `2016-02-02 00:00:00+0100` which results in this ISO String stored in the backend: `2016-02-01T23:00:00.000Z`.
(As a side note, an unselected cell in the data browser also takes my timezone into account, which adds another level of confusion because now the datetime looks to have the correct value: `2/2/2016, 12:00:00 AM`. Maybe we could even use ISO format here to keep it consistent.)
To summarize, I want the ability in the data browser to specify dates that end up in this format in my project's backend: `2016-02-02T00:00:00.000Z`. Currently this is only possible when having the UTC timezone locally.
|
1.0
|
Improvements with DateTime in databrowser - When I create a `DateTime` field in the playground or in a script, this is the behaviour:
```graphql
mutation {
createMovie(
releaseDate: "2016-02-02"
title: "Movie"
) {
releaseDate
}
}
```
```json
{
"data": {
"createMovie": {
"movie": {
"releaseDate": "2016-02-02T00:00:00.000Z"
}
}
}
}
```
This is really helpful when I want to create dates and don't care about the time, for example for birthdays or release dates as in this example.
Unfortunately this behaviour is currently not possible in the console, because the date time picker automatically takes my timezone into account and I cannot add custom strings.
So if I select the 2nd February 2016 in the date time picker, this is the resulting ISO String shown in the date time picker: `2016-02-02 00:00:00+0100` which results in this ISO String stored in the backend: `2016-02-01T23:00:00.000Z`.
(As a side note, an unselected cell in the data browser also takes my timezone into account, which adds another level of confusion because now the datetime looks to have the correct value: `2/2/2016, 12:00:00 AM`. Maybe we could even use ISO format here to keep it consistent.)
To summarize, I want the ability in the data browser to specify dates that end up in this format in my project's backend: `2016-02-02T00:00:00.000Z`. Currently this is only possible when having the UTC timezone locally.
|
priority
|
improvements with datetime in databrowser when i create a datetime field in the playground or in a script this is the behaviour graphql mutation createmovie releasedate title movie releasedate json data createmovie movie releasedate this is really helpful when i want to create dates and don t care about the time for example for birthdays or release dates as in this example unfortunately this behaviour is currently not possible in the console because the date time picker automatically takes my timezone into account and i cannot add custom strings so if i select the february in the date time picker this is the resulting iso string shown in the date time picker which results in this iso string stored in the backend as a side note an unselected cell in the data browser also takes my timezone into account which adds another level of confusion because now the datetime looks to have the correct value am maybe we could even use iso format here to keep it consistent to summarize i want the ability in the data browser to specify dates that end up in this format in my project s backend currently this is only possible when having the utc timezone locally
| 1
|
107,302
| 4,301,152,168
|
IssuesEvent
|
2016-07-20 06:11:13
|
ClinGen/clincoded
|
https://api.github.com/repos/ClinGen/clincoded
|
closed
|
Remove the "Return to Evidence" button
|
priority: high R7alpha1 release ready variant curation interface
|
Once in an Interpretation we will not give curators the ability to return to the evidence pool. For now we will remove the "Return to Evidence" button.
|
1.0
|
Remove the "Return to Evidence" button - Once in an Interpretation we will not give curators the ability to return to the evidence pool. For now we will remove the "Return to Evidence" button.
|
priority
|
remove the return to evidence button once in an interpretation we will not give curators the ability to return to the evidence pool for now we will remove the return to evidence button
| 1
|
815,044
| 30,534,043,766
|
IssuesEvent
|
2023-07-19 16:00:59
|
awslabs/aws-dataall
|
https://api.github.com/repos/awslabs/aws-dataall
|
closed
|
Feature request: disable profiling results view to secret datasets
|
type: newfeature priority: high
|
**Is your idea related to a problem? Please describe.**
Since dataset preview is disabled for "secret" datasets, it also seems logical to restrict the view of the profiling results.
**Describe the solution you'd like**
If a dataset has a confidentiality=secret, profiling results are not visible
*P.S. Don't attach files. Please, prefer add code snippets directly in the message body.*
|
1.0
|
Feature request: disable profiling results view to secret datasets - **Is your idea related to a problem? Please describe.**
Since dataset preview is disabled for "secret" datasets, it also seems logical to restrict the view of the profiling results.
**Describe the solution you'd like**
If a dataset has a confidentiality=secret, profiling results are not visible
*P.S. Don't attach files. Please, prefer add code snippets directly in the message body.*
|
priority
|
feature request disable profiling results view to secret datasets is your idea related to a problem please describe since datasets preview is disabled for secret datasets it also seems logical to restrict the view of the profiling results describe the solution you d like if a dataset has a confidentiality secret profiling results are not visible p s don t attach files please prefer add code snippets directly in the message body
| 1
|
484,425
| 13,939,618,187
|
IssuesEvent
|
2020-10-22 16:44:11
|
interferences-at/mpop
|
https://api.github.com/repos/interferences-at/mpop
|
closed
|
Draw a black region on the right of the screen (dataviz)
|
OpenGL difficulty: medium mpop_dataviz platform: windows priority: high question
|
The screen is 1920x1080, but we want to draw only black on the right.
|
1.0
|
Draw a black region on the right of the screen (dataviz) - The screen is 1920x1080, but we want to draw only black on the right.
|
priority
|
draw a black region on the right of the screen dataviz the screen is but we want to draw only black on the right
| 1
|
606,980
| 18,771,258,570
|
IssuesEvent
|
2021-11-06 21:54:15
|
certbot/certbot
|
https://api.github.com/repos/certbot/certbot
|
opened
|
windows: webroot web.config can cause an HTTP 500 due to duplicate mimeMap
|
area: webroot area: windows priority: high
|
The feature we added in #9054 is causing problems.
If there is another `web.config` file, higher up the directory tree than `acme-challenge/`, which also defines a `mimeMap` for extensionless files, then IIS will crash on the duplicate `mimeMap` as it merges the configs.
I think this is probably a likely scenario for our Windows users to encounter, because I recall seeing (and have posted myself) advice to create `web.config` files in the `.well-known` directory rather than the `acme-challenge` directory.
Rather than traversing the filesystem looking for other `web.config` files, one way to fix this could be to use the `remove` element to remove any existing `mimeMap` for extensionless files, before we add ours:
```xml
<staticContent>
<remove fileExtension="."/>
<mimeMap fileExtension="." mimeType="text/plain" />
</staticContent>
```
That seems to work on Windows Server 2019.
One thing that makes me nervous is that `remove` is conspicuously missing from the documentation for `mimeMap` and `staticContent`. I found the solution on some random blog.
cc @adferrand
|
1.0
|
windows: webroot web.config can cause an HTTP 500 due to duplicate mimeMap - The feature we added in #9054 is causing problems.
If there is another `web.config` file, higher up the directory tree than `acme-challenge/`, which also defines a `mimeMap` for extensionless files, then IIS will crash on the duplicate `mimeMap` as it merges the configs.
I think this is probably a likely scenario for our Windows users to encounter, because I recall seeing (and have posted myself) advice to create `web.config` files in the `.well-known` directory rather than the `acme-challenge` directory.
Rather than traversing the filesystem looking for other `web.config` files, one way to fix this could be to use the `remove` element to remove any existing `mimeMap` for extensionless files, before we add ours:
```xml
<staticContent>
<remove fileExtension="."/>
<mimeMap fileExtension="." mimeType="text/plain" />
</staticContent>
```
That seems to work on Windows Server 2019.
One thing that makes me nervous is that `remove` is conspicuously missing from the documentation for `mimeMap` and `staticContent`. I found the solution on some random blog.
cc @adferrand
|
priority
|
windows webroot web config can cause an http due to duplicate mimemap the feature we added in is causing problems if there is another web config file high up the directory tree than in acme challenge which also defines a mimemap for extensionless files then iis will crash on the duplicate mimemap as it merges the configs i think this probably a likely scenario for our windows user to encounter because i recall seeing and have posted myself advice to create web config files in the well known directory rather than the acme challenge directory rather than traversing the filesystem looking for other web config files one way to fix this could be to use the remove element to remove any existing mimemap for extensionless files before we add ours xml that seems to work on windows server one thing that makes me nervous is that remove is consipciously missing from the documentation for mimemap and staticcontent i found the solution on some random blog cc adferrand
| 1
|
477,839
| 13,769,028,621
|
IssuesEvent
|
2020-10-07 17:58:52
|
yairEO/tagify
|
https://api.github.com/repos/yairEO/tagify
|
closed
|
Tag when edited by double clicking changed to HTML code
|
Bug: high priority
|
It appears to be a bug triggered by double clicking a tag and typing a new tag name. The result is that the tag becomes scrambled with HTML code. The issue was reproduced on the Vue Example https://codesandbox.io/s/tagify-tags-component-vue-example-l8ok4

|
1.0
|
Tag when edited by double clicking changed to HTML code - It appears to be a bug triggered by double clicking a tag and typing a new tag name. The result is that the tag becomes scrambled with HTML code. The issue was reproduced on the Vue Example https://codesandbox.io/s/tagify-tags-component-vue-example-l8ok4

|
priority
|
tag when edited by double clicking changed to html code it appears to be a bug triggered by double clicking a tag type the new tag name the result is the tag became scrambled with html code the issue was reproduced on the vue example
| 1
|
465,590
| 13,388,477,664
|
IssuesEvent
|
2020-09-02 17:25:48
|
chime-experiment/coco
|
https://api.github.com/repos/chime-experiment/coco
|
opened
|
Add option to directly reply the reply of a single host
|
enhancement priority/high
|
For https://github.com/chime-experiment/Infrastructure/issues/154 coco needs a report option to simplify a reply from a single host to just that reply, without the typical coco report overhead:
```
["cn3g2:12048", "cn3g8:12048", "cs8g0:12048"]
```
instead of
```
{"success": true, "blocklist": {"http://coco/": {"reply": ["cn3g2:12048", "cn3g8:12048", "cs8g0:12048"], "status": 200}}}
```
|
1.0
|
Add option to directly reply the reply of a single host - For https://github.com/chime-experiment/Infrastructure/issues/154 coco needs a report option to simplify a reply from a single host to just that reply, without the typical coco report overhead:
```
["cn3g2:12048", "cn3g8:12048", "cs8g0:12048"]
```
instead of
```
{"success": true, "blocklist": {"http://coco/": {"reply": ["cn3g2:12048", "cn3g8:12048", "cs8g0:12048"], "status": 200}}}
```
|
priority
|
add option to directly reply the reply of a single host for coco needs a report option to simplify a reply from a single host to just that reply without the typical coco report overhead instead of success true blocklist reply status
| 1
|
388,800
| 11,492,694,272
|
IssuesEvent
|
2020-02-11 21:30:30
|
netdata/netdata
|
https://api.github.com/repos/netdata/netdata
|
closed
|
ACLK - Packaging and deploying the custom fork of libmosquitto
|
area/packaging feature request priority/high
|
##### Feature idea summary
In order to deploy the ACLK feature into production we are using a custom version of libmosquitto that has been upgraded to support our use-case (wss+MQTT). This currently exists as a repo that we have forked. The long-term plan is to push the changes upstream and improve the external library.
In the short-term we must deploy our custom version to be able to establish the ACLK. This needs to be a custom build step that:
* Pulls in the latest version of the forked library (netdata/mosquitto).
* Compiles a static library.
* Compiles netdata to include the static library.
Issue #7740 will result in:
* Instructions for pulling the fork / building the library.
* Necessary changes to autotools / CMake.
##### Expected behavior
All Netdata installation options (the installer, kickstart, binary packages) need to be updated to include these new changes and check that we can deploy them to supported platforms.
These changes are required to allow deployment of the ACLK for the February release.
|
1.0
|
ACLK - Packaging and deploying the custom fork of libmosquitto - ##### Feature idea summary
In order to deploy the ACLK feature into production we are using a custom version of libmosquitto that has been upgraded to support our use-case (wss+MQTT). This currently exists as a repo that we have forked. The long-term plan is to push the changes upstream and improve the external library.
In the short-term we must deploy our custom version to be able to establish the ACLK. This needs to be a custom build step that:
* Pulls in the latest version of the forked library (netdata/mosquitto).
* Compiles a static library.
* Compiles netdata to include the static library.
Issue #7740 will result in:
* Instructions for pulling the fork / building the library.
* Necessary changes to autotools / CMake.
##### Expected behavior
All Netdata installation options (the installer, kickstart, binary packages) need to be updated to include these new changes and check that we can deploy them to supported platforms.
These changes are required to allow deployment of the ACLK for the February release.
|
priority
|
aclk packaging and deploying the custom fork of libmosquitto feature idea summary in order to deploy the aclk feature into production we are using a custom version of libmosquitto that has been upgraded to support our use case wss mqtt this currently exists as a repo that we have forked the long term plan is to push the changes upstream and improve the external library in the short term we must deploy our custom version to be able to establish the aclk this needs to be a custom build step that pulls in the latest version of the forked library netdata mosquitto compiles a static library compiles netdata to include the static library issue will result in instructions for pulling the fork building the library necessary changes to autotools cmake expected behavior all netdata installation options the installer kickstart binary packages need to be updated to include these new changes and check that we can deploy them to supported platforms these changes are required to allow deployment of the aclk for the february release
| 1
|
236,291
| 7,748,349,644
|
IssuesEvent
|
2018-05-30 08:02:48
|
Gloirin/m2gTest
|
https://api.github.com/repos/Gloirin/m2gTest
|
closed
|
0002308:
add printing for calendar
|
Calendar Feature Request high priority
|
**Reported by pschuele on 18 Feb 2010 10:38**
add printing for calendar
|
1.0
|
0002308:
add printing for calendar - **Reported by pschuele on 18 Feb 2010 10:38**
add printing for calendar
|
priority
|
add printing for calendar reported by pschuele on feb add printing for calendar
| 1
|
87,082
| 3,736,748,091
|
IssuesEvent
|
2016-03-08 16:54:19
|
macmillanpublishers/Word-template
|
https://api.github.com/repos/macmillanpublishers/Word-template
|
opened
|
Info/Help button/notification
|
effort:low priority:high type:enhancement
|
Want to have template included on standard disk image w/ new software, so people w/o training will have access to macros. We'll need to add a Help button w/ basic description of the toolbar and link to Confluence, AND/OR a popup the first time the template is run (can write to registry/plist) letting the user know about the tab.
|
1.0
|
Info/Help button/notification - Want to have template included on standard disk image w/ new software, so people w/o training will have access to macros. We'll need to add a Help button w/ basic description of the toolbar and link to Confluence, AND/OR a popup the first time the template is run (can write to registry/plist) letting the user know about the tab.
|
priority
|
info help button notification want to have template included on standard disk image w new software so people w o training will have access to macros we ll need to add a help button w basic description of the toolbar and link to confluence and or a popup the first time the template is run can write to registry plist letting the user know about the tab
| 1
|
180,771
| 6,653,333,050
|
IssuesEvent
|
2017-09-29 07:56:15
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Introduce autocomplete editor into feature editor plugin
|
enhancement in progress Priority: High Project: C040
|
We want to use the same autocomplete field present in the query form.
- [ ] Perform a DescribeProcess request
If the wps is available (**only for string** field):
- [ ] then render an autocomplete editor
- [ ] else render the basic editors based on the type
|
1.0
|
Introduce autocomplete editor into feature editor plugin - We want to use the same autocomplete field present in the query form.
- [ ] Perform a DescribeProcess request
If the wps is available (**only for string** field):
- [ ] then render an autocomplete editor
- [ ] else render the basic editors based on the type
|
priority
|
introduce autocomplete editor into feature editor plugin we want to use the same autocomplete field present in the query form perform a decribeprocess request if the wps is available only for string field then render an autocomplete editor else render the basic editors based on the type
| 1
|
153,064
| 5,874,178,034
|
IssuesEvent
|
2017-05-15 15:30:48
|
xcat2/xcat-core
|
https://api.github.com/repos/xcat2/xcat-core
|
closed
|
Regular Expression broken in latest development build 5/12/17
|
priority:high status:pending type:bug
|
Some changes that were implemented in #2412 are now broken with the latest build.
```
[root@fs3 IB]# lsxcatd -v
Version 2.13.4 (git commit 6fb4f5aa4ad67cad9febf241428b5a6b60e9ad7b, built Fri May 12 06:15:48 EDT 2017)
```
@hu-weihua @neo954 Can we look at the regular expression testcases and find out why we did not cover it... ? https://github.com/xcat2/xcat-core/pull/3007 was added to cover some, but looks like it does not catch this issue? Can we create some testcase asap against today's build so we can validate that it does catch it? This is not the first time we broke regex support.
Using todays development build, rerunning the testcase in #2412....
```
[root@fs3 IB]# mkdef -t group -o testnicips
Warning: Cannot determine a member list for group 'testnicips'.
1 object definitions have been created or modified.
[root@fs3 IB]# chdef -t group -o testnicips ip='|\D+(\d+)\D+(\d+)|10.1.($1).($2)|'
1 object definitions have been created or modified.
[root@fs3 IB]# chdef -t group -o testnicips nicips.eth1='|\D+(\d+)\D+(\d+)|10.1.($1).($2)|'
1 object definitions have been created or modified.
[root@fs3 IB]# lsdef -t group -o testnicips
Object name: testnicips
grouptype=static
ip=|\D+(\d+)\D+(\d+)|10.1.($1).($2)|
members=
nicips.eth1=testnicips <---- this is even worse, it's resolving to the name of the object?!
```
|
1.0
|
Regular Expression broken in latest development build 5/12/17 - Some changes that were implemented in #2412 are now broken with the latest build.
```
[root@fs3 IB]# lsxcatd -v
Version 2.13.4 (git commit 6fb4f5aa4ad67cad9febf241428b5a6b60e9ad7b, built Fri May 12 06:15:48 EDT 2017)
```
@hu-weihua @neo954 Can we look at the regular expression testcases and find out why we did not cover it... ? https://github.com/xcat2/xcat-core/pull/3007 was added to cover some, but looks like it does not catch this issue? Can we create some testcase asap against today's build so we can validate that it does catch it? This is not the first time we broke regex support.
Using todays development build, rerunning the testcase in #2412....
```
[root@fs3 IB]# mkdef -t group -o testnicips
Warning: Cannot determine a member list for group 'testnicips'.
1 object definitions have been created or modified.
[root@fs3 IB]# chdef -t group -o testnicips ip='|\D+(\d+)\D+(\d+)|10.1.($1).($2)|'
1 object definitions have been created or modified.
[root@fs3 IB]# chdef -t group -o testnicips nicips.eth1='|\D+(\d+)\D+(\d+)|10.1.($1).($2)|'
1 object definitions have been created or modified.
[root@fs3 IB]# lsdef -t group -o testnicips
Object name: testnicips
grouptype=static
ip=|\D+(\d+)\D+(\d+)|10.1.($1).($2)|
members=
nicips.eth1=testnicips <---- this is even worse, it's resolving to the name of the object?!
```
|
priority
|
regular expression broken it lastest development build some changes that were implemented in are now broken with the latest build lsxcatd v version git commit built fri may edt hu weihua can we look at the regular expression testcases and find out why we did not cover it was added to cover some but looks like it does not catch this issue can we create some testcase asap against today s build so we can validate that it does catch it this is not the first time we broke regex support using todays development build rerunning the testcase in mkdef t group o testnicips warning cannot determine a member list for group testnicips object definitions have been created or modified chdef t group o testnicips ip d d d d object definitions have been created or modified chdef t group o testnicips nicips d d d d object definitions have been created or modified lsdef t group o testnicips object name testnicips grouptype static ip d d d d members nicips testnicips this is even worse it s resolving to the name of the object
| 1
|
371,184
| 10,962,649,482
|
IssuesEvent
|
2019-11-27 17:43:55
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
long duration timeouts can cause loss of time
|
area: Kernel bug has-pr priority: high
|
Multiple timer drivers work by recording a counter value at the point ticks were last announced, and tracking elapsed time since that event to get the current time. Deadlines for new events are calculated by adding the requested duration to the elapsed time and setting a compare event at the last announce time plus this offset. (With rounding/conversion from ticks to cycles.)
Several driver implementations try to avoid counter overflows by limiting the maximum number of ticks to a value that can be represented without overflowing the cycle counter span. This worked in pre-tickless builds because the elapsed time was always less than one tick.
It fails in tickless because when the elapsed time is added to the converted maximum ticks it can cause a counter wrap.
This is the problem underlying #20892. It's also present in hpet and systick, and probably multiple other drivers.
|
1.0
|
long duration timeouts can cause loss of time - Multiple timer drivers work by recording a counter value at the point ticks were last announced, and tracking elapsed time since that event to get the current time. Deadlines for new events are calculated by adding the requested duration to the elapsed time and setting a compare event at the last announce time plus this offset. (With rounding/conversion from ticks to cycles.)
Several driver implementations try to avoid counter overflows by limiting the maximum number of ticks to a value that can be represented without overflowing the cycle counter span. This worked in pre-tickless builds because the elapsed time was always less than one tick.
It fails in tickless because when the elapsed time is added to the converted maximum ticks it can cause a counter wrap.
This is the problem underlying #20892. It's also present in hpet and systick, and probably multiple other drivers.
|
priority
|
long duration timeouts can cause loss of time multiple timer drivers work by recording a counter value at the point ticks were last announced and tracking elapsed time since that event to get the current time deadlines for new events are calculated by adding the requested duration to the elapsed time and setting a compare event at the last announce time plus this offset with rounding conversion from ticks to cycles several driver implementations try to avoid counter overflows by limiting the maximum number of ticks to a value that can be represented without overflowing the cycle counter span this worked in pre tickless builds because the elapsed time was always less than one tick it fails in tickless because when the elapsed time is added to the converted maximum ticks it can cause a counter wrap this is the problem underlying it s also present in hpet and systick and probably multiple other drivers
| 1
|
293,828
| 9,009,853,757
|
IssuesEvent
|
2019-02-05 10:16:58
|
Taxmannen/In-Heaven
|
https://api.github.com/repos/Taxmannen/In-Heaven
|
closed
|
[Script][NewBossFrameWork][BossAttack] Pattern Shot
|
Class High Priority
|
Boss shots in a specified pattern.
#### Behavior
Boss shoots out bullets according to one Pattern.
#### Adjustable Variables
List of Variables:
- Pattern: Bullets with information where and when it should be shoot.
- NumberOfPatternCycles: How many times it should shoot that pattern.
- TimeBetweenPatterns: The time between completing a pattern and then using it again.
- WaitTimeWhenDone: The time between the completion of this attack and starting a new Attack.
##### If Possible / Future Task / Discussion
Might be an idea to have it shoot multiple patterns during the attack. So it would hold multiple Patterns, and shoot them in the order they are in an array, or by time.
#### Attached Documents
Link and Description on where to find information related to the Task
|
1.0
|
[Script][NewBossFrameWork][BossAttack] Pattern Shot - Boss shots in a specified pattern.
#### Behavior
Boss shoots out bullets according to one Pattern.
#### Adjustable Variables
List of Variables:
- Pattern: Bullets with information where and when it should be shoot.
- NumberOfPatternCycles: How many times it should shoot that pattern.
- TimeBetweenPatterns: The time between completing a pattern and then using it again.
- WaitTimeWhenDone: The time between the completion of this attack and starting a new Attack.
##### If Possible / Future Task / Discussion
Might be an idea to have it shoot multiple patterns during the attack. So it would hold multiple Patterns, and shoot them in the order they are in an array, or by time.
#### Attached Documents
Link and Description on where to find information related to the Task
|
priority
|
pattern shot boss shots in a specified pattern behavior boss shoots out bullets according to one pattern adjustable variables list of variables pattern bullets with information where and when it should be shoot numberofpatterncycles how many times it shuold shoot that pattern timebetweenpatterns the time between completing a pattern and then using it again waittimewhendone the time between the completion of this attack and starting a new attack if possible future task discussion might be a idea of it shooting multiple patterns during the attack so it will hold multiple patterns and shoot them in the order they are in a array or by time attatched documents link and description on where to find information related to the task
| 1
|
296,896
| 9,134,584,677
|
IssuesEvent
|
2019-02-26 00:28:56
|
giampaolo/psutil
|
https://api.github.com/repos/giampaolo/psutil
|
closed
|
psutil-5.4.7 fails tests on macOS
|
OpSys-OSX Priority-High bug
|
make tests fails on macOS with python 3.4, 3.5, 3.6, and 3.7:
```
... skip long output ...
psutil.tests.test_windows.TestSystemAPIs.test_pids ... skipped 'WINDOWS only'
psutil.tests.test_windows.TestSystemAPIs.test_total_phymem ... skipped 'WINDOWS only'
======================================================================
ERROR: psutil.tests.test_unicode.TestFSAPIs.test_memory_maps
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_unicode.py", line 276, in test_memory_maps
with copyload_shared_lib(dst_prefix=self.funky_name) as funky_path:
File "/sw/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/__init__.py", line 1197, in copyload_shared_lib
libs = [x.path for x in psutil.Process().memory_maps() if
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: setUpClass (psutil.tests.test_unicode.TestFSAPIsWithInvalidPath)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_unicode.py", line 149, in setUpClass
create_exe(cls.funky_name)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/__init__.py", line 783, in create_exe
shutil.copyfile(PYTHON_EXE, outpath)
File "/sw/lib/python3.5/shutil.py", line 121, in copyfile
with open(dst, 'wb') as fdst:
OSError: [Errno 92] Illegal byte sequence: '/Users/michael/Downloads/psutil-5.4.7/@psutil-test-72120f\udcc0\udc80'
======================================================================
ERROR: psutil.tests.test_misc.TestMisc.test_process_as_dict_no_new_names
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_misc.py", line 210, in test_process_as_dict_no_new_names
self.assertNotIn('foo', p.as_dict())
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 526, in as_dict
ret = meth()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: psutil.tests.test_misc.TestMisc.test_serialization
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_misc.py", line 352, in test_serialization
check(psutil.Process().as_dict())
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 526, in as_dict
ret = meth()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: psutil.tests.test_posix.TestProcess.test_num_fds
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_posix.py", line 271, in test_num_fds
call(p, name)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_posix.py", line 251, in call
attr(*args)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: psutil.tests.test_process.TestProcess.test_memory_maps
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_process.py", line 615, in test_memory_maps
maps = p.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: psutil.tests.test_process.TestProcess.test_memory_maps_lists_lib
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_process.py", line 656, in test_memory_maps_lists_lib
with copyload_shared_lib() as path:
File "/sw/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/__init__.py", line 1197, in copyload_shared_lib
libs = [x.path for x in psutil.Process().memory_maps() if
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
FAIL: psutil.tests.test_contracts.TestFetchAllProcesses.test_fetch_all
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_contracts.py", line 355, in test_fetch_all
meth(ret, p)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_contracts.py", line 381, in exe
assert os.access(ret, os.X_OK)
AssertionError
======================================================================
FAIL: psutil.tests.test_posix.TestProcess.test_name
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_posix.py", line 159, in test_name
self.assertEqual(name_ps, name_psutil)
AssertionError: 'python' != 'pythonm'
- python
+ pythonm
? +
======================================================================
FAIL: psutil.tests.test_system.TestSystemAPIs.test_cpu_times_time_increases
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_system.py", line 325, in test_cpu_times_time_increases
self.fail("difference %s" % difference)
AssertionError: difference 0.0
----------------------------------------------------------------------
Ran 518 tests in 21.425s
FAILED (failures=3, errors=7, skipped=241)
```
|
1.0
|
psutil-5.4.7 fails tests on macOS - `make test` fails on macOS with Python 3.4, 3.5, 3.6, and 3.7:
```
... skip long output ...
psutil.tests.test_windows.TestSystemAPIs.test_pids ... skipped 'WINDOWS only'
psutil.tests.test_windows.TestSystemAPIs.test_total_phymem ... skipped 'WINDOWS only'
======================================================================
ERROR: psutil.tests.test_unicode.TestFSAPIs.test_memory_maps
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_unicode.py", line 276, in test_memory_maps
with copyload_shared_lib(dst_prefix=self.funky_name) as funky_path:
File "/sw/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/__init__.py", line 1197, in copyload_shared_lib
libs = [x.path for x in psutil.Process().memory_maps() if
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: setUpClass (psutil.tests.test_unicode.TestFSAPIsWithInvalidPath)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_unicode.py", line 149, in setUpClass
create_exe(cls.funky_name)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/__init__.py", line 783, in create_exe
shutil.copyfile(PYTHON_EXE, outpath)
File "/sw/lib/python3.5/shutil.py", line 121, in copyfile
with open(dst, 'wb') as fdst:
OSError: [Errno 92] Illegal byte sequence: '/Users/michael/Downloads/psutil-5.4.7/@psutil-test-72120f\udcc0\udc80'
======================================================================
ERROR: psutil.tests.test_misc.TestMisc.test_process_as_dict_no_new_names
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_misc.py", line 210, in test_process_as_dict_no_new_names
self.assertNotIn('foo', p.as_dict())
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 526, in as_dict
ret = meth()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: psutil.tests.test_misc.TestMisc.test_serialization
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_misc.py", line 352, in test_serialization
check(psutil.Process().as_dict())
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 526, in as_dict
ret = meth()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: psutil.tests.test_posix.TestProcess.test_num_fds
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_posix.py", line 271, in test_num_fds
call(p, name)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_posix.py", line 251, in call
attr(*args)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: psutil.tests.test_process.TestProcess.test_memory_maps
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_process.py", line 615, in test_memory_maps
maps = p.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: psutil.tests.test_process.TestProcess.test_memory_maps_lists_lib
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_process.py", line 656, in test_memory_maps_lists_lib
with copyload_shared_lib() as path:
File "/sw/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/__init__.py", line 1197, in copyload_shared_lib
libs = [x.path for x in psutil.Process().memory_maps() if
File "/Users/michael/Downloads/psutil-5.4.7/psutil/__init__.py", line 1111, in memory_maps
it = self._proc.memory_maps()
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 335, in wrapper
return fun(self, *args, **kwargs)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/_psosx.py", line 576, in memory_maps
return cext.proc_memory_maps(self.pid)
OSError: [Errno 22] Invalid argument
======================================================================
FAIL: psutil.tests.test_contracts.TestFetchAllProcesses.test_fetch_all
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_contracts.py", line 355, in test_fetch_all
meth(ret, p)
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_contracts.py", line 381, in exe
assert os.access(ret, os.X_OK)
AssertionError
======================================================================
FAIL: psutil.tests.test_posix.TestProcess.test_name
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_posix.py", line 159, in test_name
self.assertEqual(name_ps, name_psutil)
AssertionError: 'python' != 'pythonm'
- python
+ pythonm
? +
======================================================================
FAIL: psutil.tests.test_system.TestSystemAPIs.test_cpu_times_time_increases
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/michael/Downloads/psutil-5.4.7/psutil/tests/test_system.py", line 325, in test_cpu_times_time_increases
self.fail("difference %s" % difference)
AssertionError: difference 0.0
----------------------------------------------------------------------
Ran 518 tests in 21.425s
FAILED (failures=3, errors=7, skipped=241)
```
|
priority
|
psutil fails tests on macos make tests fails on macos with python and skip long output psutil tests test windows testsystemapis test pids skipped windows only psutil tests test windows testsystemapis test total phymem skipped windows only error psutil tests test unicode testfsapis test memory maps traceback most recent call last file users michael downloads psutil psutil tests test unicode py line in test memory maps with copyload shared lib dst prefix self funky name as funky path file sw lib contextlib py line in enter return next self gen file users michael downloads psutil psutil tests init py line in copyload shared lib libs x path for x in psutil process memory maps if file users michael downloads psutil psutil init py line in memory maps it self proc memory maps file users michael downloads psutil psutil psosx py line in wrapper return fun self args kwargs file users michael downloads psutil psutil psosx py line in memory maps return cext proc memory maps self pid oserror invalid argument error setupclass psutil tests test unicode testfsapiswithinvalidpath traceback most recent call last file users michael downloads psutil psutil tests test unicode py line in setupclass create exe cls funky name file users michael downloads psutil psutil tests init py line in create exe shutil copyfile python exe outpath file sw lib shutil py line in copyfile with open dst wb as fdst oserror illegal byte sequence users michael downloads psutil psutil test error psutil tests test misc testmisc test process as dict no new names traceback most recent call last file users michael downloads psutil psutil tests test misc py line in test process as dict no new names self assertnotin foo p as dict file users michael downloads psutil psutil init py line in as dict ret meth file users michael downloads psutil psutil init py line in memory maps it self proc memory maps file users michael downloads psutil psutil psosx py line in wrapper return fun self args kwargs file users michael 
downloads psutil psutil psosx py line in memory maps return cext proc memory maps self pid oserror invalid argument error psutil tests test misc testmisc test serialization traceback most recent call last file users michael downloads psutil psutil tests test misc py line in test serialization check psutil process as dict file users michael downloads psutil psutil init py line in as dict ret meth file users michael downloads psutil psutil init py line in memory maps it self proc memory maps file users michael downloads psutil psutil psosx py line in wrapper return fun self args kwargs file users michael downloads psutil psutil psosx py line in memory maps return cext proc memory maps self pid oserror invalid argument error psutil tests test posix testprocess test num fds traceback most recent call last file users michael downloads psutil psutil tests test posix py line in test num fds call p name file users michael downloads psutil psutil tests test posix py line in call attr args file users michael downloads psutil psutil init py line in memory maps it self proc memory maps file users michael downloads psutil psutil psosx py line in wrapper return fun self args kwargs file users michael downloads psutil psutil psosx py line in memory maps return cext proc memory maps self pid oserror invalid argument error psutil tests test process testprocess test memory maps traceback most recent call last file users michael downloads psutil psutil tests test process py line in test memory maps maps p memory maps file users michael downloads psutil psutil init py line in memory maps it self proc memory maps file users michael downloads psutil psutil psosx py line in wrapper return fun self args kwargs file users michael downloads psutil psutil psosx py line in memory maps return cext proc memory maps self pid oserror invalid argument error psutil tests test process testprocess test memory maps lists lib traceback most recent call last file users michael downloads psutil psutil 
tests test process py line in test memory maps lists lib with copyload shared lib as path file sw lib contextlib py line in enter return next self gen file users michael downloads psutil psutil tests init py line in copyload shared lib libs x path for x in psutil process memory maps if file users michael downloads psutil psutil init py line in memory maps it self proc memory maps file users michael downloads psutil psutil psosx py line in wrapper return fun self args kwargs file users michael downloads psutil psutil psosx py line in memory maps return cext proc memory maps self pid oserror invalid argument fail psutil tests test contracts testfetchallprocesses test fetch all traceback most recent call last file users michael downloads psutil psutil tests test contracts py line in test fetch all meth ret p file users michael downloads psutil psutil tests test contracts py line in exe assert os access ret os x ok assertionerror fail psutil tests test posix testprocess test name traceback most recent call last file users michael downloads psutil psutil tests test posix py line in test name self assertequal name ps name psutil assertionerror python pythonm python pythonm fail psutil tests test system testsystemapis test cpu times time increases traceback most recent call last file users michael downloads psutil psutil tests test system py line in test cpu times time increases self fail difference s difference assertionerror difference ran tests in failed failures errors skipped
| 1
|
488,337
| 14,076,193,810
|
IssuesEvent
|
2020-11-04 10:07:50
|
dhis2/d2
|
https://api.github.com/repos/dhis2/d2
|
closed
|
After saving a newly created Model update the instance with the Id
|
core enhancement model priority:high wontfix
|
After calling `.save()` on a model, we send a `POST` to create the new entity. The `model` should be updated to reflect this newly created instance.
- The `id` should be correctly set onto the model.
- Reload a `fresh` copy of this instance to get the default values that were added.
@see dhis2/maintenance-app#14
|
1.0
|
After saving a newly created Model update the instance with the Id - After calling `.save()` on a model, we send a `POST` to create the new entity. The `model` should be updated to reflect this newly created instance.
- The `id` should be correctly set onto the model.
- Reload a `fresh` copy of this instance to get the default values that were added.
@see dhis2/maintenance-app#14
|
priority
|
after saving a newly created model update the instance with the id after calling save on a model we send a post to create the new entity the model should be updated to reflect this newly created instance the id should be correctly set onto the model reload a fresh copy of this instance to get the default values that were added see maintenance app
| 1
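The behavior requested in the d2 issue above — set the server-assigned `id` on the model after the `POST`, then reload a fresh copy so server-side defaults appear — can be sketched as follows. The in-memory `FakeApi`, the response shape, and the `publicAccess` default field are hypothetical stand-ins for illustration, not d2's real interface:

```python
class FakeApi:
    """Hypothetical in-memory stand-in for the HTTP layer."""
    def __init__(self):
        self._store = {}
        self._next_id = 1

    def post(self, payload):
        uid = "id%d" % self._next_id
        self._next_id += 1
        # The "server" adds default values the client never sent.
        self._store[uid] = {"id": uid, "publicAccess": "r-------", **payload}
        return {"response": {"uid": uid}}

    def get(self, uid):
        return dict(self._store[uid])


class Model:
    def __init__(self, api, **fields):
        self._api = api
        self.id = None
        self.__dict__.update(fields)

    def save(self):
        created = self._api.post({"name": self.name})
        self.id = created["response"]["uid"]           # 1. set the new id
        self.__dict__.update(self._api.get(self.id))   # 2. reload fresh defaults
        return self


m = Model(FakeApi(), name="Data element").save()
assert m.id == "id1"
assert m.publicAccess == "r-------"  # default added server-side
```

Returning `self` from `save()` keeps the call chainable, and the reload step is what surfaces defaults the creation request never contained.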
|
275,836
| 8,581,247,513
|
IssuesEvent
|
2018-11-13 14:15:10
|
swe-ms-boun/2018fall-swe574-g1
|
https://api.github.com/repos/swe-ms-boun/2018fall-swe574-g1
|
opened
|
Web Annotation Protocol
|
module.annotation priority.high type.annotation type.document type.research
|
Turns out API style calls are inadequate for our purposes. A study and report on the following are necessary
[Web Annotation Protocol](https://www.w3.org/TR/2017/REC-annotation-protocol-20170223/#annotation-retrieval)
[Web Annotation Data Model](https://www.w3.org/TR/2017/REC-annotation-model-20170223/) (this is the document we've always been checking so far)
|
1.0
|
Web Annotation Protocol - Turns out API style calls are inadequate for our purposes. A study and report on the following are necessary
[Web Annotation Protocol](https://www.w3.org/TR/2017/REC-annotation-protocol-20170223/#annotation-retrieval)
[Web Annotation Data Model](https://www.w3.org/TR/2017/REC-annotation-model-20170223/) (this is the document we've always been checking so far)
|
priority
|
web annotation protocol turns out api style calls are inadequate for our purposes a study and report on the following are necessary this is the document we ve always been checking so far
| 1
|
640,980
| 20,814,294,284
|
IssuesEvent
|
2022-03-18 08:29:47
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
closed
|
APIs and API Products disappear from the dev portal anonymous view after a restore
|
Type/Bug Priority/High 4.x.x APIM - 4.1.0
|
### Description:
APIs and API Products disappear from the dev portal **anonymous view** after it is restored back to a revision state, even though the dev portal visibility is still set to **public**.
### Steps to reproduce:
**APIs**
1. Create API and add a revision (Devportal visibility set to public).
2. Go to the devportal and confirm the API is visible in the store.
3. Make some changes on the current state and restore back to the revision state.
4. Go to the devportal (anonymous view) again, and the API will be missing from the store.
**API Products**
1. Create an API Product and create a revision (Devportal visibility set to public).
2. Go to the devportal and confirm the product is visible in the store.
3. Make some changes to the current state of the API Product and restore back to the old revision.
4. Go to the devportal (anonymous view) again, and the API Product will be missing from the store.
### Affected Product Version:
carbon-apimgt 9.0.373-SNAPSHOT
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
|
1.0
|
APIs and API Products disappear from the dev portal anonymous view after a restore - ### Description:
APIs and API Products disappear from the dev portal **anonymous view** after it is restored back to a revision state, even though the dev portal visibility is still set to **public**.
### Steps to reproduce:
**APIs**
1. Create API and add a revision (Devportal visibility set to public).
2. Go to the devportal and confirm the API is visible in the store.
3. Make some changes on the current state and restore back to the revision state.
4. Go to the devportal (anonymous view) again, and the API will be missing from the store.
**API Products**
1. Create an API Product and create a revision (Devportal visibility set to public).
2. Go to the devportal and confirm the product is visible in the store.
3. Make some changes to the current state of the API Product and restore back to the old revision.
4. Go to the devportal (anonymous view) again, and the API Product will be missing from the store.
### Affected Product Version:
carbon-apimgt 9.0.373-SNAPSHOT
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
|
priority
|
apis and api products disappear from the dev portal anonymous view after a restore description apis and api products disappear from the dev portal anonymous view after it is restored back to a revision state even though the dev portal visibility is still set to public steps to reproduce apis create api and add a revision devportal visibility set to public go to the devportal and confirm the api is visible in the store make some changes on the current state and restore back to the revision state go to the devportal anonymous view again and the api will be missing from the store api products create an api product and create a revision devportal visibility set to public go to the devportal and confirm the product is visible in the store make some changes to the current state of the api product and restore back to the old revision go to the devportal anonymous view again and the api product will be missing from the store affected product version carbon apimgt snapshot environment details with versions os client env docker optional fields related issues suggested labels suggested assignees
| 1
|
512,194
| 14,889,761,634
|
IssuesEvent
|
2021-01-20 21:55:43
|
AlexsLemonade/resources-portal
|
https://api.github.com/repos/AlexsLemonade/resources-portal
|
closed
|
Notification digest: requested changes to the weekly digest email
|
High Priority
|
### Context
@dvenprasad and @annagreene both received their weekly digest emails at 10am as expected, but there are a couple requested changes.
### Problem or idea
There are two issues to be resolved (please feel free to break this up if you feel it's more appropriate):
* The spacing in the top of the email was squished, both in Outlook on mobile and in Gmail (Web UI). See the screenshots below.
<img src="https://user-images.githubusercontent.com/19534205/99288795-21fdd780-280a-11eb-859a-ee296ac7e1a7.png" width="300">
<img src="https://user-images.githubusercontent.com/19534205/99288810-25915e80-280a-11eb-9e79-4d1f1e1442eb.png" width="600">
* @annagreene suggested that we change the title from `CCRR: Weekly Notification Digest` to include ALSF in the title: `ALSF CCRR: Weekly Notification Digest`
### Solution or next step
Fix the spacing at the top of the email and change the title.
|
1.0
|
Notification digest: requested changes to the weekly digest email - ### Context
@dvenprasad and @annagreene both received their weekly digest emails at 10am as expected, but there are a couple requested changes.
### Problem or idea
There are two issues to be resolved (please feel free to break this up if you feel it's more appropriate):
* The spacing in the top of the email was squished, both in Outlook on mobile and in Gmail (Web UI). See the screenshots below.
<img src="https://user-images.githubusercontent.com/19534205/99288795-21fdd780-280a-11eb-859a-ee296ac7e1a7.png" width="300">
<img src="https://user-images.githubusercontent.com/19534205/99288810-25915e80-280a-11eb-9e79-4d1f1e1442eb.png" width="600">
* @annagreene suggested that we change the title from `CCRR: Weekly Notification Digest` to include ALSF in the title: `ALSF CCRR: Weekly Notification Digest`
### Solution or next step
Fix the spacing at the top of the email and change the title.
|
priority
|
notification digest requested changes to the weekly digest email context dvenprasad and annagreene both received their weekly digest emails at as expected but there are a couple requested changes problem or idea there are two issues to be resolved please feel free to break this up if you feel it s more appropriate the spacing in the top of the email was squished both in outlook on mobile and in gmail web ui see the screenshots below annagreene suggested that we change the title from ccrr weekly notification digest to include alsf in the title alsf ccrr weekly notification digest solution or next step fix the spacing at the top of the email and change the title
| 1
|
812,130
| 30,318,649,889
|
IssuesEvent
|
2023-07-10 17:25:56
|
cidgoh/COVID-MVP
|
https://api.github.com/repos/cidgoh/COVID-MVP
|
closed
|
Smaller set of default lineages shown on launch
|
enhancement high priority
|
To increase application speed, we should show the fastest growing lineages in Canada of the last 120 days on launch.
This will require pre-selecting some hidden strains.
|
1.0
|
Smaller set of default lineages shown on launch - To increase application speed, we should show the fastest growing lineages in Canada of the last 120 days on launch.
This will require pre-selecting some hidden strains.
|
priority
|
smaller set of default lineages shown on launch to increase application speed we should show the fastest growing lineages in canada of the last days on launch this will require pre selecting some hidden strains
| 1
|
652,181
| 21,524,673,980
|
IssuesEvent
|
2022-04-28 17:11:33
|
CloudWithChris/hugo-creator
|
https://api.github.com/repos/CloudWithChris/hugo-creator
|
closed
|
[Feature]: Migrate to Google Analytics 4
|
Type/Enhancement Priority/High
|
### Provide a general summary of the suggestion
Google Analytics Universal Property is ending in 2023. Need to migrate to the GA4 standard - https://support.google.com/analytics/answer/11583528
### Context
Ensure continuity of analytics
### Possible Implementation
https://support.google.com/analytics/answer/11583528
### Your Environment
_No response_
|
1.0
|
[Feature]: Migrate to Google Analytics 4 - ### Provide a general summary of the suggestion
Google Analytics Universal Property is ending in 2023. Need to migrate to the GA4 standard - https://support.google.com/analytics/answer/11583528
### Context
Ensure continuity of analytics
### Possible Implementation
https://support.google.com/analytics/answer/11583528
### Your Environment
_No response_
|
priority
|
migrate to google analytics provide a general summary of the suggestion google analytics universal property is ending in need to migrate to the standard context ensure continuity of analytics possible implementation your environment no response
| 1
|
561,870
| 16,626,348,148
|
IssuesEvent
|
2021-06-03 10:02:37
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
reopened
|
[0.9.4 develop-277] Doesn't allow editing districts/properties on the whole map
|
Category: Gameplay Priority: High Regression Squad: Wild Turkey Type: Bug
|
Steps to reproduce:
- start to edit any District or Property.
- set map in one position.
- you can draw district only inside this square:

|
1.0
|
[0.9.4 develop-277] Doesn't allow editing districts/properties on the whole map - Steps to reproduce:
- start to edit any District or Property.
- set map in one position.
- you can draw district only inside this square:

|
priority
|
doesn t allow editing districts properties on the whole map step to reproduce start to edit any district or property set map in one position you can draw district only inside this square
| 1
|
71,284
| 3,355,016,467
|
IssuesEvent
|
2015-11-18 14:55:46
|
IMAGINARY/imaginary-web
|
https://api.github.com/repos/IMAGINARY/imaginary-web
|
closed
|
http://imaginary.org/projects dysfunctional?
|
bug extremely urgent high priority work in progress
|
The page http://imaginary.org/projects does not load (even if the rest of the site is working). It is linked from the small main menu on top of the imaginary.org page. Could you please check? (maybe it is related to our other server issues?)
|
1.0
|
http://imaginary.org/projects dysfunctional? - The page http://imaginary.org/projects does not load (even if the rest of the site is working). It is linked from the small main menu on top of the imaginary.org page. Could you please check? (maybe it is related to our other server issues?)
|
priority
|
dysfunctional the page does not load even if the rest of the site is working it is linked from the small main menu on top of the imaginary org page could you please check maybe it is related to our other server issues
| 1
|
536,673
| 15,712,277,022
|
IssuesEvent
|
2021-03-27 11:23:13
|
AY2021S2-CS2113T-F08-1/tp
|
https://api.github.com/repos/AY2021S2-CS2113T-F08-1/tp
|
closed
|
Add Assignment Solution
|
priority.High type.Story
|
As a TA, I can add assignment solutions so that I can refer to them easily or use them to autograde MCQ and Short Answer Assignments
|
1.0
|
Add Assignment Solution - As a TA, I can add assignment solutions so that I can refer to them easily or use them to autograde MCQ and Short Answer Assignments
|
priority
|
add assignment solution as a ta i can add assignment solutions so that i can refer to them easily or use them to autograde mcq and short answer assignments
| 1
|
309,607
| 9,477,529,967
|
IssuesEvent
|
2019-04-19 18:57:27
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Unable to pickle torch dtype objects in Python 3.5
|
high priority
|
## 🐛 Bug
When pickling a `torch.dtype` object, Python 3.5 reports an obscure error "can't pickle int objects".
## To Reproduce
Steps to reproduce the behavior:
```python
In [1]: import torch
In [2]: import pickle
In [3]: with open('/tmp/a', 'wb') as f:
...: pickle.dump(torch.float32, f)
...:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-769b4901f38c> in <module>()
1 with open('/tmp/a', 'wb') as f:
----> 2 pickle.dump(torch.float32, f)
3
~/anaconda3/envs/tmp/lib/python3.5/copyreg.py in _reduce_ex(self, proto)
63 else:
64 if base is self.__class__:
---> 65 raise TypeError("can't pickle %s objects" % base.__name__)
66 state = base(self)
67 args = (self.__class__, base, state)
TypeError: can't pickle int objects
```
## Expected behavior
In Python 3.6 one can pickle torch dtypes successfully.
## Environment
```
Collecting environment information...
PyTorch version: 0.4.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Fedora release 29 (Twenty Nine)
GCC version: (GCC) 8.2.1 20181011 (Red Hat 8.2.1-4)
CMake version: version 3.12.1
Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 9.2.148
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 410.73
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy (1.15.2)
[pip] torch (0.4.1.post2)
[conda] pytorch 0.4.1 py35_py27__9.0.176_7.1.2_2 pytorch
```
|
1.0
|
Unable to pickle torch dtype objects in Python 3.5 - ## 🐛 Bug
When pickling a `torch.dtype` object, Python 3.5 reports an obscure error "can't pickle int objects".
## To Reproduce
Steps to reproduce the behavior:
```python
In [1]: import torch
In [2]: import pickle
In [3]: with open('/tmp/a', 'wb') as f:
...: pickle.dump(torch.float32, f)
...:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-769b4901f38c> in <module>()
1 with open('/tmp/a', 'wb') as f:
----> 2 pickle.dump(torch.float32, f)
3
~/anaconda3/envs/tmp/lib/python3.5/copyreg.py in _reduce_ex(self, proto)
63 else:
64 if base is self.__class__:
---> 65 raise TypeError("can't pickle %s objects" % base.__name__)
66 state = base(self)
67 args = (self.__class__, base, state)
TypeError: can't pickle int objects
```
## Expected behavior
In Python 3.6 one can pickle torch dtypes successfully.
## Environment
```
Collecting environment information...
PyTorch version: 0.4.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Fedora release 29 (Twenty Nine)
GCC version: (GCC) 8.2.1 20181011 (Red Hat 8.2.1-4)
CMake version: version 3.12.1
Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 9.2.148
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 410.73
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy (1.15.2)
[pip] torch (0.4.1.post2)
[conda] pytorch 0.4.1 py35_py27__9.0.176_7.1.2_2 pytorch
```
|
priority
|
unable to pickle torch dtype objects in python 🐛 bug when pickling a torch dtype object python reports an obscure error can t pickle int objects to reproduce steps to reproduce the behavior python in import torch in import pickle in with open tmp a wb as f pickle dump torch f typeerror traceback most recent call last in with open tmp a wb as f pickle dump torch f envs tmp lib copyreg py in reduce ex self proto else if base is self class raise typeerror can t pickle s objects base name state base self args self class base state typeerror can t pickle int objects expected behavior in python one can pickle torch dtypes successfully environment collecting environment information pytorch version is debug build no cuda used to build pytorch os fedora release twenty nine gcc version gcc red hat cmake version version python version is cuda available yes cuda runtime version gpu models and configuration gpu geforce gtx nvidia driver version cudnn version could not collect versions of relevant libraries numpy torch pytorch pytorch
| 1
|
770,655
| 27,049,531,350
|
IssuesEvent
|
2023-02-13 12:16:17
|
SuadHus/D0020E-VR
|
https://api.github.com/repos/SuadHus/D0020E-VR
|
closed
|
Interaction with the planks
|
High priority Low risk
|
Make the planks interactible and add some logic. Estimated time: 30 hours
|
1.0
|
Interaction with the planks - Make the planks interactible and add some logic. Estimated time: 30 hours
|
priority
|
interaction with the planks make the planks interactible and add some logic estimated time hours
| 1
|
608,495
| 18,840,519,654
|
IssuesEvent
|
2021-11-11 09:01:25
|
TencentBlueKing/bk-iam-saas
|
https://api.github.com/repos/TencentBlueKing/bk-iam-saas
|
closed
|
[SaaS] Sync authorized permission-template data when a grading administrator's authorization scope changes
|
Type: Enhancement Layer: SaaS Priority: High Size: S
|
Main issues:
- For delete operations, the sync must ask the user to confirm removing the already-authorized user groups
- When an instance within the authorization scope is deleted, the authorization data already granted to user groups must be synced as well
---
Consider solving this at the interaction level: the user must clearly understand what they are doing and what the consequences will be
|
1.0
|
[SaaS] Sync authorized permission-template data when a grading administrator's authorization scope changes - Main issues:
- For delete operations, the sync must ask the user to confirm removing the already-authorized user groups
- When an instance within the authorization scope is deleted, the authorization data already granted to user groups must be synced as well
---
Consider solving this at the interaction level: the user must clearly understand what they are doing and what the consequences will be
|
priority
|
sync authorized permission template data when a grading administrator s authorization scope changes main issues for delete operations the sync must ask the user to confirm removing the already authorized user groups when an instance within the authorization scope is deleted the authorization data already granted to user groups must be synced as well consider solving this at the interaction level the user must clearly understand what they are doing and what the consequences will be
| 1
|
30,390
| 2,723,624,774
|
IssuesEvent
|
2015-04-14 13:44:28
|
CruxFramework/crux-widgets
|
https://api.github.com/repos/CruxFramework/crux-widgets
|
closed
|
"allowedPackages" and "ignoredPackages" properties are being cached across executions of SchemaGenerator task
|
bug imported Milestone-3.0.0 Priority-High
|
_From [gessedafe@gmail.com](https://code.google.com/u/gessedafe@gmail.com/) on June 14, 2010 18:04:33_
SchemaGenerator is reading old values of AllowedPackages and IgnoredPackages properties.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=135_
|
1.0
|
"allowedPackages" and "ignoredPackages" properties are being cached across executions of SchemaGenerator task - _From [gessedafe@gmail.com](https://code.google.com/u/gessedafe@gmail.com/) on June 14, 2010 18:04:33_
SchemaGenerator is reading old values of AllowedPackages and IgnoredPackages properties.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=135_
|
priority
|
allowedpackages and ignoredpackages properties are being cached across executions of schemagenerator task from on june schemagenerator is reading old values of allowedpackages and ignoredpackages properties original issue
| 1
|
801,641
| 28,496,305,660
|
IssuesEvent
|
2023-04-18 14:28:22
|
FSanchez-UF/Galactic-Glide
|
https://api.github.com/repos/FSanchez-UF/Galactic-Glide
|
closed
|
Difficulty Levels
|
priority:high
|
- [x] We need to add UI to select between normal and hard difficulty
- [x] Normal difficulty can be the game we have now
- [x] Hard difficulty will have more scaling (ie possibly to level 15 or more)
- [x] It will also scale faster
- [x] We should increase the fire rate of the enemies (shoot every 3-5 seconds instead of 4-6)
- [x] Possibly limit the number of lasers the player can shoot (capacity of 20?)
- [x] Recharge lasers count over time (1 per second?)
- [x] Or if the player hits an enemy refund laser
|
1.0
|
Difficulty Levels - - [x] We need to add UI to select between normal and hard difficulty
- [x] Normal difficulty can be the game we have now
- [x] Hard difficulty will have more scaling (ie possibly to level 15 or more)
- [x] It will also scale faster
- [x] We should increase the fire rate of the enemies (shoot every 3-5 seconds instead of 4-6)
- [x] Possibly limit the number of lasers the player can shoot (capacity of 20?)
- [x] Recharge lasers count over time (1 per second?)
- [x] Or if the player hits an enemy refund laser
|
priority
|
difficulty levels we need to add ui to select between normal and hard difficulty normal difficulty can be the game we have now hard difficulty will have more scaling ie possibly to level or more it will also scale faster we should increase the fire rate of the enemies shoot every seconds instead of possibly limit the number of lasers the player can shoot capacity of recharge lasers count over time per second or if the player hits an enemy refund laser
| 1
|
155,520
| 5,956,665,411
|
IssuesEvent
|
2017-05-28 18:55:57
|
WatzekDigitalInitiatives/fitbit-ror
|
https://api.github.com/repos/WatzekDigitalInitiatives/fitbit-ror
|
closed
|
Broken user previews on team event show page
|
bug front end high priority
|
need to either write a new function to handle this, or beef up _user_preview partial to handle multiple cases - or just hard-code in a different version.
|
1.0
|
Broken user previews on team event show page - need to either write a new function to handle this, or beef up _user_preview partial to handle multiple cases - or just hard-code in a different version.
|
priority
|
broken user previews on team event show page need to either write a new function to handle this or beef up user preview partial to handle multiple cases or just hard code in a different version
| 1
|
20,805
| 2,631,250,220
|
IssuesEvent
|
2015-03-07 00:02:30
|
chocolatey/choco
|
https://api.github.com/repos/chocolatey/choco
|
closed
|
InstallArguments option should work for 'choco install'
|
3 - Done Bug FeatureParity Priority_HIGH
|
It looks to me that there is a variable name mismatch in Install-ChocolateyInstallPackage.ps1 (dc704e97001cd10149fd687276cdb8973f4b3f6f) which is preventing the use of the --installargs option to 'choco install'. Line 72 brings in a value from the environment named "chocolateyInstallArguments", but as far as I can see, line 163 of PowershellService.cs stores the installer arguements in the environment as "installerArguments" earlier on.
The installer arguments option was working in the released 0.9.8.
|
1.0
|
InstallArguments option should work for 'choco install' - It looks to me that there is a variable name mismatch in Install-ChocolateyInstallPackage.ps1 (dc704e97001cd10149fd687276cdb8973f4b3f6f) which is preventing the use of the --installargs option to 'choco install'. Line 72 brings in a value from the environment named "chocolateyInstallArguments", but as far as I can see, line 163 of PowershellService.cs stores the installer arguements in the environment as "installerArguments" earlier on.
The installer arguments option was working in the released 0.9.8.
|
priority
|
installarguments option should work for choco install it looks to me that there is a variable name mismatch in install chocolateyinstallpackage which is preventing the use of the installargs option to choco install line brings in a value from the environment named chocolateyinstallarguments but as far as i can see line of powershellservice cs stores the installer arguements in the environment as installerarguments earlier on the installer arguments option was working in the released
| 1
|
258,315
| 8,167,620,991
|
IssuesEvent
|
2018-08-26 01:32:47
|
WazeDev/WME-Place-Harmonizer
|
https://api.github.com/repos/WazeDev/WME-Place-Harmonizer
|
closed
|
Parking Lot checks & more info tab (PLEASE DISCUSS!)
|
Enhancement Priority: High UI/CSS
|
COST=null blue severity. note "No Cost selected. Please select a Cost for this lot."
if cost is selected
COST=Free uncheck cc cash checks payment types and POSSIBLY disable them.
COST=anything but free, at least one payment type should be checked, if not show warning of yellow (or blue if required to allow automatic lock), allow automatic locking, and show a note (or something shorter...): "No type of payment selected, but payment is required. Please select a payment type in the More Info tab"
Create banner buttons for each of the Services for a place and allow them to be automatically turned on for an operator.
Ask if disability parking is available if not already turned on. This should almost always be checked ON in USA. Note: "Is disablity parking available here?" buttons for Yes/No. Yes turns service on. No = whitelist for disability.
Lot Type
At least one type checked. or lot should be red (prevent locking) and display note: At least one lot type is required.
|
1.0
|
Parking Lot checks & more info tab (PLEASE DISCUSS!) - COST=null blue severity. note "No Cost selected. Please select a Cost for this lot."
if cost is selected
COST=Free uncheck cc cash checks payment types and POSSIBLY disable them.
COST=anything but free, at least one payment type should be checked, if not show warning of yellow (or blue if required to allow automatic lock), allow automatic locking, and show a note (or something shorter...): "No type of payment selected, but payment is required. Please select a payment type in the More Info tab"
Create banner buttons for each of the Services for a place and allow them to be automatically turned on for an operator.
Ask if disability parking is available if not already turned on. This should almost always be checked ON in USA. Note: "Is disablity parking available here?" buttons for Yes/No. Yes turns service on. No = whitelist for disability.
Lot Type
At least one type checked. or lot should be red (prevent locking) and display note: At least one lot type is required.
|
priority
|
parking lot checks more info tab please discuss cost null blue severity note no cost selected please select a cost for this lot if cost is selected cost free uncheck cc cash checks payment types and possibly disable them cost anything but free at least one payment type should be checked if not show warning of yellow or blue if required to allow automatic lock allow automatic locking and show a note or something shorter no type of payment selected but payment is required please select a payment type in the more info tab create banner buttons for each of the services for a place and allow them to be automatically turned on for an operator ask if disability parking is available if not already turned on this should almost always be checked on in usa note is disablity parking available here buttons for yes no yes turns service on no whitelist for disability lot type at least one type checked or lot should be red prevent locking and display note at least one lot type is required
| 1
|
786,138
| 27,636,106,374
|
IssuesEvent
|
2023-03-10 14:33:55
|
tyler-technologies-oss/forge
|
https://api.github.com/repos/tyler-technologies-oss/forge
|
opened
|
[select] use arrow keys to open dropdown instead of change value
|
bug priority: high accessibility complexity: low
|
**Describe the bug:**
The arrow keys should be used to open the dropdown instead of changing the value of the field immediately.
One issue with the current implementation is that screen readers do not announce the change in this scenario either. So if we do decide to keep this "feature" then we need to ensure that value is properly announced.
Another thing to note is that we allow for typing characters to naively "filter" the options while the dropdown is closed. This does not announce either and it should.
**To Reproduce:**
Steps to reproduce the behavior:
1. Tab to <forge-select>
2. Use up or down arrow keys to change value
3. Observe that screen reader does not announce the value changing
**Expected behavior:**
1. The <forge-select> should open the dropdown with the up or down arrow keys instead of immediately changing the value.
2. It should also announce value changes when the dropdown is closed
|
1.0
|
[select] use arrow keys to open dropdown instead of change value - **Describe the bug:**
The arrow keys should be used to open the dropdown instead of changing the value of the field immediately.
One issue with the current implementation is that screen readers do not announce the change in this scenario either. So if we do decide to keep this "feature" then we need to ensure that value is properly announced.
Another thing to note is that we allow for typing characters to naively "filter" the options while the dropdown is closed. This does not announce either and it should.
**To Reproduce:**
Steps to reproduce the behavior:
1. Tab to <forge-select>
2. Use up or down arrow keys to change value
3. Observe that screen reader does not announce the value changing
**Expected behavior:**
1. The <forge-select> should open the dropdown with the up or down arrow keys instead of immediately changing the value.
2. It should also announce value changes when the dropdown is closed
|
priority
|
use arrow keys to open dropdown instead of change value describe the bug the arrow keys should be used to open the dropdown instead of changing the value of the field immediately one issue with the current implementation is that screen readers do not announce the change in this scenario either so if we do decide to keep this feature then we need to ensure that value is properly announced another thing to note is that we allow for typing characters to naively filter the options while the dropdown is closed this does not announce either and it should to reproduce steps to reproduce the behavior tab to use up or down arrow keys to change value observe that screen reader does not announce the value changing expected behavior the should open the dropdown with the up or down arrow keys instead of immediately changing the value it should also announce value changes when the dropdown is closed
| 1
|
739,803
| 25,721,275,689
|
IssuesEvent
|
2022-12-07 13:52:25
|
fractal-analytics-platform/fractal-server
|
https://api.github.com/repos/fractal-analytics-platform/fractal-server
|
closed
|
Review/rename RUNNER env variables
|
High Priority
|
Let's aim for a homogeneous set of variable names, all starting with `FRACTAL`.
|
1.0
|
Review/rename RUNNER env variables - Let's aim for a homogeneous set of variable names, all starting with `FRACTAL`.
|
priority
|
review rename runner env variables let s aim for a homogeneous set of variable names all starting with fractal
| 1
|
640,815
| 20,799,993,636
|
IssuesEvent
|
2022-03-17 13:05:07
|
gofiber/fiber
|
https://api.github.com/repos/gofiber/fiber
|
reopened
|
C.JSON not work on go 1.18
|
🚨 High Priority ☢️ Bug
|
in go 1.17.8 c.json work fine,but after i upgrade go version , c.json not work report "runtime error empty pointer"
|
1.0
|
C.JSON not work on go 1.18 - in go 1.17.8 c.json work fine,but after i upgrade go version , c.json not work report "runtime error empty pointer"
|
priority
|
c json not work on go in go c json work fine but after i upgrade go version c json not work report runtime error empty pointer
| 1
|
555,172
| 16,448,479,856
|
IssuesEvent
|
2021-05-20 23:32:29
|
nilearn/nilearn
|
https://api.github.com/repos/nilearn/nilearn
|
closed
|
Axes Cutoff in Example 9.2.15.9 (plotting.plot_img_on_surf)
|
Bug effort: medium impact: medium priority: high
|
[`Example 9.2.15.9`](https://nilearn.github.io/auto_examples/01_plotting/plot_3d_map_to_surface_projection.html#plot-multiple-views-of-the-3d-volume-on-a-surface) features a quick plot showing multiple views of a volumetric stat map on an average surface.

However, the brain is cutoff on both axes (picture shown) both in the online example and when I use it on nilearn 0.7.1 within a jupyter notebook using the following code
```python
fig, ax = plotting.plot_img_on_surf(new_image,
views=['lateral','medial'],
hemispheres=['left', 'right'],
inflate=True,
colorbar=True
)
```
Perhaps this is easily fixed post-hoc by adjusting matplotlib parameters, but it is not obvious to me.
|
1.0
|
Axes Cutoff in Example 9.2.15.9 (plotting.plot_img_on_surf) - [`Example 9.2.15.9`](https://nilearn.github.io/auto_examples/01_plotting/plot_3d_map_to_surface_projection.html#plot-multiple-views-of-the-3d-volume-on-a-surface) features a quick plot showing multiple views of a volumetric stat map on an average surface.

However, the brain is cutoff on both axes (picture shown) both in the online example and when I use it on nilearn 0.7.1 within a jupyter notebook using the following code
```python
fig, ax = plotting.plot_img_on_surf(new_image,
views=['lateral','medial'],
hemispheres=['left', 'right'],
inflate=True,
colorbar=True
)
```
Perhaps this is easily fixed post-hoc by adjusting matplotlib parameters, but it is not obvious to me.
|
priority
|
axes cutoff in example plotting plot img on surf features a quick plot showing multiple views of a volumetric stat map on an average surface however the brain is cutoff on both axes picture shown both in the online example and when i use it on nilearn within a jupyter notebook using the following code python fig ax plotting plot img on surf new image views hemispheres inflate true colorbar true perhaps this is easily fixed post hoc by adjusting matplotlib parameters but it is not obvious to me
| 1
|