| column | dtype | summary |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | length 5 to 112 |
| repo_url | string | length 34 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 855 |
| labels | string | length 4 to 721 |
| body | string | length 1 to 261k |
| index | string | 13 classes |
| text_combine | string | length 96 to 261k |
| label | string | 2 classes |
| text | string | length 96 to 240k |
| binary_label | int64 | 0 to 1 |
432,686
| 12,496,729,755
|
IssuesEvent
|
2020-06-01 15:18:35
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
range.second - range.first == t.size() INTERNAL ASSERT FAILED
|
high priority module: autograd triage review triaged
|
This is my first entry about a problem. So, please feel free to ask anything to clarify the problem.
## 🐛 Bug
I ran the same code on three different machines. On two of them, I encounter an error when calling `loss.backward()`, and the networks never learn.
## To Reproduce
Steps to reproduce the behavior:
1. Run the code for dueling double deep q-learning (Dueling DDQN) and/or DDQNs.
2. Simulation runs until the number of samples in training sample buffer is enough to train the networks.
3. Backpropagation function is called and code crashed.
The error message is as below:
dueling_ddqn_agent.py in learn(self)
149
150 loss = self.q_eval.loss(q_target, q_pred).to(self.q_eval.device)
--> 151 loss.backward()
152 self.q_eval.optimizer.step()
153 self.learn_step_counter += 1
~\Anaconda3\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
196 products. Defaults to ``False``.
197 """
--> 198 torch.autograd.backward(self, gradient, retain_graph, create_graph)
199
200 def register_hook(self, hook):
~\Anaconda3\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
98 Variable._execution_engine.run_backward(
99 tensors, grad_tensors, retain_graph, create_graph,
--> 100 allow_unreachable=True) # allow_unreachable flag
101
102
RuntimeError: range.second - range.first == t.size() INTERNAL ASSERT FAILED at ..\torch\csrc\autograd\generated\Functions.cpp:57, please report a bug to PyTorch. inconsistent range for TensorList output (copy_range at ..\torch\csrc\autograd\generated\Functions.cpp:57)
(no backtrace available)
## Expected behavior
On one of the three machines the agents learn as expected; the same code runs there without any errors.
## Environment
PyTorch version: 1.5.0
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Microsoft Windows 10 Home Single Language
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 750 Ti
Nvidia driver version: 441.12
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.17.4
[pip] numpydoc==0.9.1
[pip] torch==1.5.0
[pip] torchvision==0.6.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 245
[conda] mkl-service 2.3.0 py37hb782905_0
[conda] mkl_fft 1.0.15 py37h14836fe_0
[conda] mkl_random 1.1.0 py37h675688f_0
[conda] numpy 1.17.4 py37h4320e6b_0
[conda] numpy-base 1.17.4 py37hc3f5095_0
[conda] numpydoc 0.9.1 py_0
[conda] torch 1.5.0 pypi_0 pypi
[conda] torchvision 0.6.0 pypi_0 pypi
## Additional context
The same problem occurs with Linear and Conv2d layers.
cc @ezyang @gchanan @zou3519 @SsnL @albanD @gqchen
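One common way to narrow down an internal autograd assert like this is to enable anomaly detection before calling `backward()`. This is a debugging sketch only, not a fix, and the DDQN code itself is not shown in the report; the tensors below are made up for illustration.

```python
# Debugging sketch: anomaly mode makes autograd record the forward-op stack
# trace, so a failure during backward() is attributed to the forward op
# that produced the offending gradient.
import torch

torch.autograd.set_detect_anomaly(True)

x = torch.randn(4, requires_grad=True)
loss = (x * 2.0).sum()
loss.backward()  # with anomaly mode on, a failure here reports the forward op
print(x.grad)    # gradient of sum(2x) w.r.t. x is 2 everywhere
```

Anomaly mode slows training noticeably, so it is best enabled only while reproducing the crash.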
|
1.0
|
priority
| 1
|
412,163
| 12,036,020,384
|
IssuesEvent
|
2020-04-13 18:59:30
|
ASbeletsky/TimeOffTracker
|
https://api.github.com/repos/ASbeletsky/TimeOffTracker
|
closed
|
Organize vacation approving process: client-form part
|
done enhancement high priority mutable
|
## Overview
The app does not currently support a chain of vacation-approval requests in which accountants and other managers can approve or decline a request and see who has already approved it, while regular employees can track the current state of the approval. The employee should also be able to choose the people who will approve the request and to review the history of their own requests. All managers, on the other hand, should have access to all requests.
## Requirements
Implement the following pages from client side:
- a request submission page, where the employee chooses the type, date, and duration of the vacation, along with the managers who must approve the request;
- a vacation timeline page, where the employee sees the current and previous states of the request;
- a request history table page, where an employee chooses one of their submitted requests to review, or a manager chooses an approved/declined request to review;
- a vacation details page, where a manager or employee sees the details of a request (and, if the request has not been reviewed yet, can accept or decline it);
Estimate: 60 hours
Deadline: 10.04.2020
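The approval chain described above can be sketched as a small state machine. The names here are illustrative only, not the app's actual API: each chosen approver decides in turn, any decline rejects the request, and the employee can read the current state at any time.

```python
from dataclasses import dataclass, field

@dataclass
class VacationRequest:
    # Approvers the employee chose when applying for the vacation.
    approvers: list
    decisions: dict = field(default_factory=dict)

    def decide(self, approver: str, approved: bool) -> None:
        self.decisions[approver] = approved

    @property
    def state(self) -> str:
        if any(ok is False for ok in self.decisions.values()):
            return "declined"   # one decline rejects the request
        if all(a in self.decisions for a in self.approvers):
            return "approved"   # everyone in the chain signed off
        return "pending"        # the employee tracks progress here

req = VacationRequest(approvers=["manager", "accountant"])
req.decide("manager", True)
print(req.state)  # → pending
req.decide("accountant", True)
print(req.state)  # → approved
```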
|
1.0
|
priority
| 1
|
294,104
| 9,013,381,676
|
IssuesEvent
|
2019-02-05 19:21:10
|
zephyrproject-rtos/west
|
https://api.github.com/repos/zephyrproject-rtos/west
|
closed
|
out-of-installation use of west
|
bug priority: high
|
User feedback noted that it's not possible by default to use west commands outside of the installation (i.e. not in a subdirectory of the directory containing .west).
Options:
1. One way to do this is to allow the user to specify the default west installation in a system- or user-level west configuration file. Their locations are described in this comment: https://github.com/zephyrproject-rtos/west/blob/master/src/west/config.py#L27
2. We may also be able to use the WEST_DIR environment variable to cover this case, so build, flash, etc. work normally even if things like the build directory are outside of the installation.
3. Another alternative if WEST_DIR is missing and west is invoked from outside an installation is to fall back on searching inside ZEPHYR_BASE, should that be defined.
Some notes for context:
- you can already build with something like `west build -s /tmp/app-source -d /tmp/build` and flash with `west flash -d /tmp/build` even if `/tmp/app-source` and `/tmp/build` are outside of the west installation. Just west itself has to be run from inside the installation to find the `build` and `flash` commands themselves.
- the use of a west installation, which is found by looking for a `.west` directory, is a deliberate design decision which allows users to keep parallel installations, cd around between them, and have them 'just work', without requiring environment variables which are cumbersome for some, and error-prone in general (since they can point to the wrong place).
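Option 1 could look roughly like the following sketch. West's config files are INI-style, so Python's `configparser` illustrates the idea, but the `[install]` section and `default` key here are hypothetical, not west's actual schema.

```python
# Sketch: resolving a default installation from a user-level config file.
# Section and key names are invented for illustration.
import configparser

user_level_config = """\
[install]
default = /home/user/zephyrproject
"""

cfg = configparser.ConfigParser()
cfg.read_string(user_level_config)

# Fall back to None when no default installation is configured,
# mimicking "west was run outside any installation" behaviour.
default_install = cfg.get("install", "default", fallback=None)
print(default_install)  # → /home/user/zephyrproject
```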
|
1.0
|
priority
| 1
|
305,711
| 9,375,821,845
|
IssuesEvent
|
2019-04-04 05:57:14
|
nateraw/Lda2vec-Tensorflow
|
https://api.github.com/repos/nateraw/Lda2vec-Tensorflow
|
closed
|
Reproducible working example in new version of Lda2Vec
|
high priority in progress
|
I've made TONS of changes the last few weeks. This has caused things to break and has made it so my working example no longer works :cry: . So, a new reproducible example needs to be made. This is highly related to #8 , where you can see that we ended up with a working example. However, with the new changes, we should be able to remake this reliably, straight from running the run_20newsgroups.py file.
|
1.0
|
priority
| 1
|
316,882
| 9,658,173,021
|
IssuesEvent
|
2019-05-20 10:18:55
|
nim-lang/Nim
|
https://api.github.com/repos/nim-lang/Nim
|
closed
|
return NimNode from macro causes type mismatch
|
High Priority Macros
|
```Nim
import macros
macro testA: string =
result = newLit("testA")
macro testB: untyped =
newLit("testB")
macro testC: untyped =
return newLit("testC")
macro testD: string =
newLit("testD")
macro testE: string =
return newLit("testE")
```
compilation output:
```
scratch.nim(14, 3) Error: type mismatch: got (NimNode) but expected 'string'
Compilation exited abnormally with code 1 at Fri May 19 14:16:02
```
Initially I posted the problem in the forum:
https://forum.nim-lang.org/t/2963
|
1.0
|
priority
| 1
|
238,478
| 7,779,857,967
|
IssuesEvent
|
2018-06-05 18:09:37
|
daily-bruin/meow
|
https://api.github.com/repos/daily-bruin/meow
|
closed
|
"Post Now button" with Confirmation Dialog
|
enhancement high priority low hanging fruit
|
Posts which are Readied should have a button that allows them to post immediately, with a confirmation dialog showing the message and everything to be sent out. The permission also needs to be set to Copy only.
### Tasks
- [ ] Post now button
- [ ] Confirmation Dialog
- [ ] Give permissions to Copy
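The gate described by the tasks above can be sketched as follows; the function and role names are invented for illustration and are not meow's actual code.

```python
def post_now(user_roles: set, confirmed: bool) -> str:
    # Only the Copy role may post immediately, and only after the
    # confirmation dialog (showing the message to be sent) is accepted.
    if "copy" not in user_roles:
        raise PermissionError("only Copy may use Post Now")
    if not confirmed:
        return "awaiting confirmation"
    return "posted"

print(post_now({"copy"}, confirmed=True))   # → posted
print(post_now({"copy"}, confirmed=False))  # → awaiting confirmation
```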
|
1.0
|
priority
| 1
|
4,330
| 2,550,285,799
|
IssuesEvent
|
2015-02-01 10:59:23
|
Araq/Nim
|
https://api.github.com/repos/Araq/Nim
|
closed
|
calling a large number of macros doing some computation fails
|
Easy High Priority VM
|
Consider the following code
```nim
import pegs, macros
proc parse*(fmt: string): int {.nosideeffect.} =
let p =
sequence(capture(?sequence(anyRune(), &charSet({'<', '>', '=', '^'}))),
capture(?charSet({'<', '>', '=', '^'})),
capture(?charSet({'-', '+', ' '})),
capture(?charSet({'#'})),
capture(?(+digits())),
capture(?charSet({','})),
capture(?sequence(charSet({'.'}), +digits())),
capture(?charSet({'b', 'c', 'd', 'e', 'E', 'f', 'F', 'g', 'G', 'n', 'o', 's', 'x', 'X', '%'})),
capture(?sequence(charSet({'a'}), *pegs.any())))
var caps: Captures
return fmt.rawmatch(p, 0, caps)
macro test(s: string{lit}): expr =
result = newIntLitNode(parse($s))
# the following line is repeated 7463 times
echo test("abc")
echo test("abc")
echo test("abc")
...
```
Compiling this code with Nim 10.2 fails with the following error message:
```
nim c test
Hint: used config file '/path/Nim/config/nim.cfg' [Conf]
Hint: system [Processing]
Hint: test [Processing]
Hint: pegs [Processing]
Hint: strutils [Processing]
Hint: parseutils [Processing]
Hint: unicode [Processing]
Hint: macros [Processing]
stack trace: (most recent call last)
test.nim(18) test
test.nim(15) parse
lib/pure/pegs.nim(647) rawMatch
test.nim(7482, 9) Info: instantiation from here
lib/pure/pegs.nim(647, 22) Error: interpretation requires too many iterations
```
The compilation does not fail if the line `echo test("abc")` is repeated only 7462 times.
I assume this problem is caused by some safeguard so that calling macros cannot lead to infinite loops. However, each single call is fast and I would expect that the count starts again from 0 each time a new toplevel macro is called.
The code above is only an example. I ran into the issue in some real code using the (unofficial) `strfmt` package, which used `pegs` in very much the same way. In that case far fewer than 7463 calls to a strfmt macro are required in order to get this error. Is there a way to enlarge the iteration bound?
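The reporter's hypothesis, a budget shared across all macro invocations versus one that resets per call, can be illustrated with a toy model. This is not the Nim VM's actual code; the constants and names are invented.

```python
MAX_ITERS = 10_000  # toy stand-in for the VM's iteration safeguard

def run_macro_global(cost: int, state: dict) -> None:
    # Budget shared across ALL macro invocations: cheap calls still
    # accumulate and eventually trip the limit.
    state["iters"] += cost
    if state["iters"] > MAX_ITERS:
        raise RuntimeError("interpretation requires too many iterations")

def run_macro_per_call(cost: int) -> None:
    # Budget reset for every top-level call: a cheap call never trips it.
    if cost > MAX_ITERS:
        raise RuntimeError("interpretation requires too many iterations")

state = {"iters": 0}
failed_at = None
for call in range(7463):
    try:
        run_macro_global(cost=2, state=state)
    except RuntimeError:
        failed_at = call
        break
print(failed_at)  # the shared budget trips partway through the calls

for call in range(7463):
    run_macro_per_call(cost=2)  # never raises
```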
|
1.0
|
priority
| 1
|
828,567
| 31,834,791,521
|
IssuesEvent
|
2023-09-14 12:55:18
|
filamentphp/filament
|
https://api.github.com/repos/filamentphp/filament
|
closed
|
canViewForRecord() in RelationManager Not change properly
|
bug unconfirmed high priority
|
### Package
filament/filament
### Package Version
v3.0.19
### Laravel Version
v10.19.0
### Livewire Version
_No response_
### PHP Version
PHP 8.1.21
### Problem description
After I implement [this](https://filamentphp.com/docs/3.x/panels/resources/relation-managers#conditionally-showing-relation-managers)
```php
public static function canViewForRecord(Model $ownerRecord, string $pageClass): bool
{
return $ownerRecord->type === FoodPackageType::PACKAGE;
}
```
the relation manager on the edit page is not updated properly.
### Expected behavior
The relation manager's visibility changes according to the record, as determined by the canViewForRecord() method.
### Steps to reproduce
1. Choose a food package

2. Open the edit page of a food package

3. Change the type to 'snack'; the relation manager is gone

4. The console gives me this error:

5. But if I refresh the page, the relation manager is back to normal and shows according to canViewForRecord()

### Reproduction repository
https://github.com/chickgit/filament-canViewForRecord-bug
### Relevant log output
_No response_
|
1.0
|
priority
| 1
|
4,335
| 2,550,401,613
|
IssuesEvent
|
2015-02-01 14:25:49
|
JasperHorn/GoodSuite
|
https://api.github.com/repos/JasperHorn/GoodSuite
|
closed
|
Proper API for getting object by id
|
bug high priority
|
I removed createDummy with a comment that there would soon be a different way to achieve its main purpose. And then that other way didn't arrive.
Currently, my tests use the `setId()` function, which works, but is definitely not part of the *public* API. There should be a better way, perhaps something like a `getById` function on a storage.
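The suggested public API could look something like this sketch; it is a hypothetical in-memory storage, not GoodSuite's actual classes. The id is handed out by the storage on insert, and `get_by_id` becomes the supported way to fetch the object back.

```python
class Storage:
    """Hypothetical storage exposing a public get_by_id()."""

    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def insert(self, obj) -> int:
        # The storage assigns ids itself; callers never need setId().
        obj_id = self._next_id
        self._next_id += 1
        self._rows[obj_id] = obj
        return obj_id

    def get_by_id(self, obj_id):
        # Public accessor replacing direct use of internal ids in tests.
        return self._rows.get(obj_id)

store = Storage()
new_id = store.insert({"name": "example"})
print(store.get_by_id(new_id))  # → {'name': 'example'}
```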
|
1.0
|
priority
| 1
|
499,025
| 14,437,760,815
|
IssuesEvent
|
2020-12-07 12:02:35
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
[0.9.2 staging-1862] Blackout background that blocks all UIs
|
Category: Laws Priority: High
|
Step to reproduce:
- open court (I guess you can use any such civic objects):

- create any law:

- and press 'Add New Law to an Election', add to new election, submit:

- press ok:

- I have Blackout background and can't use any UI. Need to restart game to fix it.
[Player.log](https://github.com/StrangeLoopGames/EcoIssues/files/5635158/Player.log)
|
1.0
|
[0.9.2 staging-1862] Blackout background that blocks all UIs - Step to reproduce:
- open court (I guess you can use any such civic objects):

- create any law:

- and press 'Add New Law to an Election', add to new election, submit:

- press ok:

- I have Blackout background and can't use any UI. Need to restart game to fix it.
[Player.log](https://github.com/StrangeLoopGames/EcoIssues/files/5635158/Player.log)
|
priority
|
blackout background that blocks all uis step to reproduce open court i guess you can use any such civic objects create any law and press add new law to an election add to new election submit press ok i have blackout background and can t use any ui need to restart game to fix it
| 1
|
259,912
| 8,201,590,394
|
IssuesEvent
|
2018-09-01 19:36:49
|
keepassxreboot/keepassxc
|
https://api.github.com/repos/keepassxreboot/keepassxc
|
closed
|
Database corruption on merging to a locked database
|
bug high priority security
|
<!--- Provide a general summary of the issue in the title above -->
I just lost my database. Fingers were faster than they should've been, attempted a merge while my db was locked. That just opened the db I'm merging. Manually opening the db I merged into shows it's corrupt.
I guess it was because the db was locked, looking at similar corruption issues in the past. This is the second corruption I had resulting in loss of some data.
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
Merging into locked database should not corrupt it.
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. Open database, set to auto lock after delay. Wait for db to be locked.
2. Select Database -> Merge from KeepassXC database
3. Poof. DB gets corrupt
## Debug Info
<!--- Paste debug info from Help β About here -->
KeePassXC - Version 2.3.1
Revision: 2fcaeea
Libraries:
- Qt 5.10.1
- libgcrypt 1.8.2
Operating system: Arch Linux
CPU architecture: x86_64
Kernel: linux 4.15.15-1-ARCH
Enabled extensions:
- Auto-Type
- Browser Integration
- Legacy Browser Integration (KeePassHTTP)
- SSH Agent
- YubiKey
|
1.0
|
Database corruption on merging to a locked database - <!--- Provide a general summary of the issue in the title above -->
I just lost my database. Fingers were faster than they should've been, attempted a merge while my db was locked. That just opened the db I'm merging. Manually opening the db I merged into shows it's corrupt.
I guess it was because the db was locked, looking at similar corruption issues in the past. This is the second corruption I had resulting in loss of some data.
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
Merging into locked database should not corrupt it.
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1. Open database, set to auto lock after delay. Wait for db to be locked.
2. Select Database -> Merge from KeepassXC database
3. Poof. DB gets corrupt
## Debug Info
<!--- Paste debug info from Help β About here -->
KeePassXC - Version 2.3.1
Revision: 2fcaeea
Libraries:
- Qt 5.10.1
- libgcrypt 1.8.2
Operating system: Arch Linux
CPU architecture: x86_64
Kernel: linux 4.15.15-1-ARCH
Enabled extensions:
- Auto-Type
- Browser Integration
- Legacy Browser Integration (KeePassHTTP)
- SSH Agent
- YubiKey
|
priority
|
database corruption on merging to a locked database i just lost my database fingers were faster than they should ve been attempted a merge while my db was locked that just opened the db i m merging manually opening the db i merged into shows it s corrupt i guess it was because the db was locked looking at similar corruption issues in the past this is the second corruption i had resulting in loss of some data expected behavior merging into locked database should not corrupt it steps to reproduce for bugs open database set to auto lock after delay wait for db to be locked select database merge from keepassxc database poof db gets corrupt debug info keepassxc version revision libraries qt libgcrypt operating system arch linux cpu architecture kernel linux arch enabled extensions auto type browser integration legacy browser integration keepasshttp ssh agent yubikey
| 1
|
223,713
| 7,460,058,875
|
IssuesEvent
|
2018-03-30 17:58:38
|
EvictionLab/eviction-maps
|
https://api.github.com/repos/EvictionLab/eviction-maps
|
closed
|
Add county level filing data to S3 interface
|
enhancement high priority
|
Not really map related but putting here to track it. Data should come in tomorrow (March 28)
|
1.0
|
Add county level filing data to S3 interface - Not really map related but putting here to track it. Data should come in tomorrow (March 28)
|
priority
|
add county level filing data to interface not really map related but putting here to track it data should come in tomorrow march
| 1
|
639,315
| 20,751,295,917
|
IssuesEvent
|
2022-03-15 07:52:05
|
AY2122s2-CS2113-F12-3/tp
|
https://api.github.com/repos/AY2122s2-CS2113-F12-3/tp
|
opened
|
Edit staff details
|
type.Story priority.High
|
As a user, I can add new staff, modify staff details and delete staff that has left the company.
|
1.0
|
Edit staff details - As a user, I can add new staff, modify staff details and delete staff that has left the company.
|
priority
|
edit staff details as a user i can add new staff modify staff details and delete staff that has left the company
| 1
|
186,927
| 6,743,660,916
|
IssuesEvent
|
2017-10-20 13:02:17
|
ActivityWatch/activitywatch
|
https://api.github.com/repos/ActivityWatch/activitywatch
|
closed
|
Fixing packaging on macOS
|
area: ci platform: macos priority: high size: small type: bug
|
There has been this annoying bug with the macOS builds:
```
Error loading Python lib '/Applications/activitywatch/.Python': dlopen(/Applications/activitywatch/.Python, 10): image not found
```
@jwiese had [the same issue](https://github.com/ActivityWatch/activitywatch/issues/78#issuecomment-325065686)
Then [someone on reddit had the same issue](https://www.reddit.com/r/Entrepreneur/comments/76qnbq/tell_us_about_your_startupbuisness/dojs27u/).
I thought a bit about it, and the fix might be stupidly easy: `cp src/* dest/` doesn't copy files beginning with a dot.
|
1.0
|
Fixing packaging on macOS - There has been this annoying bug with the macOS builds:
```
Error loading Python lib '/Applications/activitywatch/.Python': dlopen(/Applications/activitywatch/.Python, 10): image not found
```
@jwiese had [the same issue](https://github.com/ActivityWatch/activitywatch/issues/78#issuecomment-325065686)
Then [someone on reddit had the same issue](https://www.reddit.com/r/Entrepreneur/comments/76qnbq/tell_us_about_your_startupbuisness/dojs27u/).
I thought a bit about it, and the fix might be stupidly easy: `cp src/* dest/` doesn't copy files beginning with a dot.
|
priority
|
fixing packaging on macos there has been this annoying bug with the macos builds error loading python lib applications activitywatch python dlopen applications activitywatch python image not found jwiese had then i thought a bit about it and the fix might be stupidly easy cp src dest doesn t copy files beginning with a dot
| 1
|
22,756
| 2,650,829,285
|
IssuesEvent
|
2015-03-16 05:22:05
|
Glavin001/atom-beautify
|
https://api.github.com/repos/Glavin001/atom-beautify
|
closed
|
Atom.Object.defineProperty.get is deprecated.
|
high priority
|
Bug report from Atom:
> atom.workspaceView is no longer available.
> In most cases you will not need the view. See the Workspace docs for
> alternatives: https://atom.io/docs/api/latest/Workspace.
> If you do need the view, please use `atom.views.getView(atom.workspace)`,
> which returns an HTMLElement.
> ```
> Atom.Object.defineProperty.get (c:\Users\xxx\AppData\Local\atom\app-0.174.0\resources\app\src\atom.js:55:11)
> LoadingView.module.exports.LoadingView.show (c:\Users\xxx\.atom\packages\atom-beautify\lib\loading-view.coffee:38:11)
> ```
|
1.0
|
Atom.Object.defineProperty.get is deprecated. - Bug report from Atom:
> atom.workspaceView is no longer available.
> In most cases you will not need the view. See the Workspace docs for
> alternatives: https://atom.io/docs/api/latest/Workspace.
> If you do need the view, please use `atom.views.getView(atom.workspace)`,
> which returns an HTMLElement.
> ```
> Atom.Object.defineProperty.get (c:\Users\xxx\AppData\Local\atom\app-0.174.0\resources\app\src\atom.js:55:11)
> LoadingView.module.exports.LoadingView.show (c:\Users\xxx\.atom\packages\atom-beautify\lib\loading-view.coffee:38:11)
> ```
|
priority
|
atom object defineproperty get is deprecated bug report from atom atom workspaceview is no longer available in most cases you will not need the view see the workspace docs for alternatives if you do need the view please use atom views getview atom workspace which returns an htmlelement atom object defineproperty get c users xxx appdata local atom app resources app src atom js loadingview module exports loadingview show c users xxx atom packages atom beautify lib loading view coffee
| 1
|
745,598
| 25,991,126,576
|
IssuesEvent
|
2022-12-20 07:38:05
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Fix runtime type APIs to support type reference types
|
Type/Task Priority/High Team/jBallerina Points/7
|
**Description:**
$subject
**Describe your task(s)**
Currently, the following runtime APIs are modified to provide `TypeReferenceType` as a return type.
```
typedesc.getDescribingType()
recordField.getFieldType()
arrayValue.getElementType()
arrayType.getElementType()
```
But, we still have several APIs that need to be fixed for this support.
```
getMemberTypes()
getRestType()
getRestFieldType()
getReturnType()
getConstrainedType()
getEffectiveType()
getParamValueType()
getCompletionType()
getKeyType()
getImmutableType()
```
etc.
**Related Issues (optional):**
https://github.com/ballerina-platform/ballerina-lang/issues/35270
|
1.0
|
Fix runtime type APIs to support type reference types - **Description:**
$subject
**Describe your task(s)**
Currently, the following runtime APIs are modified to provide `TypeReferenceType` as a return type.
```
typedesc.getDescribingType()
recordField.getFieldType()
arrayValue.getElementType()
arrayType.getElementType()
```
But, we still have several APIs that need to be fixed for this support.
```
getMemberTypes()
getRestType()
getRestFieldType()
getReturnType()
getConstrainedType()
getEffectiveType()
getParamValueType()
getCompletionType()
getKeyType()
getImmutableType()
```
etc.
**Related Issues (optional):**
https://github.com/ballerina-platform/ballerina-lang/issues/35270
|
priority
|
fix runtime type apis to support type reference types description subject describe your task s currently the following runtime apis are modified to provide typereferencetype as a return type typedesc getdescribingtype recordfield getfieldtype arrayvalue getelementtype arraytype getelementtype but we still have several apis that need to be fixed for this support getmembertypes getresttype getrestfieldtype getreturntype getconstrainedtype geteffectivetype getparamvaluetype getcompletiontype getkeytype getimmutabletype etc related issues optional
| 1
|
291,324
| 8,923,416,639
|
IssuesEvent
|
2019-01-21 15:36:26
|
OpenNebula/one
|
https://api.github.com/repos/OpenNebula/one
|
opened
|
Distributed port groups not working in unmanaged nic
|
Category: vCenter Priority: High Status: Accepted Type: Bug
|
**Description**
It is not possible to import vCenter templates that use virtual distributed port groups
**To Reproduce**
In vSphere, convert a virtual_machine attached to a vdportgroup into a template.
Import into OpenNebula.
**Expected behavior**
Importation ends without any problem.
**Details**
- Affected Component: vmm
- Hypervisor: vCenter
- Version: development
**Additional context**
Add any other context about the problem here.
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
|
1.0
|
Distributed port groups not working in unmanaged nic - **Description**
It is not possible to import vCenter templates that use virtual distributed port groups
**To Reproduce**
In vSphere, convert a virtual_machine attached to a vdportgroup into a template.
Import into OpenNebula.
**Expected behavior**
Importation ends without any problem.
**Details**
- Affected Component: vmm
- Hypervisor: vCenter
- Version: development
**Additional context**
Add any other context about the problem here.
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
|
priority
|
distributed port groups not working in unmanaged nic description it is not possible to import vcenter templates that use virtual distributed port groups to reproduce in vsphere convert to template a virtual machine attached to vdportgroup import into opennebula expected behavior importation ends without any problem details affected component vmm hypervisor vcenter version development additional context add any other context about the problem here progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches
| 1
|
807,313
| 29,994,706,978
|
IssuesEvent
|
2023-06-26 03:57:35
|
HackerN64/HackerSM64
|
https://api.github.com/repos/HackerN64/HackerSM64
|
closed
|
Get rid of all inline asm in the repo
|
bug HOW high priority monkaS
|
Inline asm seems to potentially cause instruction scheduling issues in GCC, so get rid of all of it. Pretty much every instance can be replaced with a GCC builtin to generate the same or similar codegen, so this isn't an issue. The `construct_float` in the mtxf_to_mtx function can remain since we know that function has correct codegen.
|
1.0
|
Get rid of all inline asm in the repo - Inline asm seems to potentially cause instruction scheduling issues in GCC, so get rid of all of it. Pretty much every instance can be replaced with a GCC builtin to generate the same or similar codegen, so this isn't an issue. The `construct_float` in the mtxf_to_mtx function can remain since we know that function has correct codegen.
|
priority
|
get rid of all inline asm in the repo inline asm seems to potentially cause instruction scheduling issues in gcc so get rid of all of it pretty much every instance can be replaced with a gcc builtin to generate the same or similar codegen so this isn t an issue the construct float in the mtxf to mtx function can remain since we know that function has correct codegen
| 1
|
509,646
| 14,741,023,629
|
IssuesEvent
|
2021-01-07 09:59:06
|
quickcase/node-toolkit
|
https://api.github.com/repos/quickcase/node-toolkit
|
closed
|
Case: createCase(httpClient)(caseType)(event)(payload)
|
priority:high type:feature
|
A function to create new cases for a given case type using the provided event and case data.
### Example
```javascript
import {createCase, httpClient} from '@quickcase/node-toolkit';
// A configured `httpClient` is required to create case
const client = httpClient('http://data-store:4452')(() => Promise.resolve('access-token'));
const aCase = await createCase(client)('CaseType1')('CreateEvent')({
data: {
field1: 'value1',
field2: 'value2',
},
summary: 'Short text',
description: 'Longer description',
});
/*
{
id: '1234123412341238',
state: 'Created',
data: {
field1: 'value1',
field2: 'value2',
},
...
}
*/
```
|
1.0
|
Case: createCase(httpClient)(caseType)(event)(payload) - A function to create new cases for a given case type using the provided event and case data.
### Example
```javascript
import {createCase, httpClient} from '@quickcase/node-toolkit';
// A configured `httpClient` is required to create case
const client = httpClient('http://data-store:4452')(() => Promise.resolve('access-token'));
const aCase = await createCase(client)('CaseType1')('CreateEvent')({
data: {
field1: 'value1',
field2: 'value2',
},
summary: 'Short text',
description: 'Longer description',
});
/*
{
id: '1234123412341238',
state: 'Created',
data: {
field1: 'value1',
field2: 'value2',
},
...
}
*/
```
|
priority
|
case createcase httpclient casetype event payload a function to create new cases for a given case type using the provided event and case data example javascript import createcase httpclient from quickcase node toolkit a configured httpclient is required to create case const client httpclient promise resolve access token const acase await createcase client createevent data summary short text description longer description id state created data
| 1
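The record above describes a curried client API, `createCase(client)(caseType)(event)(payload)`. The same call shape can be sketched in Python — all names below are hypothetical stand-ins, not the real `@quickcase/node-toolkit` API:

```python
# Curried case-creation helper mirroring the call shape
# createCase(client)(caseType)(event)(payload) from the record above.
# All names are illustrative only.
def create_case(client):
    def with_case_type(case_type):
        def with_event(event):
            def with_payload(payload):
                # Delegate to the injected client; a real client would POST
                # the event and case data to the data store.
                return client(case_type, event, payload)
            return with_payload
        return with_event
    return with_case_type

# Stand-in client that echoes what it was asked to create.
def fake_client(case_type, event, payload):
    return {
        "caseType": case_type,
        "event": event,
        "data": payload.get("data", {}),
        "state": "Created",
    }

result = create_case(fake_client)("CaseType1")("CreateEvent")({"data": {"field1": "value1"}})
```

Each partial application fixes one argument, so a half-applied helper like `create_case(client)("CaseType1")` can be reused across events.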
|
493,890
| 14,240,381,126
|
IssuesEvent
|
2020-11-18 21:36:56
|
bounswe/bounswe2020group9
|
https://api.github.com/repos/bounswe/bounswe2020group9
|
closed
|
Add bootstrap to frontend project
|
Priority - High Type: Enhancement
|
Bootstrap and react-bootstrap design libraries should be added to the repo. I will try to add them by tomorrow evening.
|
1.0
|
Add bootstrap to frontend project - Bootstrap and react-bootstrap design libraries should be added to the repo. I will try to add them by tomorrow evening.
|
priority
|
add bootstrap to frontend project bootstrap and react bootstrap design libraries should be added to the repo i will try to add by tomorrow evening
| 1
|
67,489
| 3,274,502,794
|
IssuesEvent
|
2015-10-26 11:15:48
|
OCHA-DAP/hdx-ckan
|
https://api.github.com/repos/OCHA-DAP/hdx-ckan
|
opened
|
Geopreview: Everything looks ok with this one in QGIS, but geopreview has a crazy extent
|
GeoPreview Priority-High
|
Extents of the file look fine in QGIS: xMin,yMin 61.0015,23.9622 : xMax,yMax 79.2384,37.0316
But from our api: BOX(62.8094901807575 **-345.137507041407**,77.24811554 37.0315789417147)
When enabling geopreview, the extent is global and the data doesn't display. I've updated the geopreview twice with the same result.
It would be good to know if this is a problem at our end or in the file (more likely), and to consider how we handle these failures.
|
1.0
|
Geopreview: Everything looks ok with this one in QGIS, but geopreview has a crazy extent - Extents of the file look fine in QGIS: xMin,yMin 61.0015,23.9622 : xMax,yMax 79.2384,37.0316
But from our api: BOX(62.8094901807575 **-345.137507041407**,77.24811554 37.0315789417147)
When enabling geopreview, the extent is global and the data doesn't display. I've updated the geopreview twice with the same result.
It would be good to know if this is a problem at our end or in the file (more likely), and to consider how we handle these failures.
|
priority
|
geopreview everything looks ok with this one in qgis but geopreview has a crazy extent extents of the file look fine in qgis xmin ymin xmax ymax but from our api box when enabling geopreview the extent is global and the data doesn t display i ve updated the geopreview twice with the same result it would be good to know if this is a problem at our end or in the file more likely and to consider how we handle these failures
| 1
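A cheap guard against the kind of bad extent reported above (a yMin of -345) is a WGS84 range sanity check before publishing the preview. A minimal sketch, using the coordinates quoted in the record:

```python
def extent_is_valid(x_min, y_min, x_max, y_max):
    # Longitude/latitude sanity check for a WGS84 bounding box:
    # ordered corners, and every coordinate inside the valid range.
    return (-180 <= x_min <= x_max <= 180) and (-90 <= y_min <= y_max <= 90)

# Extent as reported by QGIS (fine) vs. by the API (corrupt yMin).
qgis_extent = (61.0015, 23.9622, 79.2384, 37.0316)
api_extent = (62.8094901807575, -345.137507041407, 77.24811554, 37.0315789417147)

good = extent_is_valid(*qgis_extent)  # valid box
bad = extent_is_valid(*api_extent)    # rejected: yMin is far below -90
```

Rejecting (or clamping) such extents server-side would let the failure surface as a clear error instead of a silently empty global preview.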
|
441,496
| 12,718,883,245
|
IssuesEvent
|
2020-06-24 08:20:48
|
Wirlie/EnhancedBungeeList
|
https://api.github.com/repos/Wirlie/EnhancedBungeeList
|
closed
|
Configuration file corruption
|
bug high-priority
|
Someone has reported that the configuration file (Config.yml) has been corrupted, so I need to investigate the cause of the issue.
Reference:
https://www.spigotmc.org/threads/enhancedbungeelist.303250/page-3#post-3773965
|
1.0
|
Configuration file corruption - Someone has reported that the configuration file (Config.yml) has been corrupted, so I need to investigate the cause of the issue.
Reference:
https://www.spigotmc.org/threads/enhancedbungeelist.303250/page-3#post-3773965
|
priority
|
configuration file corruption someone have reported that the configuration file config yml have been corrupted so i need to investigate the cause of the issue reference
| 1
|
80,754
| 3,574,107,314
|
IssuesEvent
|
2016-01-27 10:17:21
|
restlet/restlet-framework-java
|
https://api.github.com/repos/restlet/restlet-framework-java
|
closed
|
Method value caching broken depending on class initialization order
|
Module: Restlet API Priority: high State: waiting for input Type: bug Version: 2.3
|
The method `org.restlet.data.Method.valueOf(String)` usually returns cached values for common HTTP methods (GET etc.). In a particular application test case, I noticed the caching was not working, i.e., `valueOf` was *always* returning new instances.
The reason is the way the caching depends on class initialization order (and a tiny bug in Method.java).
- When Engine.getEngine() is called before class `Method` is being initialized, the caching works (I guess this is the case often)
- When class `Method` is loaded/initialized before Engine.getEngine() is called, the caching breaks.
The root cause is the initialization block in [Method.java#L.240](https://github.com/restlet/restlet-framework-java/blob/master/modules/org.restlet/src/org/restlet/data/Method.java#L240). This block is *supposed* to be called after the class constants (GET, etc.) have been initialized and assigned. But it is missing the `static {}` keyword, so it is not actually a class initialization block, but an object initialization block. It is thus called when the first constant instance is initialized, before the constants are assigned. This means that the constants read by `HttpProtocolHelper.registerMethods` are still `null`, and no Methods get registered.
I will provide a unit test and a patch in a pull request.
|
1.0
|
Method value caching broken depending on class initialization order - The method `org.restlet.data.Method.valueOf(String)` usually returns cached values for common HTTP methods (GET etc.). In a particular application test case, I noticed the caching was not working, i.e., `valueOf` was *always* returning new instances.
The reason is the way the caching depends on class initialization order (and a tiny bug in Method.java).
- When Engine.getEngine() is called before class `Method` is being initialized, the caching works (I guess this is the case often)
- When class `Method` is loaded/initialized before Engine.getEngine() is called, the caching breaks.
The root cause is the initialization block in [Method.java#L.240](https://github.com/restlet/restlet-framework-java/blob/master/modules/org.restlet/src/org/restlet/data/Method.java#L240). This block is *supposed* to be called after the class constants (GET, etc.) have been initialized and assigned. But it is missing the `static {}` keyword, so it is not actually a class initialization block, but an object initialization block. It is thus called when the first constant instance is initialized, before the constants are assigned. This means that the constants read by `HttpProtocolHelper.registerMethods` are still `null`, and no Methods get registered.
I will provide a unit test and a patch in a pull request.
|
priority
|
method value caching broken depending on class initialization order the method org restlet data method valueof string usually returns cached values for common http methods get etc in a particular application test case i noticed the caching was not working i e valueof was always returning new instances the reason is the way the caching depends on class initialization order and a tiny bug in method java when engine getengine is called before class method is being initialized the caching works i guess this is the case often when class method is loaded initialized before engine getengine is called the caching breaks the root cause is the initialization block in this block is supposed to be called after the class constants get etc have been initialized and assigned but it is missing the static keyword so it is not actually a class initialization block but an object initialization block it is thus called when the first constant instance is initialized before the constants are assigned this means that the constants read by httpprotocolhelper registermethods are still null and no methods get registered i will provide a unit test and a patch in a pull request
| 1
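The ordering hazard in the record above is not Java-specific: any registration step that runs before the constants it reads are assigned will cache nothing. A minimal Python sketch of the same principle, loosely mirroring the `registerMethods` flow (names are illustrative only):

```python
# Sketch of the initialization-order bug: a registration step that runs
# before the class constants exist finds nothing to register, while the
# identical step run after assignment works.
REGISTRY = {}

def register_methods(methods):
    # Mirrors HttpProtocolHelper.registerMethods: caches whatever
    # constants exist at the time it is called.
    for m in methods:
        if m is not None:
            REGISTRY[m] = True

# "Instance initializer" timing: the constants are still unassigned
# (null in the Java case), so nothing is registered.
register_methods([None, None])
broken_cache_size = len(REGISTRY)

# "Static initializer" timing: constants assigned first, then registration.
GET, POST = "GET", "POST"
register_methods([GET, POST])
fixed_cache_size = len(REGISTRY)
```

This is exactly why the missing `static {}` keyword mattered: the block ran per-instance, before the `GET` etc. constants were assigned, so `valueOf` never found a cached entry.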
|
805,535
| 29,524,188,590
|
IssuesEvent
|
2023-06-05 06:14:30
|
ballerina-platform/ballerina-standard-library
|
https://api.github.com/repos/ballerina-platform/ballerina-standard-library
|
opened
|
Receive runtime error when query, path, and header parameters have subtype of integer
|
Priority/High Type/Bug module/http
|
**Description:**
We receive the below runtime error[2] while executing `bal run` for the below Ballerina sample, but it didn't give any compilation error while executing `bal build`
**Steps to reproduce:**
[1] Ballerina code
```ballerina
import ballerina/http;
listener http:Listener ep0 = new (9090, config = {host: "localhost"});
service / on ep0 {
resource function get correspondence/listByPatientId/[int:Signed32 patientId]/out(int:Signed32 locationId, @http:Header int:Signed32 header) returns string {
return "Hello World!";
}
}
```
[2] error
```
Running executable
error: invalid query parameter type 'lang.int:Signed32'
error: invalid path parameter type 'lang.int:Signed32'
error: invalid header parameter type 'lang.int:Signed32'
```
**Affected Versions:**
Ballerina Swanlake 2201.6.0
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers canβt assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers canβt assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
1.0
|
Receive runtime error when query, path, and header parameters have subtype of integer - **Description:**
We receive the below runtime error[2] while executing `bal run` for below Ballerina sample, But this didn't give any compilation error while executing `bal build`
**Steps to reproduce:**
[1] Ballerina code
```ballerina
import ballerina/http;
listener http:Listener ep0 = new (9090, config = {host: "localhost"});
service / on ep0 {
resource function get correspondence/listByPatientId/[int:Signed32 patientId]/out(int:Signed32 locationId, @http:Header int:Signed32 header) returns string {
return "Hello World!";
}
}
```
[2] error
```
Running executable
error: invalid query parameter type 'lang.int:Signed32'
error: invalid path parameter type 'lang.int:Signed32'
error: invalid header parameter type 'lang.int:Signed32'
```
**Affected Versions:**
Ballerina Swanlake 2201.6.0
**OS, DB, other environment details and versions:**
**Related Issues (optional):**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
**Suggested Labels (optional):**
<!-- Optional comma separated list of suggested labels. Non committers canβt assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels-->
**Suggested Assignees (optional):**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers canβt assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
|
priority
|
receive runtime error when query path and header parameters have subtype of integer description we receive the below runtime error while executing bal run for below ballerina sample but this didn t give any compilation error while executing bal build steps to reproduce ballerina code ballerina import ballerina http listener http listener new config host localhost service on resource function get correspondence listbypatientid out int locationid http header int header returns string return hello world error running executable error invalid query parameter type lang int error invalid path parameter type lang int error invalid header parameter type lang int affected versions ballerina swanlake os db other environment details and versions related issues optional suggested labels optional suggested assignees optional
| 1
|
176,506
| 6,560,366,101
|
IssuesEvent
|
2017-09-07 09:01:39
|
salesagility/SuiteCRM
|
https://api.github.com/repos/salesagility/SuiteCRM
|
closed
|
Possible opportunity for SQL injection attack in file modules/Emails/EmailUIAjax.php
|
bug Fix Proposed High Priority Resolved: Next Release
|
inside:
`case "getTemplateAttachments":`
line:
`$where = "parent_id='{$_REQUEST['parent_id']}'";`
All user inputs must be used after validation/sanitisation/escaping in SQL commands.
Refer to usage of DBManager::quote() function
|
1.0
|
Possible opportunity for SQL injection attack in file modules/Emails/EmailUIAjax.php - inside:
`case "getTemplateAttachments":`
line:
`$where = "parent_id='{$_REQUEST['parent_id']}'";`
All user inputs must be used after validation/sanitisation/escaping in SQL commands.
Refer to usage of DBManager::quote() function
|
priority
|
possible opportunity for sql injection attack in file modules emails emailuiajax php inside case gettemplateattachments line where parent id request all user inputs must be used after validation sanitisation escaping in sql commands refer to usage of dbmanager quote function
| 1
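The fix the record recommends (validating/escaping via `DBManager::quote()`) can also be achieved with bound parameters. SuiteCRM itself is PHP, so the following Python/sqlite3 sketch with a hypothetical schema only illustrates why interpolating `$_REQUEST['parent_id']` into the WHERE clause is dangerous:

```python
import sqlite3

# Demonstrates why building `parent_id='{$_REQUEST['parent_id']}'` by string
# interpolation is unsafe, and how a bound parameter defuses the same input.
# The table and rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attachments (id INTEGER, parent_id TEXT)")
conn.execute("INSERT INTO attachments VALUES (1, 'abc'), (2, 'secret')")

malicious = "abc' OR '1'='1"

# Vulnerable: the payload widens the WHERE clause and leaks every row.
leaked = conn.execute(
    f"SELECT id FROM attachments WHERE parent_id='{malicious}'"
).fetchall()

# Safe: the driver binds the value, so the payload is matched literally
# and returns nothing.
safe = conn.execute(
    "SELECT id FROM attachments WHERE parent_id=?", (malicious,)
).fetchall()
```

The same principle applies to the PHP side: pass the request value through the database layer's quoting/binding facilities instead of embedding it in the SQL string.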
|
170,139
| 6,424,876,188
|
IssuesEvent
|
2017-08-09 14:23:43
|
oneOCT3T/SARPbugs
|
https://api.github.com/repos/oneOCT3T/SARPbugs
|
closed
|
Calling CreateObjects to lift object in LS beach apartment
|
bug high priority
|
I don't have to explain it; a GMX can fix it, but we need a proper solution. Mainly Octet wrote that script, if I am not wrong.
Requesting Octet to check the issue and fix the bug, since it's not about the mapping objects but the lift moveable bug(?)
|
1.0
|
Calling CreateObjects to lift object in LS beach apartment - I don't have to explain it; a GMX can fix it, but we need a proper solution. Mainly Octet wrote that script, if I am not wrong.
Requesting Octet to check the issue and fix the bug, since it's not about the mapping objects but the lift moveable bug(?)
|
priority
|
calling createobjects to lift object in ls beach apartment i don t have to explain it a gmx can fix but we need a proper solution mainly octet did that script if i am not wrong request to octet for checking the issue and fix the bug since it s not for the mapping objects but lift moveable bug
| 1
|
214,565
| 7,274,444,352
|
IssuesEvent
|
2018-02-21 10:01:58
|
ballerina-lang/language-server
|
https://api.github.com/repos/ballerina-lang/language-server
|
closed
|
Go to variable definition support
|
Priority/High Type/Task
|
**Description:**
Currently, we only support go to definition for functions, structs and enums. We need to support variables, connectors and actions in future.
|
1.0
|
Go to variable definition support - **Description:**
Currently, we only support go to definition for functions, structs and enums. We need to support variables, connectors and actions in future.
|
priority
|
go to variable definition support description currently we only support go to definition for functions structs and enums we need to support variables connectors and actions in future
| 1
|
748,217
| 26,112,398,550
|
IssuesEvent
|
2022-12-27 22:22:40
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
Tserver Registration hazard - UUID can be blank
|
kind/bug area/docdb priority/high 2.12 Backport Required jira-originated 2.14 Backport Required 2.16 Backport Required
|
Jira Link: [DB-3832](https://yugabyte.atlassian.net/browse/DB-3832)
At tserver startup, it may encounter difficulty reading the instance file, and try to register with an empty UUID, as in the case below:
```1008 02:07:23.531638 26967 ts_manager.cc:140] Registered new tablet server { permanent_uuid: "" instance_seqno: 1665194843459852 start_time_us: 1665194843459852 } with Master, full list: [{22f43dd4c90a4f2bbb6ebd1516c25616, 0x000000001d472010 -> { permanent_uuid: 22f43dd4c90a4f2bbb6ebd1516c25616 registration: common { private_rpc_addresses { host: "10.88.16.80" port: 9100 } http_addresses { host: "10.88.16.80" ...```
The server should protect itself by verifying that a reasonable UUID is available, before attempting to register.
|
1.0
|
Tserver Registration hazard - UUID can be blank - Jira Link: [DB-3832](https://yugabyte.atlassian.net/browse/DB-3832)
At tserver startup, it may encounter difficulty reading the instance file, and try to register with an empty UUID, as in the case below:
```1008 02:07:23.531638 26967 ts_manager.cc:140] Registered new tablet server { permanent_uuid: "" instance_seqno: 1665194843459852 start_time_us: 1665194843459852 } with Master, full list: [{22f43dd4c90a4f2bbb6ebd1516c25616, 0x000000001d472010 -> { permanent_uuid: 22f43dd4c90a4f2bbb6ebd1516c25616 registration: common { private_rpc_addresses { host: "10.88.16.80" port: 9100 } http_addresses { host: "10.88.16.80" ...```
The server should protect itself by verifying that a reasonable UUID is available, before attempting to register.
|
priority
|
tserver registration hazard uuid can be blank jira link at tserver startup it may encounter difficulty reading the instance file and try to register with an empty uuid as in the case below ts manager cc registered new tablet server permanent uuid instance seqno start time us with master full list permanent uuid registration common private rpc addresses host port http addresses host the server should protect itself by verifying that a reasonable uuid is available before attempting to register
| 1
|
514,441
| 14,939,294,169
|
IssuesEvent
|
2021-01-25 16:46:32
|
diyabc/diyabcGUI
|
https://api.github.com/repos/diyabc/diyabcGUI
|
closed
|
Add specific abcranger output prefix to allow different runs in a single project
|
enhancement high priority
|
Specify name parameter or candidate models in prefix output for abcranger run.
abcranger option to do so:
```
-o, --output arg Prefix output (modelchoice_out or estimparam_out by
default)
```
Interest: run multiple parameter estimation or multiple model choice procedure in a single project
|
1.0
|
Add specific abcranger output prefix to allow different runs in a single project - Specify name parameter or candidate models in prefix output for abcranger run.
abcranger option to do so:
```
-o, --output arg Prefix output (modelchoice_out or estimparam_out by
default)
```
Interest: run multiple parameter estimation or multiple model choice procedure in a single project
|
priority
|
add specific abcranger output prefix to allow different runs in a single project specify name parameter or candidate models in prefix output for abcranger run abcranger option to do so o output arg prefix output modelchoice out or estimparam out by default interest run multiple parameter estimation or multiple model choice procedure in a single project
| 1
|
746,079
| 26,014,734,203
|
IssuesEvent
|
2022-12-21 07:15:13
|
akvo/akvo-rsr
|
https://api.github.com/repos/akvo/akvo-rsr
|
closed
|
Endpoint performance
|
Bug Type: Performance Priority: High python Epic stale
|
Endpoint performance has been a long-standing issue with some endpoints taking multiple seconds to complete.
This epic will track the most problematic endpoints and try to find either targeted or general improvements.
|
1.0
|
Endpoint performance - Endpoint performance has been a long-standing issue with some endpoints taking multiple seconds to complete.
This epic will track the most problematic endpoints and try to find either targeted or general improvements.
|
priority
|
endpoint performance endpoint performance has been a long standing issue with some endpoints taking multiple seconds to complete this epic will track the most problematic endpoints and try to find either targeted or general improvements
| 1
|
678,475
| 23,198,966,250
|
IssuesEvent
|
2022-08-01 19:20:08
|
azerothcore/azerothcore-wotlk
|
https://api.github.com/repos/azerothcore/azerothcore-wotlk
|
closed
|
Faction Change loses progress on Loremaster from Neutral quest
|
Confirmed ChromieCraft Generic Priority-High
|
### Current Behaviour
When completing neutral quest (e.g. Wastewander Justice from Gadgetzan), then switching faction, it will not count this quest towards your total goal
If you did too many of them, your character is locked out.
### Expected Blizzlike Behaviour
It should count towards total goal.
### Source
...!
### Steps to reproduce the problem
1. .q add 1690
2. .q complete 1690
3. .go quest ender 1690
4. check achievements for kalimdor loremaster see 1/xxx
5. faction change
6. check achievements for kalimdor, see 0/xxx
### Extra Notes


Alliance ONLY quest will count towards the Total quest done achievement after faction changing to horde, too which doesnt seem right.
### AC rev. hash/commit
07c043552aa833bd101ffc0eb2ed71594fdea773
### Operating system
W11x64
### Custom changes or Modules
none
|
1.0
|
Faction Change loses progress on Loremaster from Neutral quest - ### Current Behaviour
When completing neutral quest (e.g. Wastewander Justice from Gadgetzan), then switching faction, it will not count this quest towards your total goal
If you did too many of them, your character is locked out.
### Expected Blizzlike Behaviour
It should count towards total goal.
### Source
...!
### Steps to reproduce the problem
1. .q add 1690
2. .q complete 1690
3. .go quest ender 1690
4. check achievements for kalimdor loremaster see 1/xxx
5. faction change
6. check achievements for kalimdor, see 0/xxx
### Extra Notes


Alliance ONLY quest will count towards the Total quest done achievement after faction changing to horde, too which doesnt seem right.
### AC rev. hash/commit
07c043552aa833bd101ffc0eb2ed71594fdea773
### Operating system
W11x64
### Custom changes or Modules
none
|
priority
|
faction change loses progress on loremaster from neutral quest current behaviour when completing neutral quest e g wastewander justice from gadgetzan then switching faction it will not count this quest towards your total goal if you did too many of them your character is locked out expected blizzlike behaviour it should count towards total goal source steps to reproduce the problem q add q complete go quest ender check achievements for kalimdor loremaster see xxx faction change check achievements for kalimdor see xxx extra notes alliance only quest will count towards the total quest done achievement after faction changing to horde too which doesnt seem right ac rev hash commit operating system custom changes or modules none
| 1
|
375,439
| 11,104,509,285
|
IssuesEvent
|
2019-12-17 07:44:38
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
opened
|
Diagnostics vanish sometimes in VSCode plugin
|
Area/Tooling Component/LanguageServer Priority/High Type/Bug
|
**Description:**
Consider the following invalid code snippet and try the mentioned edit sequence to observe the behaviour.
import ballerina/http;
```
service serviceName on new http:Listener(8080) {
http:Client cc = new("");
}<cursor>
```
Step 1: Initially editor will show the diagnostics
Step 2: Add a new-line
Step 3: Remove the new line and then the diagnostics will vanish.
Step 4: Continue steps 2 and 3 to observe the issue
**Affected Versions:**
v1.0.0-beta at least
|
1.0
|
Diagnostics vanish sometimes in VSCode plugin - **Description:**
Consider the following invalid code snippet and try the mentioned edit sequence to observe the behaviour.
import ballerina/http;
```
service serviceName on new http:Listener(8080) {
http:Client cc = new("");
}<cursor>
```
Step 1: Initially editor will show the diagnostics
Step 2: Add a new-line
Step 3: Remove the new line and then the diagnostics will vanish.
Step 4: Continue steps 2 and 3 to observe the issue
**Affected Versions:**
v1.0.0-beta at least
|
priority
|
diagnostics vanish sometimes in vscode plugin description consider the following invalid code snippet and try the mentioned edit sequence to observe the behaviour import ballerina http service servicename on new http listener http client cc new step initially editor will show the diagnostics step add a new line step remove the new line and then the diagnostics will vanish step continue steps and to observe the issue affected versions beta at least
| 1
|
246,635
| 7,895,449,382
|
IssuesEvent
|
2018-06-29 03:21:26
|
Cloud-CV/EvalAI
|
https://api.github.com/repos/Cloud-CV/EvalAI
|
closed
|
Futile error message pops up while submission
|
bug easy_to_fix frontend priority-high
|
<img width="338" alt="screen shot 2018-06-29 at 1 11 14 am" src="https://user-images.githubusercontent.com/12206047/42070962-64af4354-7b76-11e8-8e80-dd681e469ec8.png">
This keeps popping up between consecutive submissions. Submissions are submitted and results are calculated successfully, so do not know why this pops up.
|
1.0
|
Futile error message pops up while submission -
<img width="338" alt="screen shot 2018-06-29 at 1 11 14 am" src="https://user-images.githubusercontent.com/12206047/42070962-64af4354-7b76-11e8-8e80-dd681e469ec8.png">
This keeps popping up between consecutive submissions. Submissions are submitted and results are calculated successfully, so do not know why this pops up.
|
priority
|
futile error message pops up while submission img width alt screen shot at am src this keeps popping up between consecutive submissions submissions are submitted and results are calculated successfully so do not know why this pops up
| 1
|
231,263
| 7,625,351,084
|
IssuesEvent
|
2018-05-03 21:08:00
|
borela/naomi
|
https://api.github.com/repos/borela/naomi
|
closed
|
Destructuring and renaming: Sublime freezing/infinite spinner
|
bug priority: high
|
There are certain circumstances which causes Sublime to freeze when using destructuring and renaming. This is easily reproducible and doesn't happen when switching to other syntax highlighters.
```
foo() {
}
baz (bar) {
}
```
After copying the above code, try to manually type the following code into `foo()` (copying and pasting seems to have no effect)
```js
const { a: b } = {};
```
Note that the syntax highlighting causes the editor to freeze up with a spinner. I tried to capture this below:

|
1.0
|
Destructuring and renaming: Sublime freezing/infinite spinner - There are certain circumstances which causes Sublime to freeze when using destructuring and renaming. This is easily reproducible and doesn't happen when switching to other syntax highlighters.
```
foo() {
}
baz (bar) {
}
```
After copying the above code, try to manually type the following code into `foo()` (copying and pasting seems to have no effect)
```js
const { a: b } = {};
```
Note that the syntax highlighting causes the editor to freeze up with a spinner. I tried to capture this below:

|
priority
|
destructuring and renaming sublime freezing infinite spinner there are certain circumstances which causes sublime to freeze when using destructuring and renaming this is easily reproducible and doesn t happen when switching to other syntax highlighters foo baz bar after copying the above code try to manually type the following code into foo copying and pasting seems to have no effect js const a b note that the syntax highlighting causes the editor to freeze up with a spinner i tried to capture this below
| 1
|
343,896
| 10,338,115,605
|
IssuesEvent
|
2019-09-03 16:10:16
|
Haivision/srt
|
https://api.github.com/repos/Haivision/srt
|
closed
|
srt-live-transmit - strange behaviour with jitter
|
Priority: High Type: Bug [core]
|
Hi there,
I recently tested srt-live-transmit under simulated network disturbances. I did this using traffic control/netem on Ubuntu 18.04.2/4.18.0-21, SRT Version 1.3.2 and noticed that just a little artificial jitter causes SRT to misbehave.
I was using a consumer grade internet connection with good performance: 40 mpbs upload, jitter avg. 4ms (0-34ms), ping avg. 25 ms (20-59ms), no packet loss. SRT runs fine on this line, adding roughly 5-10% overhead to encoded video. Measuring the simulated disturbances with nperf I did not see any irregularities. Values seem to rise in accordance to what I adjust. And still no packet loss.
BUT - introducing only 1ms of jitter (for instance: tc qdisc change dev eno2 root netem delay 50ms 1ms) will raise the outgoing datarate about 70%. In the logs I can see that about 4000 of 9000 packets have to be retransmitted. Going higher (delay 100ms jitter 30ms) the datarate reaches more than twice the original value and roughly every 2nd packet is retransmitted.
Adding packet loss is much more predictable and acting linearly. Also, delay without jitter does not do any harm.. so this seems to be a special case.
I guess this is not significant for "real world" network scenarios, nevertheless I'd like to have an explanation for that or hear if anybody else can confirm this issue.
SRT command looks like this: ./srt-live-transmit -loglevel:note -v -r:5000 -s:5000 "file://con" "srt://X.X.X.X:3196?inputbw=9000000&oheadbw=40&pkt_size=1316&transtype=live&latency=4000&mode=caller"
THANKS!
|
1.0
|
srt-live-transmit - strange behaviour with jitter - Hi there,
I recently tested srt-live-transmit under simulated network disturbances. I did this using traffic control/netem on Ubuntu 18.04.2/4.18.0-21, SRT Version 1.3.2 and noticed that just a little artificial jitter causes SRT to misbehave.
I was using a consumer grade internet connection with good performance: 40 mpbs upload, jitter avg. 4ms (0-34ms), ping avg. 25 ms (20-59ms), no packet loss. SRT runs fine on this line, adding roughly 5-10% overhead to encoded video. Measuring the simulated disturbances with nperf I did not see any irregularities. Values seem to rise in accordance to what I adjust. And still no packet loss.
BUT - introducing only 1ms of jitter (for instance: tc qdisc change dev eno2 root netem delay 50ms 1ms) will raise the outgoing datarate about 70%. In the logs I can see that about 4000 of 9000 packets have to be retransmitted. Going higher (delay 100ms jitter 30ms) the datarate reaches more than twice the original value and roughly every 2nd packet is retransmitted.
Adding packet loss is much more predictable and acting linearly. Also, delay without jitter does not do any harm.. so this seems to be a special case.
I guess this is not significant for "real world" network scenarios, nevertheless I'd like to have an explanation for that or hear if anybody else can confirm this issue.
SRT command looks like this: ./srt-live-transmit -loglevel:note -v -r:5000 -s:5000 "file://con" "srt://X.X.X.X:3196?inputbw=9000000&oheadbw=40&pkt_size=1316&transtype=live&latency=4000&mode=caller"
THANKS!
|
priority
|
srt live transmit strange behaviour with jitter hi there i recently tested srt live transmit under simulated network disturbances i did this using traffic control netem on ubuntu srt version and noticed that just a little artificial jitter causes srt to misbehave i was using a consumer grade internet connection with good performance mpbs upload jitter avg ping avg ms no packet loss srt runs fine on this line adding roughly overhead to encoded video measuring the simulated disturbances with nperf i did not see any irregularities values seem to rise in accordance to what i adjust and still no packet loss but introducing only of jitter for instance tc qdisc change dev root netem delay will raise the outgoing datarate about in the logs i can see that about of packets have to be retransmitted going higher delay jitter the datarate reaches more than twice the original value and roughly every packet is retransmitted adding packet loss is much more predictable and acting linearly also delay without jitter does not do any harm so this seems to be a special case i guess this is not significant for real world network scenarios nevertheless i d like to have an explanation for that or hear if anybody else can confirm this issue srt command looks like this srt live transmit loglevel note v r s file con srt x x x x inputbw oheadbw pkt size transtype live latency mode caller thanks
| 1
|
720,279
| 24,786,661,228
|
IssuesEvent
|
2022-10-24 10:21:09
|
AY2223S1-CS2103T-T10-3/tp
|
https://api.github.com/repos/AY2223S1-CS2103T-T10-3/tp
|
closed
|
As a forgetful student, I can sort the list of internship applications by dates
|
type.Story priority.High
|
... so that I can see and remember which interviews are upcoming to better prepare for them in case I have forgotten about them.
|
1.0
|
As a forgetful student, I can sort the list of internship applications by dates - ... so that I can see and remember which interviews are upcoming to better prepare for them in case I have forgotten about them.
|
priority
|
as a forgetful student i can sort the list of internship applications by dates so that i can see and remember which interviews are upcoming to better prepare for them in case i have forgotten about them
| 1
|
340,747
| 10,277,861,195
|
IssuesEvent
|
2019-08-25 09:11:18
|
noobaa/noobaa-core
|
https://api.github.com/repos/noobaa/noobaa-core
|
closed
|
get_cloud_services_stats - does not return empty stats for configured services which where not used yet
|
Comp-Statuses Priority 2 High Severity 3 Supportability
|
### Environment info
- Version: **VERSION**
- Deployment: **AZURE | AWS | GCLOUD | ESX | VBOX | DEV**
- Customer: **NAME**
### Actual behavior
1. Stats returned only for services which where accessed
### Expected behavior
1. Stats should be returned for all services which where configured
### Steps to reproduce
1.
### Screenshots or Logs or other output that would be helpful
|
1.0
|
get_cloud_services_stats - does not return empty stats for configured services which where not used yet - ### Environment info
- Version: **VERSION**
- Deployment: **AZURE | AWS | GCLOUD | ESX | VBOX | DEV**
- Customer: **NAME**
### Actual behavior
1. Stats returned only for services which where accessed
### Expected behavior
1. Stats should be returned for all services which where configured
### Steps to reproduce
1.
### Screenshots or Logs or other output that would be helpful
|
priority
|
get cloud services stats does not return empty stats for configured services which where not used yet environment info version version deployment azure aws gcloud esx vbox dev customer name actual behavior stats returned only for services which where accessed expected behavior stats should be returned for all services which where configured steps to reproduce screenshots or logs or other output that would be helpful
| 1
|
827,019
| 31,722,093,753
|
IssuesEvent
|
2023-09-10 14:23:12
|
EtalumaSupport/LumaViewPro
|
https://api.github.com/repos/EtalumaSupport/LumaViewPro
|
closed
|
Save folder does not show current folder...
|
High Priority
|
When you click on the Save Destination folder icon, it comes up with a brand new folder selection dialog box.
It would be better if this could show a fully expanded folder selection, showing where you currently are, so that you can verify that you are in the correct folder.
|
1.0
|
Save folder does not show current folder... - When you click on the Save Destination folder icon, it comes up with a brand new folder selection dialog box.
It would be better if this could show a fully expanded folder selection, showing where you currently are, so that you can verify that you are in the correct folder.
|
priority
|
save folder does not show current folder when you click on the save destination folder icon it comes up with a brand new folder selection dialog box it would be better if this could show a fully expanded folder selection showing where you currently are so that you can verify that you are in the correct folder
| 1
|
240,411
| 7,801,400,105
|
IssuesEvent
|
2018-06-09 20:41:33
|
hassio-addons/addon-ide
|
https://api.github.com/repos/hassio-addons/addon-ide
|
closed
|
Can't install or upgrade to 0.2.0
|
Accepted Closed: Done Priority: High Type: Feature
|
Yo, thanks for your work on this project! Love the Cloud9 implementation for HA! Super nice.
I've been running 0.1.0 without issue for a while, but I'm having no success either A) installing 0.2.0 on a clean instance of Hass.IO in my dev environment. Or B) upgrading from 0.1.0 to 0.2.0 in my other HASS.IO instances.
When I click either the "upgrade" or "install" buttons, it bounces for a few seconds and nothing happens.

Cheers!
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/59164791-can-t-install-or-upgrade-to-0-2-0?utm_campaign=plugin&utm_content=tracker%2F70775578&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F70775578&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
Can't install or upgrade to 0.2.0 - Yo, thanks for your work on this project! Love the Cloud9 implementation for HA! Super nice.
I've been running 0.1.0 without issue for a while, but I'm having no success either A) installing 0.2.0 on a clean instance of Hass.IO in my dev environment. Or B) upgrading from 0.1.0 to 0.2.0 in my other HASS.IO instances.
When I click either the "upgrade" or "install" buttons, it bounces for a few seconds and nothing happens.

Cheers!
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/59164791-can-t-install-or-upgrade-to-0-2-0?utm_campaign=plugin&utm_content=tracker%2F70775578&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F70775578&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
priority
|
can t install or upgrade to yo thanks for your work on this project love the implementation for ha super nice i ve been running without issue for a while but i m having no success either a installing on a clean instance of hass io in my dev environment or b upgrading from to in my other hass io instances when i click either the upgrade or install buttons it bounces for a few seconds and nothing happens cheers want to back this issue we accept bounties via
| 1
|
401,881
| 11,799,345,600
|
IssuesEvent
|
2020-03-18 15:45:49
|
AY1920S2-CS2103T-T10-3/main
|
https://api.github.com/repos/AY1920S2-CS2103T-T10-3/main
|
closed
|
Command : ExpCommands
|
priority.High type.epic
|
Continue refactoring (if needed) and rewrite code to reflect behaviour of the respective commands
|
1.0
|
Command : ExpCommands - Continue refactoring (if needed) and rewrite code to reflect behaviour of the respective commands
|
priority
|
command expcommands continue refactoring if needed and rewrite code to reflect behaviour of the respective commands
| 1
|
664,164
| 22,241,657,359
|
IssuesEvent
|
2022-06-09 06:11:52
|
altoxml/schema
|
https://api.github.com/repos/altoxml/schema
|
closed
|
Clarify implicit reading order
|
8 published high priority
|
There are currently no mentions of reading order anywhere in the standard and most people treat the sequence of elements as the order these elements should be read, e.g. the n-th `<String>` in a `<TextLine>` is the n-th word a human reader would read in that line.
Apparently this isn't evident to everyone out there. [These](https://twitter.com/tillgrallert/status/1369761658437054464) tweets document that Transkribus's ALTO output sorts `<String>` elements from left to right which causes an inversion for RTL text. We should probably clarify that `<TextLine>/<String>/<Glyph>` are to be ordered in a way that corresponds to the text flow.
|
1.0
|
Clarify implicit reading order - There are currently no mentions of reading order anywhere in the standard and most people treat the sequence of elements as the order these elements should be read, e.g. the n-th `<String>` in a `<TextLine>` is the n-th word a human reader would read in that line.
Apparently this isn't evident to everyone out there. [These](https://twitter.com/tillgrallert/status/1369761658437054464) tweets document that Transkribus's ALTO output sorts `<String>` elements from left to right which causes an inversion for RTL text. We should probably clarify that `<TextLine>/<String>/<Glyph>` are to be ordered in a way that corresponds to the text flow.
|
priority
|
clarify implicit reading order there are currently no mentions of reading order anywhere in the standard and most people treat the sequence of elements as the order these elements should be read e g the n th in a is the n th word a human reader would read in that line apparently this isn t evident to everyone out there tweets document that transkribus s alto output sorts elements from left to right which causes an inversion for rtl text we should probably clarify that are to be ordered in a way that corresponds to the text flow
| 1
|
712,943
| 24,511,975,796
|
IssuesEvent
|
2022-10-10 22:48:09
|
phetsims/scenery
|
https://api.github.com/repos/phetsims/scenery
|
closed
|
rename `textProperty` to `stringProperty`
|
priority:2-high status:ready-for-review
|
@kathy-phet, @samreid, and I realized today that `textProperty` is totally a misnomer because it holds a string. We recently changed to underlying translatable string properties to be called `*StringProperty`, so this fits with that work.
|
1.0
|
rename `textProperty` to `stringProperty` - @kathy-phet, @samreid, and I realized today that `textProperty` is totally a misnomer because it holds a string. We recently changed to underlying translatable string properties to be called `*StringProperty`, so this fits with that work.
|
priority
|
rename textproperty to stringproperty kathy phet samreid and i realized today that textproperty is totally a misnomer because it holds a string we recently changed to underlying translatable string properties to be called stringproperty so this fits with that work
| 1
|
783,441
| 27,530,901,171
|
IssuesEvent
|
2023-03-06 22:01:25
|
CCICB/CRUX
|
https://api.github.com/repos/CCICB/CRUX
|
opened
|
Build PHI identification instructions into the manual and data-import workflow
|
High Priority
|
- [ ] Add a popup when user tries to import a clinical data file that warns they should be careful to avoid importing data with PHI. Link-outs to other resources (the manual / a video describing the process of PHI identification) would be valuable
|
1.0
|
Build PHI identification instructions into the manual and data-import workflow - - [ ] Add a popup when user tries to import a clinical data file that warns they should be careful to avoid importing data with PHI. Link-outs to other resources (the manual / a video describing the process of PHI identification) would be valuable
|
priority
|
build phi identification instructions into the manual and data import workflow add a popup when user tries to import a clinical data file that warns they should be careful to avoid importing data with phi link outs to other resources the manual a video describing the process of phi identification would be valuable
| 1
|
667,458
| 22,474,016,845
|
IssuesEvent
|
2022-06-22 10:35:47
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
Same Url for post and page and page taking priority over post
|
[Type] Bug [Priority] High [Type] WP Core Bug
|
I found a bug in latest version of wordpress 6.0 ( if its really is ) If you create a page and a post with same title, their url remain same, and due to this at that url you can only access the page ( taking priority) and not able to see the post.
Step to regenerate the issue:-
1. Create a new page with a title for example test (remember to set post name as select option in settings => Permalink)
2. Create a new post with same title "test"
3. You will see both page and post is getting same url for example example.com/test/
4. Now open the same url in frontend without login and see example.com/test/ will open the page, now their is no way to see that post content, as the post have also the same url as the page is. ( I think it should add a number in the url if the post is with same title )
|
1.0
|
Same Url for post and page and page taking priority over post - I found a bug in latest version of wordpress 6.0 ( if its really is ) If you create a page and a post with same title, their url remain same, and due to this at that url you can only access the page ( taking priority) and not able to see the post.
Step to regenerate the issue:-
1. Create a new page with a title for example test (remember to set post name as select option in settings => Permalink)
2. Create a new post with same title "test"
3. You will see both page and post is getting same url for example example.com/test/
4. Now open the same url in frontend without login and see example.com/test/ will open the page, now their is no way to see that post content, as the post have also the same url as the page is. ( I think it should add a number in the url if the post is with same title )
|
priority
|
same url for post and page and page taking priority over post i found a bug in latest version of wordpress if its really is if you create a page and a post with same title their url remain same and due to this at that url you can only access the page taking priority and not able to see the post step to regenerate the issue create a new page with a title for example test remember to set post name as select option in settings permalink create a new post with same title test you will see both page and post is getting same url for example example com test now open the same url in frontend without login and see example com test will open the page now their is no way to see that post content as the post have also the same url as the page is i think it should add a number in the url if the post is with same title
| 1
|
363,524
| 10,742,030,483
|
IssuesEvent
|
2019-10-29 21:32:05
|
CredentialEngine/CredentialRegistry
|
https://api.github.com/repos/CredentialEngine/CredentialRegistry
|
opened
|
Registry configuration files
|
High Priority
|
We need documentation related to configuration files for all communities, current and future:
- File name
- Description
- The nature of the data
- Example data
We need this for:
- Configuration file for communities
- Configuration file for JSON-LD contexts
- Configuration file for top-level classes
- Any other configuration files
We also need any information/configurations that are specific to individual communities (e.g. the Navy community).
We need to know if files that retrieve data cache/store that data, and for what length of time (e.g. does the system that retrieves JSON-LD context data do so on-demand, daily, weekly, etc.?)
We need to be able to generate all of the information to fill out these configuration files so that we know what you need and are able to provide it without a manual process.
|
1.0
|
Registry configuration files - We need documentation related to configuration files for all communities, current and future:
- File name
- Description
- The nature of the data
- Example data
We need this for:
- Configuration file for communities
- Configuration file for JSON-LD contexts
- Configuration file for top-level classes
- Any other configuration files
We also need any information/configurations that are specific to individual communities (e.g. the Navy community).
We need to know if files that retrieve data cache/store that data, and for what length of time (e.g. does the system that retrieves JSON-LD context data do so on-demand, daily, weekly, etc.?)
We need to be able to generate all of the information to fill out these configuration files so that we know what you need and are able to provide it without a manual process.
|
priority
|
registry configuration files we need documentation related to configuration files for all communities current and future file name description the nature of the data example data we need this for configuration file for communities configuration file for json ld contexts configuration file for top level classes any other configuration files we also need any information configurations that are specific to individual communities e g the navy community we need to know if files that retrieve data cache store that data and for what length of time e g does the system that retrieves json ld context data do so on demand daily weekly etc we need to be able to generate all of the information to fill out these configuration files so that we know what you need and are able to provide it without a manual process
| 1
|
48,198
| 2,994,574,011
|
IssuesEvent
|
2015-07-22 12:41:05
|
learnweb/moodle-mod_ratingallocate
|
https://api.github.com/repos/learnweb/moodle-mod_ratingallocate
|
closed
|
Save Rating does not work in yes_no_maybe strategy
|
bug Priority: High
|
Saving does not take any effekt.
the reason may be somewhere here:
i get false in mod/ratingallocate/locallib.php:248
else if ($mform->is_submitted() && $mform->is_validated() && $data = $mform->get_data() )
so the save save_ratings_to_db() will not be reached.
If i try the just the yes/no - strategy... everythings works fine.
Someone any ideas about this issue?
Thanks
Stefan
|
1.0
|
Save Rating does not work in yes_no_maybe strategy - Saving does not take any effekt.
the reason may be somewhere here:
i get false in mod/ratingallocate/locallib.php:248
else if ($mform->is_submitted() && $mform->is_validated() && $data = $mform->get_data() )
so the save save_ratings_to_db() will not be reached.
If i try the just the yes/no - strategy... everythings works fine.
Someone any ideas about this issue?
Thanks
Stefan
|
priority
|
save rating does not work in yes no maybe strategy saving does not take any effekt the reason may be somewhere here i get false in mod ratingallocate locallib php else if mform is submitted mform is validated data mform get data so the save save ratings to db will not be reached if i try the just the yes no strategy everythings works fine someone any ideas about this issue thanks stefan
| 1
|
354,614
| 10,570,208,129
|
IssuesEvent
|
2019-10-07 01:10:40
|
AY1920S1-CS2103T-W11-4/main
|
https://api.github.com/repos/AY1920S1-CS2103T-W11-4/main
|
opened
|
As a user I want to see how many calories I have left in todayβs budget
|
priority.High type.Story
|
Know what I can eat later, and stay in budget.
|
1.0
|
As a user I want to see how many calories I have left in todayβs budget - Know what I can eat later, and stay in budget.
|
priority
|
as a user i want to see how many calories i have left in todayβs budget know what i can eat later and stay in budget
| 1
|
283,251
| 8,718,296,720
|
IssuesEvent
|
2018-12-07 19:57:10
|
conveyal/datatools-ui
|
https://api.github.com/repos/conveyal/datatools-ui
|
closed
|
Defining schedule exception with SWAP type provides inconsistent results
|
bug high-priority imported initial-fix
|
<a href="https://github.com/landonreed"><img src="https://avatars2.githubusercontent.com/u/2370911?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [landonreed](https://github.com/landonreed)**
_Thursday Apr 26, 2018 at 17:13 GMT_
_Originally opened as https://github.com/catalogueglobal/datatools-ui/issues/74_
----
|
1.0
|
Defining schedule exception with SWAP type provides inconsistent results - <a href="https://github.com/landonreed"><img src="https://avatars2.githubusercontent.com/u/2370911?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [landonreed](https://github.com/landonreed)**
_Thursday Apr 26, 2018 at 17:13 GMT_
_Originally opened as https://github.com/catalogueglobal/datatools-ui/issues/74_
----
|
priority
|
defining schedule exception with swap type provides inconsistent results issue by thursday apr at gmt originally opened as
| 1
|
484,382
| 13,938,984,484
|
IssuesEvent
|
2020-10-22 15:54:40
|
inverse-inc/packetfence
|
https://api.github.com/repos/inverse-inc/packetfence
|
opened
|
RADIUS filter attribute search glitch on maintenance/10.0 (most likely applies to devel too)
|
Priority: High Type: Bug
|
**Describe the bug**
When configuring a RADIUS filter, if you search for an attribute it will work but if you focus out then into the field then you lose the content of the search and are back with the full unfiltered list
**To Reproduce**
1. Create a new RADIUS filter

2. Focus on the field containing Idle-Timeout
3. Search for Timeout
4. Focus out and then back into the field
5. Abracadabra your search has just disappeared :mage_man:
**Expected behavior**
Search should stay there as long as the field contains my search
|
1.0
|
RADIUS filter attribute search glitch on maintenance/10.0 (most likely applies to devel too) - **Describe the bug**
When configuring a RADIUS filter, if you search for an attribute it will work but if you focus out then into the field then you lose the content of the search and are back with the full unfiltered list
**To Reproduce**
1. Create a new RADIUS filter

2. Focus on the field containing Idle-Timeout
3. Search for Timeout
4. Focus out and then back into the field
5. Abracadabra your search has just disappeared :mage_man:
**Expected behavior**
Search should stay there as long as the field contains my search
|
priority
|
radius filter attribute search glitch on maintenance most likely applies to devel too describe the bug when configuring a radius filter if you search for an attribute it will work but if you focus out then into the field then you lose the content of the search and are back with the full unfiltered list to reproduce create a new radius filter focus on the field containing idle timeout search for timeout focus out and then back into the field abracadabra your search has just disappeared mage man expected behavior search should stay there as long as the field contains my search
| 1
|
67,854
| 3,282,102,873
|
IssuesEvent
|
2015-10-28 03:03:15
|
shuSteppenwolf/creditanalytics
|
https://api.github.com/repos/shuSteppenwolf/creditanalytics
|
closed
|
Additional bond fields
|
auto-migrated Priority-High Type-Enhancement
|
```
- Next exercise information (if available)
- Previous/current/next coupon info (coupon dates, coupon amounts)
```
Original issue reported on code.google.com by `lakshmi7...@gmail.com` on 26 Jan 2012 at 4:48
|
1.0
|
Additional bond fields - ```
- Next exercise information (if available)
- Previous/current/next coupon info (coupon dates, coupon amounts)
```
Original issue reported on code.google.com by `lakshmi7...@gmail.com` on 26 Jan 2012 at 4:48
|
priority
|
additional bond fields next exercise information if available previous current next coupon info coupon dates coupon amounts original issue reported on code google com by gmail com on jan at
| 1
|
179,361
| 6,624,418,915
|
IssuesEvent
|
2017-09-22 11:34:05
|
ballerinalang/composer
|
https://api.github.com/repos/ballerinalang/composer
|
opened
|
Try it feature should be available only for services
|
Priority/High Severity/Major Type/Improvement
|
Pack - 809e1059ea629ab06d43ef1442a7a05bc5b272d7
Try it feature should be available only for services
|
1.0
|
Try it feature should be available only for services - Pack - 809e1059ea629ab06d43ef1442a7a05bc5b272d7
Try it feature should be available only for services
|
priority
|
try it feature should be available only for services pack try it feature should be available only for services
| 1
|
812,789
| 30,352,244,075
|
IssuesEvent
|
2023-07-11 19:57:11
|
WFP-VAM/prism-app
|
https://api.github.com/repos/WFP-VAM/prism-app
|
opened
|
[Bug]: Tooltip rendering has become sluggish and frequently fails altogether
|
bug priority:high triage
|
### What happened?
I'm trying to deploy the latest versions from master for Cambodia and RBD. I have noticed that the tooltip has become very slow before a layer is loaded, and fails altogether once a layer is loaded.
Before a layer loads, I should be able to click on the map and see the admin name. This sometimes doesn't work unless I reload. After reloading, it's sluggish.
After loading a layer, if I click on an admin area, the tooltip takes a couple of seconds to load. This is visible in RBD, CH data layers.
In the case of Cambodia, the Kobo layer tooltip doesn't load at all. I noticed if I activate the IDPoor layer, then both layers show the tooltip.
### Which country / deployment are you running?
RBD and Cambodia on master. I assume this applies for all countries though
### Add a screenshot (if relevant)
Cambodia - tooltip doesn't work for Kobo layer
https://github.com/WFP-VAM/prism-app/assets/3343536/e1c2c41e-6fd2-412e-8f80-489fdb4f5eee
RBD - tooltip is slow with no layer activated, then fails when a CH layer is active
https://github.com/WFP-VAM/prism-app/assets/3343536/a60dc2b2-ab44-4b9e-b446-ac68837640f1
|
1.0
|
[Bug]: Tooltip rendering has become sluggish and frequently fails altogether - ### What happened?
I'm trying to deploy the latest versions from master for Cambodia and RBD. I have noticed that the tooltip has become very slow before a layer is loaded, and fails altogether once a layer is loaded.
Before a layer loads, I should be able to click on the map and see the admin name. This sometimes doesn't work unless I reload. After reloading, it's sluggish.
After loading a layer, if I click on an admin area, the tooltip takes a couple of seconds to load. This is visible in RBD, CH data layers.
In the case of Cambodia, the Kobo layer tooltip doesn't load at all. I noticed if I activate the IDPoor layer, then both layers show the tooltip.
### Which country / deployment are you running?
RBD and Cambodia on master. I assume this applies for all countries though
### Add a screenshot (if relevant)
Cambodia - tooltip doesn't work for Kobo layer
https://github.com/WFP-VAM/prism-app/assets/3343536/e1c2c41e-6fd2-412e-8f80-489fdb4f5eee
RBD - tooltip is slow with no layer activated, then fails when a CH layer is active
https://github.com/WFP-VAM/prism-app/assets/3343536/a60dc2b2-ab44-4b9e-b446-ac68837640f1
|
priority
|
tooltip rendering has become sluggish and frequently fails altogether what happened i m trying to deploy the latest versions from master for cambodia and rbd i have noticed that the tooltip has become very slow before a layer is loaded and fails altogether once a layer is loaded before a layer loads i should be able to click on the map and see the admin name this sometimes doesn t work unless i reload after reloading it s sluggish after loading a layer if i click on an admin area the tooltip takes a couple of seconds to load this is visible in rbd ch data layers in the case of cambodia the kobo layer tooltip doesn t load at all i noticed if i activate the idpoor layer then both layers show the tooltip which country deployment are you running rbd and cambodia on master i assume this applies for all countries though add a screenshot if relevant cambodia tooltip doesn t work for kobo layer rbd tooltip is slow with no layer activated then fails when a ch layer is active
| 1
|
825,275
| 31,301,746,609
|
IssuesEvent
|
2023-08-23 00:33:47
|
SurajPratap10/Imagine_AI
|
https://api.github.com/repos/SurajPratap10/Imagine_AI
|
closed
|
Feature: Adding new SECTIONS to the Website
|
enhancement gssoc23 High Priority π₯ β goal: addition π¨π»βπ goal: major level3
|
**Is your feature request related to a problem? Please describe.**
- Add some new sections to the website
- Make the accessible through the Navbar
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
*You are free to drop your ideas on what `sections` you want to add in the website, just after adding them show me some demo pics or videos*.
|
1.0
|
Feature: Adding new SECTIONS to the Website - **Is your feature request related to a problem? Please describe.**
- Add some new sections to the website
- Make the accessible through the Navbar
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
*You are free to drop your ideas on what `sections` you want to add in the website, just after adding them show me some demo pics or videos*.
|
priority
|
feature adding new sections to the website is your feature request related to a problem please describe add some new sections to the website make the accessible through the navbar describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context you are free to drop your ideas on what sections you want to add in the website just after adding them show me some demo pics or videos
| 1
|
160,218
| 6,084,961,649
|
IssuesEvent
|
2017-06-17 09:50:24
|
standardnotes/android
|
https://api.github.com/repos/standardnotes/android
|
closed
|
Deleting note locally - no action
|
bug High priority
|
Report from user, feel free to close if can't replicate:
I want to report a bug in the Android app v1.3.0: If I try to delete an existing note, nothing happens. I am using the app locally, with no sync account. I am using a Huawei honor 8 smartphone with official ROM and Android Nougat.
|
1.0
|
Deleting note locally - no action - Report from user, feel free to close if can't replicate:
I want to report a bug in the Android app v1.3.0: If I try to delete an existing note, nothing happens. I am using the app locally, with no sync account. I am using a Huawei honor 8 smartphone with official ROM and Android Nougat.
|
priority
|
deleting note locally no action report from user feel free to close if can t replicate i want to report a bug in the android app if i try to delete an existing note nothing happens i am using the app locally with no sync account i am using a huawei honor smartphone with official rom and android nougat
| 1
|
711,044
| 24,448,666,247
|
IssuesEvent
|
2022-10-06 20:23:32
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
Add Kuwait Governates
|
Priority-High (Needed for work) Function-Locality/Event/Georeferencing
|
Currently there are no subdivisions in the Kuwait higher geography. These are from Wikipedia.
[Kuwait Governates to add to higher geography 09 Sep 2022.csv](https://github.com/ArctosDB/arctos/files/9521871/Kuwait.Governates.to.add.to.higher.geography.09.Sep.2022.csv)
|
1.0
|
Add Kuwait Governates - Currently there are no subdivisions in the Kuwait higher geography. These are from Wikipedia.
[Kuwait Governates to add to higher geography 09 Sep 2022.csv](https://github.com/ArctosDB/arctos/files/9521871/Kuwait.Governates.to.add.to.higher.geography.09.Sep.2022.csv)
|
priority
|
add kuwait governates currently there are no subdivisions in the kuwait higher geography these are from wikipedia
| 1
|
106,893
| 4,287,362,550
|
IssuesEvent
|
2016-07-16 18:25:16
|
neuropoly/spinalcordtoolbox
|
https://api.github.com/repos/neuropoly/spinalcordtoolbox
|
closed
|
ValueError: setting an array element with a sequence.
|
bug priority: high sct_label_vertebrae
|
data: **sct_testing/small/20151025_emil**
~~~
isct_test_function -f sct_label_vertebrae -d /Volumes/data_raid/data_shared/sct_testing/small/ -p \"-i t2/t2.nii.gz -s t2/t2_seg_manual.nii.gz -o t2_seg_labeled.nii.gz -r 0\"
SpinalCord Toolbox version 2.2_dev
SCT installed in /Users/jcohen/code/spinalcordtoolbox
PYTHON /Users/jcohen/code/spinalcordtoolbox/python/bin/python /Users/jcohen/code/spinalcordtoolbox/python /Users/jcohen/code/spinalcordtoolbox
running /Users/jcohen/code/spinalcordtoolbox/scripts/isct_test_function.py -f sct_label_vertebrae -d /Volumes/data_raid/data_shared/sct_testing/small/ -p "-i t2/t2.nii.gz -s t2/t2_seg_manual.nii.gz -o t2_seg_labeled.nii.gz -r 0"
Check folder existence...
Testing... (started on: 2016-04-05 13:41:13)
running /Users/jcohen/code/spinalcordtoolbox/scripts/sct_label_vertebrae.py -laplacian 0 -o t2_seg_labeled.nii.gz -i /Volumes/data_raid/data_shared/sct_testing/small/20151025_emil/t2/t2.nii.gz -v 1 -s /Volumes/data_raid/data_shared/sct_testing/small/20151025_emil/t2/t2_seg_manual.nii.gz -r 0 -denoise 0 -ofolder sct_label_vertebrae_20151025_emil_160405134113_683918/ -initz 7,2
Check folder existence...
Create temporary folder...
Create temporary folder...
mkdir tmp.160405134114_663672/
Copying input data to tmp folder...
sct_convert -i /Volumes/data_raid/data_shared/sct_testing/small/20151025_emil/t2/t2.nii.gz -o tmp.160405134114_663672/data.nii
sct_convert -i /Volumes/data_raid/data_shared/sct_testing/small/20151025_emil/t2/t2_seg_manual.nii.gz -o tmp.160405134114_663672/segmentation.nii.gz
Create label to identify disc...
Straighten spinal cord...
sct_straighten_spinalcord -i data.nii -s segmentation.nii.gz -r 0 -qc 0
Apply straightening to segmentation...
sct_apply_transfo -i segmentation.nii.gz -d data_straight.nii -w warp_curve2straight.nii.gz -o segmentation_straight.nii.gz -x linear
sct_maths -i segmentation_straight.nii.gz -thr 0.5 -o segmentation_straight.nii.gz
Dilate z-label and apply straightening...
sct_apply_transfo -i labelz.nii.gz -d data_straight.nii -w warp_curve2straight.nii.gz -o labelz_straight.nii.gz -x nn
Get z and disc values from straight label...
.. [8, 2]
Detect intervertebral discs...
.. local adjustment to center disc
.... WARNING: Pattern is missing data (because close to the edge). Using initial current_z provided.
Current disc: 2 (z=8). Direction: superior
.. approximate distance to next disc: 47 mm
.. WARNING: Data contained zero. We are probably at the edge.
Traceback (most recent call last):
File "/Users/jcohen/code/spinalcordtoolbox/scripts/sct_label_vertebrae.py", line 679, in <module>
main()
File "/Users/jcohen/code/spinalcordtoolbox/scripts/sct_label_vertebrae.py", line 193, in main
vertebral_detection('data_straight.nii', 'segmentation_straight.nii.gz', init_disc, verbose)
File "/Users/jcohen/code/spinalcordtoolbox/scripts/sct_label_vertebrae.py", line 391, in vertebral_detection
ind_peak[0] = np.where(I_corr_adj == I_corr_adj.max())[0] # index of max along z
ValueError: setting an array element with a sequence.
/Users/jcohen/code/spinalcordtoolbox/scripts/sct_utils.py, line 100
~~~
|
1.0
|
ValueError: setting an array element with a sequence. - data: **sct_testing/small/20151025_emil**
~~~
isct_test_function -f sct_label_vertebrae -d /Volumes/data_raid/data_shared/sct_testing/small/ -p \"-i t2/t2.nii.gz -s t2/t2_seg_manual.nii.gz -o t2_seg_labeled.nii.gz -r 0\"
SpinalCord Toolbox version 2.2_dev
SCT installed in /Users/jcohen/code/spinalcordtoolbox
PYTHON /Users/jcohen/code/spinalcordtoolbox/python/bin/python /Users/jcohen/code/spinalcordtoolbox/python /Users/jcohen/code/spinalcordtoolbox
running /Users/jcohen/code/spinalcordtoolbox/scripts/isct_test_function.py -f sct_label_vertebrae -d /Volumes/data_raid/data_shared/sct_testing/small/ -p "-i t2/t2.nii.gz -s t2/t2_seg_manual.nii.gz -o t2_seg_labeled.nii.gz -r 0"
Check folder existence...
Testing... (started on: 2016-04-05 13:41:13)
running /Users/jcohen/code/spinalcordtoolbox/scripts/sct_label_vertebrae.py -laplacian 0 -o t2_seg_labeled.nii.gz -i /Volumes/data_raid/data_shared/sct_testing/small/20151025_emil/t2/t2.nii.gz -v 1 -s /Volumes/data_raid/data_shared/sct_testing/small/20151025_emil/t2/t2_seg_manual.nii.gz -r 0 -denoise 0 -ofolder sct_label_vertebrae_20151025_emil_160405134113_683918/ -initz 7,2
Check folder existence...
Create temporary folder...
Create temporary folder...
mkdir tmp.160405134114_663672/
Copying input data to tmp folder...
sct_convert -i /Volumes/data_raid/data_shared/sct_testing/small/20151025_emil/t2/t2.nii.gz -o tmp.160405134114_663672/data.nii
sct_convert -i /Volumes/data_raid/data_shared/sct_testing/small/20151025_emil/t2/t2_seg_manual.nii.gz -o tmp.160405134114_663672/segmentation.nii.gz
Create label to identify disc...
Straighten spinal cord...
sct_straighten_spinalcord -i data.nii -s segmentation.nii.gz -r 0 -qc 0
Apply straightening to segmentation...
sct_apply_transfo -i segmentation.nii.gz -d data_straight.nii -w warp_curve2straight.nii.gz -o segmentation_straight.nii.gz -x linear
sct_maths -i segmentation_straight.nii.gz -thr 0.5 -o segmentation_straight.nii.gz
Dilate z-label and apply straightening...
sct_apply_transfo -i labelz.nii.gz -d data_straight.nii -w warp_curve2straight.nii.gz -o labelz_straight.nii.gz -x nn
Get z and disc values from straight label...
.. [8, 2]
Detect intervertebral discs...
.. local adjustment to center disc
.... WARNING: Pattern is missing data (because close to the edge). Using initial current_z provided.
Current disc: 2 (z=8). Direction: superior
.. approximate distance to next disc: 47 mm
.. WARNING: Data contained zero. We are probably at the edge.
Traceback (most recent call last):
File "/Users/jcohen/code/spinalcordtoolbox/scripts/sct_label_vertebrae.py", line 679, in <module>
main()
File "/Users/jcohen/code/spinalcordtoolbox/scripts/sct_label_vertebrae.py", line 193, in main
vertebral_detection('data_straight.nii', 'segmentation_straight.nii.gz', init_disc, verbose)
File "/Users/jcohen/code/spinalcordtoolbox/scripts/sct_label_vertebrae.py", line 391, in vertebral_detection
ind_peak[0] = np.where(I_corr_adj == I_corr_adj.max())[0] # index of max along z
ValueError: setting an array element with a sequence.
/Users/jcohen/code/spinalcordtoolbox/scripts/sct_utils.py, line 100
~~~
|
priority
|
valueerror setting an array element with a sequence data sct testing small emil isct test function f sct label vertebrae d volumes data raid data shared sct testing small p i nii gz s seg manual nii gz o seg labeled nii gz r spinalcord toolbox version dev sct installed in users jcohen code spinalcordtoolbox python users jcohen code spinalcordtoolbox python bin python users jcohen code spinalcordtoolbox python users jcohen code spinalcordtoolbox running users jcohen code spinalcordtoolbox scripts isct test function py f sct label vertebrae d volumes data raid data shared sct testing small p i nii gz s seg manual nii gz o seg labeled nii gz r check folder existence testing started on running users jcohen code spinalcordtoolbox scripts sct label vertebrae py laplacian o seg labeled nii gz i volumes data raid data shared sct testing small emil nii gz v s volumes data raid data shared sct testing small emil seg manual nii gz r denoise ofolder sct label vertebrae emil initz check folder existence create temporary folder create temporary folder mkdir tmp copying input data to tmp folder sct convert i volumes data raid data shared sct testing small emil nii gz o tmp data nii sct convert i volumes data raid data shared sct testing small emil seg manual nii gz o tmp segmentation nii gz create label to identify disc straighten spinal cord sct straighten spinalcord i data nii s segmentation nii gz r qc apply straightening to segmentation sct apply transfo i segmentation nii gz d data straight nii w warp nii gz o segmentation straight nii gz x linear sct maths i segmentation straight nii gz thr o segmentation straight nii gz dilate z label and apply straightening sct apply transfo i labelz nii gz d data straight nii w warp nii gz o labelz straight nii gz x nn get z and disc values from straight label detect intervertebral discs local adjustment to center disc warning pattern is missing data because close to the edge using initial current z provided current disc z direction superior approximate distance to next disc mm warning data contained zero we are probably at the edge traceback most recent call last file users jcohen code spinalcordtoolbox scripts sct label vertebrae py line in main file users jcohen code spinalcordtoolbox scripts sct label vertebrae py line in main vertebral detection data straight nii segmentation straight nii gz init disc verbose file users jcohen code spinalcordtoolbox scripts sct label vertebrae py line in vertebral detection ind peak np where i corr adj i corr adj max index of max along z valueerror setting an array element with a sequence users jcohen code spinalcordtoolbox scripts sct utils py line
| 1
|
787,618
| 27,724,688,488
|
IssuesEvent
|
2023-03-15 00:35:29
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
Add remarks to edit other identifiers in catalog record page
|
Priority-High (Needed for work) Function-ObjectRecord Bug Display/Interface User experience
|

does not allow me to add a remark - it should.
But wait! It is just missing a header - Delete needs moved over to the right and Remark should be added above the remark column.
|
1.0
|
Add remarks to edit other identifiers in catalog record page - 
does not allow me to add a remark - it should.
But wait! It is just missing a header - Delete needs moved over to the right and Remark should be added above the remark column.
|
priority
|
add remarks to edit other identifiers in catalog record page does not allow me to add a remark it should but wait it is just missing a header delete needs moved over to the right and remark should be added above the remark column
| 1
|
333,592
| 10,128,793,929
|
IssuesEvent
|
2019-08-01 13:32:56
|
NCIOCPL/cgov-digital-platform
|
https://api.github.com/repos/NCIOCPL/cgov-digital-platform
|
closed
|
Glossification not working on dceg-cms
|
High priority
|
I am not able to glossify content on dceg-cms.cancer.gov
My guess is it needs to be configured? Someone please investigate!
|
1.0
|
Glossification not working on dceg-cms - I am not able to glossify content on dceg-cms.cancer.gov
My guess is it needs to be configured? Someone please investigate!
|
priority
|
glossification not working on dceg cms i am not able to glossify content on dceg cms cancer gov my guess is it needs to be configured someone please investigate
| 1
|
620,204
| 19,556,421,897
|
IssuesEvent
|
2022-01-03 10:11:23
|
MaibornWolff/codecharta
|
https://api.github.com/repos/MaibornWolff/codecharta
|
closed
|
Suspicious Metrics feature does not respect current app settings
|
bug pr-visualization javascript priority:high difficulty:low good first issue UX / UI
|
# Bug
## Expected Behavior
GIVEN: initially loaded CodeCharta with experimental mode turned `off`
WHEN loading a map that has suspicious metrics and then turning `on` experimental mode and then clicking on a suspicious metrics Custom Config e.g. "Suspicious MCC Files" like:

THEN experimental mode should still be turned on
## Actual Behavior
THEN experimental mode is turned off
## Open questions
We should check, if any other settings should also be respected when loading a suggested Custom Config
## Steps to Reproduce the Problem
1. Load CodeCharta with experimental mode off
1. Upload `.cc.json` with suspicious metrics
1. Enable experimental mode
1. Click on a suggestion to view suspicious files like "Suspicious MCC Files" and watch how the experimental mode is changing.
|
1.0
|
Suspicious Metrics feature does not respect current app settings - # Bug
## Expected Behavior
GIVEN: initially loaded CodeCharta with experimental mode turned `off`
WHEN loading a map that has suspicious metrics and then turning `on` experimental mode and then clicking on a suspicious metrics Custom Config e.g. "Suspicious MCC Files" like:

THEN experimental mode should still be turned on
## Actual Behavior
THEN experimental mode is turned off
## Open questions
We should check, if any other settings should also be respected when loading a suggested Custom Config
## Steps to Reproduce the Problem
1. Load CodeCharta with experimental mode off
1. Upload `.cc.json` with suspicious metrics
1. Enable experimental mode
1. Click on a suggestion to view suspicious files like "Suspicious MCC Files" and watch how the experimental mode is changing.
|
priority
|
suspicious metrics feature does not respect current app settings bug expected behavior given initially loaded codecharta with experimental mode turned off when loading a map that has suspicious metrics and then turning on experimental mode and then clicking on a suspicious metrics custom config e g suspicious mcc files like then experimental mode should still be turned on actual behavior then experimental mode is turned off open questions we should check if any other settings should also be respected when loading a suggested custom config steps to reproduce the problem load codecharta with experimental mode off upload cc json with suspicious metrics enable experimental mode click on a suggestion to view suspicious files like suspicious mcc files and watch how the experimental mode is changing
| 1
|
285,969
| 8,781,672,299
|
IssuesEvent
|
2018-12-19 21:16:05
|
wri/gfw-mapbuilder
|
https://api.github.com/repos/wri/gfw-mapbuilder
|
closed
|
Confirm report fix is deployed for Monday's launch
|
HIGHEST priority
|
- [x] Report fix (cmr.forest-atlas.org/map
http://eth.doesntexist.org/Atlas)
**Priorities for the report:**
1. Formatting - add extra margin around analyses
2. Order - analyses change depending on which one comes back first.
3. Refresh - ensure report re-loads upon browser refresh **(DONE)**
4. Sharing - permanent link so report re-hydrates when opening in new browser or sharing with someone
|
1.0
|
Confirm report fix is deployed for Monday's launch - - [x] Report fix (cmr.forest-atlas.org/map
http://eth.doesntexist.org/Atlas)
**Priorities for the report:**
1. Formatting - add extra margin around analyses
2. Order - analyses change depending on which one comes back first.
3. Refresh - ensure report re-loads upon browser refresh **(DONE)**
4. Sharing - permanent link so report re-hydrates when opening in new browser or sharing with someone
|
priority
|
confirm report fix is deployed for monday s launch report fix cmr forest atlas org map priorities for the report formatting add extra margin around analyses order analyses change depending on which one comes back first refresh ensure report re loads upon browser refresh done sharing permanent link so report re hydrates when opening in new browser or sharing with someone
| 1
|
627,940
| 19,957,380,584
|
IssuesEvent
|
2022-01-28 01:57:58
|
civictechindex/CTI-website-frontend
|
https://api.github.com/repos/civictechindex/CTI-website-frontend
|
opened
|
Rethink Nav Labels
|
role: UI/UX p-feature: faq Priority: High size: 1pt
|
### Overview
Navigation titles should make clear what each page does. There is currently a disconnect between what the Nav titles say and what the page shows (ex. Nav Name: "Add your project" vs Page Title: "Tag Generator"). This also creates confusion when referring to pages in FAQs and other sections.
<details>
<summary> Current Nav name vs Page Title </summary>

</details>
### Action Items
- [ ] Spend 20 minutes on the following:
- [ ] Write a quick synopsis/objective/what you can achieve, for each page. This will help in developing a naming schema (Do for both "Join the Index" and "Overview")
### Resources/Instructions
This came as a note from Bonnie after auditing the whole site.
[Bonnie's Notes](https://github.com/civictechindex/CTI-website-frontend/issues/1096#issuecomment-1021554391) coming from Issue #1096
Once this is completed, a new Issue will be generated for another audit to make sure labeling within FAQs is correct and in-sync with Nav names
Tracked on Issue #1066
|
1.0
|
Rethink Nav Labels - ### Overview
Navigation titles should make clear what each page does. There is currently a disconnect between what the Nav titles say and what the page shows (ex. Nav Name: "Add your project" vs Page Title: "Tag Generator"). This also creates confusion when referring to pages in FAQs and other sections.
<details>
<summary> Current Nav name vs Page Title </summary>

</details>
### Action Items
- [ ] Spend 20 minutes on the following:
- [ ] Write a quick synopsis/objective/what you can achieve, for each page. This will help in developing a naming schema (Do for both "Join the Index" and "Overview")
### Resources/Instructions
This came as a note from Bonnie after auditing the whole site.
[Bonnie's Notes](https://github.com/civictechindex/CTI-website-frontend/issues/1096#issuecomment-1021554391) coming from Issue #1096
Once this is completed, a new Issue will be generated for another audit to make sure labeling within FAQs is correct and in-sync with Nav names
Tracked on Issue #1066
|
priority
|
rethink nav labels overview navigation titles should be clear as to what the page does the current disconnect between what the nav titles are vs what the page shows ex nav name add your project vs page title tag generator this also creates confusion when referring to pages in faqs and other sections current nav name vs page title action items spend minutes on the following write a quick synopsis objective what you can achieve for each page this will help in developing a naming schema do for both join the index and overview resources instructions this came as a note from bonnie after auditing the whole site coming from issue once this is completed a new issue will be generated for another audit to make sure labeling within faqs is correct and in sync with nav names tracked on issue
| 1
|
199,093
| 6,981,010,222
|
IssuesEvent
|
2017-12-13 05:33:47
|
infinitered/gluegun
|
https://api.github.com/repos/infinitered/gluegun
|
opened
|
Plugin commands aren't resolved properly
|
bug high priority
|
When you bring in a plugin, the commands are loaded properly but not resolved when you run that plugin command.
I have a fix for this incoming.
|
1.0
|
Plugin commands aren't resolved properly - When you bring in a plugin, the commands are loaded properly but not resolved when you run that plugin command.
I have a fix for this incoming.
|
priority
|
plugin commands aren t resolved properly when you bring in a plugin the commands are loaded properly but not resolved when you run that plugin command i have a fix for this incoming
| 1
|
405,925
| 11,884,526,050
|
IssuesEvent
|
2020-03-27 17:48:59
|
TAMU-CPT/galaxy-tools
|
https://api.github.com/repos/TAMU-CPT/galaxy-tools
|
closed
|
Public Repo out of sync with private repo
|
High Priority
|
Somewhere along the way this repo got out of sync with our private GitHub Enterprise repo. Need to rebase and merge.
|
1.0
|
Public Repo out of sync with private repo - Somewhere along the way this repo got out of sync with our private GitHub Enterprise repo. Need to rebase and merge.
|
priority
|
public repo out of sync with private repo somewhere the this repo got out of sync with our private github enterprise repo need to rebase and merge
| 1
|
725,952
| 24,982,228,292
|
IssuesEvent
|
2022-11-02 12:38:59
|
teogor/ceres
|
https://api.github.com/repos/teogor/ceres
|
closed
|
[NativeAdView] not loading properly
|
@bug @priority-high
|
The `NativeAdView` is not loading properly when configured with loadContinuously=true
|
1.0
|
[NativeAdView] not loading properly - The `NativeAdView` is not loading properly when configured with loadContinuously=true
|
priority
|
not loading properly the nativeadview is not loading properly when configured with loadcontinuously true
| 1
|
726,046
| 24,985,865,005
|
IssuesEvent
|
2022-11-02 15:01:47
|
restarone/violet_rails
|
https://api.github.com/repos/restarone/violet_rails
|
opened
|
Cannot generate asset urls correctly when deployed to subdomain
|
bug high priority
|
**Describe the bug**
When violet is deployed to a subdomain, it cannot generate asset URLs correctly.
**Expected behavior**
When the deployment is done at the subdomain level, the asset URL must use the same subdomain host rather than "public" or another subdomain host.
**Screenshots**
As we can see, the host is "review-1187" but "public" is present instead of "review-1187" in the asset url.
<img width="1400" alt="Screenshot 2022-11-02 at 20 38 02" src="https://user-images.githubusercontent.com/55496123/199524641-3da2eff2-e667-400b-875f-efac3516d5e5.png">
|
1.0
|
Cannot generate asset urls correctly when deployed to subdomain - **Describe the bug**
When violet is deployed to a subdomain, it cannot generate asset URLs correctly.
**Expected behavior**
When the deployment is done at the subdomain level, the asset URL must use the same subdomain host rather than "public" or another subdomain host.
**Screenshots**
As we can see, the host is "review-1187" but "public" is present instead of "review-1187" in the asset url.
<img width="1400" alt="Screenshot 2022-11-02 at 20 38 02" src="https://user-images.githubusercontent.com/55496123/199524641-3da2eff2-e667-400b-875f-efac3516d5e5.png">
|
priority
|
cannot generate asset urls correctly when deployed to subdomain describe the bug when violet is deployed to subdomain it cannot generate asset urls correctly expected behavior when the deployment is done in subdomain level the asset must contain url with the same subdomain host rather than public or other subdomain host screenshots as we can see the host is review but public is present instead of review in the asset url img width alt screenshot at src
| 1
|
621,806
| 19,596,652,235
|
IssuesEvent
|
2022-01-05 18:42:08
|
GRIS-UdeM/ControlGris
|
https://api.github.com/repos/GRIS-UdeM/ControlGris
|
opened
|
Audio signal analysis and corresponding OSC generation
|
enhancement High priority
|
Prepare ControlGRIS so that it can analyze the audio signal and, based on the audio descriptors, export OSC data that generates trajectories derived from the signal itself.
|
1.0
|
Audio signal analysis and corresponding OSC generation - Prepare ControlGRIS so that it can analyze the audio signal and, based on the audio descriptors, export OSC data that generates trajectories derived from the signal itself.
|
priority
|
audio signal analysis and corresponding osc generation prepare controlgris so that it can analyze the audio signal and based on the audio descriptors export osc data that generates trajectories derived from the signal itself
| 1
|
28,798
| 2,711,616,792
|
IssuesEvent
|
2015-04-09 07:56:09
|
TypeStrong/atom-typescript
|
https://api.github.com/repos/TypeStrong/atom-typescript
|
closed
|
Odd completion popup behavior
|
bug priority:high
|
I'm getting a lot of lag with code completion now. Before all the updates, it was very responsive, but now it's almost unusable, since I type faster than the completion can pop up. The completion box also spans the entire width of the screen for no apparent reason.

|
1.0
|
Odd completion popup behavior - I'm getting a lot of lag with code completion now. Before all the updates, it was very responsive, but now it's almost unusable, since I type faster than the completion can pop up. The completion box also spans the entire width of the screen for no apparent reason.

|
priority
|
odd completion popup behavior i m getting a lot of lag with code completion now before all the updates it was very responsive but now it s almost unusable since i type faster then the completion can pop up the completion box also spans the entire width of the screen for no apparent reason
| 1
|
594,048
| 18,022,665,117
|
IssuesEvent
|
2021-09-16 21:43:18
|
AbsaOSS/enceladus
|
https://api.github.com/repos/AbsaOSS/enceladus
|
closed
|
POC on S3 writing using Hudi
|
under discussion priority: high
|
## Background
On the way to the cloud, writing to S3 needs to be investigated.
## Feature
Write a simple Spark application or alter SparkJobs to write parquet files into S3 using [Apache Hudi](https://hudi.apache.org/docs/s3_hoodie), in a similar structure to what we do using HDFS
|
1.0
|
POC on S3 writing using Hudi - ## Background
On the way to the cloud, writing to S3 needs to be investigated.
## Feature
Write a simple Spark application or alter SparkJobs to write parquet files into S3 using [Apache Hudi](https://hudi.apache.org/docs/s3_hoodie), in a similar structure to what we do using HDFS
|
priority
|
poc on writing using hudi background on the way to cloud writing to s needs to be investigated feature write a simple spark application or alter sparkjobs to write parquet files into using in similar structure as we do using hdfs
| 1
|
586,885
| 17,599,331,994
|
IssuesEvent
|
2021-08-17 09:48:38
|
eclipse/dirigible
|
https://api.github.com/repos/eclipse/dirigible
|
closed
|
[OData] Issue to Retrieval OData using API
|
bug component-core priority-high efforts-low component-odata
|
Issue w.r.t. the OData retrieval: unable to retrieve the data using APIs.
1. Recently I have updated all the tables with the few columns and then I generated the model.odata.
2. I unpublished the whole project and ran below sql query in database perspective.
delete from DIRIGIBLE_ODATA;
delete from DIRIGIBLE_ODATA_MAPPING;
delete from DIRIGIBLE_ODATA_SCHEMA;
Results: 0 0 0
3. then I published all the projects freshly and tried to run the application again it didn't work.
4. when trying to fetch meta data using below link it is returning empty no results (https://dirigiblewebide-symetric.cfapps.eu11.hana.ondemand.com/odata/v2/$metadata)
**Output:**
All the API calls are responding with 404 not found status.
**Expected behavior**
All API call should be served successfully
**Screenshots**


[DirigibleWebIDE-2021-08-11 10_02_35.895+0000.txt](https://github.com/eclipse/dirigible/files/6968099/DirigibleWebIDE-2021-08-11.10_02_35.895%2B0000.txt)
[model.odata.txt](https://github.com/eclipse/dirigible/files/6968106/model.odata.txt)
**Desktop (please complete the following information):**
- OS: windows 10 pro
- Browser : Google chrome
- Version 92.0.4515.131 (Official Build) (64-bit)
**Additional context**
Add any other context about the problem here.
|
1.0
|
[OData] Issue to Retrieval OData using API - Issue w.r.t. the OData retrieval: unable to retrieve the data using APIs.
1. Recently I have updated all the tables with the few columns and then I generated the model.odata.
2. I unpublished the whole project and ran below sql query in database perspective.
delete from DIRIGIBLE_ODATA;
delete from DIRIGIBLE_ODATA_MAPPING;
delete from DIRIGIBLE_ODATA_SCHEMA;
Results: 0 0 0
3. then I published all the projects freshly and tried to run the application again it didn't work.
4. when trying to fetch meta data using below link it is returning empty no results (https://dirigiblewebide-symetric.cfapps.eu11.hana.ondemand.com/odata/v2/$metadata)
**Output:**
All the API calls are responding with 404 not found status.
**Expected behavior**
All API call should be served successfully
**Screenshots**


[DirigibleWebIDE-2021-08-11 10_02_35.895+0000.txt](https://github.com/eclipse/dirigible/files/6968099/DirigibleWebIDE-2021-08-11.10_02_35.895%2B0000.txt)
[model.odata.txt](https://github.com/eclipse/dirigible/files/6968106/model.odata.txt)
**Desktop (please complete the following information):**
- OS: windows 10 pro
- Browser : Google chrome
- Version 92.0.4515.131 (Official Build) (64-bit)
**Additional context**
Add any other context about the problem here.
|
priority
|
issue to retrieval odata using api issue w r t the odata retrieval unable to retrieve the data using api s recently i have updated all the tables with the few columns and then i generated the model odata i unpublished the whole project and ran below sql query in database perspective delete from dirigible odata delete from dirigible odata mapping delete from dirigible odata schema results then i published all the projects freshly and tried to run the application again it didn t work when trying to fetch meta data using below link it is returning empty no results output all the api calls are responding with not found status expected behavior all api call should be served successfully screenshots desktop please complete the following information os windows pro browser google chrome version official build bit additional context add any other context about the problem here
| 1
|
64,439
| 3,211,814,662
|
IssuesEvent
|
2015-10-06 12:57:11
|
ceylon/ceylon-ide-eclipse
|
https://api.github.com/repos/ceylon/ceylon-ide-eclipse
|
closed
|
Typechecker errors are removed when using native("js")
|
bug high priority
|
I created a new project, enabled both "Compile for JVM" and "Compile for Javascript", then created two modules:
- one is annotated `native("js")`
- one is annotated `native("jvm")`
In the JS module, typechecker errors are not reported. This leads to the JS backend being invoked, and the compilation succeeds: I can run my code in the browser although the Ceylon code is incorrect. This only happens in the IDE.
Example of invalid code, for which no typechecker error is reported:
```
void fun(dynamic foo, Lololol lol) { // note the invalid type Lololol
print(foo); // this shouldn't work because foo is dynamic
}
```
|
1.0
|
Typechecker errors are removed when using native("js") - I created a new project, enabled both "Compile for JVM" and "Compile for Javascript", then created two modules:
- one is annotated `native("js")`
- one is annotated `native("jvm")`
In the JS module, typechecker errors are not reported. This leads to the JS backend being invoked, and the compilation succeeds: I can run my code in the browser although the Ceylon code is incorrect. This only happens in the IDE.
Example of invalid code, for which no typechecker error is reported:
```
void fun(dynamic foo, Lololol lol) { // note the invalid type Lololol
print(foo); // this shouldn't work because foo is dynamic
}
```
|
priority
|
typechecker errors are removed when using native js i created a new project enabled both compile for jvm and compile for javascript then created two modules one is annotated native js one is annotated native jvm in the js module typechecker errors are not reported this leads to the js backend being invoked and the compilation succeeds i can run my code in the browser although the ceylon code is incorrect this only happens in the ide example of invalid code for which no typechecker error is reported void fun dynamic foo lololol lol note the invalid type lololol print foo this shouldn t work because foo is dynamic
| 1
|
517,594
| 15,016,721,293
|
IssuesEvent
|
2021-02-01 09:56:50
|
tysonkaufmann/su-go
|
https://api.github.com/repos/tysonkaufmann/su-go
|
opened
|
[DEV] Create a Reset Password Request UI
|
High Priority task
|
**Related To**
- [Reset Password](https://github.com/tysonkaufmann/su-go/issues/19)
**Description**
This allows the user to interact with the web application in order to request a password reset.
**Development Steps**
- Create a page and add
1. TextBox Username
2. Button Send Verification
3. Button Back to Login
- Link the `Back to Login` button to the login page
|
1.0
|
[DEV] Create a Reset Password Request UI - **Related To**
- [Reset Password](https://github.com/tysonkaufmann/su-go/issues/19)
**Description**
This allows the user to interact with the web application in order to request a password reset.
**Development Steps**
- Create a page and add
1. TextBox Username
2. Button Send Verification
3. Button Back to Login
- Link the `Back to Login` button to the login page
|
priority
|
create a reset password request ui related to description this allows the user to be able to interact with the web application in order to request a password reset development steps create a page and add textbox username button send verification button back to login link the back to login button to the login page
| 1
|
606,616
| 18,766,171,774
|
IssuesEvent
|
2021-11-06 01:04:01
|
AY2122S1-CS2103T-T17-3/tp
|
https://api.github.com/repos/AY2122S1-CS2103T-T17-3/tp
|
closed
|
TagPanel bugs
|
priority.HIGH severity.High
|
When performing an edit with a clientId that is not in the addressbook (e.g. `edit 100 t/sample`), the tag button will still get created.
When editing and deleting clients, the tag button will persist. Also, when switching addressbooks the tag button does not seem to update as expected.
|
1.0
|
TagPanel bugs - When performing an edit with a clientId that is not in the addressbook (e.g. `edit 100 t/sample`), the tag button will still get created.
When editing and deleting clients, the tag button will persist. Also, when switching addressbooks the tag button does not seem to update as expected.
|
priority
|
tagpanel bugs when performing an edit with clientid that is not the addressbook e g edit t sample the tag button will still get created when editing and deleting clients the tag button will persist also when switching addressbook the tagbutton does not seem to update as expected
| 1
|
561,361
| 16,615,976,648
|
IssuesEvent
|
2021-06-02 16:42:35
|
django-cms/django-cms
|
https://api.github.com/repos/django-cms/django-cms
|
opened
|
[BUG] Page becomes slow/unresponsive after plugin adding/editing/removing
|
priority: high
|
## Description
On django-cms projects with a large number of sub-sites, editing/adding/removing/changing position on plugins freezes the browser for around a minute. On some devices the page crashes or becomes unresponsive indefinitely. We identified this to be caused by diff-dom library struggling with large amounts of html elements when trying to refresh the page's toolbar content. After a plugin is added or modified or removed, the page fetches new html data, then using diff-dom it creates a diff and applies it to the old toolbar html elements. However, the diff creation process takes unusually long when the toolbar contents contain a large amount of html elements. In our case, we have a django-cms project with 382 sites in it, and this process takes about 50 seconds. Increasing the number of sites increases the time exponentially.
We have a fix for this and will create a pull request shortly. If it's up to django-cms' standards, you can accept it, otherwise, feel free to suggest changes or add any of your own.
## Steps to reproduce
1. Create a django-cms project with 400 sub-sites.
2. Create a page in any of the sites and add any plugin to it. We commonly tested this using the text plugin.
3. Confirm that after the plugin is added, the page stays greyed-out and unresponsive for around a minute. On some devices, the browser may crash.
## Expected behaviour
Refreshing the content after plugin addition should not take more than a few seconds.
## Actual behaviour
Adding a plugin on pages of projects with a large number of sites freezes the page for around a minute on some devices, or crashes the browser on others.
## Additional information (CMS/Python/Django versions)
django-cms version 3.7.4
django version 2.2.17
|
1.0
|
[BUG] Page becomes slow/unresponsive after plugin adding/editing/removing - ## Description
On django-cms projects with a large number of sub-sites, editing/adding/removing/changing position on plugins freezes the browser for around a minute. On some devices the page crashes or becomes unresponsive indefinitely. We identified this to be caused by diff-dom library struggling with large amounts of html elements when trying to refresh the page's toolbar content. After a plugin is added or modified or removed, the page fetches new html data, then using diff-dom it creates a diff and applies it to the old toolbar html elements. However, the diff creation process takes unusually long when the toolbar contents contain a large amount of html elements. In our case, we have a django-cms project with 382 sites in it, and this process takes about 50 seconds. Increasing the number of sites increases the time exponentially.
We have a fix for this and will create a pull request shortly. If it's up to django-cms' standards, you can accept it, otherwise, feel free to suggest changes or add any of your own.
## Steps to reproduce
1. Create a django-cms project with 400 sub-sites.
2. Create a page in any of the sites and add any plugin to it. We commonly tested this using the text plugin.
3. Confirm that after the plugin is added, the page stays greyed-out and unresponsive for around a minute. On some devices, the browser may crash.
## Expected behaviour
Refreshing the content after plugin addition should not take more than a few seconds.
## Actual behaviour
Adding a plugin on pages of projects with a large number of sites freezes the page for around a minute on some devices, or crashes the browser on others.
## Additional information (CMS/Python/Django versions)
django-cms version 3.7.4
django version 2.2.17
|
priority
|
page becomes slow unresponsive after plugin adding editing removing description on django cms projects with a large number of sub sites editing adding removing changing position on plugins freezes the browser for around a minute on some devices the page crashes or becomes unresponsive indefinitely we identified this to be caused by diff dom library struggling with large amounts of html elements when trying to refresh the page s toolbar content after a plugin is added or modified or removed the page fetches new html data then using diff dom it creates a diff and applies it to the old toolbar html elements however the diff creation process takes unusually long when the toolbar contents contain a large amount of html elements in our case we have a django cms project with sites in it and this process takes about seconds increasing the number of sites increases the time exponentially we have a fix for this and will create a pull request shortly if it s up to django cms standards you can accept it otherwise feel free to suggest changes or add any of your own steps to reproduce create a django cms project with sub sites create a page in any of the sites and add any plugin to it we commonly tested this using the text plugin confirm that after the plugin is added the page stays greyed out and unresponsive for around a minute on some devices the browser may crash expected behaviour refreshing the content after plugin addition should not take more than a few seconds actual behaviour adding a plugin on pages of projects with large amount of sites freezes the page for around a minute on some devices or crashes the browser on others additional information cms python django versions django cms version django version
| 1
|
470,831
| 13,547,146,854
|
IssuesEvent
|
2020-09-17 03:16:42
|
CCSI-Toolset/FOQUS
|
https://api.github.com/repos/CCSI-Toolset/FOQUS
|
opened
|
Error when importing PyQt5.QtWidgets
|
FOQUS GUI Priority:High
|
There is an error while loading foqus in the foqus.py file when it's trying to load PyQt5.QtWidgets
|
1.0
|
Error when importing PyQt5.QtWidgets - There is an error while loading foqus in the foqus.py file when it's trying to load PyQt5.QtWidgets
|
priority
|
error when importing qtwidgets there is an error while loading foqus in the foqus py file when it s trying to load qtwidgets
| 1
|
600,660
| 18,349,745,440
|
IssuesEvent
|
2021-10-08 11:00:02
|
AY2122S1-CS2113T-W12-4/tp
|
https://api.github.com/repos/AY2122S1-CS2113T-W12-4/tp
|
closed
|
11. As a perfectionist house-husband, I want to reset the list of items in Fridget so that I can start using Fridget after not using it for a while, or after initial testing with fake items.
|
type.Story priority.High
|
Essentially resetting the stored list (deleting the permanent storage).
|
1.0
|
11. As a perfectionist house-husband, I want to reset the list of items in Fridget so that I can start using Fridget after not using it for a while, or after initial testing with fake items. - Essentially resetting the stored list (deleting the permanent storage).
|
priority
|
as a perfectionist house husband i want to reset the list of items in fridget so that i can start using fridget after not using it for a while or after initial testing with fake items essentially resetting the stored list deleting the permanent storage
| 1
|
596,031
| 18,094,884,258
|
IssuesEvent
|
2021-09-22 07:57:14
|
mui-org/material-ui
|
https://api.github.com/repos/mui-org/material-ui
|
closed
|
[docs] Change Algolia search implementation
|
enhancement docs priority: high
|
- [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Summary
Redo the integration with Algolia's API.
## Examples
<img width="691" alt="Screenshot 2021-05-04 at 01 33 05" src="https://user-images.githubusercontent.com/3165635/116945735-b3f28300-ac78-11eb-82ee-1c05170e49ab.png">
https://popper.js.org/react-popper/
Regarding how we will display the search bar, something we have pocked with for desktop and mobile.
<img width="373" alt="desktop" src="https://user-images.githubusercontent.com/3165635/123550195-c7980280-d76c-11eb-8867-9ab2b977655b.png">
<img width="340" alt="mobile" src="https://user-images.githubusercontent.com/3165635/123550194-c6ff6c00-d76c-11eb-9b51-77cc8c5fb508.png">
## Motivation
<!--
What are you trying to accomplish? How has the lack of this feature affected you?
Providing context helps us come up with a solution that is most useful in the real world.
-->
The new interface of https://github.com/algolia/docsearch is similar to Spotlight on macOS; it yields a better UX than the current one.
In the future, we could even consider internalizing this component, I have seen https://blueprintjs.com/docs/versions/4/#select/omnibar do it, but it's unlikely something we want to consider now as the objective is to do a quick win, without involving a designer.
|
1.0
|
[docs] Change Algolia search implementation - - [x] I have searched the [issues](https://github.com/mui-org/material-ui/issues) of this repository and believe that this is not a duplicate.
## Summary
Redo the integration with Algolia's API.
## Examples
<img width="691" alt="Screenshot 2021-05-04 at 01 33 05" src="https://user-images.githubusercontent.com/3165635/116945735-b3f28300-ac78-11eb-82ee-1c05170e49ab.png">
https://popper.js.org/react-popper/
Regarding how we will display the search bar, something we have pocked with for desktop and mobile.
<img width="373" alt="desktop" src="https://user-images.githubusercontent.com/3165635/123550195-c7980280-d76c-11eb-8867-9ab2b977655b.png">
<img width="340" alt="mobile" src="https://user-images.githubusercontent.com/3165635/123550194-c6ff6c00-d76c-11eb-9b51-77cc8c5fb508.png">
## Motivation
<!--
What are you trying to accomplish? How has the lack of this feature affected you?
Providing context helps us come up with a solution that is most useful in the real world.
-->
The new interface of https://github.com/algolia/docsearch is similar to Spotlight on macOS; it yields a better UX than the current one.
In the future, we could even consider internalizing this component, I have seen https://blueprintjs.com/docs/versions/4/#select/omnibar do it, but it's unlikely something we want to consider now as the objective is to do a quick win, without involving a designer.
|
priority
|
change algolia search implementation i have searched the of this repository and believe that this is not a duplicate summary redo the integration with algolia s api examples img width alt screenshot at src regarding how we will display the search bar something we have pocked with for desktop and mobile img width alt desktop src img width alt mobile src motivation what are you trying to accomplish how has the lack of this feature affected you providing context helps us come up with a solution that is most useful in the real world the new interface of is similar to spotlight on macos it yields a better ux than the current one in the future we could even consider internalizing this component i have seen do it but it s unlikely something we want to consider now as the objective is to do a quick win without involving a designer
| 1
|
577,495
| 17,112,304,854
|
IssuesEvent
|
2021-07-10 15:27:11
|
elabftw/elabftw
|
https://api.github.com/repos/elabftw/elabftw
|
closed
|
Bring back the save and close the experiment button
|
feature request fixed in hypernext priority:high
|
# Feature request
Did not think this would be problematic, but when experiments are long or contain images or graphs, this has impacted the workflow with long scrolls when we want to close the experiment.
Maybe a save and close button next to the save on large screens.
- [X] I'm using the hosted version or I have PRO support
|
1.0
|
Bring back the save and close the experiment button - # Feature request
Did not think this would be problematic, but when experiments are long or contain images or graphs, this has impacted the workflow with long scrolls when we want to close the experiment.
Maybe a save and close button next to the save on large screens.
- [X] I'm using the hosted version or I have PRO support
|
priority
|
bring back the save and close the experiment button feature request did not think this would be problematic but when experiments are long or contains images or graphs this has impacted the workflow by doing long scrolls when we want to close the experiment maybe a save and close button next to the save on large screens i m using the hosted version or i have pro support
| 1
|
129,269
| 5,093,371,821
|
IssuesEvent
|
2017-01-03 05:25:52
|
pymedusa/Medusa
|
https://api.github.com/repos/pymedusa/Medusa
|
closed
|
[Develop] JS related issues
|
Bug Priority: 1. High
|
TODO:
- [x] [File browser: close button is showing "close" together with the "x"](https://github.com/pymedusa/Medusa/issues/1353#issuecomment-257041472)
- [ ] [Rename page issue](https://github.com/pymedusa/Medusa/issues/1347) (**Only Windows?**)
- [x] [Parent Folder is not being saved, when clicked "set as default"](https://github.com/pymedusa/Medusa/issues/1424)
- [x] [Repeated checks for news? check_for_new_news loops](https://github.com/pymedusa/Medusa/issues/1348)
- [x] [Button "Checkout button" not working. Calls nothing. Nothing in logs](https://github.com/pymedusa/Medusa/issues/1353#issuecomment-257041667)
- [x] After clicking on the search icon for an episode, status never changes from "success" without reloading page
- [x] displayShow no longer has buttons to hide / show seasons:
Tested this: 1. disable show all seasons. No buttons. 2. restarted. Buttons. 3. enabled show all seasons. No buttons. 4. disabled show all seasons. No buttons. (so I guess at this point I need to restart again) (**Works after restart.**)
|
1.0
|
[Develop] JS related issues - TODO:
- [x] [File browser: close button is showing "close" together with the "x"](https://github.com/pymedusa/Medusa/issues/1353#issuecomment-257041472)
- [ ] [Rename page issue](https://github.com/pymedusa/Medusa/issues/1347) (**Only Windows?**)
- [x] [Parent Folder is not being saved, when clicked "set as default"](https://github.com/pymedusa/Medusa/issues/1424)
- [x] [Repeated checks for news? check_for_new_news loops](https://github.com/pymedusa/Medusa/issues/1348)
- [x] [Button "Checkout button" not working. Calls nothing. Nothing in logs](https://github.com/pymedusa/Medusa/issues/1353#issuecomment-257041667)
- [x] After clicking on the search icon for an episode, status never changes from "success" without reloading page
- [x] displayShow no longer has buttons to hide / show seasons:
Tested this: 1. disable show all seasons. No buttons. 2. restarted. Buttons. 3. enabled show all seasons. No buttons. 4. disabled show all seasons. No buttons. (so I gues at this point I need to restart again) (**Works after restart.**)
|
priority
|
js related issues todo only windows after clicking on the search icon for an episode status never changes from success without reloading page displayshow no longer has buttons to hide show seasons tested this disable show all seasons no buttons restarted buttons enabled show all seasons no buttons disabled show all seasons no buttons so i gues at this point i need to restart again works after restart
| 1
|
393,790
| 11,624,930,261
|
IssuesEvent
|
2020-02-27 11:40:05
|
ubtue/DatenProbleme
|
https://api.github.com/repos/ubtue/DatenProbleme
|
opened
|
Html tags im Titel
|
Zotero_AUTO_RSS high priority
|
**HASH*
IxTheo#2020-01-30#F772302E14678AB6CC812CFEE219038F80FEE0BE
IxTheo#2020-01-30#0F8FD74B42379D50865209332D9D749545AAC490
**Ausführliche Problembeschreibung**
HTML tags muss entfernt werden
in der Testdatenbank WinIBW
> 4000 The @Book of <i>Exodus</i> : A Biography, Joel S.Baden, Princeton University Press, 2019 (ISBN 978-0-691-16954-5), xviii + 238 pp., hb 26.95
**Offenbare Lösungen**
im halbautomatischen Verfahren werden die HTML tags beim Export-Translator utnedrückt
`titleStatement += "$d" + ZU.unescapeHTML(item.title.substr(item.shortTitle.length)`
|
1.0
|
Html tags im Titel - **HASH*
IxTheo#2020-01-30#F772302E14678AB6CC812CFEE219038F80FEE0BE
IxTheo#2020-01-30#0F8FD74B42379D50865209332D9D749545AAC490
**Ausführliche Problembeschreibung**
HTML tags muss entfernt werden
in der Testdatenbank WinIBW
> 4000 The @Book of <i>Exodus</i> : A Biography, Joel S.Baden, Princeton University Press, 2019 (ISBN 978-0-691-16954-5), xviii + 238 pp., hb 26.95
**Offenbare Lösungen**
im halbautomatischen Verfahren werden die HTML tags beim Export-Translator utnedrückt
`titleStatement += "$d" + ZU.unescapeHTML(item.title.substr(item.shortTitle.length)`
|
priority
|
html tags im titel hash ixtheo ixtheo ausführliche problembeschreibung html tags muss entfernt werden in der testdatenbank winibw the book of exodus a biography joel s baden princeton university press isbn - - - - xviii pp hb offenbare lösungen im halbautomatischen verfahren werden die html tags beim export translator utnedrückt titlestatement d zu unescapehtml item title substr item shorttitle length
| 1
|
182,448
| 6,670,277,968
|
IssuesEvent
|
2017-10-03 22:44:33
|
callstack/react-native-paper
|
https://api.github.com/repos/callstack/react-native-paper
|
opened
|
Flow is broken in example folder
|
help wanted high priority
|
*Current Behavior*
If you use `<Checkbox>` without mandatory `checked` prop the Flow check has to fail.
*Current Behavior*
I've used `<Checkbox>` without mandatory `checked` prop. Flow check was valid.
*Steps to reproduce*
- In `CheckboxExample.js` remove `checked` prop on random `Checkbox`
- run `yarn flow`
|
1.0
|
Flow is broken in example folder - *Current Behavior*
If you use `<Checkbox>` without mandatory `checked` prop the Flow check has to fail.
*Current Behavior*
I've used `<Checkbox>` without mandatory `checked` prop. Flow check was valid.
*Steps to reproduce*
- In `CheckboxExample.js` remove `checked` prop on random `Checkbox`
- run `yarn flow`
|
priority
|
flow is broken in example folder current behavior if you use without mandatory checked prop the flow check has to fail current behavior i ve used without mandatory checked prop flow check was valid steps to reproduce in checkboxexample js remove checked prop on random checkbox run yarn flow
| 1
|
828,988
| 31,850,202,467
|
IssuesEvent
|
2023-09-15 00:32:31
|
reactive-python/reactpy
|
https://api.github.com/repos/reactive-python/reactpy
|
closed
|
Do Not Unmount Old View On Reconnect
|
priority-1-high type-revision
|
### Current Situation
Presently, upon reconnecting, the client [unmounts the old view and mounts a new one](https://github.com/idom-team/idom/blob/d1a69f676be43da15c9d2c7c419f55e4acb10ef5/src/client/packages/idom-client-react/src/mount.js#L42). This results in a brief, but jarring flash as the new view loads.
### Proposed Actions
Instead of unmounting the old view we can simply retain it. The first message from the server should be the new VDOM which should be set at the root. This will cause React to render the full tree of components, but it will prevent the view from flashing from the user's perspective.
|
1.0
|
Do Not Unmount Old View On Reconnect - ### Current Situation
Presently, upon reconnecting, the client [unmounts the old view and mounts a new one](https://github.com/idom-team/idom/blob/d1a69f676be43da15c9d2c7c419f55e4acb10ef5/src/client/packages/idom-client-react/src/mount.js#L42). This results in a brief, but jarring flash as the new view loads.
### Proposed Actions
Instead of unmounting the old view we can simply retain it. The first message from the server should be the new VDOM which should be set at the root. This will cause React to render the full tree of components, but it will prevent the view from flashing from the user's perspective.
|
priority
|
do not unmount old view on reconnect current situation presently upon reconnecting the client this results in a brief but jarring flash as the new view loads proposed actions instead of unmounting the old view we can simply retain it the first message from the server should be the new vdom which should be set at the root this will cause react to render the full tree of components but it will prevent the view from flashing from the user s perspective
| 1
|
6,501
| 2,588,810,041
|
IssuesEvent
|
2015-02-18 06:20:52
|
TheLens/land-records
|
https://api.github.com/repos/TheLens/land-records
|
closed
|
Autosuggest: search immediately after selecting
|
bug High priority
|
Same on desktop and phone: When you select an autosuggest item, it puts that text in the search bar. You have to touch/click again to search. Unintuitive in my opinion.
Google searches automatically when you choose one and I think that's what we should do.
|
1.0
|
Autosuggest: search immediately after selecting - Same on desktop and phone: When you select an autosuggest item, it puts that text in the search bar. You have to touch/click again to search. Unintuitive in my opinion.
Google searches automatically when you choose one and I think that's what we should do.
|
priority
|
autosuggest search immediately after selecting same on desktop and phone when you select an autosuggest item it puts that text in the search bar you have to touch click again to search unintuitive in my opinion google searches automatically when you choose one and i think that s what we should do
| 1
|
444,839
| 12,821,511,024
|
IssuesEvent
|
2020-07-06 08:12:53
|
fossasia/open-event-frontend
|
https://api.github.com/repos/fossasia/open-event-frontend
|
closed
|
Unable to complete a ticket order using discount code
|
Priority: High bug
|
**Describe the bug**
Event link: https://eventyay.com/e/643cfffb
Discount code: FOSSASIA
The error appears at the last step of the payment process.
Screenshot of the error

**Desktop (please complete the following information):**
- OS: Mac OS 10.14
- Browser Chrome
|
1.0
|
Unable to complete a ticket order using discount code - **Describe the bug**
Event link: https://eventyay.com/e/643cfffb
Discount code: FOSSASIA
The error appears at the last step of the payment process.
Screenshot of the error

**Desktop (please complete the following information):**
- OS: Mac OS 10.14
- Browser Chrome
|
priority
|
unable to complete a ticket order using discount code describe the bug event link discount code fossasia the error appears at the last step of the payment process screenshot of the error desktop please complete the following information os mac os browser chrome
| 1
|
156,741
| 5,988,586,891
|
IssuesEvent
|
2017-06-02 05:30:10
|
openbabel/openbabel
|
https://api.github.com/repos/openbabel/openbabel
|
closed
|
kekulize runs "forever" for fullerenes
|
api auto-migrated bug high priority
|
The new kekulize algorithm uses a depth-first search that takes "forever" (> 6 hours) to finish for large aromatic systems such as fullerenes.
I will be redesigning this as a breadth-first search rather than depth-first, which should make it converge much more rapidly. The problem with depth first is that it wanders all over the molecule trying to assign thousands and millions of combinations of single/double, when it might be that the very first bond it laid down was wrong. By using breadth-first, the algorithm will ensure that all the bonds in a local region make sense before spreading out to the next layer of atoms. For fullerenes, this should make it converge almost immediately on a solution.
Reported by: @cjames53
|
1.0
|
kekulize runs "forever" for fullerenes - The new kekulize algorithm uses a depth-first search that takes "forever" (> 6 hours) to finish for large aromatic systems such as fullerenes.
I will be redesigning this as a breadth-first search rather than depth-first, which should make it converge much more rapidly. The problem with depth first is that it wanders all over the molecule trying to assign thousands and millions of combinations of single/double, when it might be that the very first bond it laid down was wrong. By using breadth-first, the algorithm will ensure that all the bonds in a local region make sense before spreading out to the next layer of atoms. For fullerenes, this should make it converge almost immediately on a solution.
Reported by: @cjames53
|
priority
|
kekulize runs forever for fullerenes the new kekulize algorithm uses a depth first search that takes forever hours to finish for large aromatic systems such as fullerenes i will be redesigning this as a breadth first search rather than depth first which should make it converge much more rapidly the problem with depth first is that it wanders all over the molecule trying to assign thousands and millions of combinations of single double when it might be that the very first bond it laid down was wrong by using breadth first the algorithm will ensure that all the bonds in a local region make sense before spreading out to the next layer of atoms for fullerenes this should make it converge almost immediately on a solution reported by
| 1
|
404,310
| 11,854,994,766
|
IssuesEvent
|
2020-03-25 02:44:00
|
AY1920S2-CS2103T-W17-2/main
|
https://api.github.com/repos/AY1920S2-CS2103T-W17-2/main
|
opened
|
Secondary Revamping for DG [View]
|
priority.High status.Ongoing
|
- [ ] Add implementation details for View feature.
- [ ] Fix code linking in View (For MainWindow.java and MainWindow.fxml)
|
1.0
|
Secondary Revamping for DG [View] - - [ ] Add implementation details for View feature.
- [ ] Fix code linking in View (For MainWindow.java and MainWindow.fxml)
|
priority
|
secondary revamping for dg add implementation details for view feature fix code linking in view for mainwindow java and mainwindow fxml
| 1
|
260,487
| 8,210,577,161
|
IssuesEvent
|
2018-09-04 11:13:13
|
photonstorm/phaser
|
https://api.github.com/repos/photonstorm/phaser
|
closed
|
Arcade physics / physics.world.collide is broken
|
Bug Difficulty: Medium Priority: High
|
Manual collision check (physics.world.collide) against tile layers is broken for Arcade physics since 3.10.
Doing this in the scene update method will just fail silently in Phaser 3.10+ but worked fine in Phaser 3.9:
` this.physics.world.collide(sprite, layer);`
While doing this in the create method will work as expected (while a bit less flexible solution):
`this.physics.add.collider(sprite, layer);`
@mikewesthad has set up a demo here:
https://codesandbox.io/s/l2lmk8oxom?expanddevtools=1&module=%2Fjs%2Fplatformer-scene.js
"In the console, it demos how the body delta is 0 when you use World#collide (which the tilemap collision separation code expects to be non-zero)."
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/62886396-arcade-physics-physics-world-collide-is-broken?utm_campaign=plugin&utm_content=tracker%2F283654&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F283654&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
1.0
|
Arcade physics / physics.world.collide is broken - Manual collision check (physics.world.collide) against tile layers is broken for Arcade physics since 3.10.
Doing this in the scene update method will just fail silently in Phaser 3.10+ but worked fine in Phaser 3.9:
` this.physics.world.collide(sprite, layer);`
While doing this in the create method will work as expected (while a bit less flexible solution):
`this.physics.add.collider(sprite, layer);`
@mikewesthad has set up a demo here:
https://codesandbox.io/s/l2lmk8oxom?expanddevtools=1&module=%2Fjs%2Fplatformer-scene.js
"In the console, it demos how the body delta is 0 when you use World#collide (which the tilemap collision separation code expects to be non-zero)."
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/62886396-arcade-physics-physics-world-collide-is-broken?utm_campaign=plugin&utm_content=tracker%2F283654&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F283654&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
priority
|
arcade physics physics world collide is broken manual collision check physics world collide against tile layers is broken for arcade physics since doing this in the scene update method will just fail silently in phaser but worked fine in phaser this physics world collide sprite layer while doing this in the create method will work as expected while a bit less flexible solution this physics add collider sprite layer mikewesthad has set up a demo here in the console it demos how the body delta is when you use world collide which the tilemap collision separation code expects to be non zero want to back this issue we accept bounties via
| 1
|
182,500
| 6,670,729,165
|
IssuesEvent
|
2017-10-04 01:46:27
|
OperationCode/operationcode_frontend
|
https://api.github.com/repos/OperationCode/operationcode_frontend
|
opened
|
Update page to reflect gala postponement
|
Priority: High
|
# Content change
## Why?
The OC Gala is being rescheduled. We need to update the website's information to reflect the current status of the Gala (postponed). This is of critical importance to avoid further sales of Gala tickets.
## Details
* The current content at `/gala` should be replaced with a message summarized as: "The Operation Code Gala has been postponed. More information TBA."
* The Gala banner on landing page should be removed.
The current content at `/gala` & banner can be added back when we have a new date for the Gala.
|
1.0
|
Update page to reflect gala postponement - # Content change
## Why?
The OC Gala is being rescheduled. We need to update the website's information to reflect the current status of the Gala (postponed). This is of critical importance to avoid further sales of Gala tickets.
## Details
* The current content at `/gala` should be replaced with a message summarized as: "The Operation Code Gala has been postponed. More information TBA."
* The Gala banner on landing page should be removed.
The current content at `/gala` & banner can be added back when we have a new date for the Gala.
|
priority
|
update page to reflect gala postponement content change why the oc gala is being rescheduled we need to update the website s information to reflect the current status of the gala postponed this is of critical importance to avoid further sales of gala tickets details the current content at gala should be replaced with a message summarized as the operation code gala has been postponed more information tba the gala banner on landing page should be removed the current content at gala banner can be added back when we have a new date for the gala
| 1
|
791,720
| 27,873,723,758
|
IssuesEvent
|
2023-03-21 14:54:27
|
LandOfRails/LandOfSignals
|
https://api.github.com/repos/LandOfRails/LandOfSignals
|
closed
|
Rework new contentpacksystem, rendering and signal prioritization
|
enhancement Forge 1.7.10 Forge 1.10.2 Forge 1.11.2 Forge 1.12.2 Forge 1.14.4 Forge 1.15.2 Forge 1.16.5 Stellwand Signals Priority: High
|
### Rework new contentpacksystem, rendering and signal prioritization
**Rework new contentpacksystem**
As pointed out by the community, the new contentpacksystem adds a lot of overhead that wasn't necessary before.
To improve this issue, we should add the option to make a simple signal (v1) or complex signal (v2).
~~The internal contentpack should stay the same.~~
*After thinking about it a bit, it makes more sense to split the signals into two groups (the simple and the complex ones) and build their own renderers. This will improve performance and lessens the complexity.
This will introduce more logic to the signalbox though and the old signals have to be build back to their v1 state.
**Rendering**
The rendering could be split into two to save performance for the simple signals. The right renderer could be selected by a given version. This would allow old signals to stay the same (the order for translation, rotation and scaling changed with v2).
This would also allow the activation of the signal manipulator tool (should still be reworked a bit to not crash the games in certain configurations).
**Signal prioritization**
The current attempt for signal prioritization was to use the order in which the signals where added in the contentpack.
This allows v2 packs to give a proper order, but old v1 packs couldn't rely on this feature. This was the reason for the legacymode.
By moving the signal priorization from the contentpack to each signal, we could avoid the legacymode all together and give the player more freedom with the signalstates. This will render existing signalsystems broken. But old signals will be useable as before.
The signal prioritization should be configurable on the signalbox and should be saved inside the signal.
### Checklist
- [x] Improved contentpacksystem
- [x] Improved rendering
- [x] Improved signal prioritization
- [x] Tested with client
- [x] Tested with client and server
|
1.0
|
Rework new contentpacksystem, rendering and signal prioritization - ### Rework new contentpacksystem, rendering and signal prioritization
**Rework new contentpacksystem**
As pointed out by the community, the new contentpacksystem adds a lot of overhead that wasn't necessary before.
To improve this issue, we should add the option to make a simple signal (v1) or complex signal (v2).
~~The internal contentpack should stay the same.~~
*After thinking about it a bit, it makes more sense to split the signals into two groups (the simple and the complex ones) and build their own renderers. This will improve performance and lessens the complexity.
This will introduce more logic to the signalbox though and the old signals have to be build back to their v1 state.
**Rendering**
The rendering could be split into two to save performance for the simple signals. The right renderer could be selected by a given version. This would allow old signals to stay the same (the order for translation, rotation and scaling changed with v2).
This would also allow the activation of the signal manipulator tool (should still be reworked a bit to not crash the games in certain configurations).
**Signal prioritization**
The current attempt for signal prioritization was to use the order in which the signals where added in the contentpack.
This allows v2 packs to give a proper order, but old v1 packs couldn't rely on this feature. This was the reason for the legacymode.
By moving the signal priorization from the contentpack to each signal, we could avoid the legacymode all together and give the player more freedom with the signalstates. This will render existing signalsystems broken. But old signals will be useable as before.
The signal prioritization should be configurable on the signalbox and should be saved inside the signal.
### Checklist
- [x] Improved contentpacksystem
- [x] Improved rendering
- [x] Improved signal prioritization
- [x] Tested with client
- [x] Tested with client and server
|
priority
|
rework new contentpacksystem rendering and signal prioritization rework new contentpacksystem rendering and signal prioritization rework new contentpacksystem as pointed out by the community the new contentpacksystem adds a lot of overhead that wasn t necessary before to improve this issue we should add the option to make a simple signal or complex signal the internal contentpack should stay the same after thinking about it a bit it makes more sense to split the signals into two groups the simple and the complex ones and build their own renderers this will improve performance and lessens the complexity this will introduce more logic to the signalbox though and the old signals have to be build back to their state rendering the rendering could be split into two to save performance for the simple signals the right renderer could be selected by a given version this would allow old signals to stay the same the order for translation rotation and scaling changed with this would also allow the activation of the signal manipulator tool should still be reworked a bit to not crash the games in certain configurations signal prioritization the current attempt for signal prioritization was to use the order in which the signals where added in the contentpack this allows packs to give a proper order but old packs couldn t rely on this feature this was the reason for the legacymode by moving the signal priorization from the contentpack to each signal we could avoid the legacymode all together and give the player more freedom with the signalstates this will render existing signalsystems broken but old signals will be useable as before the signal prioritization should be configurable on the signalbox and should be saved inside the signal checklist improved contentpacksystem improved rendering improved signal prioritization tested with client tested with client and server
| 1
|
118,393
| 4,744,411,501
|
IssuesEvent
|
2016-10-21 00:53:30
|
cyberpwnn/GlacialRealms
|
https://api.github.com/repos/cyberpwnn/GlacialRealms
|
closed
|
Fortune does not work with Blast
|
bug duplicate effex high priority
|
Fortune is completely ignored when mining with Blast on your pickaxe, the solution was for you to handle the math and give the rewards through your plugin.
|
1.0
|
Fortune does not work with Blast - Fortune is completely ignored when mining with Blast on your pickaxe, the solution was for you to handle the math and give the rewards through your plugin.
|
priority
|
fortune does not work with blast fortune is completely ignored when mining with blast on your pickaxe the solution was for you to handle the math and give the rewards through your plugin
| 1
|
687,908
| 23,542,742,100
|
IssuesEvent
|
2022-08-20 16:58:30
|
tauri-apps/tao
|
https://api.github.com/repos/tauri-apps/tao
|
closed
|
[bug] Global shortcuts are never triggered on Linux
|
type: bug help wanted platform: Linux priority: high
|
### Describe the bug
Even using API example it's possible to register global shortcut, but it never triggers on linux
[This line](https://github.com/tauri-apps/tauri/blob/46f2eae8aad7c6a228eaf48480d5603dae6454b4/examples/api/src/components/Shortcuts.svelte#L16) never reached.
### Reproduction
Compile latest tauri (git commit 46f2eae8aad7c6a228eaf48480d5603dae6454b4) and run api example as per readme. Observe lack of global shortcut trigger - both JS and Rust examples don't work.
### Expected behavior
Shortcut triggered - from JS and Rust
### Platform and versions
```shell
Operating System - Pop!_OS, version 20.04 X64
```
### Stack trace
_No response_
### Additional context
_No response_
|
1.0
|
[bug] Global shortcuts are never triggered on Linux - ### Describe the bug
Even using API example it's possible to register global shortcut, but it never triggers on linux
[This line](https://github.com/tauri-apps/tauri/blob/46f2eae8aad7c6a228eaf48480d5603dae6454b4/examples/api/src/components/Shortcuts.svelte#L16) never reached.
### Reproduction
Compile latest tauri (git commit 46f2eae8aad7c6a228eaf48480d5603dae6454b4) and run api example as per readme. Observe lack of global shortcut trigger - both JS and Rust examples don't work.
### Expected behavior
Shortcut triggered - from JS and Rust
### Platform and versions
```shell
Operating System - Pop!_OS, version 20.04 X64
```
### Stack trace
_No response_
### Additional context
_No response_
|
priority
|
global shortcuts are never triggered on linux describe the bug even using api example it s possible to register global shortcut but it never triggers on linux never reached reproduction compile latest tauri git commit and run api example as per readme observe lack of global shortcut trigger both js and rust examples don t work expected behavior shortcut triggered from js and rust platform and versions shell operating system pop os version stack trace no response additional context no response
| 1
|
394,723
| 11,647,983,090
|
IssuesEvent
|
2020-03-01 18:08:02
|
LukeANewton/Tool-Framework-to-Measure-Test-Case-Diversity
|
https://api.github.com/repos/LukeANewton/Tool-Framework-to-Measure-Test-Case-Diversity
|
closed
|
Implement report formatting
|
high priority
|
Currently, when we print out the result of the similarities, we print out a single aggregate value.
We should be able to print out more than just a value. Printing out information like what parameters were chosen for this run and other aggregates would be beneficial. Perhaps we can print it out in human readable formats and machine readable formats.
New report formats should implement the AggregationStrategy interface.
|
1.0
|
Implement report formatting - Currently, when we print out the result of the similarities, we print out a single aggregate value.
We should be able to print out more than just a value. Printing out information like what parameters were chosen for this run and other aggregates would be beneficial. Perhaps we can print it out in human readable formats and machine readable formats.
New report formats should implement the AggregationStrategy interface.
|
priority
|
implement report formatting currently when we print out the result of the similarities we print out a single aggregate value we should be able to print out more than just a value printing out information like what parameters were chosen for this run and other aggregates would be beneficial perhaps we can print it out in human readable formats and machine readable formats new report formats should implement the aggregationstrategy interface
| 1
|
690,560
| 23,664,149,360
|
IssuesEvent
|
2022-08-26 18:48:14
|
City-Bureau/city-scrapers-atl
|
https://api.github.com/repos/City-Bureau/city-scrapers-atl
|
closed
|
New Scraper: Atlanta BeltLine Affordable Housing Advisory Board (BAHAB)
|
priority-high
|
Create a new scraper for Atlanta BeltLine Affordable Housing Advisory Board (BAHAB)
Website: https://beltline.org/the-project/planning-and-community-engagement/bahab/#meetings-amp-agendas
Jurisdiction: City of Atlanta
Classification: Housing, Development
Its primary function is to advise on the administration and execution of the BeltLine Affordable Housing Trust Fund (BAHTF).
BAHAB's Role and Responsibilities
Making recommendations to Invest Atlanta and the City on goals and policies for the use of BAHTF dollars
Monitoring the location and availability of affordable housing throughout the BeltLine
Coordinating the activities of BAHAB with other affordable housing throughout the BeltLine
|
1.0
|
New Scraper: Atlanta BeltLine Affordable Housing Advisory Board (BAHAB) - Create a new scraper for Atlanta BeltLine Affordable Housing Advisory Board (BAHAB)
Website: https://beltline.org/the-project/planning-and-community-engagement/bahab/#meetings-amp-agendas
Jurisdiction: City of Atlanta
Classification: Housing, Development
Its primary function is to advise on the administration and execution of the BeltLine Affordable Housing Trust Fund (BAHTF).
BAHAB's Role and Responsibilities
Making recommendations to Invest Atlanta and the City on goals and policies for the use of BAHTF dollars
Monitoring the location and availability of affordable housing throughout the BeltLine
Coordinating the activities of BAHAB with other affordable housing throughout the BeltLine
|
priority
|
new scraper atlanta beltline affordable housing advisory board bahab create a new scraper for atlanta beltline affordable housing advisory board bahab website jurisdiction city of atlanta classification housing development its primary function is to advise on the administration and execution of the beltline affordable housing trust fund bahtf bahab s role and responsibilities making recommendations to invest atlanta and the city on goals and policies for the use of bahtf dollars monitoring the location and availability of affordable housing throughout the beltline coordinating the activities of bahab with other affordable housing throughout the beltline
| 1
|
600,643
| 18,348,115,163
|
IssuesEvent
|
2021-10-08 09:05:47
|
AY2122S1-CS2103T-T10-2/tp
|
https://api.github.com/repos/AY2122S1-CS2103T-T10-2/tp
|
opened
|
Deleting participants will cause their IDs to change upon restart
|
priority.High type.Bug severity.High
|
When a participant is deleted from Managera, the IDs of the remaining participants with similar prefixes will be shifted up.
e.g., Deleting a participant with ID alexyeo1 will cause another participant with ID alexyeo 2 to be assigned alexyeo1 upon restart.
|
1.0
|
Deleting participants will cause their IDs to change upon restart - When a participant is deleted from Managera, the IDs of the remaining participants with similar prefixes will be shifted up.
e.g., Deleting a participant with ID alexyeo1 will cause another participant with ID alexyeo 2 to be assigned alexyeo1 upon restart.
|
priority
|
deleting participants will cause their ids to change upon restart when a participant is deleted from managera the ids of the remaining participants with similar prefixes will be shifted up e g deleting a participant with id will cause another participant with id alexyeo to be assigned upon restart
| 1
|
554,862
| 16,440,848,534
|
IssuesEvent
|
2021-05-20 14:13:49
|
projecteon/MacroQuest
|
https://api.github.com/repos/projecteon/MacroQuest
|
closed
|
Finish implementing healing via group heal spells
|
Healing High Priority enhancement
|
During classic this is for clerics only
- [ ] Ini for pct health per member
- [ ] Ini for number of group members below pct health
- [ ] Ini for spell to use for group heal
Other macros also do an avg health check, is that really needed?
|
1.0
|
Finish implementing healing via group heal spells - During classic this is for clerics only
- [ ] Ini for pct health per member
- [ ] Ini for number of group members below pct health
- [ ] Ini for spell to use for group heal
Other macros also do an avg health check, is that really needed?
|
priority
|
finish implementing healing via group heal spells during classic this is for clerics only ini for pct health per member ini for number of group members below pct health ini for spell to use for group heal other macros also do an avg health check is that really needed
| 1
|
476,351
| 13,737,163,165
|
IssuesEvent
|
2020-10-05 12:49:40
|
Scholar-6/brillder
|
https://api.github.com/repos/Scholar-6/brillder
|
closed
|
Question display in play results book
|
High Level Priority
|
<img width="957" alt="Screenshot 2020-09-22 at 15 20 50" src="https://user-images.githubusercontent.com/59654112/93887702-5318ca80-fce7-11ea-95ac-8bab15c6814e.png">
- [x] move mouse out to close book as in proposal.
- [x] At the moment because the notification panel is over the hover area the book opens automatically, so you can't see the text or cover. Any easy fix? If not, we could make give the B on the cover a hover colour and pulse and tell users to 'Click the B' to open it'.
- [x] question displays on left (scrollable if necessary)
<img width="957" alt="Screenshot 2020-09-22 at 15 21 00" src="https://user-images.githubusercontent.com/59654112/93887711-557b2480-fce7-11ea-8677-ee6fe5e4cf5b.png">
- [x] content page at the front for different attempts
- [x] clicking the eye next to ticks or crosses shows answer from either investigation or review
- [x] new attempt registers in row below on right
- [x] something like this click and animation to move between pages: https://archive.org/details/goodytwoshoes00newyiala/page/n11/mode/2up
|
1.0
|
Question display in play results book - <img width="957" alt="Screenshot 2020-09-22 at 15 20 50" src="https://user-images.githubusercontent.com/59654112/93887702-5318ca80-fce7-11ea-95ac-8bab15c6814e.png">
- [x] move mouse out to close book as in proposal.
- [x] At the moment because the notification panel is over the hover area the book opens automatically, so you can't see the text or cover. Any easy fix? If not, we could make give the B on the cover a hover colour and pulse and tell users to 'Click the B' to open it'.
- [x] question displays on left (scrollable if necessary)
<img width="957" alt="Screenshot 2020-09-22 at 15 21 00" src="https://user-images.githubusercontent.com/59654112/93887711-557b2480-fce7-11ea-8677-ee6fe5e4cf5b.png">
- [x] content page at the front for different attempts
- [x] clicking the eye next to ticks or crosses shows answer from either investigation or review
- [x] new attempt registers in row below on right
- [x] something like this click and animation to move between pages: https://archive.org/details/goodytwoshoes00newyiala/page/n11/mode/2up
|
priority
|
question display in play results book img width alt screenshot at src move mouse out to close book as in proposal at the moment because the notification panel is over the hover area the book opens automatically so you can t see the text or cover any easy fix if not we could make give the b on the cover a hover colour and pulse and tell users to click the b to open it question displays on left scrollable if necessary img width alt screenshot at src content page at the front for different attempts clicking the eye next to ticks or crosses shows answer from either investigation or review new attempt registers in row below on right something like this click and animation to move between pages
| 1
|
749,060
| 26,148,781,170
|
IssuesEvent
|
2022-12-30 10:11:56
|
chromebrew/chromebrew
|
https://api.github.com/repos/chromebrew/chromebrew
|
closed
|
π Bug: Unable to build packages (no compressed archive generated)
|
bug π π¨π¨ high priority π¨π¨
|
This is a consistent bug. Running `crew build zstd` results in the following:
```
Using zstd to compress package. This may take some time.
sha256sum: /home/chronos/user/chromebrew/release/x86_64/zstd-1.5.2-1-chromeos-x86_64.tar.zst: No such file or directory
```
Restoring the repo to hash 850c53f2e82b26562e81a0b0fe8547dca4e79596 fixes this problem.
|
1.0
|
π Bug: Unable to build packages (no compressed archive generated) - This is a consistent bug. Running `crew build zstd` results in the following:
```
Using zstd to compress package. This may take some time.
sha256sum: /home/chronos/user/chromebrew/release/x86_64/zstd-1.5.2-1-chromeos-x86_64.tar.zst: No such file or directory
```
Restoring the repo to hash 850c53f2e82b26562e81a0b0fe8547dca4e79596 fixes this problem.
|
priority
|
π bug unable to build packages no compressed archive generated this is a consistent bug running crew build zstd results in the following using zstd to compress package this may take some time home chronos user chromebrew release zstd chromeos tar zst no such file or directory restoring the repo to hash fixes this problem
| 1
|
66,516
| 3,254,847,282
|
IssuesEvent
|
2015-10-20 03:41:58
|
notsecure/uTox
|
https://api.github.com/repos/notsecure/uTox
|
closed
|
Groupchat activity isn't shown on the list
|
bug Friends groups high_priority msg user_interface
|
There should be a dot indicating if there are new messages. According to the mockup it should look like the one used for online contacts.
|
1.0
|
Groupchat activity isn't shown on the list - There should be a dot indicating if there are new messages. According to the mockup it should look like the one used for online contacts.
|
priority
|
groupchat activity isn t shown on the list there should be a dot indicating if there are new messages according to the mockup it should look like the one used for online contacts
| 1
|
712,362
| 24,492,691,773
|
IssuesEvent
|
2022-10-10 05:00:29
|
fugue-project/fugue
|
https://api.github.com/repos/fugue-project/fugue
|
opened
|
[FEATURE] Remove execution from FugueWorkflow context manager, remove engine from FugueWorkflow
|
enhancement behavior change high priority core feature
|
Currently, we can use FugueWorkflow + execution engine in this way:
```python
with FugueWorkflow(engine) as dag:
dag.df(..).show()
```
This is not an ideal design. Workflow definition shouldn't be related with execution engine. So in this change, we no longer allow FugueWorkflow to take engine as input, and the with statement becomes a pure cosmetic syntax to make boundary of the dag definition. Instead, you can always do:
```python
dag = FugueWorkflow()
dag.df(..).show()
dag.run(engine)
```
It's a bit more coding, but it separates compile time and run time definitions.
|
1.0
|
[FEATURE] Remove execution from FugueWorkflow context manager, remove engine from FugueWorkflow - Currently, we can use FugueWorkflow + execution engine in this way:
```python
with FugueWorkflow(engine) as dag:
dag.df(..).show()
```
This is not an ideal design. Workflow definition shouldn't be related with execution engine. So in this change, we no longer allow FugueWorkflow to take engine as input, and the with statement becomes a pure cosmetic syntax to make boundary of the dag definition. Instead, you can always do:
```python
dag = FugueWorkflow()
dag.df(..).show()
dag.run(engine)
```
It's a bit more coding, but it separates compile time and run time definitions.
|
priority
|
remove execution from fugueworkflow context manager remove engine from fugueworkflow currently we can use fugueworkflow execution engine in this way python with fugueworkflow engine as dag dag df show this is not an ideal design workflow definition shouldn t be related with execution engine so in this change we no longer allow fugueworkflow to take engine as input and the with statement becomes a pure cosmetic syntax to make boundary of the dag definition instead you can always do python dag fugueworkflow engine dag df show dag run engine it s a bit more coding but it separates compile time and run time definitions
| 1
|
78,027
| 3,508,783,688
|
IssuesEvent
|
2016-01-08 19:29:49
|
vigetlabs/trackomatic
|
https://api.github.com/repos/vigetlabs/trackomatic
|
closed
|
Resolve hosting concerns
|
high priority
|
We need to hash out how/where to host this file so it's not being served from vigesharing-is-vigecaring/awavering and it's being appropriately gzipped/cached. A dev might be needed to get us up and running on something; they know more about what accounts we have access to/would be best.
|
1.0
|
Resolve hosting concerns - We need to hash out how/where to host this file so it's not being served from vigesharing-is-vigecaring/awavering and it's being appropriately gzipped/cached. A dev might be needed to get us up and running on something; they know more about what accounts we have access to/would be best.
|
priority
|
resolve hosting concerns we need to hash out how where to host this file so it s not being served from vigesharing is vigecaring awavering and it s being appropriately gzipped cached a dev might be needed to get us up and running on something they know more about what accounts we have access to would be best
| 1
|
289,041
| 8,854,337,028
|
IssuesEvent
|
2019-01-09 00:53:06
|
visit-dav/issues-test
|
https://api.github.com/repos/visit-dav/issues-test
|
closed
|
Movie encoding with ffmpeg on cab.llnl.gov not working
|
asc bug crash likelihood medium priority reviewed severity high wrong results
|
Rhee Moono reported that movie encoding on cab.llnl.gov was not working for him. I tried it out and it didn't work for me either. Here are the error messages from ffmpeg. cab688{brugger}44: ffmpeg f image2 i mpeg_link%04d.jpeg mbd rd flagsmv4aic trellis 1 flags qprd bf 2 cmp 2 g 25 pass 1 y b 100000000movie.mpgffmpeg version 0.10.6 Copyright (c) 20002012 the FFmpeg developers built on Dec 11 2012 23:58:43 with gcc 4.4.6 20110731 (Red Hat 4.4.63) configuration: prefix=/usr bindir=/usr/bin datadir=/usr/share/ffmpegincdir=/usr/include/ffmpeg libdir=/usr/lib64 mandir=/usr/share/manarch=x86_64 extracflags='O2 g pipe Wall Wp,D_FORTIFY_SOURCE=2fexceptions fstackprotector param=sspbuffersize=4 m64 mtune=generic'enablebzlib disablecrystalhd enablegnutls enablelibassenablelibcdio enablelibcelt enablelibdc1394 disableindev=jackenablelibfreetype enablelibgsm enablelibmp3lame enableopenalenablelibopenjpeg enablelibpulse enablelibrtmpenablelibschroedinger enablelibspeex enablelibtheoraenablelibvorbis enablelibv4l2 enablelibx264 enablelibxvidenablex11grab enableavfilter enablepostproc enablepthreadsdisablestatic enableshared enablegpl disabledebugdisablestripping shlibdir=/usr/lib64 enableruntimecpudetect libavutil 51. 35.100 / 51. 35.100 libavcodec 53. 61.100 / 53. 61.100 libavformat 53. 32.100 / 53. 32.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 61.100 / 2. 61.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52.
0.100[image2 0x626fc0] max_analyze_duration 5000000 reached at 5000000<br />Input #0, image2, from 'mpeg_link%04d.jpeg':<br /> Duration: 00:00:08.52, start: 0.000000, bitrate: N/A<br /> Stream #0:0: Video: mjpeg, yuvj420p, 608x480 [SAR 1:1 DAR 19:15], 25 fps,<br />25 tbr, 25 tbn, 25 tbc<br />Please use b:a or b:v, b is ambiguous<br />Incompatible pixel format 'yuvj420p' for codec 'mpeg1video', autoselecting<br />format 'yuv420p'<br />[buffer 0x63a020] w:608 h:480 pixfmt:yuvj420p tb:1/1000000 sar:1/1sws_param:[buffersink 0x639ba0] autoinserting filter 'autoinserted scale 0' between<br />the filter 'src' and the filter 'out'<br />[scale 0x6364c0] w:608 h:480 fmt:yuvj420p > w:608 h:480 fmt:yuv420pflags:0x4[NULL 0x635060] [Eval 0x7fffffffb6b0] Undefined constant or missing '(' in'aicqprd'[NULL 0x635060] Unable to parse option value "aicqprd" <br />[NULL 0x635060] Error setting option flags to value mv4aicqprd.Output #0, mpeg, to 'movie.mpg': Stream #0:0: Video: none (hq), yuv420p, 608x480 [SAR 1:1 DAR 19:15],q=231, pass 1, 200 kb/s, 90k tbn, 25 tbcStream mapping: Stream #0:0 > #0:0 (mjpeg > mpeg1video)Error while opening encoder for output stream #0:0 maybe incorrectparameters such as bit_rate, rate, width or height
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1505
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Movie encoding with ffmpeg on cab.llnl.gov not working
Assigned to: Eric Brugger
Category: -
Target version: 2.6.3
Author: Eric Brugger
Start: 06/19/2013
Due date:
% Done: 100%
Estimated time: 4.00 hours
Created: 06/19/2013 11:15 am
Updated: 06/19/2013 01:59 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.6.2
Impact:
Expected Use:
OS: Linux
Support Group: DOE/ASC
Description:
Rhee Moono reported that movie encoding on cab.llnl.gov was not working for him. I tried it out and it didn't work for me either. Here are the error messages from ffmpeg. cab688{brugger}44: ffmpeg f image2 i mpeg_link%04d.jpeg mbd rd flagsmv4aic trellis 1 flags qprd bf 2 cmp 2 g 25 pass 1 y b 100000000movie.mpgffmpeg version 0.10.6 Copyright (c) 20002012 the FFmpeg developers built on Dec 11 2012 23:58:43 with gcc 4.4.6 20110731 (Red Hat 4.4.63) configuration: prefix=/usr bindir=/usr/bin datadir=/usr/share/ffmpegincdir=/usr/include/ffmpeg libdir=/usr/lib64 mandir=/usr/share/manarch=x86_64 extracflags='O2 g pipe Wall Wp,D_FORTIFY_SOURCE=2fexceptions fstackprotector param=sspbuffersize=4 m64 mtune=generic'enablebzlib disablecrystalhd enablegnutls enablelibassenablelibcdio enablelibcelt enablelibdc1394 disableindev=jackenablelibfreetype enablelibgsm enablelibmp3lame enableopenalenablelibopenjpeg enablelibpulse enablelibrtmpenablelibschroedinger enablelibspeex enablelibtheoraenablelibvorbis enablelibv4l2 enablelibx264 enablelibxvidenablex11grab enableavfilter enablepostproc enablepthreadsdisablestatic enableshared enablegpl disabledebugdisablestripping shlibdir=/usr/lib64 enableruntimecpudetect libavutil 51. 35.100 / 51. 35.100 libavcodec 53. 61.100 / 53. 61.100 libavformat 53. 32.100 / 53. 32.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 61.100 / 2. 61.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52.
0.100[image2 0x626fc0] max_analyze_duration 5000000 reached at 5000000<br />Input #0, image2, from 'mpeg_link%04d.jpeg':<br /> Duration: 00:00:08.52, start: 0.000000, bitrate: N/A<br /> Stream #0:0: Video: mjpeg, yuvj420p, 608x480 [SAR 1:1 DAR 19:15], 25 fps,<br />25 tbr, 25 tbn, 25 tbc<br />Please use b:a or b:v, b is ambiguous<br />Incompatible pixel format 'yuvj420p' for codec 'mpeg1video', autoselecting<br />format 'yuv420p'<br />[buffer 0x63a020] w:608 h:480 pixfmt:yuvj420p tb:1/1000000 sar:1/1sws_param:[buffersink 0x639ba0] autoinserting filter 'autoinserted scale 0' between<br />the filter 'src' and the filter 'out'<br />[scale 0x6364c0] w:608 h:480 fmt:yuvj420p > w:608 h:480 fmt:yuv420pflags:0x4[NULL 0x635060] [Eval 0x7fffffffb6b0] Undefined constant or missing '(' in'aicqprd'[NULL 0x635060] Unable to parse option value "aicqprd" <br />[NULL 0x635060] Error setting option flags to value mv4aicqprd.Output #0, mpeg, to 'movie.mpg': Stream #0:0: Video: none (hq), yuv420p, 608x480 [SAR 1:1 DAR 19:15],q=231, pass 1, 200 kb/s, 90k tbn, 25 tbcStream mapping: Stream #0:0 > #0:0 (mjpeg > mpeg1video)Error while opening encoder for output stream #0:0 maybe incorrectparameters such as bit_rate, rate, width or height
Comments:
It looks like LC has updated ffmpeg and the flags passed to ffmpeg no longer work. I will have to figure out what they are. I committed revisions 21189 and 21192 to the 2.6 RC and trunk with the following change: 1) I modified movie encoding using ffmpeg to update the flags that get passed to ffmpeg so it would work with the newer version of ffmpeg. This consisted of changing "mbd rd flags mv4aic" to "mbd 2" and changing "b" to "-b:v". The fix was only put on the RC since the movie scripts on the trunk now use visit_utils for movie encoding. This resolves #1505. M bin/makemovie.py (RC only) M resources/help/en_US/relnotes2.6.3.html
|
1.0
|
Movie encoding with ffmpeg on cab.llnl.gov not working - Rhee Moono reported that movie encoding on cab.llnl.gov was not working for him. I tried it out and it didn't work for me either. Here are the error messages from ffmpeg. cab688{brugger}44: ffmpeg f image2 i mpeg_link%04d.jpeg mbd rd flagsmv4aic trellis 1 flags qprd bf 2 cmp 2 g 25 pass 1 y b 100000000movie.mpgffmpeg version 0.10.6 Copyright (c) 20002012 the FFmpeg developers built on Dec 11 2012 23:58:43 with gcc 4.4.6 20110731 (Red Hat 4.4.63) configuration: prefix=/usr bindir=/usr/bin datadir=/usr/share/ffmpegincdir=/usr/include/ffmpeg libdir=/usr/lib64 mandir=/usr/share/manarch=x86_64 extracflags='O2 g pipe Wall Wp,D_FORTIFY_SOURCE=2fexceptions fstackprotector param=sspbuffersize=4 m64 mtune=generic'enablebzlib disablecrystalhd enablegnutls enablelibassenablelibcdio enablelibcelt enablelibdc1394 disableindev=jackenablelibfreetype enablelibgsm enablelibmp3lame enableopenalenablelibopenjpeg enablelibpulse enablelibrtmpenablelibschroedinger enablelibspeex enablelibtheoraenablelibvorbis enablelibv4l2 enablelibx264 enablelibxvidenablex11grab enableavfilter enablepostproc enablepthreadsdisablestatic enableshared enablegpl disabledebugdisablestripping shlibdir=/usr/lib64 enableruntimecpudetect libavutil 51. 35.100 / 51. 35.100 libavcodec 53. 61.100 / 53. 61.100 libavformat 53. 32.100 / 53. 32.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 61.100 / 2. 61.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52.
0.100[image2 0x626fc0] max_analyze_duration 5000000 reached at 5000000<br />Input #0, image2, from 'mpeg_link%04d.jpeg':<br /> Duration: 00:00:08.52, start: 0.000000, bitrate: N/A<br /> Stream #0:0: Video: mjpeg, yuvj420p, 608x480 [SAR 1:1 DAR 19:15], 25 fps,<br />25 tbr, 25 tbn, 25 tbc<br />Please use b:a or b:v, b is ambiguous<br />Incompatible pixel format 'yuvj420p' for codec 'mpeg1video', autoselecting<br />format 'yuv420p'<br />[buffer 0x63a020] w:608 h:480 pixfmt:yuvj420p tb:1/1000000 sar:1/1sws_param:[buffersink 0x639ba0] autoinserting filter 'autoinserted scale 0' between<br />the filter 'src' and the filter 'out'<br />[scale 0x6364c0] w:608 h:480 fmt:yuvj420p > w:608 h:480 fmt:yuv420pflags:0x4[NULL 0x635060] [Eval 0x7fffffffb6b0] Undefined constant or missing '(' in'aicqprd'[NULL 0x635060] Unable to parse option value "aicqprd" <br />[NULL 0x635060] Error setting option flags to value mv4aicqprd.Output #0, mpeg, to 'movie.mpg': Stream #0:0: Video: none (hq), yuv420p, 608x480 [SAR 1:1 DAR 19:15],q=231, pass 1, 200 kb/s, 90k tbn, 25 tbcStream mapping: Stream #0:0 > #0:0 (mjpeg > mpeg1video)Error while opening encoder for output stream #0:0 maybe incorrectparameters such as bit_rate, rate, width or height
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 1505
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: Movie encoding with ffmpeg on cab.llnl.gov not working
Assigned to: Eric Brugger
Category: -
Target version: 2.6.3
Author: Eric Brugger
Start: 06/19/2013
Due date:
% Done: 100%
Estimated time: 4.00 hours
Created: 06/19/2013 11:15 am
Updated: 06/19/2013 01:59 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.6.2
Impact:
Expected Use:
OS: Linux
Support Group: DOE/ASC
Description:
Rhee Moono reported that movie encoding on cab.llnl.gov was not working for him. I tried it out and it didn't work for me either. Here are the error messages from ffmpeg. cab688{brugger}44: ffmpeg f image2 i mpeg_link%04d.jpeg mbd rd flagsmv4aic trellis 1 flags qprd bf 2 cmp 2 g 25 pass 1 y b 100000000movie.mpgffmpeg version 0.10.6 Copyright (c) 20002012 the FFmpeg developers built on Dec 11 2012 23:58:43 with gcc 4.4.6 20110731 (Red Hat 4.4.63) configuration: prefix=/usr bindir=/usr/bin datadir=/usr/share/ffmpegincdir=/usr/include/ffmpeg libdir=/usr/lib64 mandir=/usr/share/manarch=x86_64 extracflags='O2 g pipe Wall Wp,D_FORTIFY_SOURCE=2fexceptions fstackprotector param=sspbuffersize=4 m64 mtune=generic'enablebzlib disablecrystalhd enablegnutls enablelibassenablelibcdio enablelibcelt enablelibdc1394 disableindev=jackenablelibfreetype enablelibgsm enablelibmp3lame enableopenalenablelibopenjpeg enablelibpulse enablelibrtmpenablelibschroedinger enablelibspeex enablelibtheoraenablelibvorbis enablelibv4l2 enablelibx264 enablelibxvidenablex11grab enableavfilter enablepostproc enablepthreadsdisablestatic enableshared enablegpl disabledebugdisablestripping shlibdir=/usr/lib64 enableruntimecpudetect libavutil 51. 35.100 / 51. 35.100 libavcodec 53. 61.100 / 53. 61.100 libavformat 53. 32.100 / 53. 32.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 61.100 / 2. 61.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 52. 0.100 / 52.
0.100[image2 0x626fc0] max_analyze_duration 5000000 reached at 5000000<br />Input #0, image2, from 'mpeg_link%04d.jpeg':<br /> Duration: 00:00:08.52, start: 0.000000, bitrate: N/A<br /> Stream #0:0: Video: mjpeg, yuvj420p, 608x480 [SAR 1:1 DAR 19:15], 25 fps,<br />25 tbr, 25 tbn, 25 tbc<br />Please use b:a or b:v, b is ambiguous<br />Incompatible pixel format 'yuvj420p' for codec 'mpeg1video', autoselecting<br />format 'yuv420p'<br />[buffer 0x63a020] w:608 h:480 pixfmt:yuvj420p tb:1/1000000 sar:1/1sws_param:[buffersink 0x639ba0] autoinserting filter 'autoinserted scale 0' between<br />the filter 'src' and the filter 'out'<br />[scale 0x6364c0] w:608 h:480 fmt:yuvj420p > w:608 h:480 fmt:yuv420pflags:0x4[NULL 0x635060] [Eval 0x7fffffffb6b0] Undefined constant or missing '(' in'aicqprd'[NULL 0x635060] Unable to parse option value "aicqprd" <br />[NULL 0x635060] Error setting option flags to value mv4aicqprd.Output #0, mpeg, to 'movie.mpg': Stream #0:0: Video: none (hq), yuv420p, 608x480 [SAR 1:1 DAR 19:15],q=231, pass 1, 200 kb/s, 90k tbn, 25 tbcStream mapping: Stream #0:0 > #0:0 (mjpeg > mpeg1video)Error while opening encoder for output stream #0:0 maybe incorrectparameters such as bit_rate, rate, width or height
Comments:
It looks like LC has updated ffmpeg and the flags passed to ffmpeg no longer work. I will have to figure out what they are. I committed revisions 21189 and 21192 to the 2.6 RC and trunk with the following change: 1) I modified movie encoding using ffmpeg to update the flags that get passed to ffmpeg so it would work with the newer version of ffmpeg. This consisted of changing "mbd rd flags mv4aic" to "mbd 2" and changing "b" to "-b:v". The fix was only put on the RC since the movie scripts on the trunk now use visit_utils for movie encoding. This resolves #1505. M bin/makemovie.py (RC only) M resources/help/en_US/relnotes2.6.3.html
|
priority
|
movie encoding with ffmpeg on cab llnl gov not working rhee moono reported that movie encoding on cab llnl gov was not working for him i tried it out and it didn t work for me either here are the error messages from ffmpeg brugger ffmpeg f i mpeg link jpeg mbd rd trellis flags qprd bf cmp g pass y b mpgffmpeg version copyright c the ffmpeg developers built on dec with gcc red hat configuration prefix usr bindir usr bin datadir usr share ffmpegincdir usr include ffmpeg libdir usr mandir usr share manarch extracflags g pipe wall wp d fortify source fstackprotector param sspbuffersize mtune generic enablebzlib disablecrystalhd enablegnutls enablelibassenablelibcdio enablelibcelt disableindev jackenablelibfreetype enablelibgsm enableopenalenablelibopenjpeg enablelibpulse enablelibrtmpenablelibschroedinger enablelibspeex enablelibtheoraenablelibvorbis enableavfilter enablepostproc enablepthreadsdisablestatic enableshared enablegpl disabledebugdisablestripping shlibdir usr enableruntimecpudetect libavutil libavcodec libavformat libavdevice libavfilter libswscale libswresample libpostproc max analyze duration reached at input from mpeg link jpeg duration start bitrate n a stream video mjpeg fps tbr tbn tbc please use b a or b v b is ambiguous incompatible pixel format for codec autoselecting format w h pixfmt tb sar param autoinserting filter autoinserted scale between the filter src and the filter out w h fmt w h fmt undefined constant or missing in aicqprd unable to parse option value aicqprd error setting option flags to value output mpeg to movie mpg stream video none hq q pass kb s tbn tbcstream mapping stream mjpeg error while opening encoder for output stream maybe incorrectparameters such as bit rate rate width or height redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug
priority urgent subject movie encoding with ffmpeg on cab llnl gov not working assigned to eric brugger category target version author eric brugger start due date done estimated time hours created am updated pm likelihood occasional severity crash wrong results found in version impact expected use os linux support group doe asc description rhee moono reported that movie encoding on cab llnl gov was not working for him i tried it out and it didn t work for me either here are the error messages from ffmpeg brugger ffmpeg f i mpeg link jpeg mbd rd trellis flags qprd bf cmp g pass y b mpgffmpeg version copyright c the ffmpeg developers built on dec with gcc red hat configuration prefix usr bindir usr bin datadir usr share ffmpegincdir usr include ffmpeg libdir usr mandir usr share manarch extracflags g pipe wall wp d fortify source fstackprotector param sspbuffersize mtune generic enablebzlib disablecrystalhd enablegnutls enablelibassenablelibcdio enablelibcelt disableindev jackenablelibfreetype enablelibgsm enableopenalenablelibopenjpeg enablelibpulse enablelibrtmpenablelibschroedinger enablelibspeex enablelibtheoraenablelibvorbis enableavfilter enablepostproc enablepthreadsdisablestatic enableshared enablegpl disabledebugdisablestripping shlibdir usr enableruntimecpudetect libavutil libavcodec libavformat libavdevice libavfilter libswscale libswresample libpostproc max analyze duration reached at input from mpeg link jpeg duration start bitrate n a stream video mjpeg fps tbr tbn tbc please use b a or b v b is ambiguous incompatible pixel format for codec autoselecting format w h pixfmt tb sar param autoinserting filter autoinserted scale between the filter src and the filter out w h fmt w h fmt undefined constant or missing in aicqprd unable to parse option value aicqprd error setting option flags to value output mpeg to movie mpg stream video none hq q pass kb s tbn tbcstream mapping stream mjpeg error while opening encoder for output stream maybe
incorrectparameters such as bit rate rate width or height comments it looks like lc has updated ffmpeg and the flags passed to ffmpeg no longer work i will have to figure out what they are i committed revisions and to the rc and trunk with the following change i modified movie encoding using ffmpeg to update the flags that get passed to ffmpeg so it would work with the newer version of ffmpeg this consisted of changing mbd rd flags to mbd and changing b to b v the fix was only put on the rc since the movie scripts on the trunk now use visit utils for movie encoding this resolves m bin makemovie py rc only m resources help en us html
| 1
|