| column | dtype | stats |
|---|---|---|
| Unnamed: 0 | int64 | values 0 … 832k |
| id | float64 | values 2.49B … 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 … 19 |
| repo | string | lengths 5 … 112 |
| repo_url | string | lengths 34 … 141 |
| action | string | 3 classes |
| title | string | lengths 1 … 855 |
| labels | string | lengths 4 … 721 |
| body | string | lengths 1 … 261k |
| index | string | 13 classes |
| text_combine | string | lengths 96 … 261k |
| label | string | 2 classes |
| text | string | lengths 96 … 240k |
| binary_label | int64 | values 0 … 1 |
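The schema above can be exercised with a small pandas sketch. The two rows below are invented for illustration, and the relationship that `binary_label` is the 0/1 encoding of `label` is an assumption inferred from the records shown (only the class "priority" is visible; the name of the other class is made up here).

```python
import pandas as pd

# Two invented rows mirroring a few of the columns above; the "non-priority"
# class name is an assumption, not taken from the dataset.
df = pd.DataFrame(
    {
        "type": ["IssuesEvent", "IssuesEvent"],
        "repo": ["nf-core/tools", "example/repo"],
        "title": ["README logo not being rendered", "Some other issue"],
        "label": ["priority", "non-priority"],
        "binary_label": [1, 0],
    }
)

# Sanity check of the assumed encoding: binary_label == 1 iff label == "priority".
df["derived"] = (df["label"] == "priority").astype("int64")
assert (df["derived"] == df["binary_label"]).all()
```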
## Row 651,921 (id 21,514,959,924)

- **type:** IssuesEvent · **action:** closed · **created_at:** 2022-04-28 09:02:33
- **repo:** nf-core/tools (https://api.github.com/repos/nf-core/tools)
- **labels:** bug high-priority
- **title:** README logo not being rendered

**body:**
### Description of the bug
It appears that the path to the logo used in the README isn't being found and hence the image isn't being rendered. If we look at the template PR for rnaseq [here](https://github.com/nf-core/rnaseq/tree/nf-core-template-merge-2.3.2) the path to the image at the top of the README is:
```
# ![](docs/images/nf-core/rnaseq_logo_light.png#gh-light-mode-only) ![](docs/images/nf-core/rnaseq_logo_dark.png#gh-dark-mode-only)
```
However it should be:
```
# ![](docs/images/nf-core-rnaseq_logo_light.png#gh-light-mode-only) ![](docs/images/nf-core-rnaseq_logo_dark.png#gh-dark-mode-only)
```
Note, it should be `nf-core-rnaseq_logo*` and not `nf-core/rnaseq_logo*`.
In the pipeline template the logo based Jinja variables are `logo_light` and `logo_dark`. These will need to be fixed and checked wherever else they are being used.
### Command used and terminal output
_No response_
### System information
_No response_
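The fix described in this issue amounts to building the logo filename by joining the org prefix and the pipeline name with a hyphen rather than keeping the slash from the repo slug. A minimal sketch of that transformation; the function name and path layout are invented for illustration and are not nf-core/tools' actual template code:

```python
def logo_path(repo: str, mode: str) -> str:
    """Build a README logo path from a repo slug like "nf-core/rnaseq" (hypothetical helper)."""
    org, pipeline = repo.split("/", 1)
    # Correct form joins with "-" ("nf-core-rnaseq"); the bug kept the "/" ("nf-core/rnaseq").
    return f"docs/images/{org}-{pipeline}_logo_{mode}.png"

assert logo_path("nf-core/rnaseq", "light") == "docs/images/nf-core-rnaseq_logo_light.png"
assert logo_path("nf-core/rnaseq", "dark") == "docs/images/nf-core-rnaseq_logo_dark.png"
```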
**index:** 1.0 · **label:** priority · **binary_label:** 1
## Row 424,515 (id 12,312,363,767)

- **type:** IssuesEvent · **action:** closed · **created_at:** 2020-05-12 13:50:09
- **repo:** hotosm/tasking-manager (https://api.github.com/repos/hotosm/tasking-manager)
- **labels:** Priority: High Status: Needs implementation Type: Bug
- **title:** Check on notification options

**body:**
Users reported that the options on their profile aren't having effects on the messages they receive:

**index:** 1.0 · **label:** priority · **binary_label:** 1
## Row 237,661 (id 7,762,875,603)

- **type:** IssuesEvent · **action:** closed · **created_at:** 2018-06-01 14:49:46
- **repo:** martchellop/Entretenibit (https://api.github.com/repos/martchellop/Entretenibit)
- **labels:** enhancement priority: high
- **title:** Add the cards in the select page

**body:**
Add the cards in the select page, they are 2 per line and should always be centralized in the middle.
**index:** 1.0 · **label:** priority · **binary_label:** 1
## Row 779,445 (id 27,353,091,433)

- **type:** IssuesEvent · **action:** opened · **created_at:** 2023-02-27 10:57:19
- **repo:** sebastien-d-me/SebBlog (https://api.github.com/repos/sebastien-d-me/SebBlog)
- **labels:** Priority: High Statut: Not started Type : Back-end
- **title:** Comment validation system

**body:**
#### Description:
Creation of the comment validation system.
------------
###### Estimated time: 2 day(s)
###### Difficulty: ⭐⭐
**index:** 1.0 · **label:** priority · **binary_label:** 1
## Row 501,076 (id 14,520,680,097)

- **type:** IssuesEvent · **action:** opened · **created_at:** 2020-12-14 05:58:40
- **repo:** a2000-erp-team/WEBERP (https://api.github.com/repos/a2000-erp-team/WEBERP)
- **labels:** ABIGAIL High Priority
- **title:** Upon saving of report maintenance, system will save and refresh the screen to the first line. Please edit this to fix the screen to where user has saved and NOT to refresh to the 1st line


**index:** 1.0 · **label:** priority · **binary_label:** 1
## Row 714,692 (id 24,570,597,760)

- **type:** IssuesEvent · **action:** closed · **created_at:** 2022-10-13 08:20:45
- **repo:** fractal-analytics-platform/fractal-server (https://api.github.com/repos/fractal-analytics-platform/fractal-server)
- **labels:** High Priority
- **title:** `sqlite3.OperationalError: no such table: applyworkflow`

**body:**
When running the server (from `main`), and submitting a workflow via the client, I get the error:
```python traceback
INFO: 127.0.0.1:53570 - "POST /api/v1/project/apply/ HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 100, in execute
self._adapt_connection._handle_exception(error)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 229, in _handle_exception
raise error
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 82, in execute
self.await_(_cursor.execute(operation, parameters))
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 37, in execute
await self._execute(self._cursor.execute, sql, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 31, in _execute
return await self._conn._execute(fn, *args, **kwargs)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 129, in _execute
return await future
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 102, in run
result = function()
sqlite3.OperationalError: no such table: applyworkflow
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 404, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/applications.py", line 269, in __call__
await super().__call__(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/middleware/cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/exceptions.py", line 93, in __call__
raise exc
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
await self.app(scope, receive, sender)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/routing.py", line 670, in __call__
await route.handle(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/routing.py", line 266, in handle
await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/routing.py", line 65, in app
response = await func(request)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/routing.py", line 227, in app
raw_response = await run_endpoint_function(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/routing.py", line 160, in run_endpoint_function
return await dependant.call(**values)
File "/home/tommaso/Fractal/fractal-server/fractal_server/app/api/v1/project.py", line 191, in apply_workflow
await db.commit()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/ext/asyncio/session.py", line 578, in commit
return await greenlet_spawn(self.sync_session.commit)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 136, in greenlet_spawn
result = context.switch(value)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1431, in commit
self._transaction.commit(_to_root=self.future)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3363, in flush
self._flush(objects)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3503, in _flush
transaction.rollback(_capture_exception=True)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3463, in _flush
flush_context.execute()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
util.preloaded.orm_persistence.save_obj(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 245, in save_obj
_emit_insert_statements(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1238, in _emit_insert_statements
result = connection._execute_20(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2043, in _handle_dbapi_exception
util.raise_(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 100, in execute
self._adapt_connection._handle_exception(error)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 229, in _handle_exception
raise error
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 82, in execute
self.await_(_cursor.execute(operation, parameters))
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 37, in execute
await self._execute(self._cursor.execute, sql, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 31, in _execute
return await self._conn._execute(fn, *args, **kwargs)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 129, in _execute
return await future
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 102, in run
result = function()
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: applyworkflow
[SQL: INSERT INTO applyworkflow (project_id, input_dataset_id, output_dataset_id, workflow_id, overwrite_input, worker_init, start_timestamp, status) VALUES (?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: (1, 1, 2, 10, 0, None, '2022-10-13 07:53:48.577775', <StatusType.SUBMITTED: 'submitted'>)]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
2022-10-13 09:53:48,579; ERROR; Exception in ASGI application
Traceback (most recent call last):
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 100, in execute
self._adapt_connection._handle_exception(error)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 229, in _handle_exception
raise error
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 82, in execute
self.await_(_cursor.execute(operation, parameters))
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 37, in execute
await self._execute(self._cursor.execute, sql, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 31, in _execute
return await self._conn._execute(fn, *args, **kwargs)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 129, in _execute
return await future
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 102, in run
result = function()
sqlite3.OperationalError: no such table: applyworkflow
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 404, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/applications.py", line 269, in __call__
await super().__call__(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/middleware/cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/exceptions.py", line 93, in __call__
raise exc
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
await self.app(scope, receive, sender)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/routing.py", line 670, in __call__
await route.handle(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/routing.py", line 266, in handle
await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/routing.py", line 65, in app
response = await func(request)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/routing.py", line 227, in app
raw_response = await run_endpoint_function(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/routing.py", line 160, in run_endpoint_function
return await dependant.call(**values)
File "/home/tommaso/Fractal/fractal-server/fractal_server/app/api/v1/project.py", line 191, in apply_workflow
await db.commit()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/ext/asyncio/session.py", line 578, in commit
return await greenlet_spawn(self.sync_session.commit)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 136, in greenlet_spawn
result = context.switch(value)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1431, in commit
self._transaction.commit(_to_root=self.future)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3363, in flush
self._flush(objects)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3503, in _flush
transaction.rollback(_capture_exception=True)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3463, in _flush
flush_context.execute()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
util.preloaded.orm_persistence.save_obj(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 245, in save_obj
_emit_insert_statements(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1238, in _emit_insert_statements
result = connection._execute_20(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2043, in _handle_dbapi_exception
util.raise_(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 100, in execute
self._adapt_connection._handle_exception(error)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 229, in _handle_exception
raise error
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 82, in execute
self.await_(_cursor.execute(operation, parameters))
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 37, in execute
await self._execute(self._cursor.execute, sql, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 31, in _execute
return await self._conn._execute(fn, *args, **kwargs)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 129, in _execute
return await future
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 102, in run
result = function()
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: applyworkflow
```
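The failure mode in the traceback is easy to reproduce outside fractal-server: inserting into a table that was never created raises the same `sqlite3.OperationalError`, and creating the schema first (in fractal-server's case, presumably via its metadata/migration setup) makes the insert succeed. A minimal sketch against an in-memory SQLite database; the single-column table is invented for illustration, not fractal-server's real `applyworkflow` schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Inserting before the table exists reproduces the error from the traceback.
try:
    conn.execute("INSERT INTO applyworkflow (project_id) VALUES (1)")
except sqlite3.OperationalError as err:
    print(err)  # no such table: applyworkflow

# Once the schema is created, the same INSERT succeeds.
conn.execute("CREATE TABLE applyworkflow (project_id INTEGER)")
conn.execute("INSERT INTO applyworkflow (project_id) VALUES (1)")
assert conn.execute("SELECT COUNT(*) FROM applyworkflow").fetchone()[0] == 1
```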
**index:** 1.0
self.dialect.do_execute(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 100, in execute
self._adapt_connection._handle_exception(error)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 229, in _handle_exception
raise error
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 82, in execute
self.await_(_cursor.execute(operation, parameters))
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 37, in execute
await self._execute(self._cursor.execute, sql, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 31, in _execute
return await self._conn._execute(fn, *args, **kwargs)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 129, in _execute
return await future
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 102, in run
result = function()
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: applyworkflow
[SQL: INSERT INTO applyworkflow (project_id, input_dataset_id, output_dataset_id, workflow_id, overwrite_input, worker_init, start_timestamp, status) VALUES (?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: (1, 1, 2, 10, 0, None, '2022-10-13 07:53:48.577775', <StatusType.SUBMITTED: 'submitted'>)]
(Background on this error at: https://sqlalche.me/e/14/e3q8)
2022-10-13 09:53:48,579; ERROR; Exception in ASGI application
Traceback (most recent call last):
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 100, in execute
self._adapt_connection._handle_exception(error)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 229, in _handle_exception
raise error
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 82, in execute
self.await_(_cursor.execute(operation, parameters))
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 37, in execute
await self._execute(self._cursor.execute, sql, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 31, in _execute
return await self._conn._execute(fn, *args, **kwargs)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 129, in _execute
return await future
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 102, in run
result = function()
sqlite3.OperationalError: no such table: applyworkflow
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 404, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/applications.py", line 269, in __call__
await super().__call__(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/middleware/cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/exceptions.py", line 93, in __call__
raise exc
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
await self.app(scope, receive, sender)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/routing.py", line 670, in __call__
await route.handle(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/routing.py", line 266, in handle
await self.app(scope, receive, send)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/starlette/routing.py", line 65, in app
response = await func(request)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/routing.py", line 227, in app
raw_response = await run_endpoint_function(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/fastapi/routing.py", line 160, in run_endpoint_function
return await dependant.call(**values)
File "/home/tommaso/Fractal/fractal-server/fractal_server/app/api/v1/project.py", line 191, in apply_workflow
await db.commit()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/ext/asyncio/session.py", line 578, in commit
return await greenlet_spawn(self.sync_session.commit)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 136, in greenlet_spawn
result = context.switch(value)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1431, in commit
self._transaction.commit(_to_root=self.future)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3363, in flush
self._flush(objects)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3503, in _flush
transaction.rollback(_capture_exception=True)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3463, in _flush
flush_context.execute()
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
util.preloaded.orm_persistence.save_obj(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 245, in save_obj
_emit_insert_statements(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1238, in _emit_insert_statements
result = connection._execute_20(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2043, in _handle_dbapi_exception
util.raise_(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 100, in execute
self._adapt_connection._handle_exception(error)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 229, in _handle_exception
raise error
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/dialects/sqlite/aiosqlite.py", line 82, in execute
self.await_(_cursor.execute(operation, parameters))
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 37, in execute
await self._execute(self._cursor.execute, sql, parameters)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/cursor.py", line 31, in _execute
return await self._conn._execute(fn, *args, **kwargs)
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 129, in _execute
return await future
File "/home/tommaso/miniconda3/envs/fractal/lib/python3.8/site-packages/aiosqlite/core.py", line 102, in run
result = function()
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: applyworkflow
```
|
priority
|
operationalerror no such table applyworkflow when running the server from main and submitting a workflow via the client i get the error python traceback info post api project apply http internal server error error exception in asgi application traceback most recent call last file home tommaso envs fractal lib site packages sqlalchemy engine base py line in execute context self dialect do execute file home tommaso envs fractal lib site packages sqlalchemy engine default py line in do execute cursor execute statement parameters file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in execute self adapt connection handle exception error file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in handle exception raise error file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in execute self await cursor execute operation parameters file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in await only return current driver switch awaitable file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in greenlet spawn value await result file home tommaso envs fractal lib site packages aiosqlite cursor py line in execute await self execute self cursor execute sql parameters file home tommaso envs fractal lib site packages aiosqlite cursor py line in execute return await self conn execute fn args kwargs file home tommaso envs fractal lib site packages aiosqlite core py line in execute return await future file home tommaso envs fractal lib site packages aiosqlite core py line in run result function operationalerror no such table applyworkflow the above exception was the direct cause of the following exception traceback most recent call last file home tommaso envs fractal lib site packages uvicorn protocols http impl py line in run asgi result await app type ignore file home tommaso envs fractal lib site 
packages uvicorn middleware proxy headers py line in call return await self app scope receive send file home tommaso envs fractal lib site packages fastapi applications py line in call await super call scope receive send file home tommaso envs fractal lib site packages starlette applications py line in call await self middleware stack scope receive send file home tommaso envs fractal lib site packages starlette middleware errors py line in call raise exc file home tommaso envs fractal lib site packages starlette middleware errors py line in call await self app scope receive send file home tommaso envs fractal lib site packages starlette middleware cors py line in call await self app scope receive send file home tommaso envs fractal lib site packages starlette exceptions py line in call raise exc file home tommaso envs fractal lib site packages starlette exceptions py line in call await self app scope receive sender file home tommaso envs fractal lib site packages fastapi middleware asyncexitstack py line in call raise e file home tommaso envs fractal lib site packages fastapi middleware asyncexitstack py line in call await self app scope receive send file home tommaso envs fractal lib site packages starlette routing py line in call await route handle scope receive send file home tommaso envs fractal lib site packages starlette routing py line in handle await self app scope receive send file home tommaso envs fractal lib site packages starlette routing py line in app response await func request file home tommaso envs fractal lib site packages fastapi routing py line in app raw response await run endpoint function file home tommaso envs fractal lib site packages fastapi routing py line in run endpoint function return await dependant call values file home tommaso fractal fractal server fractal server app api project py line in apply workflow await db commit file home tommaso envs fractal lib site packages sqlalchemy ext asyncio session py line in commit return await 
greenlet spawn self sync session commit file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in greenlet spawn result context switch value file home tommaso envs fractal lib site packages sqlalchemy orm session py line in commit self transaction commit to root self future file home tommaso envs fractal lib site packages sqlalchemy orm session py line in commit self prepare impl file home tommaso envs fractal lib site packages sqlalchemy orm session py line in prepare impl self session flush file home tommaso envs fractal lib site packages sqlalchemy orm session py line in flush self flush objects file home tommaso envs fractal lib site packages sqlalchemy orm session py line in flush transaction rollback capture exception true file home tommaso envs fractal lib site packages sqlalchemy util langhelpers py line in exit compat raise file home tommaso envs fractal lib site packages sqlalchemy util compat py line in raise raise exception file home tommaso envs fractal lib site packages sqlalchemy orm session py line in flush flush context execute file home tommaso envs fractal lib site packages sqlalchemy orm unitofwork py line in execute rec execute self file home tommaso envs fractal lib site packages sqlalchemy orm unitofwork py line in execute util preloaded orm persistence save obj file home tommaso envs fractal lib site packages sqlalchemy orm persistence py line in save obj emit insert statements file home tommaso envs fractal lib site packages sqlalchemy orm persistence py line in emit insert statements result connection execute file home tommaso envs fractal lib site packages sqlalchemy engine base py line in execute return meth self args kwargs execution options file home tommaso envs fractal lib site packages sqlalchemy sql elements py line in execute on connection return connection execute clauseelement file home tommaso envs fractal lib site packages sqlalchemy engine base py line in execute clauseelement ret self execute 
context file home tommaso envs fractal lib site packages sqlalchemy engine base py line in execute context self handle dbapi exception file home tommaso envs fractal lib site packages sqlalchemy engine base py line in handle dbapi exception util raise file home tommaso envs fractal lib site packages sqlalchemy util compat py line in raise raise exception file home tommaso envs fractal lib site packages sqlalchemy engine base py line in execute context self dialect do execute file home tommaso envs fractal lib site packages sqlalchemy engine default py line in do execute cursor execute statement parameters file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in execute self adapt connection handle exception error file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in handle exception raise error file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in execute self await cursor execute operation parameters file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in await only return current driver switch awaitable file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in greenlet spawn value await result file home tommaso envs fractal lib site packages aiosqlite cursor py line in execute await self execute self cursor execute sql parameters file home tommaso envs fractal lib site packages aiosqlite cursor py line in execute return await self conn execute fn args kwargs file home tommaso envs fractal lib site packages aiosqlite core py line in execute return await future file home tommaso envs fractal lib site packages aiosqlite core py line in run result function sqlalchemy exc operationalerror operationalerror no such table applyworkflow background on this error at error exception in asgi application traceback most recent call last file home tommaso envs fractal lib site packages 
sqlalchemy engine base py line in execute context self dialect do execute file home tommaso envs fractal lib site packages sqlalchemy engine default py line in do execute cursor execute statement parameters file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in execute self adapt connection handle exception error file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in handle exception raise error file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in execute self await cursor execute operation parameters file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in await only return current driver switch awaitable file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in greenlet spawn value await result file home tommaso envs fractal lib site packages aiosqlite cursor py line in execute await self execute self cursor execute sql parameters file home tommaso envs fractal lib site packages aiosqlite cursor py line in execute return await self conn execute fn args kwargs file home tommaso envs fractal lib site packages aiosqlite core py line in execute return await future file home tommaso envs fractal lib site packages aiosqlite core py line in run result function operationalerror no such table applyworkflow the above exception was the direct cause of the following exception traceback most recent call last file home tommaso envs fractal lib site packages uvicorn protocols http impl py line in run asgi result await app type ignore file home tommaso envs fractal lib site packages uvicorn middleware proxy headers py line in call return await self app scope receive send file home tommaso envs fractal lib site packages fastapi applications py line in call await super call scope receive send file home tommaso envs fractal lib site packages starlette applications py line in call await self 
middleware stack scope receive send file home tommaso envs fractal lib site packages starlette middleware errors py line in call raise exc file home tommaso envs fractal lib site packages starlette middleware errors py line in call await self app scope receive send file home tommaso envs fractal lib site packages starlette middleware cors py line in call await self app scope receive send file home tommaso envs fractal lib site packages starlette exceptions py line in call raise exc file home tommaso envs fractal lib site packages starlette exceptions py line in call await self app scope receive sender file home tommaso envs fractal lib site packages fastapi middleware asyncexitstack py line in call raise e file home tommaso envs fractal lib site packages fastapi middleware asyncexitstack py line in call await self app scope receive send file home tommaso envs fractal lib site packages starlette routing py line in call await route handle scope receive send file home tommaso envs fractal lib site packages starlette routing py line in handle await self app scope receive send file home tommaso envs fractal lib site packages starlette routing py line in app response await func request file home tommaso envs fractal lib site packages fastapi routing py line in app raw response await run endpoint function file home tommaso envs fractal lib site packages fastapi routing py line in run endpoint function return await dependant call values file home tommaso fractal fractal server fractal server app api project py line in apply workflow await db commit file home tommaso envs fractal lib site packages sqlalchemy ext asyncio session py line in commit return await greenlet spawn self sync session commit file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in greenlet spawn result context switch value file home tommaso envs fractal lib site packages sqlalchemy orm session py line in commit self transaction commit to root self future file home 
tommaso envs fractal lib site packages sqlalchemy orm session py line in commit self prepare impl file home tommaso envs fractal lib site packages sqlalchemy orm session py line in prepare impl self session flush file home tommaso envs fractal lib site packages sqlalchemy orm session py line in flush self flush objects file home tommaso envs fractal lib site packages sqlalchemy orm session py line in flush transaction rollback capture exception true file home tommaso envs fractal lib site packages sqlalchemy util langhelpers py line in exit compat raise file home tommaso envs fractal lib site packages sqlalchemy util compat py line in raise raise exception file home tommaso envs fractal lib site packages sqlalchemy orm session py line in flush flush context execute file home tommaso envs fractal lib site packages sqlalchemy orm unitofwork py line in execute rec execute self file home tommaso envs fractal lib site packages sqlalchemy orm unitofwork py line in execute util preloaded orm persistence save obj file home tommaso envs fractal lib site packages sqlalchemy orm persistence py line in save obj emit insert statements file home tommaso envs fractal lib site packages sqlalchemy orm persistence py line in emit insert statements result connection execute file home tommaso envs fractal lib site packages sqlalchemy engine base py line in execute return meth self args kwargs execution options file home tommaso envs fractal lib site packages sqlalchemy sql elements py line in execute on connection return connection execute clauseelement file home tommaso envs fractal lib site packages sqlalchemy engine base py line in execute clauseelement ret self execute context file home tommaso envs fractal lib site packages sqlalchemy engine base py line in execute context self handle dbapi exception file home tommaso envs fractal lib site packages sqlalchemy engine base py line in handle dbapi exception util raise file home tommaso envs fractal lib site packages sqlalchemy util 
compat py line in raise raise exception file home tommaso envs fractal lib site packages sqlalchemy engine base py line in execute context self dialect do execute file home tommaso envs fractal lib site packages sqlalchemy engine default py line in do execute cursor execute statement parameters file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in execute self adapt connection handle exception error file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in handle exception raise error file home tommaso envs fractal lib site packages sqlalchemy dialects sqlite aiosqlite py line in execute self await cursor execute operation parameters file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in await only return current driver switch awaitable file home tommaso envs fractal lib site packages sqlalchemy util concurrency py line in greenlet spawn value await result file home tommaso envs fractal lib site packages aiosqlite cursor py line in execute await self execute self cursor execute sql parameters file home tommaso envs fractal lib site packages aiosqlite cursor py line in execute return await self conn execute fn args kwargs file home tommaso envs fractal lib site packages aiosqlite core py line in execute return await future file home tommaso envs fractal lib site packages aiosqlite core py line in run result function sqlalchemy exc operationalerror operationalerror no such table applyworkflow
| 1
|
637,038
| 20,618,407,707
|
IssuesEvent
|
2022-03-07 15:15:30
|
owid/covid-19-data
|
https://api.github.com/repos/owid/covid-19-data
|
closed
|
Invalid data in vaccinations-by-manufacturer.csv
|
bug dom:vaccinations priority:high report
|
### Country
Uruguay
### Domain
Vaccinations
### Which data is inaccurate or missing?
In file: [vaccinations-by-manufacturer.csv](https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/vaccinations/vaccinations-by-manufacturer.csv)
E.g.:
See the bottom lines, which do not have data:
Uruguay,2022-03-06,Oxford/AstraZeneca,89635
Uruguay,2022-03-06,Pfizer/BioNTech,2398276
Uruguay,2022-03-06,Sinovac,3247639
Uruguay,,Oxford/AstraZeneca,89635
Uruguay,,Pfizer/BioNTech,2362759
Uruguay,,Sinovac,3247636
### Why do you think the data is inaccurate or missing?
Either the data is not available, or there is human error, or error of automatic processing of the data.
|
1.0
|
Invalid data in vaccinations-by-manufacturer.csv - ### Country
Uruguay
### Domain
Vaccinations
### Which data is inaccurate or missing?
In file: [vaccinations-by-manufacturer.csv](https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/vaccinations/vaccinations-by-manufacturer.csv)
E.g.:
See the bottom lines, which do not have data:
Uruguay,2022-03-06,Oxford/AstraZeneca,89635
Uruguay,2022-03-06,Pfizer/BioNTech,2398276
Uruguay,2022-03-06,Sinovac,3247639
Uruguay,,Oxford/AstraZeneca,89635
Uruguay,,Pfizer/BioNTech,2362759
Uruguay,,Sinovac,3247636
### Why do you think the data is inaccurate or missing?
Either the data is not available, or there is human error, or error of automatic processing of the data.
|
priority
|
invalid data in vaccinations by manufacturer csv country uruguay domain vaccinations which data is inaccurate or missing in file e g see the bottom lines which do not have data uruguay oxford astrazeneca uruguay pfizer biontech uruguay sinovac uruguay oxford astrazeneca uruguay pfizer biontech uruguay sinovac why do you think the data is inaccurate or missing either the data is not available or there is human error or error of automatic processing of the data
| 1
|
825,268
| 31,301,744,637
|
IssuesEvent
|
2023-08-23 00:33:37
|
SurajPratap10/Imagine_AI
|
https://api.github.com/repos/SurajPratap10/Imagine_AI
|
closed
|
Adding a blogs section containing latest ai news
|
gssoc23 High Priority 🔥 ⭐ goal: addition level3
|
Similar to this it will help the user to get the updated and the latest news about ai

pls assign me this issue under GSSOC'23 label
@SurajPratap10
|
1.0
|
Adding a blogs section containing latest ai news - Similar to this it will help the user to get the updated and the latest news about ai

pls assign me this issue under GSSOC'23 label
@SurajPratap10
|
priority
|
adding a blogs section containing latest ai news similar to this it will help the user get updated with the latest news about ai
| 1
|
25,289
| 2,678,805,008
|
IssuesEvent
|
2015-03-26 13:29:08
|
andresriancho/w3af
|
https://api.github.com/repos/andresriancho/w3af
|
closed
|
JSON fuzzing error @ return self.get_token().set_value(value)
|
bug priority:high
|
It has something to do with the delay controllers used in blind sql injection, eval, os commanding detection.
I suspect it's something with the copying of mutants not maintaining the token value.
```
********************************************************************************
mutant.get_token(): <DataToken for (u'object-external_reference-string',): "">
trivial_mutant.get_token(): None
********************************************************************************
```
```python
trivial_mutant = mutant.copy()
print '*' * 80
print 'mutant.get_token(): %r' % mutant.get_token()
print 'trivial_mutant.get_token(): %r' % trivial_mutant.get_token()
print '*' * 80
trivial_mutant.set_token_value(payload)
```
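The suspicion above (a copy that does not carry the token over) can be reproduced with small stand-in classes; these are hypothetical stand-ins for illustration, not the actual w3af `DataToken`/`Mutant` API:

```python
import copy

class DataToken:
    """Stand-in for a fuzzable token: a named payload slot with a settable value."""
    def __init__(self, name, value=""):
        self.name = name
        self.value = value

    def set_value(self, value):
        self.value = value

class Mutant:
    """Stand-in mutant; copy() must carry the token over, or
    set_token_value() hits the AttributeError from the traceback."""
    def __init__(self, token=None):
        self._token = token

    def get_token(self):
        return self._token

    def set_token_value(self, payload):
        # Same shape as mutant.py:set_token_value(): if the copy lost the
        # token, get_token() returns None and this raises
        # AttributeError: 'NoneType' object has no attribute 'set_value'.
        return self.get_token().set_value(payload)

    def copy(self):
        return copy.deepcopy(self)  # a deep copy keeps the token attached

mutant = Mutant(DataToken("object-external_reference-string"))
trivial_mutant = mutant.copy()
trivial_mutant.set_token_value("payload")
print(trivial_mutant.get_token().value)  # payload
```

If `copy()` instead rebuilt the mutant without re-attaching its token (as the debug output above suggests), `get_token()` on the copy would return `None` and reproduce the crash.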
## Version Information
```
Python version: 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2]
GTK version: 2.24.23
PyGTK version: 2.24.0
w3af version:
w3af - Web Application Attack and Audit Framework
Version: 1.6.48
Revision: cbd26a6993 - 25 mar 2015 10:49
Branch: detached HEAD
Local changes: Yes
Author: Andres Riancho and the w3af team.
```
## Traceback
```pytb
A "AttributeError" exception was found while running audit.generic on "Method: POST | https://domain/path/foo/ | JSON: (object-external_reference-string, object-token-string, object-reason-string, object-payment_method_id-string)". The exception was: "'NoneType' object has no attribute 'set_value'" at mutant.py:set_token_value():106.The full traceback is:
File "/home/user/pch/w3af/w3af/core/controllers/core_helpers/consumers/audit.py", line 110, in _audit
plugin.audit_with_copy(fuzzable_request, orig_resp)
File "/home/user/pch/w3af/w3af/core/controllers/plugins/audit_plugin.py", line 139, in audit_with_copy
return self.audit(fuzzable_request, orig_resp)
File "/home/user/pch/w3af/w3af/plugins/audit/generic.py", line 92, in audit
m.set_token_value(error_string)
File "/home/user/pch/w3af/w3af/core/data/fuzzer/mutants/mutant.py", line 106, in set_token_value
return self.get_token().set_value(value)
```
## Enabled Plugins
```python
{'attack': {},
'audit': {u'blind_sqli': <OptionList: eq_limit>,
u'buffer_overflow': <OptionList: >,
u'cors_origin': <OptionList: origin_header_value>,
u'csrf': <OptionList: >,
u'dav': <OptionList: >,
u'eval': <OptionList: use_time_delay|use_echo>,
u'file_upload': <OptionList: extensions>,
u'format_string': <OptionList: >,
u'frontpage': <OptionList: >,
u'generic': <OptionList: diff_ratio>,
u'global_redirect': <OptionList: >,
u'htaccess_methods': <OptionList: >,
u'ldapi': <OptionList: >,
u'lfi': <OptionList: >,
u'memcachei': <OptionList: >,
u'mx_injection': <OptionList: >,
u'os_commanding': <OptionList: >,
u'phishing_vector': <OptionList: >,
u'preg_replace': <OptionList: >,
u'redos': <OptionList: >,
u'response_splitting': <OptionList: >,
u'rfd': <OptionList: >,
u'rfi': <OptionList: listen_address|listen_port|use_w3af_site>,
u'shell_shock': <OptionList: >,
u'sqli': <OptionList: >,
u'ssi': <OptionList: >,
u'ssl_certificate': <OptionList: minExpireDays|caFileName>,
u'un_ssl': <OptionList: >,
u'xpath': <OptionList: >,
u'xss': <OptionList: persistent_xss>,
u'xst': <OptionList: >},
'auth': {},
'bruteforce': {},
'crawl': {u'spider_man': <OptionList: listen_address|listen_port>},
'evasion': {},
'grep': {'error_500': {}},
'infrastructure': {'allowed_methods': {},
'frontpage_version': {},
'server_header': {}},
'mangle': {},
'output': {u'console': <OptionList: verbose>}}
```
|
1.0
|
JSON fuzzing error @ return self.get_token().set_value(value) - It has something to do with the delay controllers used in blind sql injection, eval, os commanding detection.
I suspect it's something with the copying of mutants not maintaining the token value.
```
********************************************************************************
mutant.get_token(): <DataToken for (u'object-external_reference-string',): "">
trivial_mutant.get_token(): None
********************************************************************************
```
```python
trivial_mutant = mutant.copy()
print '*' * 80
print 'mutant.get_token(): %r' % mutant.get_token()
print 'trivial_mutant.get_token(): %r' % trivial_mutant.get_token()
print '*' * 80
trivial_mutant.set_token_value(payload)
```
## Version Information
```
Python version: 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2]
GTK version: 2.24.23
PyGTK version: 2.24.0
w3af version:
w3af - Web Application Attack and Audit Framework
Version: 1.6.48
Revision: cbd26a6993 - 25 mar 2015 10:49
Branch: detached HEAD
Local changes: Yes
Author: Andres Riancho and the w3af team.
```
## Traceback
```pytb
A "AttributeError" exception was found while running audit.generic on "Method: POST | https://domain/path/foo/ | JSON: (object-external_reference-string, object-token-string, object-reason-string, object-payment_method_id-string)". The exception was: "'NoneType' object has no attribute 'set_value'" at mutant.py:set_token_value():106.The full traceback is:
File "/home/user/pch/w3af/w3af/core/controllers/core_helpers/consumers/audit.py", line 110, in _audit
plugin.audit_with_copy(fuzzable_request, orig_resp)
File "/home/user/pch/w3af/w3af/core/controllers/plugins/audit_plugin.py", line 139, in audit_with_copy
return self.audit(fuzzable_request, orig_resp)
File "/home/user/pch/w3af/w3af/plugins/audit/generic.py", line 92, in audit
m.set_token_value(error_string)
File "/home/user/pch/w3af/w3af/core/data/fuzzer/mutants/mutant.py", line 106, in set_token_value
return self.get_token().set_value(value)
```
## Enabled Plugins
```python
{'attack': {},
'audit': {u'blind_sqli': <OptionList: eq_limit>,
u'buffer_overflow': <OptionList: >,
u'cors_origin': <OptionList: origin_header_value>,
u'csrf': <OptionList: >,
u'dav': <OptionList: >,
u'eval': <OptionList: use_time_delay|use_echo>,
u'file_upload': <OptionList: extensions>,
u'format_string': <OptionList: >,
u'frontpage': <OptionList: >,
u'generic': <OptionList: diff_ratio>,
u'global_redirect': <OptionList: >,
u'htaccess_methods': <OptionList: >,
u'ldapi': <OptionList: >,
u'lfi': <OptionList: >,
u'memcachei': <OptionList: >,
u'mx_injection': <OptionList: >,
u'os_commanding': <OptionList: >,
u'phishing_vector': <OptionList: >,
u'preg_replace': <OptionList: >,
u'redos': <OptionList: >,
u'response_splitting': <OptionList: >,
u'rfd': <OptionList: >,
u'rfi': <OptionList: listen_address|listen_port|use_w3af_site>,
u'shell_shock': <OptionList: >,
u'sqli': <OptionList: >,
u'ssi': <OptionList: >,
u'ssl_certificate': <OptionList: minExpireDays|caFileName>,
u'un_ssl': <OptionList: >,
u'xpath': <OptionList: >,
u'xss': <OptionList: persistent_xss>,
u'xst': <OptionList: >},
'auth': {},
'bruteforce': {},
'crawl': {u'spider_man': <OptionList: listen_address|listen_port>},
'evasion': {},
'grep': {'error_500': {}},
'infrastructure': {'allowed_methods': {},
'frontpage_version': {},
'server_header': {}},
'mangle': {},
'output': {u'console': <OptionList: verbose>}}
```
|
priority
|
json fuzzing error return self get token set value value it has something to do with the delay controllers used in blind sql injection eval os commanding detection i suspect it s something with the copying of mutants not maintaining the token value mutant get token trivial mutant get token none python trivial mutant mutant copy print print mutant get token r mutant get token print trivial mutant get token r trivial mutant get token print trivial mutant set token value payload version information python version default mar gtk version pygtk version version web application attack and audit framework version revision mar branch detached head local changes yes author andres riancho and the team traceback pytb a attributeerror exception was found while running audit generic on method post json object external reference string object token string object reason string object payment method id string the exception was nonetype object has no attribute set value at mutant py set token value the full traceback is file home user pch core controllers core helpers consumers audit py line in audit plugin audit with copy fuzzable request orig resp file home user pch core controllers plugins audit plugin py line in audit with copy return self audit fuzzable request orig resp file home user pch plugins audit generic py line in audit m set token value error string file home user pch core data fuzzer mutants mutant py line in set token value return self get token set value value enabled plugins python attack audit u blind sqli u buffer overflow u cors origin u csrf u dav u eval u file upload u format string u frontpage u generic u global redirect u htaccess methods u ldapi u lfi u memcachei u mx injection u os commanding u phishing vector u preg replace u redos u response splitting u rfd u rfi u shell shock u sqli u ssi u ssl certificate u un ssl u xpath u xss u xst auth bruteforce crawl u spider man evasion grep error infrastructure allowed methods frontpage version server header 
mangle output u console
| 1
|
611,325
| 18,952,167,810
|
IssuesEvent
|
2021-11-18 16:11:26
|
dmwm/CRABServer
|
https://api.github.com/repos/dmwm/CRABServer
|
closed
|
Error during code cleanup
|
Type: Bug Priority: High
|
when removing old crabCache code I introduced a problem here
https://github.com/dmwm/CRABServer/blob/238d27b366451b5071700ef0f6449dff9b8d4584/src/python/TaskWorker/Actions/DryRunUploader.py#L56
variable `result` is not defined anymore (nor needed)
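A minimal sketch of the failure mode (hypothetical function, not the actual DryRunUploader code): removing the assignment during cleanup while a later line still references `result` produces a NameError at run time, which static checkers such as pyflakes would flag before deployment.

```python
def upload_after_cleanup():
    # The assignment that used to define `result` was removed in cleanup:
    # result = do_upload()
    return result  # now raises NameError: name 'result' is not defined

try:
    upload_after_cleanup()
except NameError as exc:
    print(exc)  # name 'result' is not defined
```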
|
1.0
|
Error during code cleanup - when removing old crabCache code I introduced a problem here
https://github.com/dmwm/CRABServer/blob/238d27b366451b5071700ef0f6449dff9b8d4584/src/python/TaskWorker/Actions/DryRunUploader.py#L56
variable `result` is not defined anymore (nor needed)
|
priority
|
error during code cleanup when removing old crabcache code i introduced a problem here variable result is not defined anymore nor needed
| 1
|
672,835
| 22,841,512,907
|
IssuesEvent
|
2022-07-12 22:36:17
|
michaelrsweet/pdfio
|
https://api.github.com/repos/michaelrsweet/pdfio
|
closed
|
Can't get info from PDF
|
bug priority-high
|
Can't get info (e.g. author, subject, keywords) from PDF when reading in from a file
**To Reproduce**
Compile and run this
```c
#include "pdfio.h"
int main() {
{ // Create the test file
pdfio_file_t *pdf = pdfioFileCreate("test.pdf", "2.0", NULL, NULL, NULL, NULL);
pdfioFileSetAuthor(pdf, "John Doe");
printf("%s\n", pdfioFileGetAuthor(pdf));
pdfioFileClose(pdf);
}
{ // Read it back in
pdfio_file_t *pdf = pdfioFileOpen("test.pdf", NULL, NULL, NULL, NULL);
printf("%s", pdfioFileGetAuthor(pdf));
pdfioFileClose(pdf);
}
}
```
**Expected behavior**
File should be read back in correctly and print "John Doe" twice
**Additional Info**
I can find `/Author(John Doe)` in the info object so it does seem to be writing correctly
**System Information:**
- OS: Linux
- Version: Latest commit ([26d485c](https://github.com/michaelrsweet/pdfio/commit/26d485cfc51e9a4eea6a687dba46e6a16a0dc20f))
|
1.0
|
Can't get info from PDF - Can't get info (e.g. author, subject, keywords) from PDF when reading in from a file
**To Reproduce**
Compile and run this
```c
#include "pdfio.h"
int main() {
{ // Create the test file
pdfio_file_t *pdf = pdfioFileCreate("test.pdf", "2.0", NULL, NULL, NULL, NULL);
pdfioFileSetAuthor(pdf, "John Doe");
printf("%s\n", pdfioFileGetAuthor(pdf));
pdfioFileClose(pdf);
}
{ // Read it back in
pdfio_file_t *pdf = pdfioFileOpen("test.pdf", NULL, NULL, NULL, NULL);
printf("%s", pdfioFileGetAuthor(pdf));
pdfioFileClose(pdf);
}
}
```
**Expected behavior**
File should be read back in correctly and print "John Doe" twice
**Additional Info**
I can find `/Author(John Doe)` in the info object so it does seem to be writing correctly
**System Information:**
- OS: Linux
- Version: Latest commit ([26d485c](https://github.com/michaelrsweet/pdfio/commit/26d485cfc51e9a4eea6a687dba46e6a16a0dc20f))
|
priority
|
cant get info from pdf can t get info e g author subject keywords from pdf when reading in from a file to reproduce compile and run this c include pdfio h int main create the test file pdfio file t pdf pdfiofilecreate test pdf null null null null pdfiofilesetauthor pdf john doe printf s n pdfiofilegetauthor pdf pdfiofileclose pdf read it back in pdfio file t pdf pdfiofileopen test pdf null null null null printf s pdfiofilegetauthor pdf pdfiofileclose pdf expected behavior file should be read back in correctly and print john doe twice additional info i can find author john doe in the info object so it does seem to be writing correctly system information os linux version latest commit
| 1
|
514,942
| 14,947,169,975
|
IssuesEvent
|
2021-01-26 08:13:33
|
bounswe/bounswe2020group4
|
https://api.github.com/repos/bounswe/bounswe2020group4
|
closed
|
(AND) Messages UI
|
Android Android-UI Coding Effort: Medium Priority: High
|
The user's messages need to be displayed.
The live chat format needs to be demonstrated.
Deadline: 24.01.2021
|
1.0
|
(AND) Messages UI - The user's messages need to be displayed.
The live chat format needs to be demonstrated.
Deadline: 24.01.2021
|
priority
|
and messages ui the messages of the user are needed to be demonstrated live chat format is needed to be demonstrated deadline
| 1
|
356,800
| 10,597,599,248
|
IssuesEvent
|
2019-10-10 01:19:55
|
fedora-infra/bodhi
|
https://api.github.com/repos/fedora-infra/bodhi
|
closed
|
Vagrant box log is filled up by fedora-messaging failure messages
|
Composer Crash EasyFix High priority
|
When I run bodhi in development mode in its vagrant box, the bodhi log is filled up by the failure of fm-consumer@config.service, thus it's impossible to read:
```
Jun 14 05:50:38 bodhi-dev.example.com systemd[1]: Started Fedora Messaging consumer.
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: [fedora_messaging.cli INFO] Starting consumer with bodhi.server.consumers:Consumer callback
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: Traceback (most recent call last):
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/local/bin/fedora-messaging", line 11, in <module>
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: load_entry_point('fedora-messaging', 'console_scripts', 'fedora-messaging')()
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 721, in __call__
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: return self.main(*args, **kwargs)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 696, in main
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: rv = self.invoke(ctx)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 1065, in invoke
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: return _process_result(sub_ctx.command.invoke(sub_ctx))
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 894, in invoke
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: return ctx.invoke(self.callback, **ctx.params)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 534, in invoke
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: return callback(*args, **kwargs)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/fedora_messaging/cli.py", line 144, in consume
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: callback, bindings=bindings, queues=queues
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/fedora_messaging/api.py", line 108, in twisted_consume
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: callback = _check_callback(callback)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/fedora_messaging/api.py", line 51, in _check_callback
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: callback_object = callback()
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/home/vagrant/bodhi/bodhi/server/consumers/__init__.py", line 57, in __init__
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: self.composer_handler = ComposerHandler()
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/home/vagrant/bodhi/bodhi/server/consumers/composer.py", line 164, in __init__
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: validate_path(config[setting])
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/home/vagrant/bodhi/bodhi/server/config.py", line 158, in validate_path
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: if not os.path.exists(value):
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib64/python3.7/genericpath.py", line 19, in exists
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: os.stat(path)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: fm-consumer@config.service: Main process exited, code=exited, status=1/FAILURE
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: fm-consumer@config.service: Failed with result 'exit-code'.
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: fm-consumer@config.service: Service RestartSec=100ms expired, scheduling restart.
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: fm-consumer@config.service: Scheduled restart job, restart counter is at 34.
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: Stopped Fedora Messaging consumer.
```
Is there any setting missing in the example development.ini file that needs to be set? Or is there any setting that will set a maximum number of retry to start fedora-messaging?
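The traceback bottoms out in `os.path.exists(None)`: `validate_path` receives `None` because the corresponding path setting is unset in the development config, so one missing `.ini` value restarts the consumer forever. A defensive version would fail with a readable message instead; a minimal sketch with a hypothetical setting name, not the actual bodhi code:

```python
import os

def validate_path(value, setting_name="<setting>"):
    """Reject an unset or missing path with a clear error instead of
    letting os.path.exists(None) raise a bare TypeError."""
    if value is None:
        raise ValueError("%s is not set; cannot validate path" % setting_name)
    if not os.path.exists(value):
        raise ValueError("%s: path %r does not exist" % (setting_name, value))
    return value

try:
    validate_path(None, "example.path.setting")
except ValueError as exc:
    print(exc)  # example.path.setting is not set; cannot validate path
```

With a check like this, the log would name the missing setting once instead of filling up with restart tracebacks.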
|
1.0
|
Vagrant box log is filled up by fedora-messaging failure messages - When I run bodhi in development mode in its vagrant box, the bodhi log is filled up by the failure of fm-consumer@config.service, thus it's impossible to read:
```
Jun 14 05:50:38 bodhi-dev.example.com systemd[1]: Started Fedora Messaging consumer.
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: [fedora_messaging.cli INFO] Starting consumer with bodhi.server.consumers:Consumer callback
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: Traceback (most recent call last):
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/local/bin/fedora-messaging", line 11, in <module>
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: load_entry_point('fedora-messaging', 'console_scripts', 'fedora-messaging')()
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 721, in __call__
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: return self.main(*args, **kwargs)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 696, in main
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: rv = self.invoke(ctx)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 1065, in invoke
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: return _process_result(sub_ctx.command.invoke(sub_ctx))
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 894, in invoke
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: return ctx.invoke(self.callback, **ctx.params)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/click/core.py", line 534, in invoke
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: return callback(*args, **kwargs)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/fedora_messaging/cli.py", line 144, in consume
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: callback, bindings=bindings, queues=queues
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/fedora_messaging/api.py", line 108, in twisted_consume
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: callback = _check_callback(callback)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib/python3.7/site-packages/fedora_messaging/api.py", line 51, in _check_callback
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: callback_object = callback()
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/home/vagrant/bodhi/bodhi/server/consumers/__init__.py", line 57, in __init__
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: self.composer_handler = ComposerHandler()
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/home/vagrant/bodhi/bodhi/server/consumers/composer.py", line 164, in __init__
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: validate_path(config[setting])
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/home/vagrant/bodhi/bodhi/server/config.py", line 158, in validate_path
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: if not os.path.exists(value):
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: File "/usr/lib64/python3.7/genericpath.py", line 19, in exists
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: os.stat(path)
Jun 14 05:50:40 bodhi-dev.example.com fedora-messaging[11226]: TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: fm-consumer@config.service: Main process exited, code=exited, status=1/FAILURE
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: fm-consumer@config.service: Failed with result 'exit-code'.
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: fm-consumer@config.service: Service RestartSec=100ms expired, scheduling restart.
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: fm-consumer@config.service: Scheduled restart job, restart counter is at 34.
Jun 14 05:50:40 bodhi-dev.example.com systemd[1]: Stopped Fedora Messaging consumer.
```
Is there any setting missing in the example development.ini file that needs to be set? Or is there any setting that will set a maximum number of retry to start fedora-messaging?
|
priority
|
vagrant box log is filled up by fedora messaging failure messages when i run bodhi in development mode in its vagrant box the bodhi log is filled up by the failure of fm consumer config service thus it s impossible to read jun bodhi dev example com systemd started fedora messaging consumer jun bodhi dev example com fedora messaging starting consumer with bodhi server consumers consumer callback jun bodhi dev example com fedora messaging traceback most recent call last jun bodhi dev example com fedora messaging file usr local bin fedora messaging line in jun bodhi dev example com fedora messaging load entry point fedora messaging console scripts fedora messaging jun bodhi dev example com fedora messaging file usr lib site packages click core py line in call jun bodhi dev example com fedora messaging return self main args kwargs jun bodhi dev example com fedora messaging file usr lib site packages click core py line in main jun bodhi dev example com fedora messaging rv self invoke ctx jun bodhi dev example com fedora messaging file usr lib site packages click core py line in invoke jun bodhi dev example com fedora messaging return process result sub ctx command invoke sub ctx jun bodhi dev example com fedora messaging file usr lib site packages click core py line in invoke jun bodhi dev example com fedora messaging return ctx invoke self callback ctx params jun bodhi dev example com fedora messaging file usr lib site packages click core py line in invoke jun bodhi dev example com fedora messaging return callback args kwargs jun bodhi dev example com fedora messaging file usr lib site packages fedora messaging cli py line in consume jun bodhi dev example com fedora messaging callback bindings bindings queues queues jun bodhi dev example com fedora messaging file usr lib site packages fedora messaging api py line in twisted consume jun bodhi dev example com fedora messaging callback check callback callback jun bodhi dev example com fedora messaging file usr lib site 
packages fedora messaging api py line in check callback jun bodhi dev example com fedora messaging callback object callback jun bodhi dev example com fedora messaging file home vagrant bodhi bodhi server consumers init py line in init jun bodhi dev example com fedora messaging self composer handler composerhandler jun bodhi dev example com fedora messaging file home vagrant bodhi bodhi server consumers composer py line in init jun bodhi dev example com fedora messaging validate path config jun bodhi dev example com fedora messaging file home vagrant bodhi bodhi server config py line in validate path jun bodhi dev example com fedora messaging if not os path exists value jun bodhi dev example com fedora messaging file usr genericpath py line in exists jun bodhi dev example com fedora messaging os stat path jun bodhi dev example com fedora messaging typeerror stat path should be string bytes os pathlike or integer not nonetype jun bodhi dev example com systemd fm consumer config service main process exited code exited status failure jun bodhi dev example com systemd fm consumer config service failed with result exit code jun bodhi dev example com systemd fm consumer config service service restartsec expired scheduling restart jun bodhi dev example com systemd fm consumer config service scheduled restart job restart counter is at jun bodhi dev example com systemd stopped fedora messaging consumer is there any setting missing in the example development ini file that needs to be set or is there any setting that will set a maximum number of retry to start fedora messaging
| 1
|
251,868
| 8,028,792,273
|
IssuesEvent
|
2018-07-27 14:03:54
|
cdnjs/cdnjs
|
https://api.github.com/repos/cdnjs/cdnjs
|
closed
|
[Request] Add tui.calendar
|
:gift: BEGINNER :label: Library Request :rotating_light: High Priority good first issue
|
**Library name:** tui.calendar
**Repository url:** https://github.com/nhnent/tui.calendar
**npm package url:** https://www.npmjs.com/package/tui-calendar
**License:** [MIT](https://github.com/nhnent/tui.calendar/blob/master/LICENSE)
**Official homepage:** https://ui.toast.com/tui-calendar
## Something you might want to know about CDNJS
CDNJS provides a CDN containing front-end libraries to allow all the web developers from around the world to use the libraries on our free CDN without any additional download/upload process.
## How to deal with this issue?
You can choose to add the library via a single package.json or with its required assets. As for the working environment, you can choose to work in the GitHub GUI or on your computer (locally).
Working locally will let you learn git (version control) and debugging skills. However, it needs more space on your computer than using GitHub. So don't feel any pressure if you want to use the GitHub GUI. If you decide to work locally, it's recommended to pull the cdnjs repository with [sparseCheckout](https://github.com/cdnjs/cdnjs/blob/master/documents/sparseCheckout.md) because the cdnjs repo is very large.
The followings are the useful steps to let you add the requested library with the GitHub GUI in an easier way. :muscle:
### Via single package.json
1. [Fork](https://github.com/cdnjs/cdnjs/fork) the cdnjs repo and do the following steps on your forked repo.
2. Create a new, non-master branch by clicking the "Branch: master" dropdown and enter the name of the library being added.
3. Create the file `<lib_name>/package.json` under `ajax/libs` directory by clicking "Create new file" button. You will see the part of the screen like this image <img width="477" alt="2017-09-01 1 39 56" src="https://user-images.githubusercontent.com/13430892/29956675-18f5eb68-8f1b-11e7-896b-7f595cba0181.png">
Find the package.json (or bower.json if there is no package.json) in the source repo (or on npm). Please copy its content and modify it if it doesn't match the format of package.json in cdnjs. There are 9 fields we may need in total here; please remove any fields other than these.
* Necessary ones: `name`, `filename`, `description`, `keywords`, `repository`, `author`, `license` and "npm auto-update" (or "git auto-update")
* `name`: library name, please use the GitHub repo name if the source is GitHub, please use npm package name if the source is npm.
* `filename`: Find the main file of this library, you might be able to find it after reading its documentation. If the main file is not minified, please still use `<filename>.min.js` or `<filename>.min.css` structure naming, we'll do the minify job.
* `author`: There might be more than one author. Please use `authors` instead when there are multiple authors.
* `license`: There also might be more than one license. Please use `licenses` when there are multiple licenses.
* npm auto-update or git auto-update: Please follow https://github.com/cdnjs/cdnjs/blob/master/documents/autoupdate.md. There are a few rules to help you choose GitHub or npm to be the source. The simplest rule you can use here is to choose the one with more versions. If the number of versions is the same, please use npm because we prefer it.
* Unnecessary: `homepage` if it's the same as the source repo or just a demo page with no link to its source repo.
Last, please use [JSONLint](http://jsonlint.com/) to validate your package.json.
4. Commit new file:
* The heading of the commit message
* If the source is git: `Add <lib_name> w/ git auto-update via single package.json`
* If the source is npm: `Add <lib_name> w/ npm auto-update via single package.json`
* The body of the commit message: `close #<issue_number>, cc @<lib_author>`
5. Submit the pull request in cdnjs repo and fill out the asked information to make sure that everything you can do is good.
[Example 1](https://github.com/extend1994/cdnjs/blob/99b1a5ca8de29b6fa96468d417befa093841bba7/ajax/libs/videojs-ima/package.json), [Example 2](https://github.com/extend1994/cdnjs/blob/91d227960987d08efadc95f7719cb2ab5b9614f7/ajax/libs/gotem/package.json), [Official document](https://github.com/cdnjs/cdnjs/blob/master/CONTRIBUTING.md#e-adding-a-new-library-by-a-single-packagejson)
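The field checklist in step 3 can be checked mechanically; a minimal sketch where the `autoupdate` key and the sample values are assumptions for illustration, not the exact cdnjs schema:

```python
import json

# Fields step 3 calls necessary (author/authors and license/licenses are
# either-or pairs; the auto-update config is simplified to one key).
REQUIRED = ["name", "filename", "description", "keywords", "repository"]

def check_cdnjs_package(raw_json):
    """Return a list of necessary fields missing from a candidate package.json."""
    pkg = json.loads(raw_json)  # also validates the JSON, like JSONLint
    missing = [field for field in REQUIRED if field not in pkg]
    if "author" not in pkg and "authors" not in pkg:
        missing.append("author/authors")
    if "license" not in pkg and "licenses" not in pkg:
        missing.append("license/licenses")
    if "autoupdate" not in pkg:
        missing.append("autoupdate")
    return missing

sample = json.dumps({
    "name": "tui-calendar",
    "filename": "tui-calendar.min.js",
    "description": "A JavaScript calendar",
    "keywords": ["calendar"],
    "repository": {"type": "git", "url": "https://github.com/nhnent/tui.calendar"},
    "author": "NHN Ent.",
    "license": "MIT",
    "autoupdate": {"source": "git", "target": "https://github.com/nhnent/tui.calendar"},
})
print(check_cdnjs_package(sample))  # [] -- nothing missing
```

An empty result means the file at least names every required field; the reviewer still checks the values themselves.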
### With library assets
1. The same as the step 1 in the method "via single package.json".
2. The same as the step 2 in the method "via single package.json".
3. Similar to the the method "via single package.json". But here we need the other necessary field: `version` **_which is the latest stable version_**. And please create package.json locally and upload it with step 4. Also, don't forget to use [JSONLint](http://jsonlint.com/) to validate package.json or use `tools/fixFormat.js` to fix its format.
4. Every library should be stored under the `ajax/libs` directory. It has its own sub-directory of `ajax/libs`, and each version of the library has its own sub-directory under the library directory, for example: `/ajax/libs/jquery/3.2.1/`. Find the files from the latest stable version this library needs to run well in a browser (which are also the files you list in the `files` field of the auto-update config in package.json) and upload them according to their versions. Note every file should be completely the same as the corresponding file upstream. **Make sure everything is alright with the command `npm test` if you work locally**.
5. Commit the files:
* The heading of the commit message:
* If the source is git: `Add <lib_name>@<version> w/ git auto-update`
* If the source is npm: `Add <lib_name>@<version> w/ npm auto-update`
* The body of the commit message: `close #<issue_number>, cc @<lib_author>`
6. The same as the step 5 in the method "via single package.json".
[Example](https://github.com/cdnjs/cdnjs/pull/10452/files), [Official document](https://github.com/cdnjs/cdnjs/blob/master/CONTRIBUTING.md#d-adding-a-new-library-with-its-assets)
### Work locally
Some useful commands:
1. Pull the repo from cdnjs repo
```
git pull origin master
```
2. Create a non-master branch from master branch.
```
git checkout -b <lib_name>
```
3. Commit the files, with `-v` you can see more detailed information
```
git commit [-v]
```
4. Revise your last commit message
```
git commit --amend [-v]
```
5. Push to GitHub, add `-f` if you want to update your remote branch.
```
git push <remote_name_you_name_for_your_forked_cdnjs_repo> <branch_name> [-f]
```
Feel free to ask some questions if you have any, we will help you as soon as possible. Now don't hesitate to become a contributor of CDNJS :smile:
## References
For more details, you can refer [our contributing document](https://github.com/cdnjs/cdnjs/blob/master/CONTRIBUTING.md) or [the contributing document from GitHub](https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project)
## Suggestion for this library
1. Everything cdnjs needs is under the `dist` folder.
2. Remember to use https for every url in package.json if available.
3. Using "Git auto-update" is better than npm auto-update because there are more versions.
|
1.0
|
[Request] Add tui.calendar - **Library name:** tui.calendar
**Repository url:** https://github.com/nhnent/tui.calendar
**npm package url:** https://www.npmjs.com/package/tui-calendar
**License:** [MIT](https://github.com/nhnent/tui.calendar/blob/master/LICENSE)
**Official homepage:** **https**://ui.toast.com/tui-calendar
## Something you might want to know about CDNJS
CDNJS provides a CDN containing front-end libraries to allow all the web developers from around the world to use the libraries on our free CDN without any additional download/upload process.
## How to deal with this issue?
You can choose to add the library via single package.json or with its required assets. As for the working environment, you can choose to work on the GitHub GUI or on your computer (locally).
Working locally will let you learn git (version control) and debugging skills. However, it needs more space on your computer compared with using GitHub, so don't feel any pressure if you want to use the GitHub GUI. If you decide to work locally, it's recommended to pull the cdnjs repository with [sparseCheckout](https://github.com/cdnjs/cdnjs/blob/master/documents/sparseCheckout.md) because the cdnjs repo is huge.
The following steps will let you add the requested library with the GitHub GUI in an easier way. :muscle:
### Via single package.json
1. [Fork](https://github.com/cdnjs/cdnjs/fork) the cdnjs repo and do the following steps on your forked repo.
2. Create a new, non-master branch by clicking the "Branch: master" dropdown and enter the name of the library being added.
3. Create the file `<lib_name>/package.json` under `ajax/libs` directory by clicking "Create new file" button. You will see the part of the screen like this image <img width="477" alt="2017-09-01 1 39 56" src="https://user-images.githubusercontent.com/13430892/29956675-18f5eb68-8f1b-11e7-896b-7f595cba0181.png">
Find the package.json (or bower.json if there is no package.json) in the source repo (including npm). Please copy its content and modify it if it doesn't match the format of package.json in cdnjs. There are 9 fields we may need in total here; please remove any other fields.
* Necessary ones: `name`, `filename`, `description`, `keywords`, `repository`, `author`, `license` and "npm auto-update" (or "git auto-update")
* `name`: library name, please use the GitHub repo name if the source is GitHub, please use npm package name if the source is npm.
* `filename`: Find the main file of this library, you might be able to find it after reading its documentation. If the main file is not minified, please still use `<filename>.min.js` or `<filename>.min.css` structure naming, we'll do the minify job.
* `author`: There might be more than one author. Please use `authors` instead when there are multiple authors.
* `license`: There also might be more than one license. Please use `licenses` when there are multiple licenses.
* npm auto-update or git auto-update: Please follow https://github.com/cdnjs/cdnjs/blob/master/documents/autoupdate.md. There are a few rules to help you choose GitHub or npm to be the source. The simplest rule you can use here is to choose the one with more versions. If the number of versions are the same, please use npm because we prefer it.
* Unnecessary: `homepage` if it's the same as the source repo or just a demo page with no link to its source repo.
Last, please use [JSONLint](http://jsonlint.com/) to validate your package.json.
4. Commit new file:
* The heading of the commit message
* If the source is git: `Add <lib_name> w/ git auto-update via single package.json`
* If the source is npm: `Add <lib_name> w/ npm auto-update via single package.json`
* The body of the commit message: `close #<issue_number>, cc @<lib_author>`
5. Submit the pull request in cdnjs repo and fill out the asked information to make sure that everything you can do is good.
[Example 1](https://github.com/extend1994/cdnjs/blob/99b1a5ca8de29b6fa96468d417befa093841bba7/ajax/libs/videojs-ima/package.json), [Example 2](https://github.com/extend1994/cdnjs/blob/91d227960987d08efadc95f7719cb2ab5b9614f7/ajax/libs/gotem/package.json), [Official document](https://github.com/cdnjs/cdnjs/blob/master/CONTRIBUTING.md#e-adding-a-new-library-by-a-single-packagejson)
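As a concrete illustration of step 3 above, a minimal package.json might look like the sketch below. All values are illustrative (loosely based on tui.calendar's public metadata, not verified against it), and the exact shape of the auto-update stanza should be checked against the autoupdate document linked above:

```json
{
  "name": "tui-calendar",
  "filename": "tui-calendar.min.js",
  "description": "A JavaScript calendar that is full featured",
  "keywords": ["calendar", "toast-ui"],
  "repository": {
    "type": "git",
    "url": "git://github.com/nhnent/tui.calendar.git"
  },
  "author": "NHN Ent. FE Development Lab",
  "license": "MIT",
  "autoupdate": {
    "source": "npm",
    "target": "tui-calendar",
    "fileMap": [
      { "basePath": "dist", "files": ["**/*"] }
    ]
  }
}
```

Remember to run it through [JSONLint](http://jsonlint.com/) before committing.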
### With library assets
1. The same as the step 1 in the method "via single package.json".
2. The same as the step 2 in the method "via single package.json".
3. Similar to the method "via single package.json". But here we need one more necessary field: `version` **_which is the latest stable version_**. And please create package.json locally and upload it with step 4. Also, don't forget to use [JSONLint](http://jsonlint.com/) to validate package.json or use `tools/fixFormat.js` to fix its format.
4. Every library should be stored under the `ajax/libs` directory. It has its own sub-directory of `ajax/libs` and each version of the library has its own sub-directory of the library directory name, for example: `/ajax/libs/jquery/3.2.1/`. Find the files from the latest stable version this library needs to run well in a browser (which are also the files you list in the `files` field of the auto-update config in package.json) and upload them according to their versions. Note every file should be completely the same as the corresponding file in the upstream. **Make sure everything is alright with the command `npm test` if you work locally**.
5. Commit the files:
* The heading of the commit message:
* If the source is git: `Add <lib_name>@<version> w/ git auto-update`
* If the source is npm: `Add <lib_name>@<version> w/ npm auto-update`
* The body of the commit message: `close #<issue_number>, cc @<lib_author>`
6. The same as the step 5 in the method "via single package.json".
[Example](https://github.com/cdnjs/cdnjs/pull/10452/files), [Official document](https://github.com/cdnjs/cdnjs/blob/master/CONTRIBUTING.md#d-adding-a-new-library-with-its-assets)
### Work locally
Some useful commands:
1. Pull the repo from cdnjs repo
```
git pull origin master
```
2. Create a non-master branch from master branch.
```
git checkout -b <lib_name>
```
3. Commit the files, with `-v` you can see more detailed information
```
git commit [-v]
```
4. Revise your last commit message
```
git commit --amend [-v]
```
5. Push to GitHub, add `-f` if you want to update your remote branch.
```
git push <remote_name_you_name_for_your_forked_cdnjs_repo> <branch_name> [-f]
```
Feel free to ask some questions if you have any, we will help you as soon as possible. Now don't hesitate to become a contributor of CDNJS :smile:
## References
For more details, you can refer [our contributing document](https://github.com/cdnjs/cdnjs/blob/master/CONTRIBUTING.md) or [the contributing document from GitHub](https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project)
## Suggestion for this library
1. Everything cdnjs needs is under the `dist` folder.
2. Remember to use https for every url in package.json if available.
3. Using "Git auto-update" is better than npm auto-update because there are more versions.
|
priority
|
add tui calendar library name tui calendar repository url npm package url license official homepage https ui toast com tui calendar something you might want to know about cdnjs cdnjs provides a cdn containing front end libraries to allow all the web developers from around the world to use the libraries on our free cdn without any additional download upload process how to deal with this issue you can choose to add the library via single package json or with its required assets as for the working environment you can choose to work on the github gui or on your computer locally working locally will let you learn git version control and debug skills however it needs more spaces on your computer in comparison with using github so don t feel any pressure if you want to use the github gui if you decide to work locally it s recommended to pulling cdnjs repository with because the cdnjs repo is too huge the followings are the useful steps to let you add the requested library with the github gui in an easier way muscle via single package json the cdnjs repo and do the following steps on your forked repo create a new non master branch by clicking the branch master dropdown and enter the name of the library being added create the file package json under ajax libs directory by clicking create new file button you will see the part of the screen like this image img width alt src find the package json or bower json if there is no package json in source repo including npm please copy its content and modify the content if the content doesn t match the format of package json in cdnjs there are fields we may need in total here other than these fields please remove them necessary ones name filename description keywords repository author license and npm auto update or git auto update name library name please use the github repo name if the source is github please use npm package name if the source is npm filename find the main file of this library you might be able to find it after 
reading its documentation if the main file is not minified please still use min js or min css structure naming we ll do the minify job author there might be more than one author please use authors instead when there are multiple authors license there also might be more than one license please use licenses when there are multiple licenses npm auto update or git auto update please follow there are a few rules to help you choose github or npm to be the source the simplest rule you can use here is to choose the one with more versions if the number of versions are the same please use npm because we prefer it unnecessary homepage if it s the same as the source repo or just a demo page with no link to its source repo last please use to validate your package json commit new file the heading of the commit message if the source is git add w git auto update via single package json if the source is npm add w npm auto update via single package json the body of the commit message close cc submit the pull request in cdnjs repo and fill out the asked information to make sure that everything you can do is good with library assets the same as the step in the method via single package json the same as the step in the method via single package json similar to the the method via single package json but here we need the other necessary field version which is the latest stable version and please create package json locally and upload it with step also don t forget to use to validate package json or use tools fixformat js to fix its format every library should be stored under the ajax libs directory it has its own sub directory of ajax libs and each version of the library has its own sub directory of the library directory name for example ajax libs jquery find the files from the latest stable version this library needs to run on a browser well which are also the files you list in files field of auto update config in package json and upload them according to their versions note every file 
should be the completely the same as every file in the upstream make everything is alright with the command npm test if you work locally commit the files the heading of the commit message if the source is git add w git auto update if the source is npm add w npm auto update the body of the commit message close cc the same as the step in the method via single package json work locally some useful commands pull the repo from cdnjs repo git pull origin master create a non master branch from master branch git checkout b commit the files with v you can see more detailed information git commit revise your last commit message git commit amend push to github add f if you want to update your remote branch git push feel free to ask some questions if you have any we will help you as soon as possible now don t hesitate to become a contributor of cdnjs smile references for more details you can refer or suggestion for this library what cdnjs need is all under dist folder remember to use https for every url in package json if available using git auto update is better than npm auto update because there are more versions
| 1
|
359,458
| 10,676,627,307
|
IssuesEvent
|
2019-10-21 14:05:45
|
WoWManiaUK/Blackwing-Lair
|
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
|
closed
|
[Horde] Using Enchanted Conch Crashing the Server !
|
Fixed Confirmed Fixed in Dev Priority-High
|
**Links:**
https://www.wowhead.com/item=56227/enchanted-conch
https://www.wowhead.com/quest=25936/pay-it-forward
from WoWHead or our Armory
**What is happening:**
using the item seems to crash the server, you can look in #world in Discord
only on Horde side !
**What should happen:**
|
1.0
|
[Horde] Using Enchanted Conch Crashing the Server ! - **Links:**
https://www.wowhead.com/item=56227/enchanted-conch
https://www.wowhead.com/quest=25936/pay-it-forward
from WoWHead or our Armory
**What is happening:**
using the item seems to crash the server, you can look in #world in Discord
only on Horde side !
**What should happen:**
|
priority
|
using enchanted conch crashing the server links from wowhead or our armory what is happening using the item seems to crash the server you can look in world in discord only on horde side what should happen
| 1
|
639,255
| 20,749,916,390
|
IssuesEvent
|
2022-03-15 05:59:57
|
vignetteapp/MediaPipe.NET
|
https://api.github.com/repos/vignetteapp/MediaPipe.NET
|
closed
|
BlazePoseCpuCalculator crashing
|
bug help wanted priority:high area:pinvoke area:framework
|
I've tried using the code from Mediapipe.Net.Examples.FaceMesh with BlazePoseCpuCalculator (after adding the necessary mediapipe / graphs and modules folder) on Windows 10, and I get the following exception

Where can I find a working example for this calculator?
|
1.0
|
BlazePoseCpuCalculator crashing - I've tried using the code from Mediapipe.Net.Examples.FaceMesh with BlazePoseCpuCalculator (after adding the necessary mediapipe / graphs and modules folder) on Windows 10, and I get the following exception

Where can I find a working example for this calculator?
|
priority
|
blazeposecpucalculator crashing i ve tried using the code from mediapipe net examples facemesh with blazeposecpucalculator after adding necessary medipipe graphs and modules folder on windows and i get the following exception where can i find a working example for this calculator
| 1
|
458,667
| 13,179,503,058
|
IssuesEvent
|
2020-08-12 11:03:48
|
bbc/simorgh
|
https://api.github.com/repos/bbc/simorgh
|
closed
|
Lighthouse Recommendation - Remove unused code
|
Weekly goal cross-team high-priority performance technical-work
|
**Is your feature request related to a problem? Please describe.**
Parent issue: https://github.com/bbc/simorgh-infrastructure/issues/1088
Running lighthouse on all page types using web.dev shows that we have unused JavaScript and it should be removed.
https://web.dev/remove-unused-code/
A run against https://www.test.bbc.com/arabic/articles/c1er5mjnznzo:

https://lighthouse-dot-webdotdevsite.appspot.com//lh/html?url=https%3A%2F%2Fwww.test.bbc.com%2Farabic%2Farticles%2Fc1er5mjnznzo
We should look to try and remove the unused javascript identified in the potential savings.
**Describe the solution you'd like**
- Run the 'bundle buddy' tool against our pages to identify better code splitting approaches.
- https://web.dev/reduce-javascript-payloads-with-code-splitting
- Consider whether our existing code splitting strategy is still optimal when moving between page types
- Consider pre-loading of bundles that are needed for onward journeys from a particular page:
```
const LoadableStoryPage = loadable(() =>
  import(/* webpackPrefetch: true */ './StoryPage'),
);
```
- Perform some analysis on the minimum bundle size each page can have and potentially combine bundles for different pages types depending on user journey.
Reference this POC PR as a guide to the changes we may need to make: https://github.com/bbc/simorgh/pull/5843
**Describe alternatives you've considered**
- Guess.js - Data driven bundling using google analytics https://github.com/guess-js/guess
- Module/nomodule pattern for reducing bundle size. See https://github.com/bbc/simorgh/issues/7155 for more details
- we update our dependencies regularly which invalidates the vendor js bundle cache meaning returning users have to download the entire vendor bundle every time we update a dependency. Consider splitting the vendor bundle into smaller chunks so the user doesn't have to download as much data when we update a dependency
- https://hackernoon.com/effective-code-splitting-in-react-a-practical-guide-2195359d5d49
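The vendor-bundle splitting idea above could be sketched as a webpack `splitChunks` configuration like the following. This is only an illustration of the approach (regex, chunk-naming scheme, and limits are assumptions, not Simorgh's actual config):

```javascript
// Sketch: split the single vendor bundle into per-package chunks so that
// updating one dependency only invalidates that package's chunk, keeping
// the rest of the vendor code cached for returning users.
const splitChunks = {
  chunks: 'all',
  maxInitialRequests: 20, // allow many small initial chunks
  cacheGroups: {
    vendor: {
      test: /[\\/]node_modules[\\/]/,
      name(module) {
        // e.g. ".../node_modules/react-dom/..." -> chunk "npm.react-dom"
        const pkg = module.context.match(
          /[\\/]node_modules[\\/](.*?)([\\/]|$)/,
        )[1];
        return `npm.${pkg.replace('@', '')}`;
      },
    },
  },
};

module.exports = { optimization: { splitChunks } };
```

The trade-off is more HTTP requests up front in exchange for finer-grained cache invalidation when dependencies are updated.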
**Testing notes**
[Tester to complete]
Dev insight: Will Cypress tests be required or are unit tests sufficient? Will there be any potential regression? etc
- [ ] This feature is expected to need manual testing.
**Additional context**
Add any other context or screenshots about the feature request here.
|
1.0
|
Lighthouse Recommendation - Remove unused code - **Is your feature request related to a problem? Please describe.**
Parent issue: https://github.com/bbc/simorgh-infrastructure/issues/1088
Running lighthouse on all page types using web.dev shows that we have unused JavaScript and it should be removed.
https://web.dev/remove-unused-code/
A run against https://www.test.bbc.com/arabic/articles/c1er5mjnznzo:

https://lighthouse-dot-webdotdevsite.appspot.com//lh/html?url=https%3A%2F%2Fwww.test.bbc.com%2Farabic%2Farticles%2Fc1er5mjnznzo
We should look to try and remove the unused javascript identified in the potential savings.
**Describe the solution you'd like**
- Run the 'bundle buddy' tool against our pages to identify better code splitting approaches.
- https://web.dev/reduce-javascript-payloads-with-code-splitting
- Consider whether our existing code splitting strategy is still optimal when moving between page types
- Consider pre-loading of bundles that are needed for onward journeys from a particular page:
```
const LoadableStoryPage = loadable(() =>
  import(/* webpackPrefetch: true */ './StoryPage'),
);
```
- Perform some analysis on the minimum bundle size each page can have and potentially combine bundles for different pages types depending on user journey.
Reference this POC PR as a guide to the changes we may need to make: https://github.com/bbc/simorgh/pull/5843
**Describe alternatives you've considered**
- Guess.js - Data driven bundling using google analytics https://github.com/guess-js/guess
- Module/nomodule pattern for reducing bundle size. See https://github.com/bbc/simorgh/issues/7155 for more details
- we update our dependencies regularly which invalidates the vendor js bundle cache meaning returning users have to download the entire vendor bundle every time we update a dependency. Consider splitting the vendor bundle into smaller chunks so the user doesn't have to download as much data when we update a dependency
- https://hackernoon.com/effective-code-splitting-in-react-a-practical-guide-2195359d5d49
**Testing notes**
[Tester to complete]
Dev insight: Will Cypress tests be required or are unit tests sufficient? Will there be any potential regression? etc
- [ ] This feature is expected to need manual testing.
**Additional context**
Add any other context or screenshots about the feature request here.
|
priority
|
lighthouse recommendation remove unused code is your feature request related to a problem please describe parent issue running lighthouse on all page types using web dev shows that we have unused javascript and it should be removed a run against we should look to try and remove the unused javascript identified in the potential savings describe the solution you d like run the bundle buddy tool against our pages to identify better code splitting approaches consider whether our existing code splitting strategy is still optimal when moving between page types consider pre loading of bundles that are needed for onward journeys from a particular page const loadablestorypage loadable โจ import webpackprefetch true storypage โจ โฉ perform some analysis on the minimum bundle size each page can have and potentially combine bundles for different pages types depending on user journey reference this poc pr as a guide to the changes we may need to make describe alternatives you ve considered guess js data driven bundling using google analytics module nomodule pattern for reducing bundle size see for more details we update our dependencies regularly which invalidates the vendor js bundle cache meaning returning users have to download the entire vendor bundle every time we update a dependency consider splitting the vendor bundle into smaller chunks so the user doesn t have to download as much data when we update a dependency testing notes dev insight will cypress tests be required or are unit tests sufficient will there be any potential regression etc this feature is expected to need manual testing additional context add any other context or screenshots about the feature request here
| 1
|
527,207
| 15,325,962,672
|
IssuesEvent
|
2021-02-26 02:31:45
|
OpenMined/PyGrid
|
https://api.github.com/repos/OpenMined/PyGrid
|
closed
|
[Vanity API] Association Request Routes
|
Priority: 2 - High :cold_sweat: Severity: 4 - Low :sunglasses: Type: Epic :call_me_hand:
|
## Description
Create vanity API routes for Domain Node Initial Association Request Feature
Send Association Request
Request: POST /association-requests/request
Body: name, address
Description: Creates a new association request in the database and then sends this request to the address in the body at the "receive association request" endpoint described below. This endpoint will also store and send a handshake value which is a randomly generated hash that serves as a unique identifier of the request. Only users with the permission of can_manage_infrastructure can create an association request.
Receive Association Request
Request: POST /association-requests/receive
Body: address, handshake, value
Description: This endpoint looks up whether or not the request's handshake value already exists in the database. If the handshake value does not exist, a new association request will be created. If the handshake value does exist, the system will assume that the association request is a response to an existing request and receive a value of either "accept" or "deny". This will indicate that the request for a given handshake was either accepted or denied.
Note: As a form of security, this endpoint will need to view the HTTP request to ensure that the party sending the request was indeed the same as the address. If not, then this is a fake request and should be ignored.
Respond to Association Request
Request: POST /association-requests/respond
Body: address, handshake, value
Description: Creates a new response to an existing association request. This will fire off an API call to the address in the body at the "receive association request" endpoint described above. It will pass with it a value of either "accept" or "deny" indicating whether the request should be marked as accepted or denied. Only users with the permission of can_manage_infrastructure can respond to an association request.
Get Association Request
Request: GET /association-requests/{id}
Body: N/A
Description: Gets an individual association request. Only users with the permission of can_manage_infrastructure can get an association request.
Get Association Requests
Request: GET /association-requests
Body: N/A
Description: Gets all association requests. Only users with the permission of can_manage_infrastructure can get all association requests.
Delete Association Request
Request: DELETE /association-requests/{id}
Body: N/A
Description: Deletes an association request. Only users with the permission of can_manage_infrastructure can delete an association request.
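The handshake lookup at the heart of the receive endpoint above can be sketched roughly as follows (an in-memory illustration, not PyGrid's implementation; all names are hypothetical, and a real node would persist to its database and verify that the HTTP caller actually matches `address`):

```javascript
// Sketch of the "receive association request" behavior: an unknown handshake
// creates a new pending request; a known handshake is treated as the response
// ("accept" or "deny") to a request we sent earlier.
const requests = new Map(); // handshake -> { address, status }

function receiveAssociationRequest(address, handshake, value) {
  const existing = requests.get(handshake);
  if (!existing) {
    // Unknown handshake: record a brand-new incoming association request.
    requests.set(handshake, { address, status: 'pending' });
    return 'created';
  }
  // Known handshake: treat `value` as the response to our earlier request.
  if (value === 'accept' || value === 'deny') {
    existing.status = value === 'accept' ? 'accepted' : 'denied';
    return existing.status;
  }
  return 'ignored';
}
```

The handshake thus serves double duty: it uniquely identifies a request, and its presence or absence in the database disambiguates a fresh request from a response.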
## Additional Context
- [PyGrid Roadmap](https://docs.google.com/document/d/1_aFR69cTw3BnSLk0jYOd-vXMhNrZkbuEezST-mM2q1k/edit#heading=h.s0sl8c8kxwy3)
|
1.0
|
[Vanity API] Association Request Routes - ## Description
Create vanity API routes for Domain Node Initial Association Request Feature
Send Association Request
Request: POST /association-requests/request
Body: name, address
Description: Creates a new association request in the database and then sends this request to the address in the body at the "receive association request" endpoint described below. This endpoint will also store and send a handshake value which is a randomly generated hash that serves as a unique identifier of the request. Only users with the permission of can_manage_infrastructure can create an association request.
Receive Association Request
Request: POST /association-requests/receive
Body: address, handshake, value
Description: This endpoint looks up whether or not the request's handshake value already exists in the database. If the handshake value does not exist, a new association request will be created. If the handshake value does exist, the system will assume that the association request is a response to an existing request and receive a value of either "accept" or "deny". This will indicate that the request for a given handshake was either accepted or denied.
Note: As a form of security, this endpoint will need to view the HTTP request to ensure that the party sending the request was indeed the same as the address. If not, then this is a fake request and should be ignored.
Respond to Association Request
Request: POST /association-requests/respond
Body: address, handshake, value
Description: Creates a new response to an existing association request. This will fire off an API call to the address in the body at the "receive association request" endpoint described above. It will pass with it a value of either "accept" or "deny" indicating whether the request should be marked as accepted or denied. Only users with the permission of can_manage_infrastructure can respond to an association request.
Get Association Request
Request: GET /association-requests/{id}
Body: N/A
Description: Gets an individual association request. Only users with the permission of can_manage_infrastructure can get an association request.
Get Association Requests
Request: GET /association-requests
Body: N/A
Description: Gets all association requests. Only users with the permission of can_manage_infrastructure can get all association requests.
Delete Association Request
Request: DELETE /association-requests/{id}
Body: N/A
Description: Deletes an association request. Only users with the permission of can_manage_infrastructure can delete an association request.
## Additional Context
- [PyGrid Roadmap](https://docs.google.com/document/d/1_aFR69cTw3BnSLk0jYOd-vXMhNrZkbuEezST-mM2q1k/edit#heading=h.s0sl8c8kxwy3)
|
priority
|
association request routes description create vanity api routes for domain node initial association request feature send association request request post association requests request body name address description creates a new association request in the database and then sends this request to the address in the body at the receive association request endpoint described below this endpoint will also store and send a handshake value which is a randomly generated hash that serves as a unique identifier of the request only users with the permission of can manage infrastructure can create an association request receive association request request post association requests receive body address handshake value description this endpoint looks up whether or not the request s handshake value already exists in the database if the handshake value does not exist a new association request will be created if the handshake value does exist the system will assume that the association request is a response to an existing request and receive a value of either accept or deny this will indicate that the request for a given handshake was either accepted or denied note as a form of security this endpoint will need to view the http request to ensure that the party sending the request was indeed the same as the address if not then this is a fake request and should be ignored respond to association request request post association requests respond body address handshake value description creates a new response to an existing association request this will fire off an api call to the address in the body at the receive association request endpoint described above it will pass with it a value of either accept or deny indicating whether the request should be marked as accepted or denied only users with the permission of can manage infrastructure can respond to an association request get association request request get association requests id body n a description gets an individual
association request only users with the permission of can manage infrastructure can get an association request get association requests request get association requests body n a description gets all association requests only users with the permission of can manage infrastructure can get all association requests delete association request request delete association requests id body n a description deletes an association request only users with the permission of can manage infrastructure can delete an association request additional context
| 1
|
97,645
| 4,003,651,277
|
IssuesEvent
|
2016-05-12 01:44:46
|
duckduckgo/zeroclickinfo-spice
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
|
closed
|
Coderwall: IA Broken, API Response Format Changed?
|
Bug Low-Hanging Fruit Priority: High
|
This IA is currently not working and there is a console error.
The Safari debugger suggests that the `$.each()` loop over `item.accounts` isn't working.
It looks like the API response format has actually changed and now the code needs to be updated
------
IA Page: http://duck.co/ia/view/coderwall
/cc @motersen
|
1.0
|
Coderwall: IA Broken, API Response Format Changed? - This IA is currently not working and there is a console error.
The Safari debugger suggests that the `$.each()` loop over `item.accounts` isn't working.
It looks like the API response format has actually changed and now the code needs to be updated
------
IA Page: http://duck.co/ia/view/coderwall
/cc @motersen
|
priority
|
coderwall ia broken api response format changed this ia is currently not working and there is a console error the safari debugger suggests that the each loop over item accounts isn t working it looks like the api response format has actually changed and now the code needs to be updated ia page cc motersen
| 1
|
149,288
| 5,715,706,890
|
IssuesEvent
|
2017-04-19 13:42:46
|
90301/Crescent-CRM-V
|
https://api.github.com/repos/90301/Crescent-CRM-V
|
opened
|
Implement Licensing Lock
|
High Priority Security
|
- [ ] Contact Licensing server(s).
- [ ] send key.
- [ ] on licensing server: check key.
- [ ] Send back response. (valid / invalid)
- [ ] Display licensing status or error message.
|
1.0
|
Implement Licensing Lock - - [ ] Contact Licensing server(s).
- [ ] send key.
- [ ] on licensing server: check key.
- [ ] Send back response. (valid / invalid)
- [ ] Display licensing status or error message.
|
priority
|
implement licensing lock contact licensing server s send key on licensing server check key send back response valid invalid display licensing status or error message
| 1
|
636,547
| 20,602,851,002
|
IssuesEvent
|
2022-03-06 14:39:48
|
PolyhedronStudio/Polyhedron-Engine
|
https://api.github.com/repos/PolyhedronStudio/Polyhedron-Engine
|
closed
|
Add in support for IQM Animations and a few other of its special features.
|
enhancement help wanted Server Game Client Server VkPt Important/High Priority
|
# Why, what for? Wasn't it already working in perfect shape?
The IQM format comes with a framerate value per animation, and in fact also a name. This framerate aspect is currently unintegrated and is almost a requirement to use in the case of N&C if we want to make things work more nicely with the game code again. If an animation is determined by an integral value and the Hz rate is 50... you can imagine that if all other weapon code which is in return strictly based on which frame you're at goes haywire.
Wished for/proposed solution would be that no matter the hz, it'll play the animation at the same speed. (Of course this would skip frames or just not work nicely at all in case there is a ridiculous high amount and the FPS can't take it, or visa versa.)
## There are more reasons to want to have full support, because "as above isn't like below today":
The IQM format can be extended; it leaves room for devs to add in custom data on purpose. FTEQW engine has made great use of this, and in fact loading in what they then called a ".vvm" would be a piece of cake. A single extra structure containing a variable or two.
This specific structure was about model events, for example, when animation "shoot" gets executed, it'd fire a local game implemented event number resulting in cool effects. An example of such a .qc script can be seen here. v_pistol.qc
The other reason is that, should we use a head-torso-legs system like Q3 based engines do, or can we one way or the other integrate blending there? These few topics go beyond my reach on a code technical scale. (well I could do the head-torso-legs obviously but ...)
```
output v_pistol.vvm
materialprefix /textures/models/weapons/pistol/
origin 64 -20 -24
rotate 270 -90 0
scale 1.5
scene "iqe/v_pistol_muzzleflash.iqe" noanim 1
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "idle1" fps 30 start 168 end 196
event reset
event 2 1337 "weapon_pistol.fire"
event 2 1338 "muzzleflash1.vvm"
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "attack1" fps 30 start 2 end 10
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "attack2" fps 30 start 196 end 214
event reset
event 10 1337 "weapon_pistol.reload"
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "reload1" fps 30 start 10 end 56
event reset
event 135 1337 "weapon_pistol.deploy"
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "draw1" fps 30 start 135 end 167
event reset
event 126 1337 "weapon_pistol.holster"
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "holster1" fps 20 start 126 end 135
```
Examples of the file structure I used and the models themselves can be found [here ](https://github.com/WatIsDeze/SchizoMania-FTE/tree/prt-0.0.1/schizomania/game.pk3dir/models/weapons/pistol) in my old SchizoMania project repository:
Notice how the names, the frame rate per second, and the specific frames out of the list in general to pick from can all be adjusted in these "scripts"? The workflow might be a bit time consuming, but it'll be our best bet at having nice animated characters, as well as nice animated weapons.
Keep in mind that for a bullet shell to pop out, it'd just take a certain event number, modelname, and any artist can actually take control over what shell, what sound, and at what frame this has to happen.
Right now we got IQM playing nicely, without blending (If I recall this would require Inverse Kinematics.), however it seems to lack the framerate aspect which is the highest priority right now. And of course, all the other things mentioned.
One final example of how I did the Zombie Character back in SchizoMania, it also mentions how animations can auto loop (unless of course ever the game code tells it to go play some other animation.)
```
output zombie_derrick.vvm
materialprefix /models/characters/zombie_derrick/
scene "iqe/tpose.iqe"
origin 0 0 0
rotate 0 -90 0
scene "iqe/agonizing.iqe" fps 30
scene "iqe/attack1.iqe" fps 55
scene "iqe/attack2.iqe" fps 30
scene "iqe/dying1.iqe" fps 30
scene "iqe/dying1_fast.iqe" fps 30
scene "iqe/dying2.iqe" fps 30
scene "iqe/dying2_fast.iqe" fps 30
scene "iqe/hit_react_a.iqe" fps 30
scene "iqe/hit_react_b.iqe" fps 30
scene "iqe/idle1.iqe" fps 30 loop
scene "iqe/idle_hunt.iqe" fps 30 loop
scene "iqe/idle_scratch.iqe" fps 30 loop
scene "iqe/running1.iqe" fps 30 loop
scene "iqe/scream.iqe" fps 30 loop
scene "iqe/turn_backface.iqe" fps 30 loop
scene "iqe/walking1.iqe" fps 35 loop
scene "iqe/walking2.iqe" fps 50 loop
scene "iqe/walking3.iqe" fps 30 loop
```
All of this resulted in a slightly slow and somewhat limited by toolset workflow, however given the advantages and the fact that the Quake communities do keep these tools up to date (Eihrul has his own iqmtool, and Blender export/import.) Where FTEQW has its own iqmtool as well with several other improvements.
## Sources:
Source code to the engine [Tesseract src](https://websvn.tuxfamily.org/listing.php?repname=tesseract%2Fmain&path=%2Fsrc%2Fengine%2F&peg=2497&rev=#a24748246f41842de340a3f666837aa0a), which is messy/tricky to read, you'll see the files soon enough. He goes over the top with a BIH, Ragdolls, no physics engine for it either. That is not our goal but it might help be a reference for loading in IQM data.
|
1.0
|
Add in support for IQM Animations and a few other of its special features. - # Why, what for? Wasn't it already working in perfect shape?
The IQM format comes with a framerate value per animation, and in fact also a name. This framerate aspect is currently unintegrated and is almost a requirement to use in the case of N&C if we want to make things work more nicely with the game code again. If an animation is determined by an integral value and the Hz rate is 50... you can imagine that if all other weapon code which is in return strictly based on which frame you're at goes haywire.
Wished for/proposed solution would be that no matter the hz, it'll play the animation at the same speed. (Of course this would skip frames or just not work nicely at all in case there is a ridiculous high amount and the FPS can't take it, or visa versa.)
## There are more reasons to want to have full support, because "as above isn't like below today":
The IQM format can be extended; it leaves room for devs to add in custom data on purpose. FTEQW engine has made great use of this, and in fact loading in what they then called a ".vvm" would be a piece of cake. A single extra structure containing a variable or two.
This specific structure was about model events, for example, when animation "shoot" gets executed, it'd fire a local game implemented event number resulting in cool effects. An example of such a .qc script can be seen here. v_pistol.qc
The other reason is that, should we use a head-torso-legs system like Q3 based engines do, or can we one way or the other integrate blending there? These few topics go beyond my reach on a code technical scale. (well I could do the head-torso-legs obviously but ...)
```
output v_pistol.vvm
materialprefix /textures/models/weapons/pistol/
origin 64 -20 -24
rotate 270 -90 0
scale 1.5
scene "iqe/v_pistol_muzzleflash.iqe" noanim 1
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "idle1" fps 30 start 168 end 196
event reset
event 2 1337 "weapon_pistol.fire"
event 2 1338 "muzzleflash1.vvm"
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "attack1" fps 30 start 2 end 10
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "attack2" fps 30 start 196 end 214
event reset
event 10 1337 "weapon_pistol.reload"
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "reload1" fps 30 start 10 end 56
event reset
event 135 1337 "weapon_pistol.deploy"
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "draw1" fps 30 start 135 end 167
event reset
event 126 1337 "weapon_pistol.holster"
scene "iqe/v_pistol_muzzleflash.iqe" nomesh 1 name "holster1" fps 20 start 126 end 135
```
Examples of the file structure I used and the models themselves can be found [here ](https://github.com/WatIsDeze/SchizoMania-FTE/tree/prt-0.0.1/schizomania/game.pk3dir/models/weapons/pistol) in my old SchizoMania project repository:
Notice how the names, the frame rate per second, and the specific frames out of the list in general to pick from can all be adjusted in these "scripts"? The workflow might be a bit time consuming, but it'll be our best bet at having nice animated characters, as well as nice animated weapons.
Keep in mind that for a bullet shell to pop out, it'd just take a certain event number, modelname, and any artist can actually take control over what shell, what sound, and at what frame this has to happen.
Right now we got IQM playing nicely, without blending (If I recall this would require Inverse Kinematics.), however it seems to lack the framerate aspect which is the highest priority right now. And of course, all the other things mentioned.
One final example of how I did the Zombie Character back in SchizoMania, it also mentions how animations can auto loop (unless of course ever the game code tells it to go play some other animation.)
```
output zombie_derrick.vvm
materialprefix /models/characters/zombie_derrick/
scene "iqe/tpose.iqe"
origin 0 0 0
rotate 0 -90 0
scene "iqe/agonizing.iqe" fps 30
scene "iqe/attack1.iqe" fps 55
scene "iqe/attack2.iqe" fps 30
scene "iqe/dying1.iqe" fps 30
scene "iqe/dying1_fast.iqe" fps 30
scene "iqe/dying2.iqe" fps 30
scene "iqe/dying2_fast.iqe" fps 30
scene "iqe/hit_react_a.iqe" fps 30
scene "iqe/hit_react_b.iqe" fps 30
scene "iqe/idle1.iqe" fps 30 loop
scene "iqe/idle_hunt.iqe" fps 30 loop
scene "iqe/idle_scratch.iqe" fps 30 loop
scene "iqe/running1.iqe" fps 30 loop
scene "iqe/scream.iqe" fps 30 loop
scene "iqe/turn_backface.iqe" fps 30 loop
scene "iqe/walking1.iqe" fps 35 loop
scene "iqe/walking2.iqe" fps 50 loop
scene "iqe/walking3.iqe" fps 30 loop
```
All of this resulted in a slightly slow and somewhat limited by toolset workflow, however given the advantages and the fact that the Quake communities do keep these tools up to date (Eihrul has his own iqmtool, and Blender export/import.) Where FTEQW has its own iqmtool as well with several other improvements.
## Sources:
Source code to the engine [Tesseract src](https://websvn.tuxfamily.org/listing.php?repname=tesseract%2Fmain&path=%2Fsrc%2Fengine%2F&peg=2497&rev=#a24748246f41842de340a3f666837aa0a), which is messy/tricky to read, you'll see the files soon enough. He goes over the top with a BIH, Ragdolls, no physics engine for it either. That is not our goal but it might help be a reference for loading in IQM data.
|
priority
|
add in support for iqm animations and a few other of its special features why what for wasn t it already working in perfect shape the iqm format comes with a framerate value per animation and in fact also a name this framerate aspect is currently unintegrated and is almost a requirement to use in the case of n c if we want to make things work more nicely with the game code again if an animation is determined by an integral value and the hz rate is you can imagine that if all other weapon code which is in return strictly based on which frame you re at goes haywire wished for proposed solution would be that no matter the hz it ll play the animation at the same speed of course this would skip frames or just not work nicely at all in case there is a ridiculous high amount and the fps can t take it or visa versa there are more reasons to want to have full support because as above isn t like below today the iqm format can be expended it leaves room for devs to add in custom data by purpose fteqw engine has made great use of this and in fact loading in what they then called a vvm would be a piece of cake a single extra structure containing a variable or two this specific structure was about model events for example when animation shoot gets executed it d fire a local game implemented event number resulting in cool effects an example of such a qc script can be seen here v pistol qc the other reason is that should we use a head torso legs system like based engines do or can we one way or the other integrate blending there these few topics go beyond my reach on a code technical scale well i could do the head torso legs obviously but output v pistol vvm materialprefix textures models weapons pistol origin rotate scale scene iqe v pistol muzzleflash iqe noanim scene iqe v pistol muzzleflash iqe nomesh name fps start end event reset event weapon pistol fire event vvm scene iqe v pistol muzzleflash iqe nomesh name fps start end scene iqe v pistol muzzleflash iqe nomesh name fps 
start end event reset event weapon pistol reload scene iqe v pistol muzzleflash iqe nomesh name fps start end event reset event weapon pistol deploy scene iqe v pistol muzzleflash iqe nomesh name fps start end event reset event weapon pistol holster scene iqe v pistol muzzleflash iqe nomesh name fps start end examples of the file structure i used and the models themselves can be found in my old schizomania project repository notice how the names the frame rate per second and the specific frames out of the list in general to pick from can all be adjusted in these scripts the workflow might be a bit time consuming but it ll be our best bet at having nice animated characters as well as nice animated weapons keep in mind that for a bullet shell to pop out it d just take a certain event number modelname and any artist can actually take control over what shell what sound and at what frame this has to happen right now we got iqm playing nicely without blending if i recall this would require inverse kinematics however it seems to lack the framerate aspect which is the highest priority right now and of course all the other things mentioned one final example of how i did the zombie character back in schizomania it also mentions how animations can auto loop unless of course ever the game code tells it to go play some other animation output zombie derrick vvm materialprefix models characters zombie derrick scene iqe tpose iqe origin rotate scene iqe agonizing iqe fps scene iqe iqe fps scene iqe iqe fps scene iqe iqe fps scene iqe fast iqe fps scene iqe iqe fps scene iqe fast iqe fps scene iqe hit react a iqe fps scene iqe hit react b iqe fps scene iqe iqe fps loop scene iqe idle hunt iqe fps loop scene iqe idle scratch iqe fps loop scene iqe iqe fps loop scene iqe scream iqe fps loop scene iqe turn backface iqe fps loop scene iqe iqe fps loop scene iqe iqe fps loop scene iqe iqe fps loop all of this resulted in a slightly slow and somewhat limited by toolset workflow however 
given the advantages and the fact that the quake communities do keep these tools up to date eihrul has his own iqmtool and blender export import where fteqw has its own iqmtool as well with several other improvements sources source code to the engine which is messy tricky to read you ll see the files soon enough he goes over the top with a bih ragdolls no physics engine for it either that is not our goal but it might help be a reference for loading in iqm data
| 1
|
290,170
| 8,883,104,930
|
IssuesEvent
|
2019-01-14 14:56:40
|
prysmaticlabs/prysm
|
https://api.github.com/repos/prysmaticlabs/prysm
|
opened
|
Deprecate Deposit Contract in Solidity and Use Vyper
|
Help Wanted Priority: High
|
There will only be one PoW deposit contract in production and pinpoint to one now will help the community to discover bugs in the early phases. I don't think there's any reason for us to use our own deposit contract in Solidity, and since we can generate bindings from the ABI & bytecode for the vyper contract (see: https://vyper.readthedocs.io/en/latest/compiling-a-contract.html). We should be using the vyper contract as specified in the ETH2.0 spec.
High-level summaries of todo:
- Remove any reference to solidity contract
- Generate binding for vyper contract
- Use vyper contract binding across prysm
|
1.0
|
Deprecate Deposit Contract in Solidity and Use Vyper - There will only be one PoW deposit contract in production and pinpoint to one now will help the community to discover bugs in the early phases. I don't think there's any reason for us to use our own deposit contract in Solidity, and since we can generate bindings from the ABI & bytecode for the vyper contract (see: https://vyper.readthedocs.io/en/latest/compiling-a-contract.html). We should be using the vyper contract as specified in the ETH2.0 spec.
High-level summaries of todo:
- Remove any reference to solidity contract
- Generate binding for vyper contract
- Use vyper contract binding across prysm
|
priority
|
deprecate deposit contract in solidity and use vyper there will only be one pow deposit contract in production and pinpoint to one now will help the community to discover bugs in the early phases i don t think there s any reason for us to use our own deposit contract in solidity and since we can generate bindings from the abi bytecode for the vyper contract see we should be using the vyper contract as specified in the spec high level summaries of todo remove any reference to solidity contract generate binding for vyper contract use vyper contract binding across prysm
| 1
|
160,693
| 6,101,438,901
|
IssuesEvent
|
2017-06-20 14:37:36
|
kuzzleio/documentation
|
https://api.github.com/repos/kuzzleio/documentation
|
closed
|
Feedback about page 'api-documentation/controller-memory-storage/geohash.md'
|
bug priority-high
|
Colorization is not the same between (in `Other protocols`):
http://192.168.1.74:8080/api-documentation/controller-memory-storage/geohash/ and
http://192.168.1.74:8080/api-documentation/controller-memory-storage/geopos/
but the content seems the same.
|
1.0
|
Feedback about page 'api-documentation/controller-memory-storage/geohash.md' - Colorization is not the same between (in `Other protocols`):
http://192.168.1.74:8080/api-documentation/controller-memory-storage/geohash/ and
http://192.168.1.74:8080/api-documentation/controller-memory-storage/geopos/
but the content seems the same.
|
priority
|
feedback about page api documentation controller memory storage geohash md colorization is not the same between in other protocols and but the content seems the same
| 1
|
515,381
| 14,961,332,835
|
IssuesEvent
|
2021-01-27 07:34:37
|
maticnetwork/bor
|
https://api.github.com/repos/maticnetwork/bor
|
opened
|
Error: unknown ancestor while syncing Bor
|
help wanted high-priority
|
```
Jan 27 05:52:05 x bash[18140]: WARN [01-27|05:52:05.355] Synchronisation failed, dropping peer peer=425fb951f45be104 err="retrieved hash chain is invalid: unknown ancestor"
Jan 27 05:52:13 x bash[18140]: ERROR[01-27|05:52:13.942]
Jan 27 05:52:13 x bash[18140]: ########## BAD BLOCK #########
Jan 27 05:52:13 x bash[18140]: Chain config: {ChainID: 137 Homestead: 0 DAO: <nil> DAOSupport: false EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: 3395000, Muir Glacier: 3395000, YOLO v1: <nil>, Engine: bor}
Jan 27 05:52:13 x bash[18140]: Number: 9811556
Jan 27 05:52:13 x bash[18140]: Hash: 0x3d135064e1a20ebc3b7b771cc3d5c5358b12b5e8e7bc7b01572c0b3f03e7d7e0
Jan 27 05:52:13 x bash[18140]: Error: unknown ancestor
Jan 27 05:52:13 x bash[18140]: ##############################
Jan 27 05:52:13 x bash[18140]:
Jan 27 05:52:13 x bash[18140]: WARN [01-27|05:52:13.942] Synchronisation failed, dropping peer peer=9ec0a4f565fd1814 err="retrieved hash chain is invalid: unknown ancestor"
```
|
1.0
|
Error: unknown ancestor while syncing Bor - ```
Jan 27 05:52:05 x bash[18140]: WARN [01-27|05:52:05.355] Synchronisation failed, dropping peer peer=425fb951f45be104 err="retrieved hash chain is invalid: unknown ancestor"
Jan 27 05:52:13 x bash[18140]: ERROR[01-27|05:52:13.942]
Jan 27 05:52:13 x bash[18140]: ########## BAD BLOCK #########
Jan 27 05:52:13 x bash[18140]: Chain config: {ChainID: 137 Homestead: 0 DAO: <nil> DAOSupport: false EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: 3395000, Muir Glacier: 3395000, YOLO v1: <nil>, Engine: bor}
Jan 27 05:52:13 x bash[18140]: Number: 9811556
Jan 27 05:52:13 x bash[18140]: Hash: 0x3d135064e1a20ebc3b7b771cc3d5c5358b12b5e8e7bc7b01572c0b3f03e7d7e0
Jan 27 05:52:13 x bash[18140]: Error: unknown ancestor
Jan 27 05:52:13 x bash[18140]: ##############################
Jan 27 05:52:13 x bash[18140]:
Jan 27 05:52:13 x bash[18140]: WARN [01-27|05:52:13.942] Synchronisation failed, dropping peer peer=9ec0a4f565fd1814 err="retrieved hash chain is invalid: unknown ancestor"
```
|
priority
|
error unknown ancestor while syncing bor jan x bash warn synchronisation failed dropping peer peer err retrieved hash chain is invalid unknown ancestor jan x bash error jan x bash bad block jan x bash chain config chainid homestead dao daosupport false byzantium constantinople petersburg istanbul muir glacier yolo engine bor jan x bash number jan x bash hash jan x bash error unknown ancestor jan x bash jan x bash jan x bash warn synchronisation failed dropping peer peer err retrieved hash chain is invalid unknown ancestor
| 1
|
350,338
| 10,482,409,920
|
IssuesEvent
|
2019-09-24 12:08:12
|
aseprite/aseprite
|
https://api.github.com/repos/aseprite/aseprite
|
closed
|
Custom Brush Paintbrush Mode Is Broken
|
bug high priority sprite editor
|
It's possible I'm missing something, but paintbrush mode for brushes seems broken.
Here's Aseprite 1.1.5:

(I didn't check whether the very latest version worked as expected. I probably would have if I hadn't accidentally submitted this issue before I finished writing it :P )
Here's Aseprite v1.2-beta2:

Essentially the brush is stamped once, and never again.
### Aseprite and System version
- Aseprite v1.2-beta2 Steam
Windows Vista 32 Bit
|
1.0
|
Custom Brush Paintbrush Mode Is Broken - It's possible I'm missing something, but paintbrush mode for brushes seems broken.
Here's Aseprite 1.1.5:

(I didn't check whether the very latest version worked as expected. I probably would have if I hadn't accidentally submitted this issue before I finished writing it :P )
Here's Aseprite v1.2-beta2:

Essentially the brush is stamped once, and never again.
### Aseprite and System version
- Aseprite v1.2-beta2 Steam
Windows Vista 32 Bit
|
priority
|
custom brush paintbrush mode is broken it s possible i m missing something but paintbrush mode for brushes seems broken here s aseprite i didn t check the very latest version it worked as expected i probably would have if i hadn t accidentally submitted this issue before i finished writing it p here s aseprite essentially the brush is stamped once and never again aseprite and system version aseprite steam windows vista bit
| 1
|
363,865
| 10,756,209,952
|
IssuesEvent
|
2019-10-31 10:43:14
|
AY1920S1-CS2103-T14-2/main
|
https://api.github.com/repos/AY1920S1-CS2103-T14-2/main
|
closed
|
Add Assignments
|
priority.High severity.Medium type.enhancement
|
Add CRUD for Assignment model, commands (eg "addAssign", etc), statistics about student performance and GUI.
|
1.0
|
Add Assignments - Add CRUD for Assignment model, commands (eg "addAssign", etc), statistics about student performance and GUI.
|
priority
|
add assignments add crud for assignment model commands eg addassign etc statistics about student performance and gui
| 1
|
63,823
| 3,201,095,945
|
IssuesEvent
|
2015-10-02 02:49:03
|
cs2103aug2015-f10-4j/main
|
https://api.github.com/repos/cs2103aug2015-f10-4j/main
|
closed
|
A user should be able to search for a task/event using keywords
|
component.logic priority.high type.story
|
so that the user can find what he/she needs efficiently
|
1.0
|
A user should be able to search for a task/event using keywords - so that the user can find what he/she needs efficiently
|
priority
|
a user should be able to search for a task event using keywords so that the user can find what he she needs efficiently
| 1
|
233,119
| 7,694,029,209
|
IssuesEvent
|
2018-05-18 07:13:25
|
HGustavs/LenaSYS
|
https://api.github.com/repos/HGustavs/LenaSYS
|
closed
|
Code standard update
|
Group 1 (2018) highPriority
|
Because of issue #5273.
Update the code standard to reflect this change. Only var is supposed to be used, not let.
|
1.0
|
Code standard update - Because of issue #5273.
Update the code standard to reflect this change. Only var is supposed to be used, not let.
|
priority
|
code standard update because of issue update the code standard to reflect this change only var is supposed to be used not let
| 1
|
638,238
| 20,719,591,595
|
IssuesEvent
|
2022-03-13 06:51:56
|
AY2122S2-CS2113-F10-1/tp
|
https://api.github.com/repos/AY2122S2-CS2113-F10-1/tp
|
closed
|
As a user with existing project(s), I can add a to-do to a project
|
type.Story priority.High
|
...so that I can get a clear outline of what needs to be done.
|
1.0
|
As a user with existing project(s), I can add a to-do to a project - ...so that I can get a clear outline of what needs to be done.
|
priority
|
as a user with existing project s i can add a to do to a project so that i can get a clear outline of what needs to be done
| 1
|
656,202
| 21,723,455,963
|
IssuesEvent
|
2022-05-11 04:25:06
|
vmware/singleton
|
https://api.github.com/repos/vmware/singleton
|
opened
|
[BUG] [Ruby Client]Failed to get localized translation when requesting zh-cn/zh_cn locale by getString() and translate() interface.
|
kind/bug priority/high area/ruby-client
|
**Describe the bug**
Failed to get localized translation when requesting zh-cn/zh_cn locale by getString() and translate() interface.
**To Reproduce**
Steps to reproduce the behavior:
1. Load config file: Singleton.load_config()
2. The translation of en/zh-Hans locale:
en: about.description: "Use this area to provide additional information"
zh-Hans: "about.description" : "使用此区域可提供其他信息"
3. Invoke below interface:
expect(SgtnClient::Translation.getString("about", "about.description", "zh_cn")).to eq("使用此区域可提供其他信息")
expect(Singleton.translate("about.description", "about", "zh-cn")).to eq("使用此区域可提供其他信息")
4. Return source en translation
**Expected behavior**
Should return zh-Hans translation.
|
1.0
|
[BUG] [Ruby Client]Failed to get localized translation when requesting zh-cn/zh_cn locale by getString() and translate() interface. - **Describe the bug**
Failed to get localized translation when requesting zh-cn/zh_cn locale by getString() and translate() interface.
**To Reproduce**
Steps to reproduce the behavior:
1. Load config file: Singleton.load_config()
2. The translation of en/zh-Hans locale:
en: about.description: "Use this area to provide additional information"
zh-Hans: "about.description" : "使用此区域可提供其他信息"
3. Invoke below interface:
expect(SgtnClient::Translation.getString("about", "about.description", "zh_cn")).to eq("使用此区域可提供其他信息")
expect(Singleton.translate("about.description", "about", "zh-cn")).to eq("使用此区域可提供其他信息")
4. Return source en translation
**Expected behavior**
Should return zh-Hans translation.
|
priority
|
failed to get localized translation when requesting zh cn zh cn locale by getstring and translate interface describe the bug failed to get localized translation when requesting zh cn zh cn locale by getstring and translate interface to reproduce steps to reproduce the behavior load config file singleton load config the translation of en zh hans locale en about description use this area to provide additional information zh hans about description 使用此区域可提供其他信息 invoke below interface expect sgtnclient translation getstring about about description zh cn to eq 使用此区域可提供其他信息 expect singleton translate about description about zh cn to eq 使用此区域可提供其他信息 return source en translation expected behavior should return zh hans translation
| 1
|
474,582
| 13,672,189,309
|
IssuesEvent
|
2020-09-29 08:09:35
|
nextcloud/mail
|
https://api.github.com/repos/nextcloud/mail
|
opened
|
Opening an envelope action menu navigates to that envelope
|
1. to develop bug priority:high regression
|
### Expected behavior
It should be possible to open the actions menu of an envelope without opening the thread.
### Actual behavior
The click event triggers a navigation.
### Mail app
**Mail app version:** (see apps admin page, e.g. 0.5.3)
Latest master
|
1.0
|
Opening an envelope action menu navigates to that envelope - ### Expected behavior
It should be possible to open the actions menu of an envelope without opening the thread.
### Actual behavior
The click event triggers a navigation.
### Mail app
**Mail app version:** (see apps admin page, e.g. 0.5.3)
Latest master
|
priority
|
opening an envelope action menu navigates to that envelope expected behavior it should be possible to open the actions menu of an envelope without opening the thread actual behavior the click event triggers a navigation mail app mail app version see apps admin page e g latest master
| 1
|
607,092
| 18,772,945,697
|
IssuesEvent
|
2021-11-07 06:22:55
|
AY2122S1-CS2103T-T10-1/tp
|
https://api.github.com/repos/AY2122S1-CS2103T-T10-1/tp
|
closed
|
Absence of specific error messages for edit command
|
type.Bug priority.High mustfix
|
steps to reproduce bug :
enter `edit te/teammates`
(also `edit 1 te/` does not display an error message saying that the tele handle is missing and likewise for other fields)
Expected output : error message saying that index of person to be edited is missing
Actual output :

|
1.0
|
Absence of specific error messages for edit command - steps to reproduce bug :
enter `edit te/teammates`
(also `edit 1 te/` does not display an error message saying that the tele handle is missing and likewise for other fields)
Expected output : error message saying that index of person to be edited is missing
Actual output :

|
priority
|
absence of specific error messages for edit command steps to reproduce bug enter edit te teammates also edit te does not display an error message saying that the tele handle is missing and likewise for other fields expected output error message saying that index of person to be edited is missing actual output
| 1
|
93,092
| 3,882,647,576
|
IssuesEvent
|
2016-04-13 10:46:13
|
Co0sh/BetonQuest
|
https://api.github.com/repos/Co0sh/BetonQuest
|
closed
|
Conversations don't stop if Player disconnects
|
Confirmed Bug High Priority
|
I'm using Version 1.8.2 of your plugin on my private server and I have the following issue:
When a player disconnects from the server because of a bad internet connection or if his minecraft crashes and is in a conversation, it seems that the conversation doesn't stop.
If he reconnects after this he isn't able to talk to any NPC's; if he right clicks on them nothing happens and the Debug-log shows the message:
`[21.02.2016 16:32:52] DEBUG: Player LaCore is in conversation right now, returning.`
If he wants to go away he can't, becaus the `stop:` option of the NPC is set to `'true'`
But he also is not able to end the conversation because the conversation isn't shown in chat.
If he randomly types in a number he can walk away, as if stop wasn't active, but he still can't talk to NPC's.
Also if I restart the server and use `/q purge LaCore` he still can't talk to the NPC.
The conversation for the NPC works fine for all oter players which did not disconnect during a conversation.
This issue also happens if you use `/q reload`, while a player is in a conversation.
Sorry if my english isn't the best, I'm not a native english speeker...
I hope you can understand my problem anyway.
|
1.0
|
Conversations don't stop if Player disconnects - I'm using Version 1.8.2 of your plugin on my private server and I have the following issue:
When a player disconnects from the server because of a bad internet connection, or his Minecraft crashes while he is in a conversation, it seems that the conversation doesn't stop.
If he reconnects after this he isn't able to talk to any NPCs; if he right-clicks on them nothing happens and the debug log shows the message:
`[21.02.2016 16:32:52] DEBUG: Player LaCore is in conversation right now, returning.`
If he wants to go away he can't, because the `stop:` option of the NPC is set to `'true'`.
But he also is not able to end the conversation, because the conversation isn't shown in chat.
If he randomly types in a number he can walk away, as if stop wasn't active, but he still can't talk to NPCs.
Also, if I restart the server and use `/q purge LaCore` he still can't talk to the NPC.
The conversation for the NPC works fine for all other players who did not disconnect during a conversation.
This issue also happens if you use `/q reload` while a player is in a conversation.
Sorry if my English isn't the best, I'm not a native English speaker...
I hope you can understand my problem anyway.
|
priority
|
conversations don t stop if player disconnects i m using version of your plugin on my private server and i have the following issue when a player disconnects from the server because of a bad internet connection or if his minecraft crashes and is in a conversation it seems that the conversation doesn t stop if he reconnects after this he isn t able to talk to any npc s if he right clicks on them nothing happens and the debug log shows the message debug player lacore is in conversation right now returning if he wants to go away he can t because the stop option of the npc is set to true but he also is not able to end the conversation because the conversation isn t shown in chat if he randomly types in a number he can walk away as if stop wasn t active but he still can t talk to npc s also if i restart the server and use q purge lacore he still can t talk to the npc the conversation for the npc works fine for all other players which did not disconnect during a conversation this issue also happens if you use q reload while a player is in a conversation sorry if my english isn t the best i m not a native english speaker i hope you can understand my problem anyway
| 1
|
520,037
| 15,077,758,911
|
IssuesEvent
|
2021-02-05 07:34:07
|
wso2/cellery
|
https://api.github.com/repos/wso2/cellery
|
closed
|
Attribute/Header based request routing
|
Priority/High Resolution/Wonโt Fix Severity/Major Type/New Feature
|
Currently when we perform the canary updates, we split the entire traffic by percentage for v1 and v2 of the cell instances. But I think it's nice to have header or attribute based routing as well. For example, we should be able to route the traffic from certain region to one cell instance where as all other requests will be routed to another cell instance.
|
1.0
|
Attribute/Header based request routing - Currently when we perform the canary updates, we split the entire traffic by percentage for v1 and v2 of the cell instances. But I think it's nice to have header or attribute based routing as well. For example, we should be able to route the traffic from certain region to one cell instance where as all other requests will be routed to another cell instance.
|
priority
|
attribute header based request routing currently when we perform the canary updates we split the entire traffic by percentage for and of the cell instances but i think it s nice to have header or attribute based routing as well for example we should be able to route the traffic from certain region to one cell instance where as all other requests will be routed to another cell instance
| 1
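The attribute/header-based routing requested in the record above reduces to evaluating match rules before falling back to the default (percentage-split) instance. A minimal sketch — the rule format and function name are illustrative assumptions, not Cellery's API:

```python
def pick_instance(headers: dict, rules: list, default: str) -> str:
    """Route a request to the first cell instance whose header rule matches.

    `rules` is a list of (header, expected_value, instance) tuples; this
    shape is an assumption for illustration only.
    """
    for header, expected, instance in rules:
        if headers.get(header) == expected:
            return instance
    # No rule matched: fall back to the default (e.g. percentage-based) target.
    return default
```

For example, requests carrying `region: eu` could be pinned to a v2 cell instance while all other traffic keeps flowing to v1.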
|
425,935
| 12,364,964,993
|
IssuesEvent
|
2020-05-18 08:02:55
|
cms-gem-daq-project/ctp7_modules
|
https://api.github.com/repos/cms-gem-daq-project/ctp7_modules
|
closed
|
Migrate `ctp7_module` code to templated RPC calls
|
Priority: High Status: Help Wanted Type: Enhancement
|
## Brief summary of issue
This issue will track the actual migration of the various current `ctp7_modules`.
Some will need an interface redesign, due to the complicated current usage, some will be straightforward to implement once cms-gem-daq-project/xhal#123 is closed
### Types of issue
- [x] Feature request (request for change which adds functionality)
## Expected Behavior
* All functions **must** no longer be "local"
* **No** functions shall access any part of an `RPCMessage` object
* Exceptions **must** be defined in #146, and shall be handled appropriately
* Function signatures and return types **shall** be as simple as possible to cover usage and factored accordingly where necessary
* Functional block wrappers **may** wrap around multiple module functions to expose higher level functionality to a remote call
## Current Behavior
Local functions are called by RPC callbacks, which are called by the remote method
## Context (for feature requests)
Part of refactoring and migration to utilization of templated RPC calls
|
1.0
|
Migrate `ctp7_module` code to templated RPC calls - ## Brief summary of issue
This issue will track the actual migration of the various current `ctp7_modules`.
Some will need an interface redesign, due to the complicated current usage, some will be straightforward to implement once cms-gem-daq-project/xhal#123 is closed
### Types of issue
- [x] Feature request (request for change which adds functionality)
## Expected Behavior
* All functions **must** no longer be "local"
* **No** functions shall access any part of an `RPCMessage` object
* Exceptions **must** be defined in #146, and shall be handled appropriately
* Function signatures and return types **shall** be as simple as possible to cover usage and factored accordingly where necessary
* Functional block wrappers **may** wrap around multiple module functions to expose higher level functionality to a remote call
## Current Behavior
Local functions are called by RPC callbacks, which are called by the remote method
## Context (for feature requests)
Part of refactoring and migration to utilization of templated RPC calls
|
priority
|
migrate module code to templated rpc calls brief summary of issue this issue will track the actual migration of the various current modules some will need an interface redesign due to the complicated current usage some will be straightforward to implement once cms gem daq project xhal is closed types of issue feature request request for change which adds functionality expected behavior all functions must no longer be local no functions shall access any part of an rpcmessage object exceptions must be defined in and shall be handled appropriately function signatures and return types shall be as simple as possible to cover usage and factored accordingly where necessary functional block wrappers may wrap around multiple module functions to expose higher level functionality to a remote call current behavior local functions are called by rpc callbacks which are called by the remote method context for feature requests part of refactoring and migration to utilization of templated rpc calls
| 1
|
814,829
| 30,523,805,027
|
IssuesEvent
|
2023-07-19 09:50:15
|
chimple/cuba
|
https://api.github.com/repos/chimple/cuba
|
closed
|
when we install the apk, it goes directly inside the profile
|
bug High Priority
|
On some devices, when we install the APK, it takes the user directly inside the profile. https://images.zenhubusercontent.com/517241518/717ed551-f038-4c81-a8a4-50622ab06c44/screenrecorder_2023_07_12_14_12_40_196.mp4
|
1.0
|
when we install the apk, it goes directly inside the profile - On some devices, when we install the APK, it takes the user directly inside the profile. https://images.zenhubusercontent.com/517241518/717ed551-f038-4c81-a8a4-50622ab06c44/screenrecorder_2023_07_12_14_12_40_196.mp4
|
priority
|
when we install the apk it goes directly inside the profile on some devices when we install the apk it takes the user directly inside the profile
| 1
|
312,135
| 9,543,838,129
|
IssuesEvent
|
2019-05-01 12:02:13
|
AugurProject/augur
|
https://api.github.com/repos/AugurProject/augur
|
closed
|
order book disappears after enter an order
|
Bug Priority: High
|
entering a sell order onto the order book...after the trade confirms from mm before it hits up onto the order book, the whole order book disappears and then comes back
|
1.0
|
order book disappears after enter an order - entering a sell order onto the order book...after the trade confirms from mm before it hits up onto the order book, the whole order book disappears and then comes back
|
priority
|
order book disappears after enter an order entering a sell order onto the order book after the trade confirms from mm before it hits up onto the order book the whole order book disappears and then comes back
| 1
|
563,322
| 16,680,182,784
|
IssuesEvent
|
2021-06-07 22:05:39
|
bounswe/2021SpringGroup10
|
https://api.github.com/repos/bounswe/2021SpringGroup10
|
closed
|
Adding Home Page for Practice App
|
Coding: Frontend Platform: Web Priority: High
|
Every team member has prepared a demo application to learn about and practice API logic with Flask. To host each of these different apps, we need one main Home Page that serves as a bridge between them.
|
1.0
|
Adding Home Page for Practice App - Every team member has prepared a demo application to learn about and practice API logic with Flask. To host each of these different apps, we need one main Home Page that serves as a bridge between them.
|
priority
|
adding home page for practice app every team member has prepared a demo application to learn and practice about api logic with flask to stage every different app we need to have one main home page which performs a bridge utility
| 1
|
778,349
| 27,312,382,959
|
IssuesEvent
|
2023-02-24 13:14:34
|
ploomber/jupysql
|
https://api.github.com/repos/ploomber/jupysql
|
opened
|
More robust SQLquery generation
|
high priority
|
Currently, we have a few SQL templates that we use to generate queries (examples: [here](https://github.com/ploomber/jupysql/blob/924f237d1c97733211eeea2f37d521de841cbdcd/src/sql/store.py#L66), and [here](https://github.com/ploomber/jupysql/blob/924f237d1c97733211eeea2f37d521de841cbdcd/src/sql/plot.py#L72)). All of these templates have double quotes to wrap identifiers such as a table or column names (I added this to support identifiers with spaces). However, this isn't compatible with MySQL (and possibly other databases as well).
We've had reports ([CTEs](https://github.com/ploomber/jupysql/issues/145) and [plotting](https://github.com/ploomber/jupysql/pull/152#issuecomment-1442805171)) where JupySQL fails on MySQL because the default configuration uses backticks and breaks with double quotes.
## solution: sqlglot
I did some quick comparison and determined that sqlglot is the best solution: https://github.com/ploomber/contributing/blob/main/notes/sqlalchemy-sqlglot.ipynb
requirements:
- we need to validate it produces valid SQL for the `percentile_disc` use case (namely a SQL query that has `percentile_disc([0.25, 0.50])` which is valid in duckdb, let's validate that the output from sqlglot is valid in MySQL, postgres, and sqlite)
- mapping between sqlalchemy dialect and sqlglot `write` parameter. sqlalchemy's dialect string might be different from the parameter that sqlglot expects
## comments
Note that this is a configurable parameter, and users can configure MySQL (or other databases) to use double quotes and perhaps other characters. So even if we generate SQL statements with the default character, it might fail. For now, we can add a section in our docs explaining this issue (that JupySQL will generate SQL statements with the default delimiter).
|
1.0
|
More robust SQLquery generation - Currently, we have a few SQL templates that we use to generate queries (examples: [here](https://github.com/ploomber/jupysql/blob/924f237d1c97733211eeea2f37d521de841cbdcd/src/sql/store.py#L66), and [here](https://github.com/ploomber/jupysql/blob/924f237d1c97733211eeea2f37d521de841cbdcd/src/sql/plot.py#L72)). All of these templates have double quotes to wrap identifiers such as a table or column names (I added this to support identifiers with spaces). However, this isn't compatible with MySQL (and possibly other databases as well).
We've had reports ([CTEs](https://github.com/ploomber/jupysql/issues/145) and [plotting](https://github.com/ploomber/jupysql/pull/152#issuecomment-1442805171)) where JupySQL fails on MySQL because the default configuration uses backticks and breaks with double quotes.
## solution: sqlglot
I did some quick comparison and determined that sqlglot is the best solution: https://github.com/ploomber/contributing/blob/main/notes/sqlalchemy-sqlglot.ipynb
requirements:
- we need to validate it produces valid SQL for the `percentile_disc` use case (namely a SQL query that has `percentile_disc([0.25, 0.50])` which is valid in duckdb, let's validate that the output from sqlglot is valid in MySQL, postgres, and sqlite)
- mapping between sqlalchemy dialect and sqlglot `write` parameter. sqlalchemy's dialect string might be different from the parameter that sqlglot expects
## comments
Note that this is a configurable parameter, and users can configure MySQL (or other databases) to use double quotes and perhaps other characters. So even if we generate SQL statements with the default character, it might fail. For now, we can add a section in our docs explaining this issue (that JupySQL will generate SQL statements with the default delimiter).
|
priority
|
more robust sqlquery generation currently we have a few sql templates that we use to generate queries examples and all of these templates have double quotes to wrap identifiers such as a table or column names i added this to support identifiers with spaces however this isn t compatible with mysql and possibly other databases as well we ve had reports and where jupysql fails on mysql because the default configuration uses backticks and breaks with double quotes solution sqlglot i did some quick comparison and determined that sqlglot is the best solution requirements we need to validate it produces valid sql for the percentile disc use case namely a sql query that has percentile disc which is valid in duckdb let s validate that the output from sqlglot is valid in mysql postgres and sqlite mapping between sqlalchemy dialect and sqlglot write parameter sqlalchemy s dialect string might be different from the parameter that sqlglot expects comments note that this is a configurable parameter and users can configure mysql or other databases to use double quotes and perhaps other characters so even if we generate sql statements with the default character it might fail for now we can add a section in our docs explaining this issue that jupysql will generate sql statements with the default delimiter
| 1
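The dialect mismatch described in the JupySQL record above comes down to which character wraps identifiers: MySQL defaults to backticks while ANSI-style databases use double quotes. A minimal sketch of dialect-aware quoting — only the MySQL-vs-ANSI split from the issue is modeled, which is exactly why the record proposes a real transpiler (sqlglot) instead of hand-rolled templates:

```python
def quote_identifier(name: str, dialect: str = "ansi") -> str:
    """Wrap a table/column name in the dialect's identifier quote character.

    Hypothetical helper for illustration; real dialects are configurable
    and have more variations than this two-way mapping covers.
    """
    quote = "`" if dialect == "mysql" else '"'
    # Escape embedded quote characters by doubling them.
    return quote + name.replace(quote, quote * 2) + quote
```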
|
527,337
| 15,340,322,198
|
IssuesEvent
|
2021-02-27 06:28:05
|
philchapdelaine/cpsc312_p1
|
https://api.github.com/repos/philchapdelaine/cpsc312_p1
|
closed
|
GUI - display the main game/state
|
higher priority
|
Basically as the title says: display the grid with the alive cells.
Additionally, I think displaying a new state every 1 or 2 seconds makes sense.
|
1.0
|
GUI - display the main game/state - Basically as the title says: display the grid with the alive cells.
Additionally, I think displaying a new state every 1 or 2 seconds makes sense.
|
priority
|
gui display the main game state basically as the title says display the grid with the alive cells additionally i think displaying a new state every or seconds makes sense
| 1
|
103,033
| 4,164,184,636
|
IssuesEvent
|
2016-06-18 16:20:36
|
MinetestForFun/server-minetestforfun
|
https://api.github.com/repos/MinetestForFun/server-minetestforfun
|
closed
|
Offset between spawns
|
Modding โค BugFix Priority: High
|
When I use the nether portal next to the spawn, we arrive in the Nether with an offset. And on the way back it's worse: you land next to the spawn and fall from very high up, dying and losing everything.
|
1.0
|
Offset between spawns - When I use the nether portal next to the spawn, we arrive in the Nether with an offset. And on the way back it's worse: you land next to the spawn and fall from very high up, dying and losing everything.
|
priority
|
offset between spawns when i use the nether portal next to the spawn we arrive in the nether with an offset and on the way back it s worse you land next to the spawn and fall from very high up dying and losing everything
| 1
|
648,376
| 21,184,466,763
|
IssuesEvent
|
2022-04-08 11:14:48
|
gardener/gardener
|
https://api.github.com/repos/gardener/gardener
|
closed
|
Avoid querying all extension admission webhooks
|
kind/enhancement area/robustness area/high-availability priority/4
|
**How to categorize this issue?**
<!--
Please select area, kind, and priority for this issue. This helps the community categorizing it.
Replace below TODOs or exchange the existing identifiers with those that fit best in your opinion.
If multiple identifiers make sense you can also state the commands multiple times, e.g.
/area control-plane
/area auto-scaling
...
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test
-->
/area high-availability
/area robustness
/kind enhancement
**What would you like to be added**:
Would be great if gardener had a mechanism to avoid having to query all admission webhooks on e.g. shoot creation.
Today on a given installation we have e.g. a bunch of them:
```
gardener-extension-admission-alicloud
gardener-extension-admission-aws
gardener-extension-admission-azure
gardener-extension-admission-gcp
gardener-extension-admission-openstack
gardener-extension-validator-vsphere
```
Now if we create a shoot e.g. for AWS, also the non-AWS webhooks are called, and if one of them is not ready (e.g. the vpshere one), then the whole admission fails and shoots cannot be created.
@rfranzke suggested that such thing could be implemented by defining some labels on a shoot by another/new admission webhook, that would allow the cloudprovider specific webhooks to filter correspondingly for shoots they are responsible for and thus ignore all other shoots.
**Why is this needed**:
Limit the blast radius of cloudprovider specific or extension specific failures.
|
1.0
|
Avoid querying all extension admission webhooks - **How to categorize this issue?**
<!--
Please select area, kind, and priority for this issue. This helps the community categorizing it.
Replace below TODOs or exchange the existing identifiers with those that fit best in your opinion.
If multiple identifiers make sense you can also state the commands multiple times, e.g.
/area control-plane
/area auto-scaling
...
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test
-->
/area high-availability
/area robustness
/kind enhancement
**What would you like to be added**:
Would be great if gardener had a mechanism to avoid having to query all admission webhooks on e.g. shoot creation.
Today on a given installation we have e.g. a bunch of them:
```
gardener-extension-admission-alicloud
gardener-extension-admission-aws
gardener-extension-admission-azure
gardener-extension-admission-gcp
gardener-extension-admission-openstack
gardener-extension-validator-vsphere
```
Now if we create a shoot e.g. for AWS, also the non-AWS webhooks are called, and if one of them is not ready (e.g. the vpshere one), then the whole admission fails and shoots cannot be created.
@rfranzke suggested that such thing could be implemented by defining some labels on a shoot by another/new admission webhook, that would allow the cloudprovider specific webhooks to filter correspondingly for shoots they are responsible for and thus ignore all other shoots.
**Why is this needed**:
Limit the blast radius of cloudprovider specific or extension specific failures.
|
priority
|
avoid querying all extension admission webhooks how to categorize this issue please select area kind and priority for this issue this helps the community categorizing it replace below todos or exchange the existing identifiers with those that fit best in your opinion if multiple identifiers make sense you can also state the commands multiple times e g area control plane area auto scaling area identifiers audit logging auto scaling backup certification control plane migration control plane cost delivery dev productivity disaster recovery documentation high availability logging metering monitoring networking open source ops productivity os performance quality robustness scalability security storage testing usability user management kind identifiers api change bug cleanup discussion enhancement epic impediment poc post mortem question regression task technical debt test area high availability area robustness kind enhancement what would you like to be added would be great if gardener had a mechanism to avoid having to query all admission webhooks on e g shoot creation today on a given installation we have e g a bunch of them gardener extension admission alicloud gardener extension admission aws gardener extension admission azure gardener extension admission gcp gardener extension admission openstack gardener extension validator vsphere now if we create a shoot e g for aws also the non aws webhooks are called and if one of them is not ready e g the vpshere one then the whole admission fails and shoots cannot be created rfranzke suggested that such thing could be implemented by defining some labels on a shoot by another new admission webhook that would allow the cloudprovider specific webhooks to filter correspondingly for shoots they are responsible for and thus ignore all other shoots why is this needed limit the blast radius of cloudprovider specific or extension specific failures
| 1
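The label-based filtering suggested in the Gardener record above can be sketched as a simple predicate each provider-specific webhook applies, so that an unready vsphere webhook no longer blocks AWS shoot creation. The label key below is a placeholder assumption, not Gardener's actual convention:

```python
def webhook_should_handle(shoot_labels: dict, provider: str) -> bool:
    """Return True if this provider's admission webhook is responsible.

    A shoot labeled with its provider lets each webhook's selector skip
    all foreign shoots. The label key here is a hypothetical placeholder.
    """
    return shoot_labels.get("extensions.gardener.cloud/provider") == provider
```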
|
112,774
| 4,537,904,714
|
IssuesEvent
|
2016-09-09 03:12:14
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
closed
|
Ads should still load if getting CID fails
|
Priority: High Related to: Ads
|
We have had cases where a bug in the viewer causes the promise to get the CID fail/never resolve, which in turn is causing Ads to fail to load.
We should ensure Ads can still load if getting CID fails within `x` milliseconds to protect against these situations where a viewer bug breaks ads.
|
1.0
|
Ads should still load if getting CID fails - We have had cases where a bug in the viewer causes the promise to get the CID fail/never resolve, which in turn is causing Ads to fail to load.
We should ensure Ads can still load if getting CID fails within `x` milliseconds to protect against these situations where a viewer bug breaks ads.
|
priority
|
ads should still load if getting cid fails we have had cases where a bug in the viewer causes the promise to get the cid fail never resolve which in turn is causing ads to fail to load we should ensure ads can still load if getting cid fails within x milliseconds to protect against these situations where a viewer bug breaks ads
| 1
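The guard described in the AMP record above — let ads proceed if the CID promise does not settle within `x` milliseconds — is the classic race-against-a-deadline pattern. A hedged asyncio sketch (function names are illustrative; AMP's actual implementation is JavaScript promises, not this API):

```python
import asyncio

async def cid_or_fallback(get_cid, timeout_s: float = 0.5, fallback=None):
    """Resolve the client ID, but never let a hung lookup block ad loading.

    If `get_cid()` fails or exceeds the deadline, return `fallback` so the
    ad request can proceed without a CID.
    """
    try:
        return await asyncio.wait_for(get_cid(), timeout_s)
    except Exception:  # timeout, viewer bug, rejected lookup, ...
        return fallback
```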
|
257,179
| 8,133,746,975
|
IssuesEvent
|
2018-08-19 06:59:38
|
commons-app/apps-android-commons
|
https://api.github.com/repos/commons-app/apps-android-commons
|
opened
|
Using the same filename in v2.8 overwrites the existing file
|
bug high priority
|
**Summary:**
I received a report from a user that creating a second file with the same filename as the first, overwrites the first file - https://commons.wikimedia.org/wiki/File:Venner%C3%B8d_skole.jpeg
This should not happen. While we have had a few overwrite issues (see #703 ), they are not usually as straightforward as this one. The app should be adding a numbered suffix if the filename exists, and it seems we are not doing that. I wonder if some of the changes made recently to the upload process would have caused this?
@neslihanturan could you check if the upload workflow change affected this? Thanks.
**Steps to reproduce:**
Upload a file, then upload another one with the same filename.
**Device and Android version:**
@PeterFisk could you please fill this in?
**Commons app version:**
2.8.1
**Would you like to work on the issue?**
Pref not
|
1.0
|
Using the same filename in v2.8 overwrites the existing file - **Summary:**
I received a report from a user that creating a second file with the same filename as the first, overwrites the first file - https://commons.wikimedia.org/wiki/File:Venner%C3%B8d_skole.jpeg
This should not happen. While we have had a few overwrite issues (see #703 ), they are not usually as straightforward as this one. The app should be adding a numbered suffix if the filename exists, and it seems we are not doing that. I wonder if some of the changes made recently to the upload process would have caused this?
@neslihanturan could you check if the upload workflow change affected this? Thanks.
**Steps to reproduce:**
Upload a file, then upload another one with the same filename.
**Device and Android version:**
@PeterFisk could you please fill this in?
**Commons app version:**
2.8.1
**Would you like to work on the issue?**
Pref not
|
priority
|
using the same filename in overwrites the existing file summary i received a report from a user that creating a second file with the same filename as the first overwrites the first file this should not happen while we have had a few overwrite issues see they are not usually as straightforward as this one the app should be adding a numbered suffix if the filename exists and it seems we are not doing that i wonder if some of the changes made recently to the upload process would have caused this neslihanturan could you check if the upload workflow change affected this thanks steps to reproduce upload a file then upload another one with the same filename device and android version peterfisk could you please fill this in commons app version would you like to work on the issue pref not
| 1
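The numbered-suffix behavior the record above says the app should have can be sketched as follows (a hypothetical helper, not the Commons app's actual upload code):

```python
import os.path

def dedupe_filename(name: str, existing: set) -> str:
    """Return `name` if unused; otherwise insert " 2", " 3", ... before the extension."""
    if name not in existing:
        return name
    stem, ext = os.path.splitext(name)
    n = 2
    # Count up until we find a free suffixed name.
    while f"{stem} {n}{ext}" in existing:
        n += 1
    return f"{stem} {n}{ext}"
```

Uploading a second `Vennerød_skole.jpeg` would then yield `Vennerød_skole 2.jpeg` instead of overwriting the first file.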
|
326,950
| 9,962,987,350
|
IssuesEvent
|
2019-07-07 19:17:40
|
eads/desapariciones
|
https://api.github.com/repos/eads/desapariciones
|
opened
|
sync components on mapbox viewport change
|
category: site priority: high
|
moving around the viewport should allow for (throttled) data updates
|
1.0
|
sync components on mapbox viewport change - moving around the viewport should allow for (throttled) data updates
|
priority
|
sync components on mapbox viewport change moving around the viewport should allow for throttled data updates
| 1
|
667,409
| 22,471,288,622
|
IssuesEvent
|
2022-06-22 08:21:03
|
PovertyAction/high-frequency-checks
|
https://api.github.com/repos/PovertyAction/high-frequency-checks
|
closed
|
Constantly get tempfile errors when running the template
|
bug high priority
|
Constantly get this error: _file C:\Users\CBREWS~1.IPA\AppData\Local\Temp\ST_0d000005.tmp is read-only; cannot be modified or erased_
Often can work around it by just rerunning the do file a few times until it runs through but should figure out why this is happening.
Need to start tracking for which checks this is happening.
|
1.0
|
Constantly get tempfile errors when running the template - Constantly get this error: _file C:\Users\CBREWS~1.IPA\AppData\Local\Temp\ST_0d000005.tmp is read-only; cannot be modified or erased_
Often can work around it by just rerunning the do file a few times until it runs through but should figure out why this is happening.
Need to start tracking for which checks this is happening.
|
priority
|
constantly get tempfile errors when running the template constantly get this error file c users cbrews ipa appdata local temp st tmp is read only cannot be modified or erased often can work around it by just rerunning the do file a few times until it runs through but should figure out why this is happening need to start tracking for which checks this is happening
| 1
|
432,502
| 12,494,266,665
|
IssuesEvent
|
2020-06-01 10:51:32
|
sodafoundation/SIM
|
https://api.github.com/repos/sodafoundation/SIM
|
closed
|
E2E alert processing at alert manager side
|
Feature High Priority
|
*@sushanthakumar commented on May 10, 2020, 6:27 PM UTC:*
**Is this a BUG REPORT or FEATURE REQUEST?**:
> /kind feature
**What happened**:
Alert manager is responsible for
> Listening to traps
> Process incoming traps and extract meaningful info
> Identify the respective driver
> Invoke driver manager interface and get the filled alert model
> Export the model
**What you expected to happen**:
Alert manager should implement above functionalities
**How to reproduce it (as minimally and precisely as possible)**:
**Anything else we need to know?**:
**Environment**:
* NBP version:
* OS (e.g. from /etc/os-release):
* Kernel (e.g. `uname -a`):
* Install tools:
* Others:
*This issue was moved by [kumarashit](https://github.com/kumarashit) from [sodafoundation/SIM-TempIssues#7](https://github.com/sodafoundation/SIM-TempIssues/issues/7).*
|
1.0
|
E2E alert processing at alert manager side - *@sushanthakumar commented on May 10, 2020, 6:27 PM UTC:*
**Is this a BUG REPORT or FEATURE REQUEST?**:
> /kind feature
**What happened**:
Alert manager is responsible for
> Listening to traps
> Process incoming traps and extract meaningful info
> Identify the respective driver
> Invoke driver manager interface and get the filled alert model
> Export the model
**What you expected to happen**:
Alert manager should implement above functionalities
**How to reproduce it (as minimally and precisely as possible)**:
**Anything else we need to know?**:
**Environment**:
* NBP version:
* OS (e.g. from /etc/os-release):
* Kernel (e.g. `uname -a`):
* Install tools:
* Others:
*This issue was moved by [kumarashit](https://github.com/kumarashit) from [sodafoundation/SIM-TempIssues#7](https://github.com/sodafoundation/SIM-TempIssues/issues/7).*
|
priority
|
alert processing at alert manager side sushanthakumar commented on may pm utc is this a bug report or feature request kind feature what happened alert manager is responsible for listening to traps process incoming traps and extract meaningful info identify the respective driver invoke driver manager interface and get the filled alert model export the model what you expected to happen alert manager should implement above functionalities how to reproduce it as minimally and precisely as possible anything else we need to know environment nbp version os e g from etc os release kernel e g uname a install tools others this issue was moved by from
| 1
|
453,256
| 13,067,287,715
|
IssuesEvent
|
2020-07-30 23:59:41
|
craftercms/craftercms
|
https://api.github.com/repos/craftercms/craftercms
|
closed
|
[studio-ui] Change the character limit for username when creating a user
|
bug priority: high
|
## Describe the bug
The UI for creating a user only allows up to 32 characters for a username. We allow email addresses for username, and in some cases, email addresses are longer than 32 characters. Update the limit for the number of characters allowed for the username to be the same as the limit for the username in the db
## To Reproduce
Steps to reproduce the behavior:
1. From the `Main Menu` click on `Users` and click on the `New User` button
2. In the `User Name` field, enter more than 32 characters, notice the warning `User Name can't be longer than 32 characters`

## Expected behavior
The number of characters allowed for the `User Name` should be the same as what the db allows. (`username` VARCHAR(255)`, please verify with @dejan-brkic max char number for user name)
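As a hedged illustration of the requested behavior (the helper name is hypothetical, and the 255-character limit is assumed from the `VARCHAR(255)` column mentioned above), a client-side check could mirror the database constraint instead of the hard-coded 32:

```python
# Hypothetical client-side validator mirroring the assumed database limit.
DB_USERNAME_MAX = 255  # assumed from `username` VARCHAR(255)

def is_valid_username(username: str) -> bool:
    """True when the username is non-empty and fits the db column."""
    return 0 < len(username) <= DB_USERNAME_MAX

long_email_ok = is_valid_username("a" * 40)   # longer than the old 32-char UI limit
too_long = is_valid_username("a" * 256)       # exceeds the db column
```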
## Screenshots
{{If applicable, add screenshots to help explain your problem.}}
## Logs
{{If applicable, attach the logs/stack trace (use https://gist.github.com).}}
## Specs
### Version
Studio Version Number: 3.1.9-SNAPSHOT-ea1875
Build Number: ea1875bbcbf8c71c8244f75239e102908eec949a
Build Date/Time: 07-16-2020 10:18:39 -0400
### OS
MacOS
### Browser
Chrome browser
## Additional context
{{Add any other context about the problem here.}}
|
1.0
|
[studio-ui] Change the character limit for username when creating a user - ## Describe the bug
The UI for creating a user only allows up to 32 characters for a username. We allow email addresses for username, and in some cases, email addresses are longer than 32 characters. Update the limit for the number of characters allowed for the username to be the same as the limit for the username in the db
## To Reproduce
Steps to reproduce the behavior:
1. From the `Main Menu` click on `Users` and click on the `New User` button
2. In the `User Name` field, enter more than 32 characters, notice the warning `User Name can't be longer than 32 characters`

## Expected behavior
The number of characters allowed for the `User Name` should be the same as what the db allows. (`username` VARCHAR(255)`, please verify with @dejan-brkic max char number for user name)
## Screenshots
{{If applicable, add screenshots to help explain your problem.}}
## Logs
{{If applicable, attach the logs/stack trace (use https://gist.github.com).}}
## Specs
### Version
Studio Version Number: 3.1.9-SNAPSHOT-ea1875
Build Number: ea1875bbcbf8c71c8244f75239e102908eec949a
Build Date/Time: 07-16-2020 10:18:39 -0400
### OS
MacOS
### Browser
Chrome browser
## Additional context
{{Add any other context about the problem here.}}
|
priority
|
change the character limit for username when creating a user describe the bug the ui for creating a user only allows up to characters for a username we allow email addresses for username and in some cases email addresses are longer than characters update the limit for the number of characters allowed for the username to be the same as the limit for the username in the db to reproduce steps to reproduce the behavior from the main menu click on users and click on the new user button in the user name field enter more than characters notice the warning user name can t be longer than characters expected behavior the number of characters allowed for the user name should be the same as what the db allows username varchar please verify with dejan brkic max char number for user name screenshots if applicable add screenshots to help explain your problem logs if applicable attach the logs stack trace use specs version studio version number snapshot build number build date time os macos browser chrome browser additional context add any other context about the problem here
| 1
|
185,643
| 6,726,181,188
|
IssuesEvent
|
2017-10-17 08:59:01
|
ballerinalang/composer
|
https://api.github.com/repos/ballerinalang/composer
|
closed
|
Wrong syntax generated when adding structs
|
0.94-pre-release Priority/Highest Severity/Major Type/Bug
|
Wrong syntax generated when adding structs. Two properties will be automatically added

|
1.0
|
Wrong syntax generated when adding structs - Wrong syntax generated when adding structs. Two properties will be automatically added

|
priority
|
wrong syntax generated when adding structs wrong syntax generated when adding structs two properties will be automatically added
| 1
|
280,825
| 8,687,158,791
|
IssuesEvent
|
2018-12-03 12:59:06
|
Ensepro/ensepro-core
|
https://api.github.com/repos/Ensepro/ensepro-core
|
closed
|
Sub-query for juxtaposed CNs
|
priority: high type: task
|
Implement in the CBC, once the submission of the question to the REN systems and any resubmissions to the CLN are finished, the identification of juxtaposed CNs.
If juxtaposed CNs occur (a sequence of 2 or more CNs), it is necessary to locate the CN that contains a PROP and run a simple preliminary query against the BC to try to locate the triple(s) containing the PROP as SUBJECT or OBJECT and the N as PREDICATE.
If the sub-query finds triples, the answers found must be included as TRs of type PROP in the next query to the ES.
IMPORTANT: for now we will not create a specific weight for a PROP originating from a juxtaposed-CN sub-query, since that would also require adjusting the recently created unified ranking algorithm.
## Example
**Sentence**: Qual o nome do prefeito da capital da Polinésia Francesa? (What is the name of the mayor of the capital of French Polynesia?)
**Juxtaposed CN**: "nome do prefeito da capital da Polinésia Francesa"
**CN with PROP**: "capital da Polinésia Francesa"
**Sub-query**: ["Polinésia_Francesa" capital ?x]
**Sub-query result**: Papeete, PROP
TRs of the next query: *prefeito (N), Papeete (PROP)*
*IMPORTANT*: for now, we will assume the element found in the sub-query is a PROP. In the future, if necessary, we may use some tool to validate the correct type of the element found (Spotlight, ConceptNet, etc).
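A toy sketch of the sub-query step described above (the in-memory triple list and function name are stand-ins, not the project's actual knowledge-base API):

```python
# Hypothetical in-memory knowledge base of (subject, predicate, object) triples.
KB = [("Polinésia_Francesa", "capital", "Papeete")]

def subquery(subject, predicate):
    """Return the objects of triples matching ([subject] predicate ?x)."""
    return [o for s, p, o in KB if s == subject and p == predicate]

# Answers found here would be fed into the next query as TRs of type PROP.
props = subquery("Polinésia_Francesa", "capital")
```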
|
1.0
|
Sub-query for juxtaposed CNs - Implement in the CBC, once the submission of the question to the REN systems and any resubmissions to the CLN are finished, the identification of juxtaposed CNs.
If juxtaposed CNs occur (a sequence of 2 or more CNs), it is necessary to locate the CN that contains a PROP and run a simple preliminary query against the BC to try to locate the triple(s) containing the PROP as SUBJECT or OBJECT and the N as PREDICATE.
If the sub-query finds triples, the answers found must be included as TRs of type PROP in the next query to the ES.
IMPORTANT: for now we will not create a specific weight for a PROP originating from a juxtaposed-CN sub-query, since that would also require adjusting the recently created unified ranking algorithm.
## Example
**Sentence**: Qual o nome do prefeito da capital da Polinésia Francesa? (What is the name of the mayor of the capital of French Polynesia?)
**Juxtaposed CN**: "nome do prefeito da capital da Polinésia Francesa"
**CN with PROP**: "capital da Polinésia Francesa"
**Sub-query**: ["Polinésia_Francesa" capital ?x]
**Sub-query result**: Papeete, PROP
TRs of the next query: *prefeito (N), Papeete (PROP)*
*IMPORTANT*: for now, we will assume the element found in the sub-query is a PROP. In the future, if necessary, we may use some tool to validate the correct type of the element found (Spotlight, ConceptNet, etc).
|
priority
|
sub query for juxtaposed cns implement in the cbc once the submission of the question to the ren systems and any resubmissions to the cln are finished the identification of juxtaposed cns if juxtaposed cns occur a sequence of or more cns it is necessary to locate the cn that contains a prop and run a simple preliminary query against the bc to try to locate the triple s containing the prop as subject or object and the n as predicate if the sub query finds triples the answers found must be included as trs of type prop in the next query to the es important for now we will not create a specific weight for a prop originating from a juxtaposed cn sub query since that would also require adjusting the recently created unified ranking algorithm example sentence qual o nome do prefeito da capital da polinésia francesa what is the name of the mayor of the capital of french polynesia juxtaposed cn nome do prefeito da capital da polinésia francesa cn with prop capital da polinésia francesa sub query result papeete prop trs of the next query prefeito n papeete prop important for now we will assume the element found in the sub query is a prop in the future if necessary we may use some tool to validate the correct type of the element found spotlight conceptnet etc
| 1
|
328,035
| 9,985,129,031
|
IssuesEvent
|
2019-07-10 15:51:47
|
geosolutions-it/MapStore2-C027
|
https://api.github.com/repos/geosolutions-it/MapStore2-C027
|
closed
|
WFS Download
|
Priority: High Project: C027 Task enhancement
|
Change the WFS download tool configuration to enable it only for authenticated users.
|
1.0
|
WFS Download - Change the WFS download tool configuration to enable it only for authenticated users.
|
priority
|
wfs download change the wfs download tool configuration to enable it only for authenticated users
| 1
|
144,996
| 5,556,845,412
|
IssuesEvent
|
2017-03-24 10:18:45
|
ocadotechnology/codeforlife-portal
|
https://api.github.com/repos/ocadotechnology/codeforlife-portal
|
closed
|
"Christmas Makeover for Rapid Router!" still featured on the Main Page
|
front end priority: high
|
Given that March is ending, it might be a good idea to remove the spotlight from "[Christmas Makeover for Rapid Router!](http://www.ocadotechnology.com/our-blog/articles/Rapid-Router-game-from-Ocado-gets-a-Christmas-makeover)".
It might still be nice to have it somewhere in the history though.
|
1.0
|
"Christmas Makeover for Rapid Router!" still featured on the Main Page - Given that March is ending, it might be a good idea to remove the spotlight from "[Christmas Makeover for Rapid Router!](http://www.ocadotechnology.com/our-blog/articles/Rapid-Router-game-from-Ocado-gets-a-Christmas-makeover)".
It might still be nice to have it somewhere in the history though.
|
priority
|
christmas makeover for rapid router still featured on the main page given that march is ending it might be a good idea to remove the spotlight from it might still be nice to have it somewhere in the history though
| 1
|
669,239
| 22,617,454,245
|
IssuesEvent
|
2022-06-30 00:34:29
|
veat-cesi/user-account-management
|
https://api.github.com/repos/veat-cesi/user-account-management
|
closed
|
Home Page
|
priority:high
|
Build a Home page for Veat UAM :
- [x] Side menu with all models
- [ ] ~Main page with cards~
- [ ] ~Settings button~
- [ ] ~Dark/Light mode switch button~
|
1.0
|
Home Page - Build a Home page for Veat UAM :
- [x] Side menu with all models
- [ ] ~Main page with cards~
- [ ] ~Settings button~
- [ ] ~Dark/Light mode switch button~
|
priority
|
home page build a home page for veat uam side menu with all models main page with cards settings button dark light mode switch button
| 1
|
711,920
| 24,479,720,265
|
IssuesEvent
|
2022-10-08 17:05:31
|
fyusuf-a/ft_transcendence
|
https://api.github.com/repos/fyusuf-a/ft_transcendence
|
closed
|
bug: fail avatar change not reported
|
bug frontend HIGH PRIORITY
|
If you try to change the avatar to an illegal file then the dialog remains open and no notice is given. You can close it manually, but there is no indication that it failed
|
1.0
|
bug: fail avatar change not reported - If you try to change the avatar to an illegal file then the dialog remains open and no notice is given. You can close it manually, but there is no indication that it failed
|
priority
|
bug fail avatar change not reported if you try to change the avatar to an illegal file then the dialog remains open and no notice is given you can close it manually but there is no indication that it failed
| 1
|
30,231
| 2,723,077,338
|
IssuesEvent
|
2015-04-14 09:54:54
|
martflu/aurora
|
https://api.github.com/repos/martflu/aurora
|
opened
|
Fix stability issues in production
|
bug high priority
|
As of 2015-04-08 we've been having major stability issues after an update to both the server OS and the hosting setup.
One issue (changed/removed settings files) was already discovered and fixed (upping the max file descriptor limit again and preventing the server from sleeping under certain conditions).
Currently, we still get sporadic error 500 and error 502. As they seem to happen at about the same time on the testing environment, it seems as though there's still a configuration issue.
|
1.0
|
Fix stability issues in production - As of 2015-04-08 we've been having major stability issues after an update to both the server OS and the hosting setup.
One issue (changed/removed settings files) was already discovered and fixed (upping the max file descriptor limit again and preventing the server from sleeping under certain conditions).
Currently, we still get sporadic error 500 and error 502. As they seem to happen at about the same time on the testing environment, it seems as though there's still a configuration issue.
|
priority
|
fix stability issues in production as of we ve been having major stability issues after an update to both the server os and the hosting setup one issue changed removed settings files was already discovered and fixed upping the max file descriptor limit again and preventing the server from sleeping under certain conditions currently we still get sporadic error and error as they seem to happen at about the same time on the testing environment it seems as though there s still a configuration issue
| 1
|
520,033
| 15,077,754,563
|
IssuesEvent
|
2021-02-05 07:33:39
|
wso2/cellery
|
https://api.github.com/repos/wso2/cellery
|
closed
|
Minikube error when executing cellery setup > create
|
Priority/High Resolution/Won't Fix Severity/Major Type/Improvement component/CLI
|
**Description:**
I have been trying to install minikube setup before, and during that I encountered some issue and the setup failed. And then I was able to execute the steps that was suggested by the minikube error, and then tried to execute the cellery setup > create again, and then encountered below error.
```
cellery setup
❯ Create
Cellery setup command failed: failed to get user input, failed to check if minikube is running, failed to check status of minikube profile cellery-local-setup, exit status 6 /nPlease execute minikube delete -p cellery-local-setup to remove the inconsistent profile
```
And to overcome the issue, I had to execute the `minikube profile cellery-local-setup`.
I think we should do below to avoid these kind issues.
1) If minikube profile creation is failed, we have to delete the profile without leaving in the inconsistent state.
2) Even if the cellery-local-setup profile status check or some operations are throwing an error, we should print a warning and continue to show the other options. For example, in the above case, cellery setup > create fails with an error. Instead of failing, it should print a warning about the local setup, and show the options to create `GCP` and `Existing clusters`.
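A minimal sketch of suggestion (2) above, warning and continuing instead of failing (function names are hypothetical illustrations, not cellery's actual code):

```python
def check_local_setup_status():
    # Hypothetical stand-in for the minikube profile status check,
    # simulating the inconsistent-profile failure described above.
    raise RuntimeError("inconsistent profile: cellery-local-setup")

def list_setup_options():
    """Warn about a broken local setup but still offer the other options."""
    options = []
    try:
        check_local_setup_status()
        options.append("Local")
    except RuntimeError as err:
        print(f"WARNING: skipping local setup ({err})")
    options.extend(["GCP", "Existing cluster"])
    return options
```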
**Suggested Labels:**
Improvement
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers canโt assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
0.6.0
**OS, DB, other environment details and versions:**
MacOS.
|
1.0
|
Minikube error when executing cellery setup > create - **Description:**
I have been trying to install minikube setup before, and during that I encountered some issue and the setup failed. And then I was able to execute the steps that was suggested by the minikube error, and then tried to execute the cellery setup > create again, and then encountered below error.
```
cellery setup
❯ Create
Cellery setup command failed: failed to get user input, failed to check if minikube is running, failed to check status of minikube profile cellery-local-setup, exit status 6 /nPlease execute minikube delete -p cellery-local-setup to remove the inconsistent profile
```
And to overcome the issue, I had to execute the `minikube profile cellery-local-setup`.
I think we should do below to avoid these kind issues.
1) If minikube profile creation is failed, we have to delete the profile without leaving in the inconsistent state.
2) Even if the cellery-local-setup profile status check or some operations are throwing an error, we should print a warning and continue to show the other options. For example, in the above case, cellery setup > create fails with an error. Instead of failing, it should print a warning about the local setup, and show the options to create `GCP` and `Existing clusters`.
**Suggested Labels:**
Improvement
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers canโt assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
0.6.0
**OS, DB, other environment details and versions:**
MacOS.
|
priority
|
minikube error when executing cellery setup create description i have been trying to install minikube setup before and during that i encountered some issue and the setup failed and then i was able to execute the steps that was suggested by the minikube error and then tried to execute the cellery setup create again and then encountered below error cellery setup create cellery setup command failed failed to get user input failed to check if minikube is running failed to check status of minikube profile cellery local setup exit status nplease execute minikube delete p cellery local setup to remove the inconsistent profile and to overcome the issue i had to execute the minikube profile cellery local setup i think we should do below to avoid these kind issues if minikube profile creation is failed we have to delete the profile without leaving in the inconsistent state even if the cellery local setup profile status check or some operations are throwing an error we should print a warning and continue to show other options for example in the above case cellery setup create fails with an error instead of failing it should print a warning about the local setup and show the options to create gcp and existing clusters suggested labels improvement suggested assignees affected product version os db other environment details and versions macos
| 1
|
585,258
| 17,483,650,021
|
IssuesEvent
|
2021-08-09 08:05:05
|
CatalogueOfLife/portal
|
https://api.github.com/repos/CatalogueOfLife/portal
|
closed
|
List of sources shows html tags in citations
|
bug high priority
|
https://www.catalogueoflife.org/data/source-datasets
<img width="1828" alt="Screenshot 2021-08-04 at 11 08 19" src="https://user-images.githubusercontent.com/327505/128154759-270c87b3-808b-4b89-adc8-cfbb3f1415e4.png">
|
1.0
|
List of sources shows html tags in citations - https://www.catalogueoflife.org/data/source-datasets
<img width="1828" alt="Screenshot 2021-08-04 at 11 08 19" src="https://user-images.githubusercontent.com/327505/128154759-270c87b3-808b-4b89-adc8-cfbb3f1415e4.png">
|
priority
|
list of sources shows html tags in citations img width alt screenshot at src
| 1
|
272,652
| 8,515,823,675
|
IssuesEvent
|
2018-10-31 23:11:20
|
xelatihy/yocto-gl
|
https://api.github.com/repos/xelatihy/yocto-gl
|
closed
|
Remove Happly dependency.
|
enhancement high priority
|
Happly works ok for now, but has some significant issues. See if we can skip it.
|
1.0
|
Remove Happly dependency. - Happly works ok for now, but has some significant issues. See if we can skip it.
|
priority
|
remove happly dependency happly works ok for now but has some significant issues see if we can skip it
| 1
|
96,652
| 3,971,451,186
|
IssuesEvent
|
2016-05-04 11:58:54
|
juju/docs
|
https://api.github.com/repos/juju/docs
|
closed
|
Docs needed: Models > Managing
|
2.0 high priority
|
This issue spawned from #1008 .
This new file needs to be populated: models-managing.md .
Be sure to include `juju switch` usage.
|
1.0
|
Docs needed: Models > Managing - This issue spawned from #1008 .
This new file needs to be populated: models-managing.md .
Be sure to include `juju switch` usage.
|
priority
|
docs needed models managing this issue spawned from this new file needs to be populated models managing md be sure to include juju switch usage
| 1
|
755,440
| 26,429,584,208
|
IssuesEvent
|
2023-01-14 16:44:33
|
gamefreedomgit/Maelstrom
|
https://api.github.com/repos/gamefreedomgit/Maelstrom
|
closed
|
[Moved from Discord] Sacrifices Quest mobs not de-aggro'ing
|
Quest - Cataclysm (1-60) Priority: High Status: Confirmed Bug Report from Discord
|
https://cata-twinhead.twinstar.cz/?quest=14212
Mistake
OP
— Today at 20:56
After you reach the destination, the Worgens are supposed to de-aggro, but don't.
https://www.youtube.com/watch?v=lBDgI0Wd7L8
|
1.0
|
[Moved from Discord] Sacrifices Quest mobs not de-aggro'ing - https://cata-twinhead.twinstar.cz/?quest=14212
Mistake
OP
— Today at 20:56
After you reach the destination, the Worgens are supposed to de-aggro, but don't.
https://www.youtube.com/watch?v=lBDgI0Wd7L8
|
priority
|
sacrifices quest mobs not de aggro ing mistake op today at after you reach the destination the worgens are supposed to de aggro but don t
| 1
|
369,737
| 10,917,020,901
|
IssuesEvent
|
2019-11-21 14:27:01
|
ahmedkaludi/accelerated-mobile-pages
|
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
|
closed
|
Need to add spacing for col 2 in AMP Pagebuilder
|
Urgent [Priority: HIGH] bug
|
Whenever the 2-column section is used, the design looks like this
https://monosnap.com/file/IcH8CLqpHZ4ekE1xYRVxSKKFqDsc08
Need to add spacing for that.
|
1.0
|
Need to add spacing for col 2 in AMP Pagebuilder - Whenever the 2-column section is used, the design looks like this
https://monosnap.com/file/IcH8CLqpHZ4ekE1xYRVxSKKFqDsc08
Need to add spacing for that.
|
priority
|
need to add spacing for col in amp pagebuilder whenever the column section is used the design looks like this need to add spacing for that
| 1
|
561,145
| 16,611,964,530
|
IssuesEvent
|
2021-06-02 12:39:17
|
wp-media/wp-rocket
|
https://api.github.com/repos/wp-media/wp-rocket
|
closed
|
Delay JS - the inline script prevents some JavaScript from firing
|
module: delay JS priority: high severity: major type: bug
|
**Before submitting an issue please check that you've completed the following steps:**
- Made sure you're on the latest version
`3.9`
- Used the search feature to ensure that the bug hasn't been reported before - #3926 is probably related, but it's too specific.
**Describe the bug**
In some cases, despite excluding all files/inline JavaScript from **Delay JavaScript execution**, some JavaScript is not executed until there is interaction with the page.
**screencast:** https://youtu.be/ZuQj-KiNxNk
Removing the inline JavaScript we add in the `<head>` resolves the issue, of course.
**To Reproduce**
Steps to reproduce the behavior:
There isn't something solid currently.
On a customer's website, the following `DOMContentLoaded` script didn't fire :
https://snippi.com/s/8emx5tk
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
Non-delayed scripts should be executed.
**Additional context**
**Tickets:**
https://secure.helpscout.net/conversation/1497405310/259918/ - credentials to a staging site inside
https://secure.helpscout.net/conversation/1518901270/265435/
Probably related: #3926
Add any other context about the problem here.
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
|
1.0
|
Delay JS - the inline script prevents some JavaScript from firing - **Before submitting an issue please check that you've completed the following steps:**
- Made sure you're on the latest version
`3.9`
- Used the search feature to ensure that the bug hasn't been reported before - #3926 is probably related, but it's too specific.
**Describe the bug**
In some cases, despite excluding all files/inline JavaScript from **Delay JavaScript execution**, some JavaScript is not executed until there is interaction with the page.
**screencast:** https://youtu.be/ZuQj-KiNxNk
Removing the inline JavaScript we add in the `<head>` resolves the issue, of course.
**To Reproduce**
Steps to reproduce the behavior:
There isn't something solid currently.
On a customer's website, the following `DOMContentLoaded` script didn't fire :
https://snippi.com/s/8emx5tk
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
Non-delayed scripts should be executed.
**Additional context**
**Tickets:**
https://secure.helpscout.net/conversation/1497405310/259918/ - credentials to a staging site inside
https://secure.helpscout.net/conversation/1518901270/265435/
Probably related: #3926
Add any other context about the problem here.
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
|
priority
|
delay js the inline script prevents some javascript from firing before submitting an issue please check that you ve completed the following steps made sure you re on the latest version
used the search feature to ensure that the bug hasn t been reported before is probably related but it s too specific describe the bug in some cases despite excluding all files inline javascript from delay javascript execution some javascript is not executed until there is interaction with the page screencast removing the inline javascript we add in the resolves the issue of course to reproduce steps to reproduce the behavior there isn t something solid currently on a customer s website the following domcontentloaded script didn t fire go to click on scroll down to see error expected behavior non delayed scripts should be executed additional context tickets credentials to a staging site inside probably related add any other context about the problem here backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort
| 1
|
385,065
| 11,411,965,427
|
IssuesEvent
|
2020-02-01 09:48:43
|
code4romania/monitorizare-vot
|
https://api.github.com/repos/code4romania/monitorizare-vot
|
closed
|
[Counties] Add an order field to county entity
|
BE counties enhancement february-2020 help wanted high-priority
|
Add a new int field to the county entity named `order`. It is needed in order to be able to easily display the counties in the correct order in client apps. And also easily change their display order.
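A minimal sketch of how a client app could use the proposed field (county names and values below are made-up illustrations):

```python
# Hypothetical counties payload including the proposed integer `order` field.
counties = [
    {"name": "Cluj", "order": 2},
    {"name": "Alba", "order": 1},
    {"name": "Bihor", "order": 3},
]

def display_names(counties):
    """County names in the display order defined by the new field."""
    return [c["name"] for c in sorted(counties, key=lambda c: c["order"])]
```

Changing the display order then only requires updating the `order` values, not the client code.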
|
1.0
|
[Counties] Add an order field to county entity - Add a new int field to the county entity named `order`. It is needed in order to be able to easily display the counties in the correct order in client apps. And also easily change their display order.
|
priority
|
add an order field to county entity add a new int field to the county entity named order it is needed in order to be able to easily display the counties in the correct order in client apps and also easily change their display order
| 1
|
166,915
| 6,314,606,113
|
IssuesEvent
|
2017-07-24 11:20:14
|
dbcollection/dbcollection
|
https://api.github.com/repos/dbcollection/dbcollection
|
opened
|
Add `.info()` method to list all available fields and their shape
|
api feature-request high priority
|
Introduce a new API method to list all fields of the metadata file, along with information about the shape, data type, if it is a list or if it is in `object_ids`. The result should look like the following:
### If no set name is specified, prints all sets
```python
>>> import dbcollection as dbc
>>> mnist = dbc.load('mnist')
>>> mnist.info()
> 'train' set information:
- 'images' size=[60000, 28, 28], dtype=np.uint8, (is in 'object_ids', position = 0)
- 'labels' size=[60000, 1], dtype=np.uint8, (is in 'object_ids', position = 1)
- 'classes' size=[10, 11], dtype=np.uint8,
- 'object_ids' size=[2, 11], dtype=np.uint8
- 'object_fields' size=[60000, 2], dtype=np.int32
- 'list_images_per_class' size=[10, 6000], dtype=np.int32, (is pre-ordered list)
> 'test' set information:
- 'images' size=[10000, 28, 28], dtype=np.uint8, (is in 'object_ids', position = 0)
- 'labels' size=[10000, 1], dtype=np.uint8, (is in 'object_ids', position = 1)
- 'classes' size=[10, 11], dtype=np.uint8,
- 'object_ids' size=[2, 11], dtype=np.uint8
- 'object_fields' size=[10000, 2], dtype=np.int32
- 'list_images_per_class' size=[10, 1000], dtype=np.int32, (is pre-ordered list)
```
### If a set name is specified, prints only the information regarding that set
```python
>>> import dbcollection as dbc
>>> mnist = dbc.load('mnist')
>>> mnist.info('train')
> 'train' set information:
- 'images' size=[60000, 28, 28], dtype=np.uint8, (is in 'object_ids', position = 0)
- 'labels' size=[60000, 1], dtype=np.uint8, (is in 'object_ids', position = 1)
- 'classes' size=[10, 11], dtype=np.uint8,
- 'object_ids' size=[2, 11], dtype=np.uint8
- 'object_fields' size=[60000, 2], dtype=np.int32
- 'list_images_per_class' size=[10, 6000], dtype=np.int32, (is pre-ordered list)
```
This method allows for a quick view of how the data is structured, which fields belong to the `object_ids` index list and which fields are pre-ordered lists (useful to fetch data from a certain class, file, etc.).
|
1.0
|
Add `.info()` method to list all available fields and their shape - Introduce a new API method to list all fields of the metadata file, along with information about the shape, data type, if it is a list or if it is in `object_ids`. The result should look like the following:
### If no set name is specified, prints all sets
```python
>>> import dbcollection as dbc
>>> mnist = dbc.load('mnist')
>>> mnist.info()
> 'train' set information:
- 'images' size=[60000, 28, 28], dtype=np.uint8, (is in 'object_ids', position = 0)
- 'labels' size=[60000, 1], dtype=np.uint8, (is in 'object_ids', position = 1)
- 'classes' size=[10, 11], dtype=np.uint8,
- 'object_ids' size=[2, 11], dtype=np.uint8
- 'object_fields' size=[60000, 2], dtype=np.int32
- 'list_images_per_class' size=[10, 6000], dtype=np.int32, (is pre-ordered list)
> 'test' set information:
- 'images' size=[10000, 28, 28], dtype=np.uint8, (is in 'object_ids', position = 0)
- 'labels' size=[10000, 1], dtype=np.uint8, (is in 'object_ids', position = 1)
- 'classes' size=[10, 11], dtype=np.uint8,
- 'object_ids' size=[2, 11], dtype=np.uint8
- 'object_fields' size=[10000, 2], dtype=np.int32
- 'list_images_per_class' size=[10, 1000], dtype=np.int32, (is pre-ordered list)
```
### If a set name is specified, prints only the information regarding that set
```python
>>> import dbcollection as dbc
>>> mnist = dbc.load('mnist')
>>> mnist.info('train')
> 'train' set information:
- 'images' size=[60000, 28, 28], dtype=np.uint8, (is in 'object_ids', position = 0)
- 'labels' size=[60000, 1], dtype=np.uint8, (is in 'object_ids', position = 1)
- 'classes' size=[10, 11], dtype=np.uint8,
- 'object_ids' size=[2, 11], dtype=np.uint8
- 'object_fields' size=[60000, 2], dtype=np.int32
- 'list_images_per_class' size=[10, 6000], dtype=np.int32, (is pre-ordered list)
```
This method allows for a quick view of how the data is structured, which fields belong to the `object_ids` index list and which fields are pre-ordered lists (useful to fetch data from a certain class, file, etc.).
|
priority
|
add info method to list all available fields and their shape introduce a new api method to list all fields of the metadata file along with information about the shape data type if it is a list or if it is in object ids the result should look like the following if no set name is specified prints all sets python import dbcollection as dbc mnist dbc load mnist mnist info train set information images size dtype np is in object ids position labels size dtype np is in object ids position classes size dtype np object ids size dtype np object fields size dtype np list images per class size dtype np is pre ordered list test set information images size dtype np is in object ids position labels size dtype np is in object ids position classes size dtype np object ids size dtype np object fields size dtype np list images per class size dtype np is pre ordered list if a set name is specified prints only the information regarding that set python import dbcollection as dbc mnist dbc load mnist mnist info train train set information images size dtype np is in object ids position labels size dtype np is in object ids position classes size dtype np object ids size dtype np object fields size dtype np list images per class size dtype np is pre ordered list this method allows for a quick view of how the data is structured which fields belong to the object ids index list and which fields are pre ordered lists useful to fetch data from a certain class file etc
| 1
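The proposed listing could be sketched in a few lines of Python. This is an illustrative mock-up, not dbcollection's actual implementation; the `info` function, its signature, and the toy `fields` dict are assumptions:

```python
import numpy as np

def info(fields, object_fields=None, ordered_lists=()):
    """Return one description line per field: name, shape, dtype, plus a
    note for fields in the object_ids index or for pre-ordered lists."""
    object_fields = object_fields or []
    out = []
    for name, arr in fields.items():
        note = ""
        if name in object_fields:
            note = ", (is in 'object_ids', position = {})".format(
                object_fields.index(name))
        elif name in ordered_lists:
            note = ", (is pre-ordered list)"
        out.append("- '{}' size={}, dtype={}{}".format(
            name, list(arr.shape), arr.dtype, note))
    return out

# Toy arrays shaped like the MNIST train set described above
fields = {
    "images": np.zeros((60000, 28, 28), dtype=np.uint8),
    "labels": np.zeros((60000, 1), dtype=np.uint8),
}
lines = info(fields, object_fields=["images", "labels"])
print("\n".join(lines))
```

The same function would cover both the "all sets" and "single set" cases by being called once per set.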
|
176,918
| 6,569,241,134
|
IssuesEvent
|
2017-09-09 04:39:41
|
open-austin/iced-coffee
|
https://api.github.com/repos/open-austin/iced-coffee
|
closed
|
New Project Step-By-Step
|
delivery lead high priority
|
## Draft
### Intake:
* Create a project-idea issue, link repo in readme
* Create a repo and add tags for
** Stack
** Status
** Needs
### Evaluation:
* Someone will evaluate to make sure you're not doing anything sketchy
* They may point you to another similar project in OA to join
* They may suggest an existing project in the CFA network
### Production:
* Leave a comment every time you work on it at an event
* @amaliebarras if you have specific needs
### Marketing:
* When your app is ready to use, let the comms team know and they'll walk you through
** Documenting
*** Github README
*** Open-Austin.org
** Blogging
** Sharing on social media
** Engaging Open Source community
## Figure out where to publish this, new project champ UX.
|
1.0
|
New Project Step-By-Step - ## Draft
### Intake:
* Create a project-idea issue, link repo in readme
* Create a repo and add tags for
** Stack
** Status
** Needs
### Evaluation:
* Someone will evaluate to make sure you're not doing anything sketchy
* They may point you to another similar project in OA to join
* They may suggest an existing project in the CFA network
### Production:
* Leave a comment every time you work on it at an event
* @amaliebarras if you have specific needs
### Marketing:
* When your app is ready to use, let the comms team know and they'll walk you through
** Documenting
*** Github README
*** Open-Austin.org
** Blogging
** Sharing on social media
** Engaging Open Source community
## Figure out where to publish this, new project champ UX.
|
priority
|
new project step by step draft intake create a project idea issue link repo in readme create a repo and add tags for stack status needs evaluation someone will evaluate to make sure you re not doing anything sketchy they may point you to another similar project in oa to join they may suggest an existing project in the cfa network production leave a comment every time you work on it at an event amaliebarras if you have specific needs marketing when your app is ready to use let the comms team know and they ll walk you through documenting github readme open austin org blogging sharing on social media engaging open source community figure out where to publish this new project champ ux
| 1
|
245,682
| 7,889,461,561
|
IssuesEvent
|
2018-06-28 04:20:12
|
slic3r/Slic3r
|
https://api.github.com/repos/slic3r/Slic3r
|
closed
|
1.3.0 Release schedule?
|
HIGH PRIORITY
|
### Version
1.3.0
The 1.3.0 [milestone](https://github.com/alexrj/Slic3r/milestone/23) is almost complete and the website says Q4 2017 for release. I think a release is crucial for Slic3r, so much has changed since 1.2.9. This is not a real Issue, but more of a question to better plan the next steps and where to put effort:
- Is there a planned release date?
- Besides the 5 points in the milestone, which other important tasks are still open? (e.g. website, documentation, ...)
- How can we help to get the release out?
|
1.0
|
1.3.0 Release schedule? - ### Version
1.3.0
The 1.3.0 [milestone](https://github.com/alexrj/Slic3r/milestone/23) is almost complete and the website says Q4 2017 for release. I think a release is crucial for Slic3r, so much has changed since 1.2.9. This is not a real Issue, but more of a question to better plan the next steps and where to put effort:
- Is there a planned release date?
- Besides the 5 points in the milestone, which other important tasks are still open? (e.g. website, documentation, ...)
- How can we help to get the release out?
|
priority
|
release schedule version the is almost complete and the website says for release i think a release is crucial for so much hast changed since this is not a real issue but more of a question to better plan the next steps and where to put effort is there a planned release date besides the points in the milestone which other important tasks are still open e g website documentation how can we help to get the release out
| 1
|
270,021
| 8,445,583,772
|
IssuesEvent
|
2018-10-18 22:02:37
|
quipucords/quipucords
|
https://api.github.com/repos/quipucords/quipucords
|
closed
|
Add report version into reports
|
component - backend feature request priority - high
|
## Feature description:
### Is your feature request related to a problem?
SEAP is building tools that consume CSV reports produced by QPC. At some future point, we may need to upgrade the report. Providing a version in the report will allow us to determine the version of a report. Previous to this enhancement, the reports were not considered API. This feature allows us to freeze features with a version.
|
1.0
|
Add report version into reports - ## Feature description:
### Is your feature request related to a problem?
SEAP is building tools that consume CSV reports produced by QPC. At some future point, we may need to upgrade the report. Providing a version in the report will allow us to determine the version of a report. Previous to this enhancement, the reports were not considered API. This feature allows us to freeze features with a version.
|
priority
|
add report version into reports feature description is your feature request related to a problem seap is building tools that consume csv reports produced by qpc at some future point we may need to upgrade the report providing a version in the report will allow us to determine the version of a report previous to this enhancement the reports were not considered api this feature allows us to freeze features with a version
| 1
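Stamping a version into a CSV report, as the request describes, could look like the following sketch. The `report_version` field name and row layout are assumptions for illustration, not QPC's actual schema:

```python
import csv
import io

REPORT_VERSION = "1.0.0"  # hypothetical version constant

def write_report(rows, header):
    """Write a CSV report with a leading version row so that consumers
    can detect which report format they are parsing."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["report_version", REPORT_VERSION])
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

def read_version(text):
    """Return the version stamped in the first row, or None if absent."""
    first = next(csv.reader(io.StringIO(text)))
    return first[1] if first and first[0] == "report_version" else None

report = write_report([["host1", "rhel7"]], ["name", "os"])
print(read_version(report))
```

A consumer that sees `None` would know it is reading a pre-versioning report and can fall back to legacy parsing.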
|
674,420
| 23,050,342,535
|
IssuesEvent
|
2022-07-24 14:31:45
|
vmware-samples/packer-examples-for-vsphere
|
https://api.github.com/repos/vmware-samples/packer-examples-for-vsphere
|
opened
|
Add support for HCP Packer
|
type/enhancement status/planned priority/high sev/low
|
### Discussed in https://github.com/vmware-samples/packer-examples-for-vsphere/discussions/230
Add the option to enable integration with HCP Packer.
### Potential Configuration
```
build {
dynamic "hcp_packer_registry" {
for_each = var.common_hcp_enabled == true ? [1] : []
content {
### … excluded for brevity
}
}
}
```
Will require, at a minimum:
- Addition to common variables
- Updates to `setr-envvars.sh`
- Updates to builds
- e2e Testing
- Other?
### References
- https://cloud.hashicorp.com/products/packer
|
1.0
|
Add support for HCP Packer - ### Discussed in https://github.com/vmware-samples/packer-examples-for-vsphere/discussions/230
Add the option to enable integration with HCP Packer.
### Potential Configuration
```
build {
dynamic "hcp_packer_registry" {
for_each = var.common_hcp_enabled == true ? [1] : []
content {
### … excluded for brevity
}
}
}
```
Will require, at a minimum:
- Addition to common variables
- Updates to `setr-envvars.sh`
- Updates to builds
- e2e Testing
- Other?
### References
- https://cloud.hashicorp.com/products/packer
|
priority
|
add support for hcp packer discussed in add the option to enable integration with support hcp packer potential configuration build dynamic hcp packer registry for each var common hcp enabled true content excluded for brevity will require at a minimum addition to common variables updates to setr envvars sh updates to builds testing other references
| 1
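The `for_each` trick in the snippet above (emit the block only when a flag is set) can be mirrored in plain Python. The `render_build` function and its output format are invented purely to illustrate the conditional-block idea:

```python
def render_build(common_hcp_enabled: bool) -> str:
    """Render a build stanza, emitting the hcp_packer_registry block
    only when the flag is enabled (a for_each analogue)."""
    parts = ["build {"]
    # [1] if enabled else [] mirrors Packer's dynamic-block for_each
    for _ in ([1] if common_hcp_enabled else []):
        parts.append("  hcp_packer_registry { }")
    parts.append("}")
    return "\n".join(parts)

print(render_build(True))
print(render_build(False))
```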
|
368,598
| 10,881,776,413
|
IssuesEvent
|
2019-11-17 19:59:29
|
bounswe/bounswe2019group5
|
https://api.github.com/repos/bounswe/bounswe2019group5
|
closed
|
TextEssayDetailActivity has issues regarding essays with images
|
Android Bug Fix Priority: High Status: Available
|
TextEssayDetailActivity cannot show an essay that is uploaded as an image. I have tracked the issue down to this point: when the Essay object has the id of an essay whose URI points to a png file, the app crashes. When the id is from a txt essay and the URI still points to the png file, the app does not crash but prints the image as characters.
From what I have understood TextEssayDetailActivity can't show the details of image essays.
|
1.0
|
TextEssayDetailActivity has issues regarding essays with images - TextEssayDetailActivity cannot show an essay that is uploaded as an image. I have tracked the issue down to this point: when the Essay object has the id of an essay whose URI points to a png file, the app crashes. When the id is from a txt essay and the URI still points to the png file, the app does not crash but prints the image as characters.
From what I have understood TextEssayDetailActivity can't show the details of image essays.
|
priority
|
textessaydetailactivity has issues regarding essays with images textessaydetailactivity can not show an essay that is uploaded as image i have tracked the issue to the point when essay object has an id from which essay that its uri points to a png file app crashes when id is from an txt essay and uri still points to the png file app does not crash but prints image as characters from what i have understood textessaydetailactivity can t show the details of image essays
| 1
|
313,348
| 9,560,029,537
|
IssuesEvent
|
2019-05-03 18:23:58
|
blackbaud/skyux-datetime
|
https://api.github.com/repos/blackbaud/skyux-datetime
|
opened
|
Date range picker should not resize based on selection
|
Priority: High Type: Bug
|
### Expected behavior
The picker input on date range picker should stay the same size regardless of whether the selection causes other date fields to appear.
### Actual behavior
The picker input takes up the full width, and shrinks based on other date inputs appearing.
### Steps to reproduce
### Severity
### Impact
|
1.0
|
Date range picker should not resize based on selection - ### Expected behavior
The picker input on date range picker should stay the same size regardless of whether the selection causes other date fields to appear.
### Actual behavior
The picker input takes up the full width, and shrinks based on other date inputs appearing.
### Steps to reproduce
### Severity
### Impact
|
priority
|
date range picker should not resize based on selection expected behavior the picker input on date range picker should stay the same size regardless of whether the selection causes other date fields to appear actual behavior the picker input takes up the full width and shrinks based on other date inputs appearing steps to reproduce severity impact
| 1
|
138,076
| 5,327,583,061
|
IssuesEvent
|
2017-02-15 09:35:52
|
input-output-hk/cardano-docs
|
https://api.github.com/repos/input-output-hk/cardano-docs
|
closed
|
Styling for Code Blocks
|
Kind: Task Priority: High
|
Currently some JS solution is used to add syntax highlighting in the code blocks.
It always guesses wrong how to color things. So there are two options:
+ Find a better way to add syntax highlighting (IMHO unfeasible)
+ Disable the JS solution so that we don't have syntax highlighting for now.
@tomasvrana please provide your opinion and make necessary changes to the way we generate documentation.
|
1.0
|
Styling for Code Blocks - Currently some JS solution is used to add syntax highlighting in the code blocks.
It always guesses wrong how to color things. So there are two options:
+ Find a better way to add syntax highlighting (IMHO unfeasible)
+ Disable the JS solution so that we don't have syntax highlighting for now.
@tomasvrana please provide your opinion and make necessary changes to the way we generate documentation.
|
priority
|
styling for code blocks currently some js solution is used to add syntax highlighting in the code blocks it always guesses wrong how to color things so there are two options โ find a better way to add syntax highlighting imho unfeasible disable the js solution so that we don t have syntax highlighting for now tomasvrana please provide your opinion and make necessary changes to the way we generate documentation
| 1
|
560,811
| 16,605,140,657
|
IssuesEvent
|
2021-06-02 02:11:16
|
QuantEcon/quantecon-book-theme
|
https://api.github.com/repos/QuantEcon/quantecon-book-theme
|
closed
|
[figures] Setup centred by default?
|
high-priority
|
Our old theme had `centred` figures by default
Comparison is available here: https://github.com/QuantEcon/lecture-python.myst/issues/21
|
1.0
|
[figures] Setup centred by default? - Our old theme had `centred` figures by default
Comparison is available here: https://github.com/QuantEcon/lecture-python.myst/issues/21
|
priority
|
setup centred by default our old theme had centred figures by default comparison is available here
| 1
|
324,997
| 9,915,259,201
|
IssuesEvent
|
2019-06-28 16:21:35
|
piotrwitek/typesafe-actions
|
https://api.github.com/repos/piotrwitek/typesafe-actions
|
closed
|
CRA hangs on typecheck when importing createReducer from another file for combineReducers
|
bug high priority in progress
|
## Description
With the @next release (`5.0.0-0`), create-react-app hangs on waiting for the typecheck results in a specific case, when an imported map of reducers is passed to `combineReducers`:
```
import { combineReducers } from 'redux';
import content from './content';
export default combineReducers({
content,
});
```
_content.ts_
```
import { createReducer } from 'typesafe-actions';
export default createReducer({});
```
Above code causes a memory leak and crash at the end. When modified to:
```
import { combineReducers } from 'redux';
import { createReducer } from "typesafe-actions";
export default combineReducers({
content: createReducer({}),
});
```
it still hangs which is a bit interesting - so we still have `export default createReducer` inside unused/not imported `content.ts` file. When this file is removed, everything builds correctly.
## Steps to Reproduce
Repository contains isolated issue with minimal setup:
https://github.com/hsz/typesafe-actions-issue
Run
```
npm install
npm start
```
It'll stuck on
> Files successfully emitted, waiting for typecheck results...
and emit at the end:
> FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
## Expected behavior
tsc doesn't hang
## Suggested solution(s)
<!-- How could we solve this bug. What changes would need to be made -->
## Project Dependencies
- Typesafe-Actions Version: 5.0.0-0
- TypeScript Version: 3.5.2
- tsconfig.json:
```
{
"compilerOptions": {
"baseUrl": "./src",
"target": "es5",
"lib": [
"dom",
"dom.iterable",
"esnext"
],
"allowJs": true,
"skipLibCheck": true,
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"strict": true,
"forceConsistentCasingInFileNames": true,
"module": "esnext",
"moduleResolution": "node",
"resolveJsonModule": true,
"noEmit": true,
"jsx": "preserve",
"isolatedModules": true,
"downlevelIteration": true
},
"include": [
"src"
]
}
```
|
1.0
|
CRA hangs on typecheck when importing createReducer from another file for combineReducers - ## Description
With the @next release (`5.0.0-0`), create-react-app hangs on waiting for the typecheck results in a specific case, when an imported map of reducers is passed to `combineReducers`:
```
import { combineReducers } from 'redux';
import content from './content';
export default combineReducers({
content,
});
```
_content.ts_
```
import { createReducer } from 'typesafe-actions';
export default createReducer({});
```
Above code causes a memory leak and crash at the end. When modified to:
```
import { combineReducers } from 'redux';
import { createReducer } from "typesafe-actions";
export default combineReducers({
content: createReducer({}),
});
```
it still hangs which is a bit interesting - so we still have `export default createReducer` inside unused/not imported `content.ts` file. When this file is removed, everything builds correctly.
## Steps to Reproduce
Repository contains isolated issue with minimal setup:
https://github.com/hsz/typesafe-actions-issue
Run
```
npm install
npm start
```
It'll stuck on
> Files successfully emitted, waiting for typecheck results...
and emit at the end:
> FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
## Expected behavior
tsc doesn't hang
## Suggested solution(s)
<!-- How could we solve this bug. What changes would need to be made -->
## Project Dependencies
- Typesafe-Actions Version: 5.0.0-0
- TypeScript Version: 3.5.2
- tsconfig.json:
```
{
"compilerOptions": {
"baseUrl": "./src",
"target": "es5",
"lib": [
"dom",
"dom.iterable",
"esnext"
],
"allowJs": true,
"skipLibCheck": true,
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"strict": true,
"forceConsistentCasingInFileNames": true,
"module": "esnext",
"moduleResolution": "node",
"resolveJsonModule": true,
"noEmit": true,
"jsx": "preserve",
"isolatedModules": true,
"downlevelIteration": true
},
"include": [
"src"
]
}
```
|
priority
|
cra hangs on typecheck when importing createreducer from another file for combinereducers description with the next release create react app hangs on waiting for the typecheck results in a specific case when an imported map of reducers is passed to combinereducers import combinereducers from redux import content from content export default combinereducers content content ts import createreducer from typesafe actions export default createreducer above code causes a memory leak and crash at the end when modified to import combinereducers from redux import createreducer from typesafe actions export default combinereducers content createreducer it still hangs which is a bit interesting so we still have export default createreducer inside unused not imported content ts file when this file is removed everything builds correctly steps to reproduce repository contains isolated issue with minimal setup run npm install npm start it ll stuck on files successfully emitted waiting for typecheck results and emit at the end fatal error ineffective mark compacts near heap limit allocation failed javascript heap out of memory expected behavior tsc doesn t hang suggested solution s project dependencies typesafe actions version typescript version tsconfig json compileroptions baseurl src target lib dom dom iterable esnext allowjs true skiplibcheck true esmoduleinterop true allowsyntheticdefaultimports true strict true forceconsistentcasinginfilenames true module esnext moduleresolution node resolvejsonmodule true noemit true jsx preserve isolatedmodules true downleveliteration true include src
| 1
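For readers unfamiliar with redux, the `createReducer`/`combineReducers` pattern the report exercises can be sketched as a toy Python analogue. This is not the redux implementation, and the reported bug itself lives in the TypeScript type checker rather than in this runtime logic:

```python
def create_reducer(initial_state):
    """Build a reducer that falls back to initial_state and dispatches
    on the action's type field."""
    handlers = {}

    def reducer(state=None, action=None):
        if state is None:
            state = initial_state
        handler = handlers.get(action and action.get("type"))
        return handler(state, action) if handler else state

    reducer.handlers = handlers  # exposed so handlers can be registered
    return reducer

def combine_reducers(reducers):
    """Compose a map of reducers into one root reducer over a dict state."""
    def root(state=None, action=None):
        state = state or {}
        return {key: r(state.get(key), action) for key, r in reducers.items()}
    return root

content = create_reducer({})
root = combine_reducers({"content": content})
print(root(None, {"type": "@@INIT"}))
```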
|
818,213
| 30,678,586,966
|
IssuesEvent
|
2023-07-26 07:36:08
|
dumpus-app/dumpus-app
|
https://api.github.com/repos/dumpus-app/dumpus-app
|
closed
|
Add a way to bypass premium in production env
|
enhancement high priority
|
make window.setPremium(true) available for all environments
|
1.0
|
Add a way to bypass premium in production env - make window.setPremium(true) available for all environments
|
priority
|
add a way to bypass premium in production env make window setpremium true available for all environments
| 1
|
285,239
| 8,756,233,003
|
IssuesEvent
|
2018-12-14 17:04:29
|
centre-for-educational-technology/edidaktikum
|
https://api.github.com/repos/centre-for-educational-technology/edidaktikum
|
closed
|
Editing a task answer changes the answer status to "unchecked"
|
enhancement high priority
|
For example: an answer with status "accepted" -> the student edits the answer and saves -> status "unchecked"
Example 2: an answer with status "rejected" -> the student edits the answer and saves -> status "unchecked"
It worked as follows before ticket #656
|
1.0
|
Editing a task answer changes the answer status to "unchecked" - For example: an answer with status "accepted" -> the student edits the answer and saves -> status "unchecked"
Example 2: an answer with status "rejected" -> the student edits the answer and saves -> status "unchecked"
It worked as follows before ticket #656
|
priority
|
ülesande vastust muutes muutub vastuse staatus unchecked näiteks vastus staatusega accepted tudeng muudab vastust salvestab staatus unchecked näide vastus staatusega rejected tudeng muudab vastust salvestab staatus unchecked toimis järgnevalt enne piletit
| 1
|
172,517
| 6,506,905,227
|
IssuesEvent
|
2017-08-24 10:57:54
|
bedita/bedita
|
https://api.github.com/repos/bedita/bedita
|
opened
|
Hidden attributes
|
Priority - High Topic - API Topic - ORM Type - Task
|
Introduce `object_types.hidden`:
* list of core attributes to hide
* those fields are ignored in every operation
* this list should appear in `/object_types` and be modifiable via API
|
1.0
|
Hidden attributes - Introduce `object_types.hidden`:
* list of core attributes to hide
* those fields are ignored in every operation
* this list should appear in `/object_types` and be modifiable via API
|
priority
|
hidden attributes introduce object types hidden list of core attributes to hide those fields are ignored in every operation this list should appear in object types and be modifiable via api
| 1
|
175,495
| 6,551,490,660
|
IssuesEvent
|
2017-09-05 14:55:26
|
pburns96/Revature-VenderBender
|
https://api.github.com/repos/pburns96/Revature-VenderBender
|
closed
|
As a customer, I can add tickets, for a concert, to the shopping cart.
|
High Priority
|
Requirements:
- Build the shopping cart functionality.
-Create an add tickets to cart button for each concert entry.
-Add a quantity input for number of tickets being added to the cart.
|
1.0
|
As a customer, I can add tickets, for a concert, to the shopping cart. - Requirements:
- Build the shopping cart functionality.
-Create an add tickets to cart button for each concert entry.
-Add a quantity input for number of tickets being added to the cart.
|
priority
|
as a customer i can add tickets for a concert to the shopping cart requirements build the shopping cart functionality create an add tickets to cart button for each concert entry add a quantity input for number of tickets being added to the cart
| 1
|
639,175
| 20,748,020,390
|
IssuesEvent
|
2022-03-15 02:37:30
|
coder/code-server
|
https://api.github.com/repos/coder/code-server
|
closed
|
[Chore]: Switch from yarn to a submodule
|
high-priority chore
|
We currently use yarn link to create a symlink between vendor/modules/code-oss-dev and coder/vscode (our fork). This is a bit unreliable and also requires cloning and updating two separate repos. The first step then would be to switch away from yarn link and use a submodule.
1. Remove vendor
2. Add git submodule under lib/vscode
3. update all the paths and the scripts to point to that location
There may be some additional code to clean up
|
1.0
|
[Chore]: Switch from yarn to a submodule - We currently use yarn link to create a symlink between vendor/modules/code-oss-dev and coder/vscode (our fork). This is a bit unreliable and also requires cloning and updating two separate repos. The first step then would be to switch away from yarn link and use a submodule.
1. Remove vendor
2. Add git submodule under lib/vscode
3. update all the paths and the scripts to point to that location
There may be some additional code to clean up
|
priority
|
switch from yarn to a submodule we currently use yarn link to create a symlink between vendor modules code oss dev and coder vscode our fork this is a bit unreliable and also requires cloning and updating two separate repos the first step then would be to switch away from yarn link and use a submodule remove vendor add git submodule under lib vscode update all the paths and the scripts to point to that location there may be some additional code to cleanup
| 1
|
219,489
| 7,342,871,884
|
IssuesEvent
|
2018-03-07 09:29:33
|
dagcoin/dagcoin
|
https://api.github.com/repos/dagcoin/dagcoin
|
closed
|
sentry is not working on funding and discovery servers
|
bug high priority
|
## Acceptance criteria:
* Sentry sends error notifications to slack
|
1.0
|
sentry is not working on funding and discovery servers - ## Acceptance criteria:
* Sentry sends error notifications to slack
|
priority
|
sentry is not working on funding and discovery servers acceptance criteria sentry sends error notifications to slack
| 1
|
678,922
| 23,215,862,048
|
IssuesEvent
|
2022-08-02 14:02:13
|
status-im/status-desktop
|
https://api.github.com/repos/status-im/status-desktop
|
closed
|
Can't login to a dapp
|
bug Browser priority 1: high E:Browser E:Bugfixes S:5
|
### Description
1. enable wallet
2. enable browser
3. open cryptokitties.io
4. login to the dapp
5. sign a message to the dapp
As a result, the screen below can't be bypassed
<img width="1468" alt="Screenshot 2022-06-06 at 15 17 46" src="https://user-images.githubusercontent.com/82375995/172159610-1a7b6888-248e-48ba-ae64-dc7a3260a1b2.png">
|
1.0
|
Can't login to a dapp - ### Description
1. enable wallet
2. enable browser
3. open cryptokitties.io
4. login to the dapp
5. sign a message to the dapp
As a result, the screen below can't be bypassed
<img width="1468" alt="Screenshot 2022-06-06 at 15 17 46" src="https://user-images.githubusercontent.com/82375995/172159610-1a7b6888-248e-48ba-ae64-dc7a3260a1b2.png">
|
priority
|
can t login to a dapp description enable wallet enable browser open cryptokitties io login to the dapp sign a message to the dapp as result the screen below can t be bypassed img width alt screenshot at src
| 1
|
196,532
| 6,933,232,034
|
IssuesEvent
|
2017-12-02 03:51:08
|
philippgille/hello-netcoreapp
|
https://api.github.com/repos/philippgille/hello-netcoreapp
|
closed
|
Add proper Docker image for nanoserver
|
blocked improvement priority: high topic: docker
|
The Docker image for Windows nanoserver currently can't be built without the FDD already built outside of Docker. This should be improved so that building the Docker image works the same way for all operating systems and architectures.
Currently Docker images for nanoserver can only be built on Windows, and when using multi-stage builds all used containers must be Windows containers. This means the builder image must also be based on nanoserver, which again means we can't use `build.sh` but have to use `build.ps1`.
Now the same issue as in #3 arises: `Add-Type -Assembly "System.IO.Compression.FileSystem"` doesn't work. This can be fixed, but then other issues arise. But other than #3, this issue doesn't depend on the `net461` etc. FDDs being built. Instead, another option would be to change the build scripts in such a way that an argument like `fdd_linux-x64` can be passed, leading to only the given target being built. That will improve all Docker image build times and is not only for this issue, so it's an issue on its own: #45
Blocked by #45.
|
1.0
|
Add proper Docker image for nanoserver - The Docker image for Windows nanoserver currently can't be built without the FDD already built outside of Docker. This should be improved so that building the Docker image works the same way for all operating systems and architectures.
Currently Docker images for nanoserver can only be built on Windows, and when using multi-stage builds all used containers must be Windows containers. This means the builder image must also be based on nanoserver, which again means we can't use `build.sh` but have to use `build.ps1`.
Now the same issue as in #3 arises: `Add-Type -Assembly "System.IO.Compression.FileSystem"` doesn't work. This can be fixed, but then other issues arise. But other than #3, this issue doesn't depend on the `net461` etc. FDDs being built. Instead, another option would be to change the build scripts in such a way that an argument like `fdd_linux-x64` can be passed, leading to only the given target being built. That will improve all Docker image build times and is not only for this issue, so it's an issue on its own: #45
Blocked by #45.
|
priority
|
add proper docker image for nanoserver the docker image for windows nanoserver currently can t be built without the fdd already built outside of docker this should be improved so that building the docker image works the same way for all operating systems and architectures currently docker images for nanoserver can only be built on windows and when using multi stage builds all used containers must be windows containers this means the builder image must also be based on nanoserver which again means we can t use build sh but have to use build now the same issue as in arises add type assembly system io compression filesystem doesn t work this can be fixed but then other issues arise but other than this issue doesn t depend on the etc fdds being built instead another option would be to change the build scripts in such a way that an argument like fdd linux can be passed leading to only the given target being built that will improve all docker image build times and is not only for this issue so it s an issue on its own blocked by
| 1
|
18,781
| 2,616,005,388
|
IssuesEvent
|
2015-03-02 00:49:50
|
jasonhall/bwapi
|
https://api.github.com/repos/jasonhall/bwapi
|
closed
|
Rally Point
|
auto-migrated Maintainability Priority-High Type-Enhancement
|
```
BWAI needs to automatically rally its units.
```
Original issue reported on code.google.com by `AHeinerm` on 24 May 2009 at 1:23
|
1.0
|
Rally Point - ```
BWAI needs to automatically rally its units.
```
Original issue reported on code.google.com by `AHeinerm` on 24 May 2009 at 1:23
|
priority
|
rally point bwai needs to automatically rally its units original issue reported on code google com by aheinerm on may at
| 1
|
441,413
| 12,717,432,946
|
IssuesEvent
|
2020-06-24 05:09:21
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
trouble logging in
|
Error Messages Priority-High
|
From my student:
Message: FATAL: role "enielsen" is not permitted to log in
I've checked and her account is unlocked. She probably logged in once several months ago and used the Lost Password PG prompt...
|
1.0
|
trouble logging in - From my student:
Message: FATAL: role "enielsen" is not permitted to log in
I've checked and her account is unlocked. She probably logged in once several months ago and used the Lost Password PG prompt...
|
priority
|
troubling logging in from my student message fatal role enielsen is not permitted to log in i ve checked and her account is unlocked she probably logged in once several months ago and used the lost password pg prompt
| 1
|
254,136
| 8,070,285,834
|
IssuesEvent
|
2018-08-06 09:15:44
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
AskPassword expiry time not updated with new value by identity.xml configs
|
Component/Identity Mgt Priority/High Type/Bug
|
Description:
AskPassword expiry time not updated with new value by identity.xml configs. Is the given value "1440" hard-coded?
Label:
Type/Query
Priority/High
Environment:
IS-5.3.0
wum updated pack (wso2is-5.3.0.1513701178179.zip)
Steps to recreate:
- Follow the steps to configure askpassword[1]
- When updating identity.xml for expiry time, update with new value
` <AskPassword>
<ExpiryTime>600</ExpiryTime>
</AskPassword>
`
- Start the server
- Check mgt-console for Resident Identity Providers-> Account Management Policies -> User Onboarding tab and Check "Ask password code expiry time" field. It always updated "1440"
[1] [https://docs.wso2.com/display/IS530/Creating+Users+using+the+Ask+Password+Option]
|
1.0
|
AskPassword expiry time not updated with new value by identity.xml configs - Description:
AskPassword expiry time not updated with new value by identity.xml configs. Is the given value "1440" hard-coded?
Label:
Type/Query
Priority/High
Environment:
IS-5.3.0
wum updated pack (wso2is-5.3.0.1513701178179.zip)
Steps to recreate:
- Follow the steps to configure askpassword[1]
- When updating identity.xml for expiry time, update with new value
` <AskPassword>
<ExpiryTime>600</ExpiryTime>
</AskPassword>
`
- Start the server
- Check mgt-console for Resident Identity Providers-> Account Management Policies -> User Onboarding tab and Check "Ask password code expiry time" field. It always updated "1440"
[1] [https://docs.wso2.com/display/IS530/Creating+Users+using+the+Ask+Password+Option]
|
priority
|
askpassword expiry time not updated with new value by identity xml configs description askpassword expiry time not updated with new value by identity xml configs is given value hard coded label type query priority high environment is wum updated pack zip steps to recreate follow the steps to configure askpassword when updating identity xml for expiry time update with new value start the server check mgt console for resident identity providers account management policies user onboarding tab and check ask password code expiry time field it always updated
| 1
|
437,948
| 12,605,022,890
|
IssuesEvent
|
2020-06-11 15:50:05
|
cloudfoundry-incubator/kubecf
|
https://api.github.com/repos/cloudfoundry-incubator/kubecf
|
opened
|
Review Bazel replacement PoC
|
Priority: High
|
## Acceptance Criteria
- we've identified any missing functionality (if any)
- sanity check (same helm chart from new scripts as bazel)
- bazel targets are _not_ deleted yet
- PR #583 is merged
|
1.0
|
Review Bazel replacement PoC - ## Acceptance Criteria
- we've identified any missing functionality (if any)
- sanity check (same helm chart from new scripts as bazel)
- bazel targets are _not_ deleted yet
- PR #583 is merged
|
priority
|
review bazel replacement poc acceptance criteria we ve identified any missing functionality if any sanity check same helm chart from new scripts as bazel bazel targets are not deleted yet pr is merged
| 1
|
422,365
| 12,270,602,710
|
IssuesEvent
|
2020-05-07 15:45:00
|
qutebrowser/qutebrowser
|
https://api.github.com/repos/qutebrowser/qutebrowser
|
closed
|
Security: Reloading page with certificate errors falsely shows a green URL (CVE-2020-11054)
|
priority: 0 - high
|
While working on 46b4d26a9ca1aba24a00ccd004606c8e3a6b17d4 I noticed that only the first load of pages with certificate errors gets a correctly colored URL.
When loading a page with the default `content.ssl_strict = ask` setting, there's a prompt to confirm the certificate issue:

When answering that with "yes", the URL is then colored yellow (`colors.statusbar.url.warn.fg`) rather than green (`colors.statusbar.url.success_https.fg`):

However, when reloading the page (or loading it again in another tab), the URL is green:

This is because QtWebEngine remembers the answer internally and we don't get a `certificateErrors` signal anymore - unfortunately there's also no API to check the certificate state of the current page...
I'm handling this as a low-severity security vulnerability and will request a CVE. There's no way for bad actors to exploit this and the user already did override the certificate error (so should be aware that the connection is not to be trusted), but it still lures users into a false sense of security.
A fix, release and security announcement is in progress.
|
1.0
|
Security: Reloading page with certificate errors falsely shows a green URL (CVE-2020-11054) - While working on 46b4d26a9ca1aba24a00ccd004606c8e3a6b17d4 I noticed that only the first load of pages with certificate errors gets a correctly colored URL.
When loading a page with the default `content.ssl_strict = ask` setting, there's a prompt to confirm the certificate issue:

When answering that with "yes", the URL is then colored yellow (`colors.statusbar.url.warn.fg`) rather than green (`colors.statusbar.url.success_https.fg`):

However, when reloading the page (or loading it again in another tab), the URL is green:

This is because QtWebEngine remembers the answer internally and we don't get a `certificateErrors` signal anymore - unfortunately there's also no API to check the certificate state of the current page...
I'm handling this as a low-severity security vulnerability and will request a CVE. There's no way for bad actors to exploit this and the user already did override the certificate error (so should be aware that the connection is not to be trusted), but it still lures users into a false sense of security.
A fix, release and security announcement is in progress.
|
priority
|
security reloading page with certificate errors falsely shows a green url cve while working on i noticed that only the first load of pages with certificate errors gets a correctly colored url when loading a page with the default content ssl strict ask setting there s a prompt to confirm the certificate issue when answering that with yes the url is then colored yellow colors statusbar url warn fg rather than green colors statusbar url success https fg however when reloading the page or loading it again in another tab the url is green this is because qtwebengine remembers the answer internally and we don t get a certificateerrors signal anymore unfortunately there s also no api to check the certificate state of the current page i m handling this as a low severity security vulnerability and will request a cve there s no way for bad actors to exploit this and the user already did override the certificate error so should be aware that the connection is not to be trusted but it still lures users into a false sense of security a fix release and security announcement is in progress
| 1
|
392,358
| 11,590,446,548
|
IssuesEvent
|
2020-02-24 06:45:53
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
opened
|
When listing users SCIM2 username is tenant domain appended
|
Affected/5.10.0-Beta2 Priority/Highest Severity/Critical Type/Bug
|
When listing users using SCIM2, the tenant domain is also appended to the username. This is a behavioural change from previous IS versions.
|
1.0
|
When listing users SCIM2 username is tenant domain appended - When listing users using SCIM2, the tenant domain is also appended to the username. This is a behavioural change from previous IS versions.
|
priority
|
when listing users username is tenant domain appended when listing users using the tenant domain is also appended in the username the is a behavioural change from the previous is versions
| 1
|
180,225
| 6,647,429,831
|
IssuesEvent
|
2017-09-28 03:56:35
|
RoboJackets/apiary
|
https://api.github.com/repos/RoboJackets/apiary
|
closed
|
DuesTransactionController Default User
|
area / API area / backend priority / high type / enhancement
|
Dues Transaction Controller needs the following modifications:
If no user is specified, it should default to the currently logged in User. Only an admin should be able to submit a dues Transaction for another user.
|
1.0
|
DuesTransactionController Default User - Dues Transaction Controller needs the following modifications:
If no user is specified, it should default to the currently logged in User. Only an admin should be able to submit a dues Transaction for another user.
|
priority
|
duestransactioncontroller default user dues transaction controller needs the following modifications if no user is specified it should default to the currently logged in user only an admin should be able to submit a dues transaction for another user
| 1
|
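The DuesTransactionController record above states a simple access rule: default to the logged-in user, and only allow admins to act on behalf of someone else. A minimal sketch of that rule (names and data shapes are illustrative, not the actual apiary code):

```python
# Hypothetical sketch of the rule described in the issue, not the real
# apiary controller: a dues transaction defaults to the authenticated
# user, and only an admin may submit one for another user.
class Forbidden(Exception):
    """Raised when a non-admin targets another user."""

def resolve_transaction_user(current_user, requested_user_id=None):
    # No explicit target: fall back to the authenticated user.
    if requested_user_id is None:
        return current_user["id"]
    # Users may always act on themselves.
    if requested_user_id == current_user["id"]:
        return requested_user_id
    # Acting for someone else requires the admin flag.
    if current_user.get("is_admin"):
        return requested_user_id
    raise Forbidden("only admins may submit a dues transaction for another user")

member = {"id": 7, "is_admin": False}
admin = {"id": 1, "is_admin": True}
print(resolve_transaction_user(member))    # -> 7 (defaults to self)
print(resolve_transaction_user(admin, 7))  # -> 7 (admin override allowed)
```

Centralising the check in one resolver keeps the "default to self" and "admin only" branches from drifting apart across endpoints.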
43,766
| 2,892,507,375
|
IssuesEvent
|
2015-06-15 13:27:17
|
JMurk/Utility_Viewer_Issues
|
https://api.github.com/repos/JMurk/Utility_Viewer_Issues
|
closed
|
Utility Viewer - Printing - Stuck on Loading, will not produce print
|
bug high priority
|
**Issue:** in production, the export to PDF function gets stuck on the export loading and fails to ever produce a report.

**Resolution:** Please investigate and resolve accordingly
|
1.0
|
Utility Viewer - Printing - Stuck on Loading, will not produce print - **Issue:** in production, the export to PDF function gets stuck on the export loading and fails to ever produce a report.

**Resolution:** Please investigate and resolve accordingly
|
priority
|
utility viewer printing stuck on loading will not produce print issue in production the export to pdf function gets stuck on the export loading and fails to ever produce a report resolution please investigate and resolve accordingly
| 1
|
608,787
| 18,848,688,292
|
IssuesEvent
|
2021-11-11 17:50:17
|
rust-windowing/winit
|
https://api.github.com/repos/rust-windowing/winit
|
opened
|
Windows: Parent of owned child window not visible
|
type: bug platform: Windows priority: high
|
Declaring one windows as owner will make this window invisible which is unintentional and a regression from 0.25.
Seems to be caused by https://github.com/rust-windowing/winit/pull/1933
Minimum example:
```rust
// Imports added so the reporter's minimum example compiles as-is
// (winit 0.26-era API with the Windows platform extensions).
use winit::event_loop::{ControlFlow, EventLoop};
use winit::platform::windows::{WindowBuilderExtWindows, WindowExtWindows};
use winit::window::WindowBuilder;

fn main() -> anyhow::Result<()> {
    let event_loop = EventLoop::new();
    let owner = WindowBuilder::new()
        .with_title("owner")
        .build(&event_loop)
        .unwrap();
    let child = WindowBuilder::new()
        .with_title("child")
        .with_owner_window(owner.hwnd() as _)
        .build(&event_loop)
        .unwrap();
    event_loop.run(move |event, _, control_flow| {
        *control_flow = ControlFlow::Wait;
    })
}
```
|
1.0
|
Windows: Parent of owned child window not visible - Declaring one windows as owner will make this window invisible which is unintentional and a regression from 0.25.
Seems to be caused by https://github.com/rust-windowing/winit/pull/1933
Minimum example:
```rust
// Imports added so the reporter's minimum example compiles as-is
// (winit 0.26-era API with the Windows platform extensions).
use winit::event_loop::{ControlFlow, EventLoop};
use winit::platform::windows::{WindowBuilderExtWindows, WindowExtWindows};
use winit::window::WindowBuilder;

fn main() -> anyhow::Result<()> {
    let event_loop = EventLoop::new();
    let owner = WindowBuilder::new()
        .with_title("owner")
        .build(&event_loop)
        .unwrap();
    let child = WindowBuilder::new()
        .with_title("child")
        .with_owner_window(owner.hwnd() as _)
        .build(&event_loop)
        .unwrap();
    event_loop.run(move |event, _, control_flow| {
        *control_flow = ControlFlow::Wait;
    })
}
```
|
priority
|
windows parent of owned child window not visible declaring one windows as owner will make this window invisible which is unintentional and a regression from seems to be caused by minimum example rust fn main anyhow result let event loop eventloop new let owner windowbuilder new with title owner build event loop unwrap let child windowbuilder new with title child with owner window owner hwnd as build event loop unwrap event loop run move event control flow control flow controlflow wait
| 1
|
827,354
| 31,767,103,694
|
IssuesEvent
|
2023-09-12 09:26:09
|
sparcs-kaist/biseo
|
https://api.github.com/repos/sparcs-kaist/biseo
|
opened
|
Ended vote shown as a duplicate
|
bug high priority FE Agenda
|
# Issue description \*
from #307
It seems that ended votes are shown in duplicate.
## Screenshot

# Related Task \*
- [ ] Task1
- [ ] Task2
|
1.0
|
Ended vote shown as a duplicate - # Issue description \*
from #307
It seems that ended votes are shown in duplicate.
## Screenshot

# Related Task \*
- [ ] Task1
- [ ] Task2
|
priority
|
ended vote shown as a duplicate issue description from it seems that ended votes are shown in duplicate screenshot related task
| 1
|
283,451
| 8,719,712,693
|
IssuesEvent
|
2018-12-08 03:31:38
|
aowen87/BAR
|
https://api.github.com/repos/aowen87/BAR
|
closed
|
Clipping a surface from a VTK file generated by VisIt gives an empty plot.
|
bug crash likelihood medium priority reviewed severity high wrong results
|
Mike Puso reported that if he reads in a vtk file containing a surface into VisIt and performs a clip operation he gets an error message and an empty plot. He gave me his data on the closed and it contains polydata. I looked into the source of the error message and basically there is a conversion being done when the topological dimension is less than the spatial dimension and it skips cells that are greater than the topological dimension. It turns out the spatial dimension is 3 and the topological dimension is 0, causing all the cells to be skipped.
Looking at the debug logs I see that it determines the spatial and topological dimension based on the first file of a collection of VTK files grouped using a ".visit" file. The first file doesn't have any cells and that is why the topological dimension is 0.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2827
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: Clipping a surface from a VTK file generated by VisIt gives an empty plot.
Assigned to: Eric Brugger
Category:
Target version: 2.12.3
Author: Eric Brugger
Start: 05/23/2017
Due date:
% Done: 100
Estimated time: 12.0
Created: 05/23/2017 05:40 pm
Updated: 06/20/2017 05:43 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.12.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Mike Puso reported that if he reads in a vtk file containing a surface into VisIt and performs a clip operation he gets an error message and an empty plot. He gave me his data on the closed and it contains polydata. I looked into the source of the error message and basically there is a conversion being done when the topological dimension is less than the spatial dimension and it skips cells that are greater than the topological dimension. It turns out the spatial dimension is 3 and the topological dimension is 0, causing all the cells to be skipped.
Looking at the debug logs I see that it determines the spatial and topological dimension based on the first file of a collection of VTK files grouped using a ".visit" file. The first file doesn't have any cells and that is why the topological dimension is 0.
Comments:
I committed revisions 31102 and 31104 to the 2.12 RC and trunk with the following change:
1) Fixed a bug where data from a collection of VTK files grouped together with a visit file could not be clipped when the files contained PolyData and the first file had no cells or points. The problem was that the metadata would be gotten from the first file in the visit file and since that file had no cells and points it would give it a topological dimension of zero. I modified the reading of VTK files so that it would get the metadata from the first non-empty VTK file. This resolves #2827.
M avt/Database/Formats/avtSTSDFileFormat.h
M avt/Database/Formats/avtSTSDFileFormatInterface.C
M databases/VTK/avtVTKFileFormat.C
M databases/VTK/avtVTKFileFormat.h
M databases/VTK/avtVTKFileReader.C
M databases/VTK/avtVTKFileReader.h
M resources/help/en_US/relnotes2.12.3.html
|
1.0
|
Clipping a surface from a VTK file generated by VisIt gives an empty plot. - Mike Puso reported that if he reads in a vtk file containing a surface into VisIt and performs a clip operation he gets an error message and an empty plot. He gave me his data on the closed and it contains polydata. I looked into the source of the error message and basically there is a conversion being done when the topological dimension is less than the spatial dimension and it skips cells that are greater than the topological dimension. It turns out the spatial dimension is 3 and the topological dimension is 0, causing all the cells to be skipped.
Looking at the debug logs I see that it determines the spatial and topological dimension based on the first file of a collection of VTK files grouped using a ".visit" file. The first file doesn't have any cells and that is why the topological dimension is 0.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2827
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: Clipping a surface from a VTK file generated by VisIt gives an empty plot.
Assigned to: Eric Brugger
Category:
Target version: 2.12.3
Author: Eric Brugger
Start: 05/23/2017
Due date:
% Done: 100
Estimated time: 12.0
Created: 05/23/2017 05:40 pm
Updated: 06/20/2017 05:43 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.12.1
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Mike Puso reported that if he reads in a vtk file containing a surface into VisIt and performs a clip operation he gets an error message and an empty plot. He gave me his data on the closed and it contains polydata. I looked into the source of the error message and basically there is a conversion being done when the topological dimension is less than the spatial dimension and it skips cells that are greater than the topological dimension. It turns out the spatial dimension is 3 and the topological dimension is 0, causing all the cells to be skipped.
Looking at the debug logs I see that it determines the spatial and topological dimension based on the first file of a collection of VTK files grouped using a ".visit" file. The first file doesn't have any cells and that is why the topological dimension is 0.
Comments:
I committed revisions 31102 and 31104 to the 2.12 RC and trunk with the following change:
1) Fixed a bug where data from a collection of VTK files grouped together with a visit file could not be clipped when the files contained PolyData and the first file had no cells or points. The problem was that the metadata would be gotten from the first file in the visit file and since that file had no cells and points it would give it a topological dimension of zero. I modified the reading of VTK files so that it would get the metadata from the first non-empty VTK file. This resolves #2827.
M avt/Database/Formats/avtSTSDFileFormat.h
M avt/Database/Formats/avtSTSDFileFormatInterface.C
M databases/VTK/avtVTKFileFormat.C
M databases/VTK/avtVTKFileFormat.h
M databases/VTK/avtVTKFileReader.C
M databases/VTK/avtVTKFileReader.h
M resources/help/en_US/relnotes2.12.3.html
|
priority
|
clipping a surface from a vtk file generated by visit gives an empty plot mike puso reported that if he reads in a vtk file containing a surface into visit and performs a clip operation he gets an error message and an empty plot he gave me his data on the closed and it contains polydata i looked into the source of the error message and basically there is a conversion being done when the topological dimension is less than the spatial dimension and it skips cells that are greater than the topological dimension it turns out the spatial dimension is and the topological dimension is causing all the cells to be skipped looking at the debug logs i see that it determines the spatial and topological dimension based on the first file of a collection of vtk files grouped using a visit file the first file doesn t have any cells and that is why the topological dimension is redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject clipping a surface from a vtk file generated by visit gives an empty plot assigned to eric brugger category target version author eric brugger start due date done estimated time created pm updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description mike puso reported that if he reads in a vtk file containing a surface into visit and performs a clip operation he gets an error message and an empty plot he gave me his data on the closed and it contains polydata i looked into the source of the error message and basically there is a conversion being done when the topological dimension is less than the spatial dimension and it skips cells that are greater than the topological dimension it turns out the spatial dimension is and the topological dimension is causing all the cells to be 
skipped looking at the debug logs i see that it determines the spatial and topological dimension based on the first file of a collection of vtk files grouped using a visit file the first file doesn t have any cells and that is why the topological dimension is comments i committed revisions and to the rc and trunk withthe following change fixed a bug where data from a collection of vtk files grouped together with a visit file could not be clipped when the files contained polydata and the first file had no cells or points the problem was that the metadata would be gotten from the first file in the visit file and since that file had no cells and points it would give it a topological dimension of zero i modified the reading of vtk files so that it would get the metadata from the first non empty vtk file this resolves m avt database formats avtstsdfileformat hm avt database formats avtstsdfileformatinterface cm databases vtk avtvtkfileformat cm databases vtk avtvtkfileformat hm databases vtk avtvtkfilereader cm databases vtk avtvtkfilereader hm resources help en us html
| 1
|
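The fix in the VisIt record above boils down to one idea: take the grouping's metadata from the first member file that actually contains cells, rather than unconditionally from file #0. A minimal sketch of that selection logic (illustrative Python, not the actual VisIt C++ reader):

```python
# Illustrative sketch, NOT the VisIt C++ code: derive the topological
# dimension for a ".visit" file grouping from the first member file
# that actually contains cells, instead of trusting file #0.
def topological_dimension(vtk_files):
    """vtk_files: list of dicts with 'cells' (cell count) and 'topo_dim'."""
    for f in vtk_files:
        if f["cells"] > 0:
            return f["topo_dim"]
    return 0  # every file is empty; nothing better to report

grouping = [
    {"cells": 0, "topo_dim": 0},    # empty first file that broke the clip
    {"cells": 128, "topo_dim": 2},  # the surface polydata
]
print(topological_dimension(grouping))  # -> 2
```

With the old behaviour the empty first file forced a topological dimension of 0, so the dimension-conversion step skipped every cell and the clip produced an empty plot.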
448,074
| 12,942,611,542
|
IssuesEvent
|
2020-07-18 02:56:17
|
ecency/esteem-surfer
|
https://api.github.com/repos/ecency/esteem-surfer
|
opened
|
App won't start
|
high priority
|
Likely due to cloudflare outage or github connection issues which should be handled properly

|
1.0
|
App won't start - Likely due to cloudflare outage or github connection issues which should be handled properly

|
priority
|
app won t start likely due to cloudflare outage or github connection issues which should be handled properly
| 1
|
245,134
| 7,881,624,370
|
IssuesEvent
|
2018-06-26 19:42:33
|
GratefulGarmentProject/StockAid
|
https://api.github.com/repos/GratefulGarmentProject/StockAid
|
closed
|
Add Google Analytics capabilities
|
Auditing Good First Issue Priority - High Quickfix review
|
Add a way to insert a GA API token that will enable Google Analytics and insert the correct javascript to the application layout to properly set it up.
|
1.0
|
Add Google Analytics capabilities - Add a way to insert a GA API token that will enable Google Analytics and insert the correct javascript to the application layout to properly set it up.
|
priority
|
add google analytics capabilities add a way to insert a ga api token that will enable google analytics and insert the correct javascript to the application layout to properly set it up
| 1
|
74,188
| 3,435,960,703
|
IssuesEvent
|
2015-12-12 01:39:07
|
Ecotrust/COMPASS
|
https://api.github.com/repos/Ecotrust/COMPASS
|
closed
|
Show disclaimer [ 6 hours ]
|
High Priority
|
[Disclaimer Flatblock already created on prod]
ODFW crucial habitat layers updated:
February 26, 2014.
ODFW Compass provides coarse-scale, non-regulatory fish and wildlife information, and the crucial habitat layers emphasize areas documented as containing important natural resources. This site is intended to support early planning for large-scale land-use, development, or conservation projects.
By clicking "Agree", you are acknowledging the following statements:
- Data and analyses presented within Compass are based on best available information, and are expected to be updated regularly. Crucial habitat layers reflect documented resources at the time of data aggregation; and as such absence of crucial habitat prioritization does not necessarily indicate that no crucial species or habitats are present (or have been present in that location at one time.)
- Most layers within Compass do not provide detailed information on site-specific locations or streams, and using this site does not replace or supersede site-specific consultation with appropriate agencies, including the Oregon Department of Fish and Wildlife.
- Some data layers may be summarized to preserve the confidentiality of sensitive information.
- Documentation for layers and methodologies should be used to better understand the results and methodologies presented.
I have read this disclaimer and understand the intent of this system, and therefore hold ODFW harmless from any liability arising from or related to using the ODFW Compass system.
|
1.0
|
Show disclaimer [ 6 hours ] - [Disclaimer Flatblock already created on prod]
ODFW crucial habitat layers updated:
February 26, 2014.
ODFW Compass provides coarse-scale, non-regulatory fish and wildlife information, and the crucial habitat layers emphasize areas documented as containing important natural resources. This site is intended to support early planning for large-scale land-use, development, or conservation projects.
By clicking "Agree", you are acknowledging the following statements:
- Data and analyses presented within Compass are based on best available information, and are expected to be updated regularly. Crucial habitat layers reflect documented resources at the time of data aggregation; and as such absence of crucial habitat prioritization does not necessarily indicate that no crucial species or habitats are present (or have been present in that location at one time.)
- Most layers within Compass do not provide detailed information on site-specific locations or streams, and using this site does not replace or supersede site-specific consultation with appropriate agencies, including the Oregon Department of Fish and Wildlife.
- Some data layers may be summarized to preserve the confidentiality of sensitive information.
- Documentation for layers and methodologies should be used to better understand the results and methodologies presented.
I have read this disclaimer and understand the intent of this system, and therefore hold ODFW harmless from any liability arising from or related to using the ODFW Compass system.
|
priority
|
show disclaimer odfw crucial habitat layers updated february odfw compass provides coarse scale non regulatory fish and wildlife information and the crucial habitat layers emphasize areas documented as containing important natural resources this site is intended to support early planning for large scale land use development or conservation projects by clicking agree you are acknowledging the following statements data and analyses presented within compass are based on best available information and are expected to be updated regularly crucial habitat layers reflect documented resources at the time of data aggregation and as such absence of crucial habitat prioritization does not necessarily indicate that no crucial species or habitats are present or have been present in that location at one time most layers within compass do not provide detailed information on site specific locations or streams and using this site does not replace or supersede site specific consolation with appropriate agencies including the oregon department of fish and wildlife some data layers may be summarized to preserve the confidentiality of sensitive information documentation for layers and methodologies should be used to better understand the results and methodologies presented i have read this disclaimer and understand the intent of this system and therefore hold odfw harmless from any liability arising from or related to using the odfw compass system
| 1
|
352,508
| 10,543,004,198
|
IssuesEvent
|
2019-10-02 14:14:13
|
fac-17/My-Body-Back
|
https://api.github.com/repos/fac-17/My-Body-Back
|
opened
|
Create File Structure
|
Feature High Priority
|
- [ ] Clone this repo
- [ ] Create React App
- [ ] Create folders & gitkeep files for initial push
Should be done after researching React Router
|
1.0
|
Create File Structure - - [ ] Clone this repo
- [ ] Create React App
- [ ] Create folders & gitkeep files for initial push
Should be done after researching React Router
|
priority
|
create file structure clone this repo create react app create folders gitkeep files for initial push should be done after researching react router
| 1
|
547,785
| 16,047,299,078
|
IssuesEvent
|
2021-04-22 14:56:56
|
Proof-Of-Humanity/proof-of-humanity-web
|
https://api.github.com/repos/Proof-Of-Humanity/proof-of-humanity-web
|
closed
|
Wrong Eth value sent to contract when challenging
|
priority: high status: available type: bug :bug:
|
When a user challenges a submission, the UI asks for 0.114 ETH but the smart contract automatically returns half to the challenger.
I think this issue is related to #142
|
1.0
|
Wrong Eth value sent to contract when challenging - When a user challenges a submission, the UI asks for 0.114 ETH but the smart contract automatically returns half to the challenger.
I think this issue is related to #142
|
priority
|
wrong eth value sent to contract when challenging when a user challenge a submission the ui is asking for eth but the sc is returning the half to the challenger automatically i think this issue is related to
| 1
|
355,406
| 10,580,171,332
|
IssuesEvent
|
2019-10-08 05:49:43
|
wso2/ballerina-integrator
|
https://api.github.com/repos/wso2/ballerina-integrator
|
closed
|
Migrate tutorials to work as module templates
|
Priority/Highest Severity/Major Type/Task
|
**Description:**
Migrate the existing tutorials into module template form so that they can be pushed to Ballerina Central and be used as templates as well.
|
1.0
|
Migrate tutorials to work as module templates - **Description:**
Migrate the existing tutorials into module template form so that they can be pushed to Ballerina Central and be used as templates as well.
|
priority
|
migrate tutorials to work as module templates description migrate the existing tutorials into module template form so that they can be pushed to ballerina central and be used as templates as well
| 1
|
170,223
| 6,426,571,347
|
IssuesEvent
|
2017-08-09 17:45:01
|
phetsims/QA
|
https://api.github.com/repos/phetsims/QA
|
opened
|
Dev test: Pendulum Lab 1.0.0-dev.14
|
priority:2-high
|
@ariel-phet and @arouinfar, Pendulum Lab 1.0.0-dev.14 is ready for general dev testing.
URL: http://www.colorado.edu/physics/phet/dev/html/pendulum-lab/1.0.0-dev.14/pendulum-lab_en.html
There have been a lot of model/view changes (particularly with layout), so a full test would be great.
Also let me know if https://github.com/phetsims/pendulum-lab/issues/102 is still an issue.
Please test on the following:
- [ ] iPad 2
- [ ] iPad Air
- [ ] Chrome (Windows)
- [ ] Safari (MacOS)
> Steele, this would be high priority, after CCK.
|
1.0
|
Dev test: Pendulum Lab 1.0.0-dev.14 - @ariel-phet and @arouinfar, Pendulum Lab 1.0.0-dev.14 is ready for general dev testing.
URL: http://www.colorado.edu/physics/phet/dev/html/pendulum-lab/1.0.0-dev.14/pendulum-lab_en.html
There have been a lot of model/view changes (particularly with layout), so a full test would be great.
Also let me know if https://github.com/phetsims/pendulum-lab/issues/102 is still an issue.
Please test on the following:
- [ ] iPad 2
- [ ] iPad Air
- [ ] Chrome (Windows)
- [ ] Safari (MacOS)
> Steele, this would be high priority, after CCK.
|
priority
|
dev test pendulum lab dev ariel phet and arouinfar pendulum lab dev is ready for general dev testing url there have been a lot of model view changes particularly with layout so a full test would be great also let me know if is still an issue please test on the following ipad ipad air chrome windows safari macos steele this would be high priority after cck
| 1
|
569,320
| 17,011,880,259
|
IssuesEvent
|
2021-07-02 06:26:11
|
TeamDooRiBon/DooRi-iOS
|
https://api.github.com/repos/TeamDooRiBon/DooRi-iOS
|
opened
|
[FEAT] Tab and controller
|
Feat P1 / Priority High Taehyeon
|
# Issue
Create the tab and controller. Please connect the created views later.
<img width="493" alt="Screenshot 2021-07-02 3.17.30 PM" src="https://user-images.githubusercontent.com/61109660/124230155-9d1dbf00-db49-11eb-87bb-1f538d282242.png">
# to-do
<!-- Describe the work to be done -->
- [ ] Create the tab and controller
- [ ] Connect the storyboard reference
|
1.0
|
[FEAT] Tab and controller - # Issue
Create the tab and controller. Please connect the created views later.
<img width="493" alt="Screenshot 2021-07-02 3.17.30 PM" src="https://user-images.githubusercontent.com/61109660/124230155-9d1dbf00-db49-11eb-87bb-1f538d282242.png">
# to-do
<!-- Describe the work to be done -->
- [ ] Create the tab and controller
- [ ] Connect the storyboard reference
|
priority
|
tab and controller issue create the tab and controller please connect the created views later img width alt screenshot src to do create the tab and controller connect the storyboard reference
| 1
|