| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7–112) | repo_url (string, length 36–141) | action (string, 3 classes) | title (string, length 1–744) | labels (string, length 4–574) | body (string, length 9–211k) | index (string, 10 classes) | text_combine (string, length 96–211k) | label (string, 2 classes) | text (string, length 96–188k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
823,948
| 31,074,738,891
|
IssuesEvent
|
2023-08-12 10:39:43
|
ArkScript-lang/Ark
|
https://api.github.com/repos/ArkScript-lang/Ark
|
opened
|
Consider a smaller stack size
|
enhancement virtual machine compiler optimization priority/medium
|
## The problem
Creating a VM requires creating an `ExecutionContext`, which holds a few important values, such as the VM's stack, defined as a `std::array<Value, VmStackSize>`. As of today, `sizeof(Value)` returns 40 B and the stack size is set to 8192; creating a VM thus requires about 327 kB.
## Some tidbits about the compiler
The compiler can handle stack cleaning (i.e. a `(while true 1)` luckily won't trash the stack) and also correctly handles self tail calls (something is in the works to handle generic tail calls).
Generic tail calls will require a new instruction, akin to `JUMP` but for pages, so that we can jump between functions. Development on this should start shortly after having integrated the new parser.
## Questions and experiments
- do we still need a stack THIS big?
- how does it currently affect script startup?
- similarly, how does this affect the use of `(async foo arg...)`?
Data will be needed on the optimal stack size and the average stack size for common scripts.
## Final words
Even though this would change a critical part of the virtual machine, this cannot be considered a breaking change, as the stack is never exposed to the end user (nor to plugin authors).
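The arithmetic above can be checked with a quick sketch. This is a minimal illustration; the constants below are the figures quoted in this issue, not values read from the ArkScript headers:

```python
# Rough memory cost of one VM ExecutionContext stack, using the
# figures quoted in the issue (40 B per Value, 8192 slots).
# These constants are assumptions, not read from the source tree.
VALUE_SIZE_BYTES = 40
VM_STACK_SIZE = 8192

stack_bytes = VALUE_SIZE_BYTES * VM_STACK_SIZE
print(stack_bytes)          # 327680 bytes
print(stack_bytes / 1000)   # 327.68, i.e. the ~327 kB mentioned above
```

Halving `VM_STACK_SIZE` would halve this figure, which is why the questions below matter.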
|
1.0
|
Consider a smaller stack size - ## The problem
Creating a VM requires creating an `ExecutionContext`, which holds a few important values, such as the VM's stack, defined as a `std::array<Value, VmStackSize>`. As of today, `sizeof(Value)` returns 40 B and the stack size is set to 8192; creating a VM thus requires about 327 kB.
## Some tidbits about the compiler
The compiler can handle stack cleaning (i.e. a `(while true 1)` luckily won't trash the stack) and also correctly handles self tail calls (something is in the works to handle generic tail calls).
Generic tail calls will require a new instruction, akin to `JUMP` but for pages, so that we can jump between functions. Development on this should start shortly after having integrated the new parser.
## Questions and experiments
- do we still need a stack THIS big?
- how does it currently affect script startup?
- similarly, how does this affect the use of `(async foo arg...)`?
Data will be needed on the optimal stack size and the average stack size for common scripts.
## Final words
Even though this would change a critical part of the virtual machine, this cannot be considered a breaking change, as the stack is never exposed to the end user (nor to plugin authors).
|
non_process
|
consider a smaller stack size the problem creating a vm requires creating an executioncontext that holds a few important values such as the stack for the vm defined as a std array as of today sizeof value returns and the stack size is set to creating a vm thus requires some tidbits about the compiler the compiler can handle stack cleaning ie a while true won t trash the stack luckily and also handles correctly self tail calls something is in the works to handle generic tail calls generic tail calls will require a new instruction akin to jump but for pages so that we can jump between functions development on this should start shortly after having integrated the new parser questions and experiments do we still need a stack this big how does it currently affects script startup similarly how does this affects the use of async foo arg data will be needed on the optimal stack size and the average stack size for common scripts final words even though this would change a critical part of the virtual machine this can not be considered a breaking change as the stack is never exposed to the end user nor to plugin authors
| 0
|
127,000
| 12,303,858,978
|
IssuesEvent
|
2020-05-11 19:27:05
|
7h3Rabbit/StaticWebEpiserverPlugin
|
https://api.github.com/repos/7h3Rabbit/StaticWebEpiserverPlugin
|
closed
|
Showcasing usage of Events
|
documentation enhancement
|
We need more examples of our functionality, which is why we need to create a project showcasing how our events can be used.
For this we want to create the following:
- For every <link> element that includes a stylesheet on a page, inline it in the markup of every page.
- Don't include CSS rules that we know would not apply to the current page (for example, removing ".quotePuff" or ".teaserblock" when those blocks are not present on the page)
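The first step described above could be sketched roughly as follows. This is a hypothetical illustration, not part of the plugin: the `css_files` lookup map and the regex are assumptions for the sake of the example.

```python
import re

def inline_stylesheets(html: str, css_files: dict) -> str:
    """Replace each <link rel="stylesheet" href="..."> tag with an
    inline <style> block, looking the CSS up in a (hypothetical)
    href -> stylesheet-text map."""
    def repl(match):
        href = match.group(1)
        css = css_files.get(href, "")
        return "<style>" + css + "</style>"
    pattern = r'<link[^>]*rel="stylesheet"[^>]*href="([^"]+)"[^>]*/?>'
    return re.sub(pattern, repl, html)

page = '<head><link rel="stylesheet" href="site.css"/></head>'
print(inline_stylesheets(page, {"site.css": ".quotePuff{color:red}"}))
# <head><style>.quotePuff{color:red}</style></head>
```

The second step (dropping rules whose selectors never match the page) would then operate on the inlined `<style>` text.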
|
1.0
|
Showcasing usage of Events - We need more examples of our functionality, which is why we need to create a project showcasing how our events can be used.
For this we want to create the following:
- For every <link> element that includes a stylesheet on a page, inline it in the markup of every page.
- Don't include CSS rules that we know would not apply to the current page (for example, removing ".quotePuff" or ".teaserblock" when those blocks are not present on the page)
|
non_process
|
showcasing usage of events we need more examples on our functionality this is why we need to create a project showcasing how our events can be used for this we want to create the following for every element including stylesheet by pages make them inline in markup for every page don t include css rules that we know would not apply for the current page for example removing quotepuff or teaserblock when blocks are not present on page
| 0
|
20,154
| 26,702,976,265
|
IssuesEvent
|
2023-01-27 15:47:30
|
bazelbuild/bazel-skylib
|
https://api.github.com/repos/bazelbuild/bazel-skylib
|
closed
|
Add gazelle plugin to BCR
|
P2 type: process
|
With #400 the gazelle plugin has now been added to the distribution, and it should be possible to ship it to BCR.
|
1.0
|
Add gazelle plugin to BCR - With #400 the gazelle plugin has now been added to the distribution, and it should be possible to ship it to BCR.
|
process
|
add gazelle plugin to bcr with the gazelle plugin has now been added to the distribution and it should be possible to ship it to bcr
| 1
|
4,140
| 7,094,787,141
|
IssuesEvent
|
2018-01-13 08:34:41
|
triplea-game/triplea
|
https://api.github.com/repos/triplea-game/triplea
|
closed
|
Server Ops - documentation and DR checklists - let's build an index of ops docs!
|
category: admin task category: dev & admin process
|
An opportunity to review ops. For everything we run: lobby, bots, forum, github.io website, dice server, warclub website and forum; do we have:
### documentation:
- how to install
- how to check if running
- how to restart
### Disaster Recovery
- are we taking data backups?
- are data backups replicated to a second physical machine?
I think we can consider Derby dead for this discussion. For other legacy components like the warclub MySQL DB, we should either be able to recover it or continue with our plans to deprecate it post-haste. I'm most interested to know whether we are backing up the forum data. We do not currently have anything for Postgres; this is a good reminder.
Beyond the conversation that results, which I think will be useful to get us on the same page, I hope we can create a master link index out of this, so all these sources of information are readily available and organized.
|
1.0
|
Server Ops - documentation and DR checklists - let's build an index of ops docs! - An opportunity to review ops. For everything we run: lobby, bots, forum, github.io website, dice server, warclub website and forum; do we have:
### documentation:
- how to install
- how to check if running
- how to restart
### Disaster Recovery
- are we taking data backups?
- are data backups replicated to a second physical machine?
I think we can consider Derby dead for this discussion. For other legacy components like the warclub MySQL DB, we should either be able to recover it or continue with our plans to deprecate it post-haste. I'm most interested to know whether we are backing up the forum data. We do not currently have anything for Postgres; this is a good reminder.
Beyond the conversation that results, which I think will be useful to get us on the same page, I hope we can create a master link index out of this, so all these sources of information are readily available and organized.
|
process
|
server ops documentation and dr checklists let s build an index of ops docs an opportunity to review ops for everything we run lobby bots forum github io website dice server warclub website and forum do we have documentation how to install how to check if running how to restart disaster recovery are we taking data backups are data backups replicated to a second physical machine i think we can consider derby dead for this discussion for other legacy components like warclub mysql db either we should be able to recover that or continue with our plans to deprecate it post haste i m a bit most interested to know if are backing up the forum data we do not currently have something for postgres this is a good reminder beyond the conversation that results which i think will be useful to get us on the same page i hope we can create a master link index out of this maybe so we can have all these sources of information readily available and organized
| 1
|
10,073
| 13,044,161,929
|
IssuesEvent
|
2020-07-29 03:47:27
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `TimestampAdd` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `TimestampAdd` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
2.0
|
UCP: Migrate scalar function `TimestampAdd` from TiDB -
## Description
Port the scalar function `TimestampAdd` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @mapleFU
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr
|
process
|
ucp migrate scalar function timestampadd from tidb description port the scalar function timestampadd from tidb to coprocessor score mentor s maplefu recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
8,657
| 11,796,880,601
|
IssuesEvent
|
2020-03-18 11:38:32
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Custom Mapping causes "No expression named ___" error when used in table view with joined fields
|
Administration/Data Model Priority:P3 Querying/Processor Type:Bug
|
Your databases: default custom dataset (but also MySQL 5.0)
Metabase version: 0.30.1
Metabase hosting environment: windows, jar (but also CentOS 7, jar)
Metabase internal database: default (but also Postgres)
Custom Mapping throws an error when you use it and add visible fields from a joined table.
Using the sample dataset and a fresh jar install
- Query Orders
- refresh (loads properly)
- Using table visualization interface, add the product.ein to the viewed fields
- refresh (loads properly)
- change data model for orders so that Quantity is a custom mapped field
- refresh (error - No expression named 'Quantity'.)
- recreate question from scratch (ie: Ask A Question > Custom > Orders table > Add product.ein to fields)
- refresh (error - No expression named 'Quantity'.)
- remove custom mapping (change back to value)
- refresh (error, as before)
- recreate question from scratch (ie: Ask A Question > Custom > Orders table > Add product.ein to fields)
- refresh (loads properly)
This issue prevents us from upgrading from .29 as it renders many of our queries broken.
Possibly related to: https://github.com/metabase/metabase/issues/8422
Log of Error:
```
Aug 28 20:22:12 WARN metabase.query-processor :: {:status :failed,
:class java.lang.Exception,
:error "No expression named 'Quantity'.",
:stacktrace
["driver.generic_sql.query_processor$expression_with_name.invokeStatic(query_processor.clj:57)"
"driver.generic_sql.query_processor$expression_with_name.invoke(query_processor.clj:53)"
"driver.generic_sql.query_processor$fn__56300.invokeStatic(query_processor.clj:90)"
"driver.generic_sql.query_processor$fn__56300.invoke(query_processor.clj:86)"
"driver.generic_sql.query_processor$apply_fields$iter__56402__56406$fn__56407$fn__56408.invoke(query_processor.clj:229)"
"driver.generic_sql.query_processor$apply_fields$iter__56402__56406$fn__56407.invoke(query_processor.clj:228)"
"driver.generic_sql.query_processor$apply_fields.invokeStatic(query_processor.clj:228)"
"driver.generic_sql.query_processor$apply_fields.invoke(query_processor.clj:225)"
"driver.generic_sql$fn__33627$G__33466__33636.invoke(generic_sql.clj:32)"
"driver.generic_sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:370)"
"driver.generic_sql.query_processor$apply_clauses.invoke(query_processor.clj:365)"
"driver.generic_sql.query_processor$build_honeysql_form.invokeStatic(query_processor.clj:383)"
"driver.generic_sql.query_processor$build_honeysql_form.invoke(query_processor.clj:379)"
"driver.generic_sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:391)"
"driver.generic_sql.query_processor$mbql__GT_native.invoke(query_processor.clj:387)"
"driver$fn__28170$G__28065__28177.invoke(driver.clj:104)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:17)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:12)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__33022.invoke(mbql_to_native.clj:26)"
"query_processor.middleware.annotate_and_sort$annotate_and_sort$fn__31336.invoke(annotate_and_sort.clj:42)"
"query_processor.middleware.limit$limit$fn__32977.invoke(limit.clj:15)"
"query_processor.middleware.cumulative_aggregations$cumulative_aggregation$fn__32827.invoke(cumulative_aggregations.clj:58)"
"query_processor.middleware.cumulative_aggregations$cumulative_aggregation$fn__32827.invoke(cumulative_aggregations.clj:58)"
"query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__36951.invoke(results_metadata.clj:51)"
"query_processor.middleware.format_rows$format_rows$fn__32967.invoke(format_rows.clj:26)"
"query_processor.middleware.binning$update_binning_strategy$fn__31430.invoke(binning.clj:165)"
"query_processor.middleware.resolve$resolve_middleware$fn__30924.invoke(resolve.clj:483)"
"query_processor.middleware.expand$expand_middleware$fn__32707.invoke(expand.clj:607)"
"query_processor.middleware.add_row_count_and_status$add_row_count_and_status$fn__31010.invoke(add_row_count_and_status.clj:15)"
"query_processor.middleware.driver_specific$process_query_in_context$fn__32847.invoke(driver_specific.clj:12)"
"query_processor.middleware.resolve_driver$resolve_driver$fn__35494.invoke(resolve_driver.clj:15)"
"query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__31344$fn__31345.invoke(bind_effective_timezone.clj:9)"
"util.date$call_with_effective_timezone.invokeStatic(date.clj:82)"
"util.date$call_with_effective_timezone.invoke(date.clj:71)"
"query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__31344.invoke(bind_effective_timezone.clj:8)"
"query_processor.middleware.cache$maybe_return_cached_results$fn__31519.invoke(cache.clj:149)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__32758.invoke(catch_exceptions.clj:58)"
"query_processor$process_query.invokeStatic(query_processor.clj:135)"
"query_processor$process_query.invoke(query_processor.clj:131)"
"query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:249)"
"query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:243)"
"query_processor$fn__36989$process_query_and_save_execution_BANG___36994$fn__36995.invoke(query_processor.clj:289)"
"query_processor$fn__36989$process_query_and_save_execution_BANG___36994.invoke(query_processor.clj:275)"
"query_processor$fn__37013$process_query_and_save_with_max_BANG___37018$fn__37019.invoke(query_processor.clj:310)"
"query_processor$fn__37013$process_query_and_save_with_max_BANG___37018.invoke(query_processor.clj:306)"
"api.dataset$fn__43466$fn__43469.invoke(dataset.clj:45)"
"api.common$fn__20623$invoke_thunk_with_keepalive__20628$fn__20629$fn__20630.invoke(common.clj:433)"],
:query
{:type "query",
:query
{:source_table 2,
:fields [["field-id" 12] ["field-id" 15] ["field-id" 10] ["field-id" 11] ["field-id" 14] ["field-id" 13] ["field-id" 16] ["field-id" 17] ["field-id" 9] ["expression" "Quantity"] ["fk->" 11 1]]},
:parameters [],
:constraints {:max-results 10000, :max-results-bare-rows 2000},
:info
{:executed-by 1,
:context :ad-hoc,
:card-id nil,
:nested? false,
:query-hash [-83, 24, 72, 50, 35, -77, 88, 20, 117, -105, 61, 57, -104, -120, -7, 83, -40, 54, -2, -34, -118, 42, 56, -41, -109, -46, 96, -10, -60, 14, -66, 122],
:query-type "MBQL"}},
:expanded-query nil}
Aug 28 20:22:12 WARN metabase.query-processor :: Query failure: No expression named 'Quantity'.
["query_processor$assert_query_status_successful.invokeStatic(query_processor.clj:217)"
"query_processor$assert_query_status_successful.invoke(query_processor.clj:210)"
"query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:250)"
"query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:243)"
"query_processor$fn__36989$process_query_and_save_execution_BANG___36994$fn__36995.invoke(query_processor.clj:289)"
"query_processor$fn__36989$process_query_and_save_execution_BANG___36994.invoke(query_processor.clj:275)"
"query_processor$fn__37013$process_query_and_save_with_max_BANG___37018$fn__37019.invoke(query_processor.clj:310)"
"query_processor$fn__37013$process_query_and_save_with_max_BANG___37018.invoke(query_processor.clj:306)"
"api.dataset$fn__43466$fn__43469.invoke(dataset.clj:45)"
"api.common$fn__20623$invoke_thunk_with_keepalive__20628$fn__20629$fn__20630.invoke(common.clj:433)"]
Aug 28 20:22:12 DEBUG metabase.middleware :: POST /api/dataset 200 (516 ms) (12 DB calls). Jetty threads: 8/50 (4 busy, 6 idle, 0 queued)```
|
1.0
|
Custom Mapping causes "No expression named ___" error when used in table view with joined fields - Your databases: default custom dataset (but also MySQL 5.0)
Metabase version: 0.30.1
Metabase hosting environment: windows, jar (but also CentOS 7, jar)
Metabase internal database: default (but also Postgres)
Custom Mapping throws an error when you use it and add visible fields from a joined table.
Using the sample dataset and a fresh jar install
- Query Orders
- refresh (loads properly)
- Using table visualization interface, add the product.ein to the viewed fields
- refresh (loads properly)
- change data model for orders so that Quantity is a custom mapped field
- refresh (error - No expression named 'Quantity'.)
- recreate question from scratch (ie: Ask A Question > Custom > Orders table > Add product.ein to fields)
- refresh (error - No expression named 'Quantity'.)
- remove custom mapping (change back to value)
- refresh (error, as before)
- recreate question from scratch (ie: Ask A Question > Custom > Orders table > Add product.ein to fields)
- refresh (loads properly)
This issue prevents us from upgrading from .29 as it renders many of our queries broken.
Possibly related to: https://github.com/metabase/metabase/issues/8422
Log of Error:
```
Aug 28 20:22:12 WARN metabase.query-processor :: {:status :failed,
:class java.lang.Exception,
:error "No expression named 'Quantity'.",
:stacktrace
["driver.generic_sql.query_processor$expression_with_name.invokeStatic(query_processor.clj:57)"
"driver.generic_sql.query_processor$expression_with_name.invoke(query_processor.clj:53)"
"driver.generic_sql.query_processor$fn__56300.invokeStatic(query_processor.clj:90)"
"driver.generic_sql.query_processor$fn__56300.invoke(query_processor.clj:86)"
"driver.generic_sql.query_processor$apply_fields$iter__56402__56406$fn__56407$fn__56408.invoke(query_processor.clj:229)"
"driver.generic_sql.query_processor$apply_fields$iter__56402__56406$fn__56407.invoke(query_processor.clj:228)"
"driver.generic_sql.query_processor$apply_fields.invokeStatic(query_processor.clj:228)"
"driver.generic_sql.query_processor$apply_fields.invoke(query_processor.clj:225)"
"driver.generic_sql$fn__33627$G__33466__33636.invoke(generic_sql.clj:32)"
"driver.generic_sql.query_processor$apply_clauses.invokeStatic(query_processor.clj:370)"
"driver.generic_sql.query_processor$apply_clauses.invoke(query_processor.clj:365)"
"driver.generic_sql.query_processor$build_honeysql_form.invokeStatic(query_processor.clj:383)"
"driver.generic_sql.query_processor$build_honeysql_form.invoke(query_processor.clj:379)"
"driver.generic_sql.query_processor$mbql__GT_native.invokeStatic(query_processor.clj:391)"
"driver.generic_sql.query_processor$mbql__GT_native.invoke(query_processor.clj:387)"
"driver$fn__28170$G__28065__28177.invoke(driver.clj:104)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invokeStatic(mbql_to_native.clj:17)"
"query_processor.middleware.mbql_to_native$query__GT_native_form.invoke(mbql_to_native.clj:12)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__33022.invoke(mbql_to_native.clj:26)"
"query_processor.middleware.annotate_and_sort$annotate_and_sort$fn__31336.invoke(annotate_and_sort.clj:42)"
"query_processor.middleware.limit$limit$fn__32977.invoke(limit.clj:15)"
"query_processor.middleware.cumulative_aggregations$cumulative_aggregation$fn__32827.invoke(cumulative_aggregations.clj:58)"
"query_processor.middleware.cumulative_aggregations$cumulative_aggregation$fn__32827.invoke(cumulative_aggregations.clj:58)"
"query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__36951.invoke(results_metadata.clj:51)"
"query_processor.middleware.format_rows$format_rows$fn__32967.invoke(format_rows.clj:26)"
"query_processor.middleware.binning$update_binning_strategy$fn__31430.invoke(binning.clj:165)"
"query_processor.middleware.resolve$resolve_middleware$fn__30924.invoke(resolve.clj:483)"
"query_processor.middleware.expand$expand_middleware$fn__32707.invoke(expand.clj:607)"
"query_processor.middleware.add_row_count_and_status$add_row_count_and_status$fn__31010.invoke(add_row_count_and_status.clj:15)"
"query_processor.middleware.driver_specific$process_query_in_context$fn__32847.invoke(driver_specific.clj:12)"
"query_processor.middleware.resolve_driver$resolve_driver$fn__35494.invoke(resolve_driver.clj:15)"
"query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__31344$fn__31345.invoke(bind_effective_timezone.clj:9)"
"util.date$call_with_effective_timezone.invokeStatic(date.clj:82)"
"util.date$call_with_effective_timezone.invoke(date.clj:71)"
"query_processor.middleware.bind_effective_timezone$bind_effective_timezone$fn__31344.invoke(bind_effective_timezone.clj:8)"
"query_processor.middleware.cache$maybe_return_cached_results$fn__31519.invoke(cache.clj:149)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__32758.invoke(catch_exceptions.clj:58)"
"query_processor$process_query.invokeStatic(query_processor.clj:135)"
"query_processor$process_query.invoke(query_processor.clj:131)"
"query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:249)"
"query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:243)"
"query_processor$fn__36989$process_query_and_save_execution_BANG___36994$fn__36995.invoke(query_processor.clj:289)"
"query_processor$fn__36989$process_query_and_save_execution_BANG___36994.invoke(query_processor.clj:275)"
"query_processor$fn__37013$process_query_and_save_with_max_BANG___37018$fn__37019.invoke(query_processor.clj:310)"
"query_processor$fn__37013$process_query_and_save_with_max_BANG___37018.invoke(query_processor.clj:306)"
"api.dataset$fn__43466$fn__43469.invoke(dataset.clj:45)"
"api.common$fn__20623$invoke_thunk_with_keepalive__20628$fn__20629$fn__20630.invoke(common.clj:433)"],
:query
{:type "query",
:query
{:source_table 2,
:fields [["field-id" 12] ["field-id" 15] ["field-id" 10] ["field-id" 11] ["field-id" 14] ["field-id" 13] ["field-id" 16] ["field-id" 17] ["field-id" 9] ["expression" "Quantity"] ["fk->" 11 1]]},
:parameters [],
:constraints {:max-results 10000, :max-results-bare-rows 2000},
:info
{:executed-by 1,
:context :ad-hoc,
:card-id nil,
:nested? false,
:query-hash [-83, 24, 72, 50, 35, -77, 88, 20, 117, -105, 61, 57, -104, -120, -7, 83, -40, 54, -2, -34, -118, 42, 56, -41, -109, -46, 96, -10, -60, 14, -66, 122],
:query-type "MBQL"}},
:expanded-query nil}
Aug 28 20:22:12 WARN metabase.query-processor :: Query failure: No expression named 'Quantity'.
["query_processor$assert_query_status_successful.invokeStatic(query_processor.clj:217)"
"query_processor$assert_query_status_successful.invoke(query_processor.clj:210)"
"query_processor$run_and_save_query_BANG_.invokeStatic(query_processor.clj:250)"
"query_processor$run_and_save_query_BANG_.invoke(query_processor.clj:243)"
"query_processor$fn__36989$process_query_and_save_execution_BANG___36994$fn__36995.invoke(query_processor.clj:289)"
"query_processor$fn__36989$process_query_and_save_execution_BANG___36994.invoke(query_processor.clj:275)"
"query_processor$fn__37013$process_query_and_save_with_max_BANG___37018$fn__37019.invoke(query_processor.clj:310)"
"query_processor$fn__37013$process_query_and_save_with_max_BANG___37018.invoke(query_processor.clj:306)"
"api.dataset$fn__43466$fn__43469.invoke(dataset.clj:45)"
"api.common$fn__20623$invoke_thunk_with_keepalive__20628$fn__20629$fn__20630.invoke(common.clj:433)"]
Aug 28 20:22:12 DEBUG metabase.middleware :: POST /api/dataset 200 (516 ms) (12 DB calls). Jetty threads: 8/50 (4 busy, 6 idle, 0 queued)```
|
process
|
custom mapping causes no expression named error when used in table view with joined fields your databases default custom dataset but also mysql metabase version metabase hosting environment windows jar but also centos jar metabase internal database default but also postgres custom mapping throws an error when you use it and and visible fields from a joined table using the sample dataset and a fresh jar install query orders refresh loads properly using table visualization interface add the product ein to the viewed fields refresh loads properly change data model for orders so that quantity is a custom mapped field refresh error no expression named quantity recreate question from scratch ie ask a question custom orders table add product ein to fields refresh error no expression named quantity remove custom mapping change back to value refresh error as before recreate question from scratch ie ask a question custom orders table add product ein to fields refresh loads properly this issue prevents us from upgrading from as it renders many of our queries broken possibly related to log of error aug warn metabase query processor status failed class java lang exception error no expression named quantity stacktrace driver generic sql query processor expression with name invokestatic query processor clj driver generic sql query processor expression with name invoke query processor clj driver generic sql query processor fn invokestatic query processor clj driver generic sql query processor fn invoke query processor clj driver generic sql query processor apply fields iter fn fn invoke query processor clj driver generic sql query processor apply fields iter fn invoke query processor clj driver generic sql query processor apply fields invokestatic query processor clj driver generic sql query processor apply fields invoke query processor clj driver generic sql fn g invoke generic sql clj driver generic sql query processor apply clauses invokestatic query processor clj driver 
generic sql query processor apply clauses invoke query processor clj driver generic sql query processor build honeysql form invokestatic query processor clj driver generic sql query processor build honeysql form invoke query processor clj driver generic sql query processor mbql gt native invokestatic query processor clj driver generic sql query processor mbql gt native invoke query processor clj driver fn g invoke driver clj query processor middleware mbql to native query gt native form invokestatic mbql to native clj query processor middleware mbql to native query gt native form invoke mbql to native clj query processor middleware mbql to native mbql gt native fn invoke mbql to native clj query processor middleware annotate and sort annotate and sort fn invoke annotate and sort clj query processor middleware limit limit fn invoke limit clj query processor middleware cumulative aggregations cumulative aggregation fn invoke cumulative aggregations clj query processor middleware cumulative aggregations cumulative aggregation fn invoke cumulative aggregations clj query processor middleware results metadata record and return metadata bang fn invoke results metadata clj query processor middleware format rows format rows fn invoke format rows clj query processor middleware binning update binning strategy fn invoke binning clj query processor middleware resolve resolve middleware fn invoke resolve clj query processor middleware expand expand middleware fn invoke expand clj query processor middleware add row count and status add row count and status fn invoke add row count and status clj query processor middleware driver specific process query in context fn invoke driver specific clj query processor middleware resolve driver resolve driver fn invoke resolve driver clj query processor middleware bind effective timezone bind effective timezone fn fn invoke bind effective timezone clj util date call with effective timezone invokestatic date clj util date call with effective 
timezone invoke date clj query processor middleware bind effective timezone bind effective timezone fn invoke bind effective timezone clj query processor middleware cache maybe return cached results fn invoke cache clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor process query invokestatic query processor clj query processor process query invoke query processor clj query processor run and save query bang invokestatic query processor clj query processor run and save query bang invoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max bang fn invoke query processor clj query processor fn process query and save with max bang invoke query processor clj api dataset fn fn invoke dataset clj api common fn invoke thunk with keepalive fn fn invoke common clj query type query query source table fields parameters constraints max results max results bare rows info executed by context ad hoc card id nil nested false query hash query type mbql expanded query nil aug warn metabase query processor query failure no expression named quantity query processor assert query status successful invokestatic query processor clj query processor assert query status successful invoke query processor clj query processor run and save query bang invokestatic query processor clj query processor run and save query bang invoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max bang fn invoke query processor clj query processor fn process query and save with max bang invoke query processor clj api dataset fn fn invoke dataset clj api common fn invoke thunk with 
keepalive fn fn invoke common clj aug debug metabase middleware post api dataset ms db calls jetty threads busy idle queued
| 1
|
15,036
| 18,757,061,463
|
IssuesEvent
|
2021-11-05 12:13:37
|
tndd/alpaca_v2
|
https://api.github.com/repos/tndd/alpaca_v2
|
closed
|
Converting the bars table for decision tree analysis
|
data_processing
|
# Overview
Passing raw values such as the "high" and "low" prices directly as explanatory variables for the decision tree does not seem appropriate.
Rather, what matters is how much the "high" and "low" moved **relative to** the open price.
"Volume" is fine as an absolute value.
# Table definition
## Explanatory variables
Column | Type | Description
-- | -- | --
high_bp | float | basis-point rise from the open to the high
low_bp | float | basis-point drop from the open to the low
close_bp | float | basis-point change from the open to the close
volume | int | trading volume
## Target variable
Column | Type | Description
-- | -- | --
next_price_movement | bool | relation between the **next day's** close and open (0: down, 1: up, 2: eq)
|
1.0
|
Converting the bars table for decision tree analysis - # Overview
Passing raw values such as the "high" and "low" prices directly as explanatory variables for the decision tree does not seem appropriate.
Rather, what matters is how much the "high" and "low" moved **relative to** the open price.
"Volume" is fine as an absolute value.
# Table definition
## Explanatory variables
Column | Type | Description
-- | -- | --
high_bp | float | basis-point rise from the open to the high
low_bp | float | basis-point drop from the open to the low
close_bp | float | basis-point change from the open to the close
volume | int | trading volume
## Target variable
Column | Type | Description
-- | -- | --
next_price_movement | bool | relation between the **next day's** close and open (0: down, 1: up, 2: eq)
|
process
|
converting the bars table for decision tree analysis overview passing raw values such as high and low directly as explanatory variables for the decision tree does not seem appropriate rather what matters is how much the high and low moved relative to the open price volume is fine as an absolute value table definition explanatory variables column type description high bp float basis point rise from the open to the high low bp float basis point drop from the open to the low close bp float basis point change from the open to the close volume int trading volume target variable column type description next price movement bool relation between the next day s close and open down up eq
| 1
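The basis-point features defined in the table above can be computed with a small helper (an illustrative sketch; the function name `to_bp_features` is hypothetical and not part of the original issue):

```python
def to_bp_features(open_, high, low, close):
    """Convert absolute OHLC prices into basis-point moves relative to the open.

    1 basis point (bp) = 0.01% = 1/10000 of the open price.
    """
    def bp(x):
        return (x - open_) / open_ * 10000.0

    return {
        "high_bp": bp(high),    # rise from open to high
        "low_bp": bp(low),      # drop from open to low (negative when low < open)
        "close_bp": bp(close),  # net move from open to close
    }
```

For example, with an open of 100.0 and a high of 101.0, `high_bp` comes out to 100 bp — the kind of scale-free feature the issue argues for.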
|
8,452
| 11,624,081,807
|
IssuesEvent
|
2020-02-27 10:07:26
|
atilaneves/dpp
|
https://api.github.com/repos/atilaneves/dpp
|
closed
|
avro: /macro_.d(396): Range violation
|
bug preprocessor
|
I have this dockerfile
``` dockerfile
FROM dlang2/ldc-ubuntu:1.19.0 as ldc
RUN apt-get install -y unzip cmake curl clang-9 libclang-9-dev libavro-dev
RUN ln -s /usr/bin/clang-9 /usr/bin/clang
COPY avro.dpp /tmp/
RUN DFLAGS="-L=-L/usr/lib/llvm-9/lib/" dub run dpp -- /tmp/avro.dpp \
--include-path /usr/include/avro \
--preprocess-only
```
File `avro.dpp` looks like this
``` C
#include <avro.h>
```
Docker build will fail with:
```
Running ./root/.dub/packages/dpp-0.4.1/dpp/bin/d++ /tmp/avro.dpp --include-path /usr/include/avro --preprocess-only
Fatal error: core.exception.RangeError@root/.dub/packages/dpp-0.4.1/dpp/source/dpp/translation/macro_.d(396): Range violation
----------------
Program exited with code 2
The command '/bin/sh -c DFLAGS="-L=-L/usr/lib/llvm-9/lib/" dub run dpp -- /tmp/avro.dpp --include-path /usr/include/avro --preprocess-only' returned a non-zero code: 2
```
|
1.0
|
avro: /macro_.d(396): Range violation - I have this dockerfile
``` dockerfile
FROM dlang2/ldc-ubuntu:1.19.0 as ldc
RUN apt-get install -y unzip cmake curl clang-9 libclang-9-dev libavro-dev
RUN ln -s /usr/bin/clang-9 /usr/bin/clang
COPY avro.dpp /tmp/
RUN DFLAGS="-L=-L/usr/lib/llvm-9/lib/" dub run dpp -- /tmp/avro.dpp \
--include-path /usr/include/avro \
--preprocess-only
```
File `avro.dpp` looks like this
``` C
#include <avro.h>
```
Docker build will fail with:
```
Running ./root/.dub/packages/dpp-0.4.1/dpp/bin/d++ /tmp/avro.dpp --include-path /usr/include/avro --preprocess-only
Fatal error: core.exception.RangeError@root/.dub/packages/dpp-0.4.1/dpp/source/dpp/translation/macro_.d(396): Range violation
----------------
Program exited with code 2
The command '/bin/sh -c DFLAGS="-L=-L/usr/lib/llvm-9/lib/" dub run dpp -- /tmp/avro.dpp --include-path /usr/include/avro --preprocess-only' returned a non-zero code: 2
```
|
process
|
avro macro d range violation i have this dockerfile dockerfile from ldc ubuntu as ldc run apt get install y unzip cmake curl clang libclang dev libavro dev run ln s usr bin clang usr bin clang copy avro dpp tmp run dflags l l usr lib llvm lib dub run dpp tmp avro dpp include path usr include avro preprocess only file avro dpp looks like this c include docker build will fail with running root dub packages dpp dpp bin d tmp avro dpp include path usr include avro preprocess only fatal error core exception rangeerror root dub packages dpp dpp source dpp translation macro d range violation program exited with code the command bin sh c dflags l l usr lib llvm lib dub run dpp tmp avro dpp include path usr include avro preprocess only returned a non zero code
| 1
|
3,429
| 6,529,630,524
|
IssuesEvent
|
2017-08-30 12:28:14
|
w3c/w3process
|
https://api.github.com/repos/w3c/w3process
|
closed
|
Proposed changes to TAG makeup
|
Process2018Candidate
|
A proposed set of changes to the makeup of the TAG, as part of #4
* Be explicit that the chair(s) does not need to be a member of the TAG
* Suggest that the Director nominate appointees **after** the TAG election, rather than before
* Add one elected member to the TAG
This suggestion will be reviewed by the TAG at its July 2017 face to face meeting.
|
1.0
|
Proposed changes to TAG makeup - A proposed set of changes to the makeup of the TAG, as part of #4
* Be explicit that the chair(s) does not need to be a member of the TAG
* Suggest that the Director nominate appointees **after** the TAG election, rather than before
* Add one elected member to the TAG
This suggestion will be reviewed by the TAG at its July 2017 face to face meeting.
|
process
|
proposed changes to tag makeup a proposed set of changes to the makeup of the tag as part of be explicit that the chair s does not need to be a member of the tag suggest that the director nominate appointees after the tag election rather than before add one elected member to the tag this suggestion will be reviewed by the tag at its july face to face meeting
| 1
|
129,716
| 18,109,436,881
|
IssuesEvent
|
2021-09-23 00:21:26
|
Tim-Demo/JS-Demo
|
https://api.github.com/repos/Tim-Demo/JS-Demo
|
closed
|
CVE-2016-1000232 (Medium) detected in tough-cookie-2.2.2.tgz - autoclosed
|
security vulnerability
|
## CVE-2016-1000232 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tough-cookie-2.2.2.tgz</b></p></summary>
<p>RFC6265 Cookies and Cookie Jar for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.2.2.tgz">https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.2.2.tgz</a></p>
<p>Path to dependency file: JS-Demo/package.json</p>
<p>Path to vulnerable library: JS-Demo/node_modules/grunt-retire/node_modules/tough-cookie/package.json</p>
<p>
Dependency Hierarchy:
- grunt-retire-0.3.12.tgz (Root Library)
- request-2.67.0.tgz
- :x: **tough-cookie-2.2.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Tim-Demo/JS-Demo/commit/6867d3cdd385f17346bb7b3f8b5ce830dac87398">6867d3cdd385f17346bb7b3f8b5ce830dac87398</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
NodeJS Tough-Cookie version 2.2.2 contains a Regular Expression Parsing vulnerability in HTTP request Cookie Header parsing that can result in Denial of Service. This attack appears to be exploitable via a custom HTTP header passed by the client. This vulnerability appears to have been fixed in 2.3.0.
<p>Publish Date: 2018-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000232>CVE-2016-1000232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/130">https://www.npmjs.com/advisories/130</a></p>
<p>Release Date: 2018-09-05</p>
<p>Fix Resolution: 2.3.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tough-cookie","packageVersion":"2.2.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-retire:0.3.12;request:2.67.0;tough-cookie:2.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2016-1000232","vulnerabilityDetails":"NodeJS Tough-Cookie version 2.2.2 contains a Regular Expression Parsing vulnerability in HTTP request Cookie Header parsing that can result in Denial of Service. This attack appear to be exploitable via Custom HTTP header passed by client. This vulnerability appears to have been fixed in 2.3.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000232","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2016-1000232 (Medium) detected in tough-cookie-2.2.2.tgz - autoclosed - ## CVE-2016-1000232 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tough-cookie-2.2.2.tgz</b></p></summary>
<p>RFC6265 Cookies and Cookie Jar for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.2.2.tgz">https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.2.2.tgz</a></p>
<p>Path to dependency file: JS-Demo/package.json</p>
<p>Path to vulnerable library: JS-Demo/node_modules/grunt-retire/node_modules/tough-cookie/package.json</p>
<p>
Dependency Hierarchy:
- grunt-retire-0.3.12.tgz (Root Library)
- request-2.67.0.tgz
- :x: **tough-cookie-2.2.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Tim-Demo/JS-Demo/commit/6867d3cdd385f17346bb7b3f8b5ce830dac87398">6867d3cdd385f17346bb7b3f8b5ce830dac87398</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
NodeJS Tough-Cookie version 2.2.2 contains a Regular Expression Parsing vulnerability in HTTP request Cookie Header parsing that can result in Denial of Service. This attack appears to be exploitable via a custom HTTP header passed by the client. This vulnerability appears to have been fixed in 2.3.0.
<p>Publish Date: 2018-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000232>CVE-2016-1000232</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/130">https://www.npmjs.com/advisories/130</a></p>
<p>Release Date: 2018-09-05</p>
<p>Fix Resolution: 2.3.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tough-cookie","packageVersion":"2.2.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-retire:0.3.12;request:2.67.0;tough-cookie:2.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2016-1000232","vulnerabilityDetails":"NodeJS Tough-Cookie version 2.2.2 contains a Regular Expression Parsing vulnerability in HTTP request Cookie Header parsing that can result in Denial of Service. This attack appear to be exploitable via Custom HTTP header passed by client. This vulnerability appears to have been fixed in 2.3.0.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000232","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in tough cookie tgz autoclosed cve medium severity vulnerability vulnerable library tough cookie tgz cookies and cookie jar for node js library home page a href path to dependency file js demo package json path to vulnerable library js demo node modules grunt retire node modules tough cookie package json dependency hierarchy grunt retire tgz root library request tgz x tough cookie tgz vulnerable library found in head commit a href found in base branch master vulnerability details nodejs tough cookie version contains a regular expression parsing vulnerability in http request cookie header parsing that can result in denial of service this attack appear to be exploitable via custom http header passed by client this vulnerability appears to have been fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt retire request tough cookie isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails nodejs tough cookie version contains a regular expression parsing vulnerability in http request cookie header parsing that can result in denial of service this attack appear to be exploitable via custom http header passed by client this vulnerability appears to have been fixed in vulnerabilityurl
| 0
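The suggested fix reduces to a version check: tough-cookie releases before 2.3.0 are affected. A rough sketch of that check (assuming plain dotted numeric versions; pre-release tags are ignored):

```python
def parse_version(version):
    """Turn a dotted version string like '2.2.2' into a comparable int tuple."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed, fixed="2.3.0"):
    """True if the installed tough-cookie version predates the fix release."""
    return parse_version(installed) < parse_version(fixed)
```

Tuple comparison is used deliberately: a plain string comparison would wrongly rank "2.10.0" below "2.3.0".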
|
13,096
| 15,444,785,917
|
IssuesEvent
|
2021-03-08 10:52:04
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
[FALSE-POSITIVE?] verizon.com
|
whitelisting process
|
It's an American multinational telecommunications conglomerate. There is no reason to block this.
|
1.0
|
[FALSE-POSITIVE?] verizon.com - It's an American multinational telecommunications conglomerate. There is no reason to block this.
|
process
|
verizon com it s an american multinational telecommunications conglomerate there is no reason to block this
| 1
|
2,895
| 5,877,561,252
|
IssuesEvent
|
2017-05-16 00:17:36
|
inasafe/inasafe-realtime
|
https://api.github.com/repos/inasafe/inasafe-realtime
|
closed
|
Realtime EQ contour smoothing
|
bug enhancement feature request realtime processor
|
problem
InaSAFE EQ realtime needs to have its contours smoothed for display. However, the number of people exposed to different shaking levels should be estimated from the raw (not smoothed) data.
proposed solution
For smoothing, Hadi used spatial convolution of intensity matrix with a smoothing kernel to filter out the high frequency part of the contours. In MATLAB you can do this by using “conv2” function.
See original ticket at https://github.com/inasafe/inasafe/issues/2662 for further discussion.
|
1.0
|
Realtime EQ contour smoothing - problem
InaSAFE EQ realtime needs to have its contours smoothed for display. However, the number of people exposed to different shaking levels should be estimated from the raw (not smoothed) data.
proposed solution
For smoothing, Hadi used spatial convolution of intensity matrix with a smoothing kernel to filter out the high frequency part of the contours. In MATLAB you can do this by using “conv2” function.
See original ticket at https://github.com/inasafe/inasafe/issues/2662 for further discussion.
|
process
|
realtime eq contour smoothing problem inasafe eq realtime needs to have its contours smoothed for display however the number of people exposed to different shaking levels should be estimated from the raw not smoothed data proposed solution for smoothing hadi used spatial convolution of intensity matrix with a smoothing kernel to filter out the high frequency part of the contours in matlab you can do this by using “ ” function see original ticket at for further discussion
| 1
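The spatial-convolution smoothing described above (MATLAB's "conv2" with a smoothing kernel) can be sketched in pure Python as a box (mean) filter; this is an illustration of the idea, not the actual InaSAFE code:

```python
def smooth2d(grid, k=1):
    """Smooth a 2D intensity grid with a (2k+1) x (2k+1) mean kernel.

    Near the borders the window is clamped to the grid, which
    effectively renormalises the kernel at the edges.
    """
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            window = [
                grid[r][c]
                for r in range(max(0, i - k), min(rows, i + k + 1))
                for c in range(max(0, j - k), min(cols, j + k + 1))
            ]
            out[i][j] = sum(window) / len(window)
    return out
```

As the issue notes, a smoothed grid like this should feed the displayed contours only; exposure counts should still come from the raw data.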
|
22,515
| 11,642,500,565
|
IssuesEvent
|
2020-02-29 07:31:20
|
JuliaReach/LazySets.jl
|
https://api.github.com/repos/JuliaReach/LazySets.jl
|
closed
|
Special case concrete Minkowski sum for intervals
|
performance
|
Currently, the concrete Minkowski sum between a pair of intervals falls back to the generic [AbstractHyperrectangle](https://github.com/JuliaReach/LazySets.jl/blob/a4c2db2cb07f7b66f441074303cbfdba4bc276e8/src/ConcreteOperations/minkowski_sum.jl#L216) method. It should instead use the concrete `+`.
|
True
|
Special case concrete Minkowski sum for intervals - Currently, the concrete Minkowski sum between a pair of intervals falls back to the generic [AbstractHyperrectangle](https://github.com/JuliaReach/LazySets.jl/blob/a4c2db2cb07f7b66f441074303cbfdba4bc276e8/src/ConcreteOperations/minkowski_sum.jl#L216) method. It should instead use the concrete `+`.
|
non_process
|
special case concrete minkowski sum for intervals currently the concrete minkowski sum between a pair of intervals falls back to the it should rather use the concrete
| 0
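For reference, the concrete Minkowski sum of two closed intervals is just endpoint-wise addition: [a, b] ⊕ [c, d] = [a + c, b + d]. The LazySets code itself is Julia; this is an illustrative Python sketch of the special case the issue asks for:

```python
def minkowski_sum_interval(x, y):
    """Minkowski sum of two closed intervals given as (lo, hi) pairs."""
    (a, b), (c, d) = x, y
    assert a <= b and c <= d, "interval endpoints must be ordered"
    return (a + c, b + d)
```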
|
669
| 3,143,373,074
|
IssuesEvent
|
2015-09-14 06:18:10
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
Downloaded documents on the main portal are always empty
|
bug In process of testing test
|
Steps to reproduce:
1. Create a text (.txt) file on your computer
2. Upload the newly created file in the "Documents" section
3. Download the just-uploaded document
Expected result: the document is downloaded and its contents are correct
Actual result: the document is empty, its size is 0 bytes
example of a "corrupted" document: https://test.igov.org.ua/api/documents/download/24427
OS: Ubuntu, browser: Google Chrome
|
1.0
|
Downloaded documents on the main portal are always empty - Steps to reproduce:
1. Create a text (.txt) file on your computer
2. Upload the newly created file in the "Documents" section
3. Download the just-uploaded document
Expected result: the document is downloaded and its contents are correct
Actual result: the document is empty, its size is 0 bytes
example of a "corrupted" document: https://test.igov.org.ua/api/documents/download/24427
OS: Ubuntu, browser: Google Chrome
|
process
|
downloaded documents on the main portal are always empty steps to reproduce create a text txt file on your computer upload the newly created file in the documents section download the just uploaded document expected result the document is downloaded and its contents are correct actual result the document is empty its size is bytes example of a corrupted document os ubuntu google chrome browser
| 1
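A minimal regression check for this bug — a document downloaded after upload should exist and be non-empty (an illustrative sketch; it checks a local file, not the portal's HTTP endpoint):

```python
import os

def is_non_empty(path):
    """True if the file at `path` exists and contains at least one byte."""
    return os.path.exists(path) and os.path.getsize(path) > 0
```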
|
11,658
| 14,522,416,667
|
IssuesEvent
|
2020-12-14 08:47:30
|
plazi/arcadia-project
|
https://api.github.com/repos/plazi/arcadia-project
|
opened
|
treatment metadata: zookeys example: error in scientific name authorship, link to article deposit in BLR
|
processing input quality control treatment@zenodo
|
@teodorgeorgiev I just looked at this treatment https://doi.org/10.5281/zenodo.4056468 in this article https://zenodo.org/record/3555661 with 136 treatments
1. The article deposit in BLR does not list the treatments in the metadata
2. The article BLR deposit is not listed in the treatment deposits
3. The figure link in the article metadata does not resolve to image in Pensoft, but from there to the article XML
4. The Scientific name authorship in the custom keywords is wrong. Ashmead should be Arias-Penna.
|
1.0
|
treatment metadata: zookeys example: error in scientific name authorship, link to article deposit in BLR - @teodorgeorgiev I just looked at this treatment https://doi.org/10.5281/zenodo.4056468 in this article https://zenodo.org/record/3555661 with 136 treatments
1. The article deposit in BLR does not list the treatments in the metadata
2. The article BLR deposit is not listed in the treatment deposits
3. The figure link in the article metadata does not resolve to image in Pensoft, but from there to the article XML
4. The Scientific name authorship in the custom keywords is wrong. Ashmead should be Arias-Penna.
|
process
|
treatment metadata zookeys example error in scientific name authorship link to article deposit in blr teodorgeorgiev i just looked at this treatment in this article with treatments the article deposit in blr does not list the treatments in the metadata the article blr deposit is not listed in the treatment deposits the figure link in the article metadata does not resolve to image in pensoft but from there to the article xml the scientific name authorship in the custom keywords is wrong ashmead should be arias penna
| 1
|
13,924
| 16,681,321,775
|
IssuesEvent
|
2021-06-08 00:24:57
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Allow cancelling gdal algorithms
|
Feature Request Processing
|
Author Name: **Lene Fischer** (@LeneFischer)
Original Redmine Issue: [20060](https://issues.qgis.org/issues/20060)
Redmine category:processing/core
---
Trying to clip a raster with a mask layer. Deciding to stop the process - but "Executing" in the statusbar does not respond to a click on the red cross
---
Related issue(s): #27885 (relates), #28261 (duplicates)
Redmine related issue(s): [20063](https://issues.qgis.org/issues/20063), [20441](https://issues.qgis.org/issues/20441)
---
|
1.0
|
Allow cancelling gdal algorithms - Author Name: **Lene Fischer** (@LeneFischer)
Original Redmine Issue: [20060](https://issues.qgis.org/issues/20060)
Redmine category:processing/core
---
Trying to clip a raster with a mask layer. Deciding to stop the process - but "Executing" in the statusbar does not respond to a click on the red cross
---
Related issue(s): #27885 (relates), #28261 (duplicates)
Redmine related issue(s): [20063](https://issues.qgis.org/issues/20063), [20441](https://issues.qgis.org/issues/20441)
---
|
process
|
allow cancelling gdal algorithms author name lene fischer lenefischer original redmine issue redmine category processing core trying to clip a raster with a mask layer deciding to stop the process but executing in the statusbar does not respond to click at the red cross related issue s relates duplicates redmine related issue s
| 1
|
17,161
| 22,719,045,154
|
IssuesEvent
|
2022-07-06 06:31:07
|
gradle/gradle
|
https://api.github.com/repos/gradle/gradle
|
closed
|
Incremental annotation processing does not find dependent classes with `proc:only`
|
a:bug @execution affects-version:7.0 in:annotation-processing closed:invalid
|
When doing incremental annotation processing with `proc:only`, compilation may fail because of some missing classes.
For example when you have a class `MyImmutable` which depends on `SupportType`, and then you only change `MyImmutable`, then `SupportType` is not passed to the Java compiler by incremental annotation processing, so it fails:
```
gradle-7.1-inc-annotation-processing/src/main/java/mypkg/MyImmutable.java:23: error: cannot find symbol
SupportType supportType();
```
This did break in Gradle 7.1, and works with Gradle 7.0.
Here is a reproducer with instructions in its `README.md`:
[gradle-7.1-inc-annotation-processing.zip](https://github.com/gradle/gradle/files/6860916/gradle-7.1-inc-annotation-processing.zip)
---
cc: @gradle/execution
|
1.0
|
Incremental annotation processing does not find dependent classes with `proc:only` - When doing incremental annotation processing with `proc:only`, compilation may fail because of some missing classes.
For example when you have a class `MyImmutable` which depends on `SupportType`, and then you only change `MyImmutable`, then `SupportType` is not passed to the Java compiler by incremental annotation processing, so it fails:
```
gradle-7.1-inc-annotation-processing/src/main/java/mypkg/MyImmutable.java:23: error: cannot find symbol
SupportType supportType();
```
This did break in Gradle 7.1, and works with Gradle 7.0.
Here is a reproducer with instructions in its `README.md`:
[gradle-7.1-inc-annotation-processing.zip](https://github.com/gradle/gradle/files/6860916/gradle-7.1-inc-annotation-processing.zip)
---
cc: @gradle/execution
|
process
|
incremental annotation processing does not find dependent classes with proc only when doing incremental annotation processing with proc only compilation may fail because of some missing classes for example when you have a class myimmutable which depends on supporttype and then you only change myimmutable then supporttype is not passed to the java compiler by incremental annotation processing so it fails gradle inc annotation processing src main java mypkg myimmutable java error cannot find symbol supporttype supporttype this did break in gradle and works with gradle here is a reproducer with instructions in its readme md cc gradle execution
| 1
|
56,125
| 13,759,157,808
|
IssuesEvent
|
2020-10-07 02:11:39
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
ModuleNotFoundError: No module named 'tensorflow'
|
stalled stat:awaiting response subtype:windows type:build/install
|
- using Anaconda Python v3.8
- working with a Jupyter notebook
- tried installing from the command line with `conda install tensorflow`, but it loads an old version of tensorflow and then gives this error:
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-62c45ef75a16> in <module>
----> 1 from tensorflow.keras.models import Sequential
2 #modelleri oluşturmak için
3 from tensorflow.keras.layers import Dense
4 #katmanları da böyle oluştururuz
ModuleNotFoundError: No module named 'tensorflow'
|
1.0
|
ModuleNotFoundError: No module named 'tensorflow' - - using Anaconda Python v3.8
- working with a Jupyter notebook
- tried installing from the command line with `conda install tensorflow`, but it loads an old version of tensorflow and then gives this error:
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-62c45ef75a16> in <module>
----> 1 from tensorflow.keras.models import Sequential
2 #modelleri oluşturmak için
3 from tensorflow.keras.layers import Dense
4 #katmanları da böyle oluştururuz
ModuleNotFoundError: No module named 'tensorflow'
|
non_process
|
modulenotfounderror no module named tensorflow using anaconda python working with jupyter notebook try to add on cmd system conda install tensorflow but loading old version tensorflow and than give to me this error code from tensorflow keras models import sequential from tensorflow keras layers import dense modulenotfounderror traceback most recent call last in from tensorflow keras models import sequential modelleri oluşturmak için from tensorflow keras layers import dense katmanları da böyle oluştururuz modulenotfounderror no module named tensorflow
| 0
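A quick way to diagnose this kind of ModuleNotFoundError is to ask the running interpreter which environment it is using and whether the module can be located at all (a generic sketch, not specific to this report):

```python
import importlib.util
import sys

def module_available(name):
    """True if `name` can be located by the current interpreter."""
    return importlib.util.find_spec(name) is not None

# sys.executable reveals which Python (and hence which conda
# environment) the notebook kernel is actually running.
interpreter_path = sys.executable
```

If `module_available("tensorflow")` is False while `conda list` shows the package, the kernel is usually bound to a different environment than the one the install went into.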
|
36,200
| 14,949,406,432
|
IssuesEvent
|
2021-01-26 11:29:16
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
Support for base64 output in azurerm_key_vault_certificate
|
enhancement good first issue service/keyvault
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
azurerm_key_vault_certificate outputs certificate_data as a hex string, while azuread_service_principal_certificate requires base64 format.
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* `azuread_service_principal_certificate`
* `azurerm_key_vault_certificate`
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azuread_service_principal_certificate" "example" {
service_principal_id = azuread_service_principal.example.id
type = "AsymmetricX509Cert"
value = azurerm_key_vault_certificate.example.certificate_data # broken - is hex string, provider expects base64
end_date = azurerm_key_vault_certificate.example.certificate_attribute[0].expires
}
resource "azurerm_key_vault_certificate" "example" {
name = "generated-cert"
```
### References
Tested with:
```
Terraform v0.12.29
+ provider.azuread v0.11.0
+ provider.azurerm v2.22.0
```
|
1.0
|
Support for base64 output in azurerm_key_vault_certificate - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
azurerm_key_vault_certificate outputs certificate_data as a hex string, while azuread_service_principal_certificate requires base64 format.
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* `azuread_service_principal_certificate`
* `azurerm_key_vault_certificate`
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azuread_service_principal_certificate" "example" {
service_principal_id = azuread_service_principal.example.id
type = "AsymmetricX509Cert"
value = azurerm_key_vault_certificate.example.certificate_data # broken - is hex string, provider expects base64
end_date = azurerm_key_vault_certificate.example.certificate_attribute[0].expires
}
resource "azurerm_key_vault_certificate" "example" {
name = "generated-cert"
```
### References
Tested with:
```
Terraform v0.12.29
+ provider.azuread v0.11.0
+ provider.azurerm v2.22.0
```
|
non_process
|
support for output in azurerm key vault certificate community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description azurerm key vault certificate outputs certificate data as a hex string while azuread service principal certificate requires format new or affected resource s azuread service principal certificate azurerm key vault certificate potential terraform configuration hcl resource azuread service principal certificate example service principal id azuread service principal example id type value azurerm key vault certificate example certificate data broken is hex string provider expects end date azurerm key vault certificate example certificate attribute expires resource azurerm key vault certificate example name generated cert references tested with terraform provider azuread provider azurerm
| 0
|
3,095
| 6,108,454,615
|
IssuesEvent
|
2017-06-21 10:32:57
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_TextFile_ShellExecute [FAIL]
|
area-System.Diagnostics.Process bug
|
Found in https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_netcoreapp_windows_nt_debug_prtest/122/consoleFull#-4760281082d31e50d-1517-49fc-92b3-2ca637122019
Ran part of #21239.
```
System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_TextFile_ShellExecute [FAIL]
15:48:45 Could not start C:\Users\dotnet-bot\AppData\Local\Temp\ProcessStartInfoTests_dc2ppurg.ww0\StartInfo_TextFile_ShellExecute_1010_2203599d.txt UseShellExecute=True
15:48:45 Association details for '.txt'
15:48:45 ------------------------------
15:48:45 Open command: C:\Windows\system32\OpenWith.exe "%1"
15:48:45 ProgID: Didn't get expected HRESULT (1) when getting char count. HRESULT was 0x80070057
15:48:45
15:48:45 Expected: True
15:48:45 Actual: False
15:48:45 Stack Trace:
15:48:45 D:\j\workspace\outerloop_net---92aeb271\src\System.Diagnostics.Process\tests\ProcessStartInfoTests.cs(1022,0): at System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_TextFile_ShellExecute()
15:48:46
15:48:46
15:48:46 Usage: dotnet [options]
15:48:46
15:48:46 Usage: dotnet [path-to-application]
15:48:46
15:48:46
15:48:46
15:48:46 Options:
15:48:46
15:48:46 -h|--help Display help.
15:48:46
15:48:46 --version Display version.
15:48:46
15:48:46
15:48:46
15:48:46 path-to-application:
15:48:46
15:48:46 The path to an application .dll file to execute.
15:48:46
15:48:49 Finished: System.Diagnostics.Process.Tests
15:48:49
15:48:49 === TEST EXECUTION SUMMARY ===
15:48:49 D:\j\workspace\outerloop_net---92aeb271\Tools\tests.targets(345,5): warning : System.Diagnostics.Process.Tests Total: 222, Errors: 0, Failed: 1, Skipped: 2, Time: 5.381s [D:\j\workspace\outerloop_net---92aeb271\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
15:48:49 Trying to find crash dumps for project: System.Diagnostics.Process.Tests
15:48:49 No new dump file was found in C:\Users\DOTNET~1\AppData\Local\Temp\CoreRunCrashDumps
15:48:49 Finished running tests. End time=15:48:48.97, Exit code = 1
15:48:49 D:\j\workspace\outerloop_net---92aeb271\Tools\tests.targets(345,5): warning MSB3073: The command "D:\j\workspace\outerloop_net---92aeb271\bin/Windows_NT.AnyCPU.Debug/System.Diagnostics.Process.Tests/netstandard//RunTests.cmd D:\j\workspace\outerloop_net---92aeb271\bin/testhost/netcoreapp-Windows_NT-Debug-x64/" exited with code 1. [D:\j\workspace\outerloop_net---92aeb271\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
15:48:49
D:\j\workspace\outerloop_net---92aeb271\Tools\tests.targets(353,5): error : One or more tests failed while running tests from 'System.Diagnostics.Process.Tests' please check D:\j\workspace\outerloop_net---92aeb271\bin/Windows_NT.AnyCPU.Debug/System.Diagnostics.Process.Tests/netstandard/testResults.xml for details! [D:\j\workspace\outerloop_net---92aeb271\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
```
|
1.0
|
System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_TextFile_ShellExecute [FAIL] - Found in https://ci.dot.net/job/dotnet_corefx/job/master/job/outerloop_netcoreapp_windows_nt_debug_prtest/122/consoleFull#-4760281082d31e50d-1517-49fc-92b3-2ca637122019
Ran part of #21239.
```
System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_TextFile_ShellExecute [FAIL]
15:48:45 Could not start C:\Users\dotnet-bot\AppData\Local\Temp\ProcessStartInfoTests_dc2ppurg.ww0\StartInfo_TextFile_ShellExecute_1010_2203599d.txt UseShellExecute=True
15:48:45 Association details for '.txt'
15:48:45 ------------------------------
15:48:45 Open command: C:\Windows\system32\OpenWith.exe "%1"
15:48:45 ProgID: Didn't get expected HRESULT (1) when getting char count. HRESULT was 0x80070057
15:48:45
15:48:45 Expected: True
15:48:45 Actual: False
15:48:45 Stack Trace:
15:48:45 D:\j\workspace\outerloop_net---92aeb271\src\System.Diagnostics.Process\tests\ProcessStartInfoTests.cs(1022,0): at System.Diagnostics.Tests.ProcessStartInfoTests.StartInfo_TextFile_ShellExecute()
15:48:46
15:48:46
15:48:46 Usage: dotnet [options]
15:48:46
15:48:46 Usage: dotnet [path-to-application]
15:48:46
15:48:46
15:48:46
15:48:46 Options:
15:48:46
15:48:46 -h|--help Display help.
15:48:46
15:48:46 --version Display version.
15:48:46
15:48:46
15:48:46
15:48:46 path-to-application:
15:48:46
15:48:46 The path to an application .dll file to execute.
15:48:46
15:48:49 Finished: System.Diagnostics.Process.Tests
15:48:49
15:48:49 === TEST EXECUTION SUMMARY ===
15:48:49 D:\j\workspace\outerloop_net---92aeb271\Tools\tests.targets(345,5): warning : System.Diagnostics.Process.Tests Total: 222, Errors: 0, Failed: 1, Skipped: 2, Time: 5.381s [D:\j\workspace\outerloop_net---92aeb271\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
15:48:49 Trying to find crash dumps for project: System.Diagnostics.Process.Tests
15:48:49 No new dump file was found in C:\Users\DOTNET~1\AppData\Local\Temp\CoreRunCrashDumps
15:48:49 Finished running tests. End time=15:48:48.97, Exit code = 1
15:48:49 D:\j\workspace\outerloop_net---92aeb271\Tools\tests.targets(345,5): warning MSB3073: The command "D:\j\workspace\outerloop_net---92aeb271\bin/Windows_NT.AnyCPU.Debug/System.Diagnostics.Process.Tests/netstandard//RunTests.cmd D:\j\workspace\outerloop_net---92aeb271\bin/testhost/netcoreapp-Windows_NT-Debug-x64/" exited with code 1. [D:\j\workspace\outerloop_net---92aeb271\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
15:48:49
D:\j\workspace\outerloop_net---92aeb271\Tools\tests.targets(353,5): error : One or more tests failed while running tests from 'System.Diagnostics.Process.Tests' please check D:\j\workspace\outerloop_net---92aeb271\bin/Windows_NT.AnyCPU.Debug/System.Diagnostics.Process.Tests/netstandard/testResults.xml for details! [D:\j\workspace\outerloop_net---92aeb271\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
```
|
process
|
system diagnostics tests processstartinfotests startinfo textfile shellexecute found in ran part of system diagnostics tests processstartinfotests startinfo textfile shellexecute could not start c users dotnet bot appdata local temp processstartinfotests startinfo textfile shellexecute txt useshellexecute true association details for txt open command c windows openwith exe progid didn t get expected hresult when getting char count hresult was expected true actual false stack trace d j workspace outerloop net src system diagnostics process tests processstartinfotests cs at system diagnostics tests processstartinfotests startinfo textfile shellexecute usage dotnet usage dotnet options h help display help version display version path to application the path to an application dll file to execute finished system diagnostics process tests test execution summary d j workspace outerloop net tools tests targets warning system diagnostics process tests total errors failed skipped time trying to find crash dumps for project system diagnostics process tests no new dump file was found in c users dotnet appdata local temp coreruncrashdumps finished running tests end time exit code d j workspace outerloop net tools tests targets warning the command d j workspace outerloop net bin windows nt anycpu debug system diagnostics process tests netstandard runtests cmd d j workspace outerloop net bin testhost netcoreapp windows nt debug exited with code d j workspace outerloop net tools tests targets error one or more tests failed while running tests from system diagnostics process tests please check d j workspace outerloop net bin windows nt anycpu debug system diagnostics process tests netstandard testresults xml for details
| 1
|
5,577
| 8,414,995,093
|
IssuesEvent
|
2018-10-13 09:42:04
|
bitshares/bitshares-community-ui
|
https://api.github.com/repos/bitshares/bitshares-community-ui
|
closed
|
Signup component UI
|
Signup feature process ui
|
Use Login.vue for examples
- Should have two forms (password/private key) (see zeplin) with tabs switch
- Should be able to copy generated password from the input field to clipboard
- Should have validations: all fields are required, passwords/pins should match, account name shouldn't be used on bitshares
|
1.0
|
Signup component UI - Use Login.vue for examples
- Should have two forms (password/private key) (see zeplin) with tabs switch
- Should be able to copy generated password from the input field to clipboard
- Should have validations: all fields are required, passwords/pins should match, account name shouldn't be used on bitshares
|
process
|
signup component ui use login vue for examples should have two forms password private key see zeplin with tabs switch should be able to copy generated password from the input field to clipboard should have validations all fields are required passwords pins should match account name shouldn t be used on bitshares
| 1
|
12,106
| 14,740,400,716
|
IssuesEvent
|
2021-01-07 09:01:47
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Admin - Configure EMail to send Deactivation msg section label
|
anc-ui anp-2.5 ant-enhancement grt-ui processes
|
In GitLab by @kdjstudios on Nov 5, 2018, 07:40
Hello Team,
I believe over the last year the functionality of this section has expanded and the label of "Configure EMail to send Deactivation msg" no longer accurately represents it. If we could please simplify this label to be "Email Notifications" I feel that would better correlate what this sections functionality now does. Please discuss this with @tim.traylor in the morning meetings for approval.
|
1.0
|
Admin - Configure EMail to send Deactivation msg section label - In GitLab by @kdjstudios on Nov 5, 2018, 07:40
Hello Team,
I believe over the last year the functionality of this section has expanded and the label of "Configure EMail to send Deactivation msg" no longer accurately represents it. If we could please simplify this label to be "Email Notifications" I feel that would better correlate what this sections functionality now does. Please discuss this with @tim.traylor in the morning meetings for approval.
|
process
|
admin configure email to send deactivation msg section label in gitlab by kdjstudios on nov hello team i believe over the last year the functionality of this section has expanded and the label of configure email to send deactivation msg no longer accurately represents it if we could please simplify this label to be email notifications i feel that would better correlate what this sections functionality now does please discuss this with tim traylor in the morning meetings for approval
| 1
|
19,488
| 25,798,831,979
|
IssuesEvent
|
2022-12-10 20:41:01
|
bbrewington/dbt-bigquery-information-schema
|
https://api.github.com/repos/bbrewington/dbt-bigquery-information-schema
|
closed
|
[FEATURE] Get a dbt "hello world" project started
|
status/in_process
|
Acceptance Criteria:
- Connects to BigQuery (a.k.a. GBQ)
- User-specific settings in profiles.yml (with instructions in README prompting user to update)
- dbt_project.yml is project-specific and not user-specific
- at least one GBQ information schema view added in models/ folder
|
1.0
|
[FEATURE] Get a dbt "hello world" project started - Acceptance Criteria:
- Connects to BigQuery (a.k.a. GBQ)
- User-specific settings in profiles.yml (with instructions in README prompting user to update)
- dbt_project.yml is project-specific and not user-specific
- at least one GBQ information schema view added in models/ folder
|
process
|
get a dbt hello world project started acceptance criteria connects to bigquery a k a gbq user specific settings in profiles yml with instructions in readme prompting user to update dbt project yml is project specific and not user specific at least one gbq information schema view added in models folder
| 1
|
154,428
| 19,724,715,879
|
IssuesEvent
|
2022-01-13 18:45:55
|
Techini/vulnado
|
https://api.github.com/repos/Techini/vulnado
|
opened
|
CVE-2021-42550 (Medium) detected in logback-classic-1.2.3.jar
|
security vulnerability
|
## CVE-2021-42550 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>logback-classic-1.2.3.jar</b></p></summary>
<p>logback-classic module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-logging-2.1.2.RELEASE.jar
- :x: **logback-classic-1.2.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Techini/vulnado/git/commits/f04e4d76a32040dafd17ce3d872dcd7df5a9dca4">f04e4d76a32040dafd17ce3d872dcd7df5a9dca4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.
<p>Publish Date: 2021-12-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42550>CVE-2021-42550</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://logback.qos.ch/news.html">http://logback.qos.ch/news.html</a></p>
<p>Release Date: 2021-12-16</p>
<p>Fix Resolution: ch.qos.logback:logback-classic:1.2.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-42550 (Medium) detected in logback-classic-1.2.3.jar - ## CVE-2021-42550 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>logback-classic-1.2.3.jar</b></p></summary>
<p>logback-classic module</p>
<p>Library home page: <a href="http://logback.qos.ch">http://logback.qos.ch</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-logging-2.1.2.RELEASE.jar
- :x: **logback-classic-1.2.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Techini/vulnado/git/commits/f04e4d76a32040dafd17ce3d872dcd7df5a9dca4">f04e4d76a32040dafd17ce3d872dcd7df5a9dca4</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.
<p>Publish Date: 2021-12-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42550>CVE-2021-42550</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://logback.qos.ch/news.html">http://logback.qos.ch/news.html</a></p>
<p>Release Date: 2021-12-16</p>
<p>Fix Resolution: ch.qos.logback:logback-classic:1.2.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in logback classic jar cve medium severity vulnerability vulnerable library logback classic jar logback classic module library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository ch qos logback logback classic logback classic jar dependency hierarchy spring boot starter release jar root library spring boot starter logging release jar x logback classic jar vulnerable library found in head commit a href vulnerability details in logback version and prior versions an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from ldap servers publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos logback logback classic step up your open source security game with whitesource
| 0
|
13,122
| 15,505,261,847
|
IssuesEvent
|
2021-03-11 15:10:01
|
dluiscosta/weather_api
|
https://api.github.com/repos/dluiscosta/weather_api
|
closed
|
Use Flask's Blueprints
|
development process enhancement intrinsic motivation
|
Refactor endpoint definitions at ```app/api.py``` to make use of Flask's Blueprints.
|
1.0
|
Use Flask's Blueprints - Refactor endpoint definitions at ```app/api.py``` to make use of Flask's Blueprints.
|
process
|
use flask s blueprints refactor endpoint definitions at app api py to make use of flask s blueprints
| 1
|
306,596
| 26,483,052,575
|
IssuesEvent
|
2023-01-17 15:58:32
|
cosmos/cosmos-sdk
|
https://api.github.com/repos/cosmos/cosmos-sdk
|
closed
|
WaitForHeight test util doesn't check the node's "local" height
|
T: Tests T:Sprint
|
We've seen many times tests that do stuff like "1. send tx, 2. check output, 3. query something related to the tx/get tx with hash" which should fine, but they would fail sometimes in the CI or even locally. The errors would always be stuff like "tx not found", "x was expected to be equal to y", etc Like the tx never happened or the height that we are querying for doesn't exist.
These tests use the util method `WaitForHeight` which waits until a certain height is returned by queries to the Tendermint endpoint `Status`.
In these tests, both the query to get the height and the query to get whatever the test is querying for are done on the same node.
So what's happening here is that Tendermint is already replying with the latest block height before the Cosmos app had the chance to call `Commit()` thus storing this height.
My proposed fix is to add a way to query the baseApp's height, which is the one that matters when doing queries. Or Tendermint should return the latest block after its app calls Commit()?
|
1.0
|
WaitForHeight test util doesn't check the node's "local" height - We've seen many times tests that do stuff like "1. send tx, 2. check output, 3. query something related to the tx/get tx with hash" which should fine, but they would fail sometimes in the CI or even locally. The errors would always be stuff like "tx not found", "x was expected to be equal to y", etc Like the tx never happened or the height that we are querying for doesn't exist.
These tests use the util method `WaitForHeight` which waits until a certain height is returned by queries to the Tendermint endpoint `Status`.
In these tests, both the query to get the height and the query to get whatever the test is querying for are done on the same node.
So what's happening here is that Tendermint is already replying with the latest block height before the Cosmos app had the chance to call `Commit()` thus storing this height.
My proposed fix is to add a way to query the baseApp's height, which is the one that matters when doing queries. Or Tendermint should return the latest block after its app calls Commit()?
|
non_process
|
waitforheight test util doesn t check the node s local height we ve seen many times tests that do stuff like send tx check output query something related to the tx get tx with hash which should fine but they would fail sometimes in the ci or even locally the errors would always be stuff like tx not found x was expected to be equal to y etc like the tx never happened or the height that we are querying for doesn t exist these tests use the util method waitforheight which waits until a certain height is returned by queries to the tendermint endpoint status in these tests both the query to get the height and the query to get whatever the test is querying for are done on the same node so what s happening here is that tendermint is already replying with the latest block height before the cosmos app had the chance to call commit thus storing this height my proposed fix is to add a way to query the baseapp s height which is the one that matters when doing queries or tendermint should return the latest block after its app calls commit
| 0
|
16,750
| 21,918,524,753
|
IssuesEvent
|
2022-05-22 07:46:49
|
q191201771/lal
|
https://api.github.com/repos/q191201771/lal
|
closed
|
After starting via Docker and pushing an rtmp stream, the error ERROR [lal: buffer too short(avc.go:532)] is reported
|
#Question *In process
|
Following the README.md, I started the server via docker and pushed an rtmp stream, and the errors below keep scrolling (pushing an rtsp stream produces no errors):
```
2022/05/18 06:51:03.701809 INFO initial log succ. - config.go:235
2022/05/18 06:51:03.701894 INFO
__ ___ __
/ / / | / /
/ / / /| | / /
/ /___/ ___ |/ /___
/_____/_/ |_/_____/
- config.go:238
2022/05/18 06:51:03.702392 INFO load conf file succ. filename=conf/lalserver.conf.json, raw content={ "# doc of config": "https://pengrl.com/lal/#/ConfigBrief", "conf_version": "v0.3.1", "rtmp": { "enable": true, "addr": ":1935", "gop_num": 0, "merge_write_size": 0, "add_dummy_audio_enable": false, "add_dummy_audio_wait_audio_ms": 150 }, "default_http": { "http_listen_addr": ":8080", "https_listen_addr": ":4433", "https_cert_file": "./conf/cert.pem", "https_key_file": "./conf/key.pem" }, "httpflv": { "enable": true, "enable_https": true, "url_pattern": "/", "gop_num": 0 }, "hls": { "enable": true, "enable_https": true, "url_pattern": "/hls/", "out_path": "./lal_record/hls/", "fragment_duration_ms": 3000, "fragment_num": 6, "delete_threshold": 6, "cleanup_mode": 1, "use_memory_as_disk_flag": false }, "httpts": { "enable": true, "enable_https": true, "url_pattern": "/", "gop_num": 0 }, "rtsp": { "enable": true, "addr": ":5544", "out_wait_key_frame_flag": true }, "record": { "enable_flv": false, "flv_out_path": "./lal_record/flv/", "enable_mpegts": false, "mpegts_out_path": "./lal_record/mpegts" }, "relay_push": { "enable": false, "addr_list":[ ] }, "static_relay_pull": { "enable": false, "addr": "" }, "http_api": { "enable": true, "addr": ":8083" }, "server_id": "1", "http_notify": { "enable": false, "update_interval_sec": 5, "on_update": "http://127.0.0.1:10101/on_update", "on_pub_start": "http://127.0.0.1:10101/on_pub_start", "on_pub_stop": "http://127.0.0.1:10101/on_pub_stop", "on_sub_start": "http://127.0.0.1:10101/on_sub_start", "on_sub_stop": "http://127.0.0.1:10101/on_sub_stop", "on_relay_pull_start": "http://127.0.0.1:10101/on_relay_pull_start", "on_relay_pull_stop": "http://127.0.0.1:10101/on_relay_pull_stop", "on_rtmp_connect": "http://127.0.0.1:10101/on_rtmp_connect", "on_server_start": "http://127.0.0.1:10101/on_server_start" }, "simple_auth": { "key": "q191201771", "dangerous_lal_secret": "pengrl", "pub_rtmp_enable": false, "sub_rtmp_enable": false, 
"sub_httpflv_enable": false, "sub_httpts_enable": false, "pub_rtsp_enable": false, "sub_rtsp_enable": false, "hls_m3u8_enable": false }, "pprof": { "enable": true, "addr": ":8084" }, "log": { "level": 1, "filename": "./logs/lalserver.log", "is_to_stdout": true, "is_rotate_daily": true, "short_file_flag": true, "timestamp_flag": true, "timestamp_with_ms_flag": true, "level_flag": true, "assert_behavior": 1 }, "debug": { "log_group_interval_sec": 30, "log_group_max_group_num": 10, "log_group_max_sub_num_per_group": 10 } } parsed=&{ConfVersion:v0.3.1 RtmpConfig:{Enable:true Addr::1935 GopNum:0 MergeWriteSize:0 AddDummyAudioEnable:false AddDummyAudioWaitAudioMs:150} DefaultHttpConfig:{CommonHttpAddrConfig:{HttpListenAddr::8080 HttpsListenAddr::4433 HttpsCertFile:./conf/cert.pem HttpsKeyFile:./conf/key.pem}} HttpflvConfig:{CommonHttpServerConfig:{CommonHttpAddrConfig:{HttpListenAddr::8080 HttpsListenAddr::4433 HttpsCertFile:./conf/cert.pem HttpsKeyFile:./conf/key.pem} Enable:true EnableHttps:true UrlPattern:/} GopNum:0} HlsConfig:{CommonHttpServerConfig:{CommonHttpAddrConfig:{HttpListenAddr::8080 HttpsListenAddr::4433 HttpsCertFile:./conf/cert.pem HttpsKeyFile:./conf/key.pem} Enable:true EnableHttps:true UrlPattern:/hls/} UseMemoryAsDiskFlag:false MuxerConfig:{OutPath:./lal_record/hls/ FragmentDurationMs:3000 FragmentNum:6 DeleteThreshold:6 CleanupMode:1}} HttptsConfig:{CommonHttpServerConfig:{CommonHttpAddrConfig:{HttpListenAddr::8080 HttpsListenAddr::4433 HttpsCertFile:./conf/cert.pem HttpsKeyFile:./conf/key.pem} Enable:true EnableHttps:true UrlPattern:/} GopNum:0} RtspConfig:{Enable:true Addr::5544 OutWaitKeyFrameFlag:true} RecordConfig:{EnableFlv:false FlvOutPath:./lal_record/flv/ EnableMpegts:false MpegtsOutPath:./lal_record/mpegts} RelayPushConfig:{Enable:false AddrList:[]} StaticRelayPullConfig:{Enable:false Addr:} HttpApiConfig:{Enable:true Addr::8083} ServerId:1 HttpNotifyConfig:{Enable:false UpdateIntervalSec:5 
OnServerStart:http://127.0.0.1:10101/on_server_start OnUpdate:http://127.0.0.1:10101/on_update OnPubStart:http://127.0.0.1:10101/on_pub_start OnPubStop:http://127.0.0.1:10101/on_pub_stop OnSubStart:http://127.0.0.1:10101/on_sub_start OnSubStop:http://127.0.0.1:10101/on_sub_stop OnRelayPullStart:http://127.0.0.1:10101/on_relay_pull_start OnRelayPullStop:http://127.0.0.1:10101/on_relay_pull_stop OnRtmpConnect:http://127.0.0.1:10101/on_rtmp_connect} SimpleAuthConfig:{Key:q191201771 DangerousLalSecret:pengrl PubRtmpEnable:false SubRtmpEnable:false SubHttpflvEnable:false SubHttptsEnable:false PubRtspEnable:false SubRtspEnable:false HlsM3u8Enable:false} PprofConfig:{Enable:true Addr::8084} LogConfig:{Level:1 Filename:./logs/lalserver.log IsToStdout:true IsRotateDaily:true ShortFileFlag:true TimestampFlag:true TimestampWithMsFlag:true LevelFlag:true AssertBehavior:1} DebugConfig:{LogGroupIntervalSec:30 LogGroupMaxGroupNum:10 LogGroupMaxSubNumPerGroup:10}} - config.go:326
2022/05/18 06:51:03.702609 INFO start: 2022-05-18 06:51:03.7 - base.go:33
2022/05/18 06:51:03.702645 INFO wd: /lal - base.go:34
2022/05/18 06:51:03.702659 INFO args: ./bin/lalserver -c conf/lalserver.conf.json - base.go:35
2022/05/18 06:51:03.702676 INFO bininfo: GitTag=. GitCommitLog=. GitStatus=cleanly. BuildTime=2022.05.17.123652. GoVersion=go version go1.16.4 linux/amd64. runtime=linux/amd64. - base.go:36
2022/05/18 06:51:03.702689 INFO version: lal v0.29.1 (github.com/q191201771/lal) - base.go:37
2022/05/18 06:51:03.702706 INFO github: https://github.com/q191201771/lal - base.go:38
2022/05/18 06:51:03.702720 INFO doc: https://pengrl.com/lal - base.go:39
2022/05/18 06:51:03.702844 INFO start web pprof listen. addr=:8084 - server_manager.go:154
2022/05/18 06:51:03.702974 INFO add http listen for httpflv. addr=:8080, pattern=/ - server_manager.go:176
2022/05/18 06:51:03.703462 INFO add https listen for httpflv. addr=:4433, pattern=/ - server_manager.go:187
2022/05/18 06:51:03.703485 INFO add http listen for httpts. addr=:8080, pattern=/ - server_manager.go:176
2022/05/18 06:51:03.703496 INFO add https listen for httpts. addr=:4433, pattern=/ - server_manager.go:187
2022/05/18 06:51:03.703508 INFO add http listen for hls. addr=:8080, pattern=/hls/ - server_manager.go:176
2022/05/18 06:51:03.703521 INFO add https listen for hls. addr=:4433, pattern=/hls/ - server_manager.go:187
2022/05/18 06:51:03.703561 INFO start rtmp server listen. addr=:1935 - server.go:53
2022/05/18 06:51:03.703590 INFO start rtsp server listen. addr=:5544 - server.go:71
2022/05/18 06:51:03.703676 INFO start http-api server listen. addr=:8083 - http_api.go:41
2022/05/18 06:51:22.039064 INFO accept a rtmp connection. remoteAddr=10.100.105.46:35018 - server.go:77
2022/05/18 06:51:22.039162 DEBUG [NAZACONN1] lifecycle new connection. net.Conn=0xc0003a8000, naza.Connection=0xc0003b0000 - connection.go:192
2022/05/18 06:51:22.039204 INFO [RTMPPUBSUB1] lifecycle new rtmp ServerSession. session=0xc00038c600, remote addr=10.100.105.46:35018 - server_session.go:120
2022/05/18 06:51:22.039418 DEBUG handshake simple mode. - handshake.go:236
2022/05/18 06:51:22.039452 INFO [RTMPPUBSUB1] < R Handshake C0+C1. - server_session.go:218
2022/05/18 06:51:22.039465 INFO [RTMPPUBSUB1] > W Handshake S0+S1+S2. - server_session.go:220
2022/05/18 06:51:22.040235 INFO [RTMPPUBSUB1] < R Handshake C2. - server_session.go:228
2022/05/18 06:51:22.040301 INFO [RTMPPUBSUB1] < R connect('live'). tcUrl=rtmp://10.20.1.55:1935/live - server_session.go:383
2022/05/18 06:51:22.040326 INFO [RTMPPUBSUB1] > W Window Acknowledgement Size 5000000. - server_session.go:387
2022/05/18 06:51:22.040376 INFO [RTMPPUBSUB1] > W Set Peer Bandwidth. - server_session.go:392
2022/05/18 06:51:22.040411 INFO [RTMPPUBSUB1] > W SetChunkSize 4096. - server_session.go:397
2022/05/18 06:51:22.040447 INFO [RTMPPUBSUB1] > W _result('NetConnection.Connect.Success'). - server_session.go:402
2022/05/18 06:51:22.040964 DEBUG [RTMPPUBSUB1] read command message, ignore it. cmd=releaseStream, header={Csid:3 MsgLen:30 MsgTypeId:20 MsgStreamId:0 TimestampAbs:0}, b=len(core)=4096, rpos=25, wpos=30, hex=00000000 05 02 00 01 61 |....a|
- server_session.go:357
2022/05/18 06:51:22.041029 DEBUG [RTMPPUBSUB1] read command message, ignore it. cmd=FCPublish, header={Csid:3 MsgLen:26 MsgTypeId:20 MsgStreamId:0 TimestampAbs:0}, b=len(core)=4096, rpos=21, wpos=26, hex=00000000 05 02 00 01 61 |....a|
- server_session.go:357
2022/05/18 06:51:22.041047 INFO [RTMPPUBSUB1] < R createStream(). - server_session.go:414
2022/05/18 06:51:22.041058 INFO [RTMPPUBSUB1] > W _result(). - server_session.go:415
2022/05/18 06:51:22.041342 DEBUG [RTMPPUBSUB1] pubType=live - server_session.go:442
2022/05/18 06:51:22.041360 INFO [RTMPPUBSUB1] < R publish('a') - server_session.go:443
2022/05/18 06:51:22.041372 INFO [RTMPPUBSUB1] > W onStatus('NetStream.Publish.Start'). - server_session.go:445
2022/05/18 06:51:22.041485 INFO [GROUP1] lifecycle new group. group=0xc0003e2000, appName=live, streamName=a - group.go:113
2022/05/18 06:51:22.041506 DEBUG [GROUP1] [RTMPPUBSUB1] add rtmp pub session into group. - group__in.go:60
2022/05/18 06:51:22.041624 INFO [HLSMUXER1] lifecycle new hls muxer. muxer=0xc0003e6000, streamName=a - muxer.go:115
2022/05/18 06:51:22.041662 INFO [HLSMUXER1] start hls muxer. - muxer.go:120
2022/05/18 06:51:22.072395 DEBUG [GROUP1] cache rtmp metadata. size:212 - gop_cache.go:97
2022/05/18 06:51:22.072431 DEBUG [GROUP1] cache httpflv metadata. size:215 - gop_cache.go:97
2022/05/18 06:51:22.072471 WARN rtmp msg too short, ignore. header={Csid:4 MsgLen:5 MsgTypeId:9 MsgStreamId:1 TimestampAbs:0}, payload=00000000 17 00 00 00 00 |.....|
- rtmp2rtsp.go:80
2022/05/18 06:51:22.072497 DEBUG [GROUP1] cache rtmp video seq header. size:17 - gop_cache.go:108
2022/05/18 06:51:22.072516 DEBUG [GROUP1] cache httpflv video seq header. size:20 - gop_cache.go:108
2022/05/18 06:51:22.090319 DEBUG [0xc000384240] Buffer::Grow. realloc, this round need=131072, copy=0, cap=(4096 -> 131072) - buffer.go:147
2022/05/18 06:51:22.141917 DEBUG [0xc000384240] Buffer::Grow. realloc, this round need=262144, copy=0, cap=(131072 -> 262144) - buffer.go:147
2022/05/18 06:51:22.445271 WARN [RTMP2MPEGTS1] rtmp msg too short, ignore. header={Csid:4 MsgLen:5 MsgTypeId:9 MsgStreamId:1 TimestampAbs:0}, payload=00000000 17 00 00 00 00 |.....|
- rtmp2mpegts.go:148
2022/05/18 06:51:22.445402 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:114425 MsgTypeId:9 MsgStreamId:1 TimestampAbs:33}, payload=00000000 27 01 00 00 00 00 00 00 01 27 64 00 2a ac ce 80 |'........'d.*...|
00000010 78 02 27 e5 c0 5a 80 81 01 78 00 00 03 00 08 00 |x.'..Z...x......|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445460 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:195894 MsgTypeId:9 MsgStreamId:1 TimestampAbs:67}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e2 01 09 ad ff fe |'........!......|
00000010 1f ae 1d 4e 51 2e 70 f5 47 d9 86 70 51 9a 2e 92 |...NQ.p.G..pQ...|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445509 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:187412 MsgTypeId:9 MsgStreamId:1 TimestampAbs:100}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e4 02 09 ad ff fe |'........!......|
00000010 30 39 e1 fc a0 79 45 7b d1 bd 0d 08 41 16 5f 5e |09...yE{....A._^|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445544 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:81573 MsgTypeId:9 MsgStreamId:1 TimestampAbs:133}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e6 03 08 eb 7f eb |'........!......|
00000010 0a 80 cf 16 21 ac 62 f9 72 3f a8 3e f1 40 4a c1 |....!.b.r?.>.@J.|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445592 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:18384 MsgTypeId:9 MsgStreamId:1 TimestampAbs:167}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e8 04 08 5a df f2 |'........!...Z..|
00000010 51 3c a0 b6 de 04 a7 ce 71 03 86 44 82 15 fa 43 |Q<......q..D...C|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445665 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:9399 MsgTypeId:9 MsgStreamId:1 TimestampAbs:200}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ea 05 08 62 df f3 |'........!...b..|
00000010 80 ff 54 cf 62 03 d0 ee 91 c0 4a ac f4 74 8e 64 |..T.b.....J..t.d|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445751 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5583 MsgTypeId:9 MsgStreamId:1 TimestampAbs:233}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ec 06 08 6a df 77 |'........!...j.w|
00000010 f9 46 7e f7 1d c2 4c 70 6f 81 00 69 3b ea f9 c1 |.F~...Lpo..i;...|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445824 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3711 MsgTypeId:9 MsgStreamId:1 TimestampAbs:267}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ee 07 08 72 df f5 |'........!...r..|
00000010 1c 28 11 5e 29 a0 40 ad 45 5e 4b 09 ac 00 00 03 |.(.^).@.E^K.....|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445877 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4791 MsgTypeId:9 MsgStreamId:1 TimestampAbs:300}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f0 08 08 72 df f5 |'........!...r..|
00000010 b9 d4 6b 34 fb c2 a1 9e 7e 59 83 bd c8 00 05 19 |..k4....~Y......|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445926 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3818 MsgTypeId:9 MsgStreamId:1 TimestampAbs:333}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f2 09 08 72 df 1c |'........!...r..|
00000010 69 19 7c 06 ed ea af ac 81 18 33 69 a2 4a eb 1c |i.|.......3i.J..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445974 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3965 MsgTypeId:9 MsgStreamId:1 TimestampAbs:367}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f4 0a 08 72 df d1 |'........!...r..|
00000010 70 c8 b5 e4 97 11 e5 87 17 3a f0 00 00 0c 7f 39 |p........:.....9|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.446045 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3640 MsgTypeId:9 MsgStreamId:1 TimestampAbs:400}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f6 0b 08 72 df 80 |'........!...r..|
00000010 11 2c 00 00 03 00 00 03 02 0a c2 24 5a b8 7e 81 |.,.........$Z.~.|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.446082 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3537 MsgTypeId:9 MsgStreamId:1 TimestampAbs:433}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f8 0c 08 72 df b6 |'........!...r..|
00000010 f8 1a 6d 68 e7 a0 84 f3 b9 91 32 08 ee 75 0d 4d |..mh......2..u.M|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.446128 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3704 MsgTypeId:9 MsgStreamId:1 TimestampAbs:467}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fa 0d 08 72 df a1 |'........!...r..|
00000010 e0 82 c6 d2 8d 3a 00 02 84 00 0b 9a a3 d4 09 84 |.....:..........|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.455809 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4723 MsgTypeId:9 MsgStreamId:1 TimestampAbs:500}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fc 0e 08 72 df 08 |'........!...r..|
00000010 4f cd 44 3e 30 10 79 1c 58 d7 de 5c a3 e4 08 d2 |O.D>0.y.X..\....|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.468230 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:9017 MsgTypeId:9 MsgStreamId:1 TimestampAbs:533}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fe 0f 08 72 df 9c |'........!...r..|
00000010 b9 23 cb 64 9c 5a 59 fd ac 5c bf d7 1c f4 b3 49 |.#.d.ZY..\.....I|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.468282 ERROR assert failed. excepted=<nil>, but actual=lal.sdp: fxxk(pack.go:38) - rtmp2rtsp.go:145
2022/05/18 06:51:22.478600 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4009 MsgTypeId:9 MsgStreamId:1 TimestampAbs:567}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e0 10 08 7a df b7 |'........!...z..|
00000010 4e 8e 48 f0 82 85 00 0b 12 56 00 25 96 39 65 85 |N.H......V.%.9e.|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.488330 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:1908 MsgTypeId:9 MsgStreamId:1 TimestampAbs:600}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e2 11 08 7a df 71 |'........!...z.q|
00000010 c0 e3 32 0c 99 0e ce 10 6e e4 b4 6d f6 dc c6 b8 |..2.....n..m....|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.497646 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:1746 MsgTypeId:9 MsgStreamId:1 TimestampAbs:633}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e4 12 08 7a df 2a |'........!...z.*|
00000010 b9 66 ad bc 36 cb 2a 83 43 39 b1 8d 23 d9 1a f9 |.f..6.*.C9..#...|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.507055 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:1482 MsgTypeId:9 MsgStreamId:1 TimestampAbs:667}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e6 13 08 7a df 0a |'........!...z..|
00000010 13 55 8d 7c df 6b 84 11 0d 3e 06 ee 6a 60 f4 c2 |.U.|.k...>..j`..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.516260 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:1391 MsgTypeId:9 MsgStreamId:1 TimestampAbs:700}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e8 14 08 7a df 23 |'........!...z.#|
00000010 9b 37 3a 35 f5 5b b1 67 9d 8e 07 3c b1 b9 b6 b4 |.7:5.[.g...<....|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.525094 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:428 MsgTypeId:9 MsgStreamId:1 TimestampAbs:733}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ea 15 08 7a df 0d |'........!...z..|
00000010 4f 9b 09 9a 9a 1e ab f9 ce 83 57 c1 44 eb e1 3c |O.........W.D..<|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.554435 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:840 MsgTypeId:9 MsgStreamId:1 TimestampAbs:767}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ec 16 08 7a df a1 |'........!...z..|
00000010 64 3e a3 00 07 e2 9d 05 bf 40 00 3e cf 82 58 89 |d>.......@.>..X.|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.594061 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:958 MsgTypeId:9 MsgStreamId:1 TimestampAbs:800}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ee 17 08 7a df 4e |'........!...z.N|
00000010 f6 49 01 e0 5f 3e f9 db 47 c7 8b 3b 5c 35 8b d2 |.I.._>..G..;\5..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.623322 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:842 MsgTypeId:9 MsgStreamId:1 TimestampAbs:833}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f0 18 08 7a df 18 |'........!...z..|
00000010 b7 ff 0f ca 8f 66 21 69 b9 4b 6b 43 fb aa dc 6f |.....f!i.KkC...o|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.668516 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:43011 MsgTypeId:9 MsgStreamId:1 TimestampAbs:867}, payload=00000000 27 01 00 00 00 00 00 00 01 27 64 00 2a ac ce 80 |'........'d.*...|
00000010 78 02 27 e5 c0 5a 80 81 01 78 00 00 03 00 08 00 |x.'..Z...x......|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.689528 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4968 MsgTypeId:9 MsgStreamId:1 TimestampAbs:900}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e2 01 08 7a df d1 |'........!...z..|
00000010 2f 44 0f 35 c7 f8 6e 05 d0 3b 84 25 26 28 00 00 |/D.5..n..;.%&(..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.720182 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4163 MsgTypeId:9 MsgStreamId:1 TimestampAbs:933}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e4 02 08 7a df 38 |'........!...z.8|
00000010 15 07 7c 76 fb fb 09 78 c2 f2 0f dd a9 a1 cb 03 |..|v...x........|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.760553 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3542 MsgTypeId:9 MsgStreamId:1 TimestampAbs:967}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e6 03 08 7a df 1f |'........!...z..|
00000010 7a c9 52 ed fc 37 46 01 ea 58 73 0c 10 23 1e 76 |z.R..7F..Xs..#.v|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.790974 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3707 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1000}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e8 04 08 7a df 15 |'........!...z..|
00000010 d6 37 aa d8 02 44 73 00 4a 73 30 2b 6b c2 03 83 |.7...Ds.Js0+k...|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.821293 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3789 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1033}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ea 05 08 7a df 69 |'........!...z.i|
00000010 f0 b4 06 cd b1 49 6c bb 20 00 00 03 02 ca 0f fc |.....Il. .......|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.862143 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4416 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1067}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ec 06 08 7a df 21 |'........!...z.!|
00000010 77 ef 3f 00 09 a8 b1 90 33 07 96 fd 3d 80 07 dd |w.?.....3...=...|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.891919 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:2058 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1100}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ee 07 08 7a df 22 |'........!...z."|
00000010 e7 24 68 00 04 16 90 f6 24 dc 64 00 03 54 4b 31 |.$h.....$.d..TK1|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.922509 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4640 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1133}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f0 08 08 6a df f1 |'........!...j..|
00000010 16 9e 69 bc 80 81 a7 23 e3 dd c8 54 0e 47 ae fd |..i....#...T.G..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.953150 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3984 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1167}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f2 09 08 6a df ce |'........!...j..|
00000010 b4 87 23 a6 9d 5c 80 0f 64 bf 03 02 00 00 03 00 |..#..\..d.......|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.994122 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5067 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1200}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f4 0a 08 6a df 9d |'........!...j..|
00000010 06 60 01 0b 3f 76 4d 80 00 02 12 9b 8c 32 d4 56 |.`..?vM......2.V|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.025574 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:6174 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1233}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f6 0b 08 6a df cd |'........!...j..|
00000010 00 09 bc 69 07 29 10 a0 00 00 62 ea 56 41 dc 71 |...i.)....b.VA.q|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.087334 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5705 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1267}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f8 0c 08 6a df d0 |'........!...j..|
00000010 2b e7 ea fa d7 87 db 90 cc af fd af 5f c0 d6 58 |+..........._..X|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.128447 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5575 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1300}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fa 0d 08 6a df 55 |'........!...j.U|
00000010 92 81 bd 19 e5 ef 6b 9d 93 fb 76 ba 2b 40 00 00 |......k...v.+@..|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.159408 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5442 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1333}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fc 0e 08 6a df 6a |'........!...j.j|
00000010 d3 57 1c 04 6a 7c 12 50 00 cd 60 4b b3 91 9c f9 |.W..j|.P..`K....|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.190616 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5710 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1367}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fe 0f 08 6a df 6a |'........!...j.j|
00000010 88 40 25 e6 91 80 00 00 f8 47 a0 a4 91 9f 48 68 |.@%......G....Hh|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.221367 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4913 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1400}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e0 10 08 6a df 4c |'........!...j.L|
00000010 97 a8 34 ef ef 47 f8 6f 15 a3 7f bf 00 54 e5 7d |..4..G.o.....T.}|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.252091 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4373 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1433}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e2 11 08 6a df 54 |'........!...j.T|
00000010 2a 46 01 4b e0 5e 48 00 00 03 01 95 4e eb 2c 31 |*F.K.^H.....N.,1|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.292656 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3598 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1467}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e4 12 08 6a df 5f |'........!...j._|
00000010 35 7b 76 c9 b9 95 84 47 dd 26 00 00 af a4 0e 2e |5{v....G.&......|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.323350 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4017 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1500}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e6 13 08 6a df 6a |'........!...j.j|
00000010 c2 00 27 0f 44 22 0e f5 52 53 52 c2 30 57 02 1e |..'.D"..RSR.0W..|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.354243 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5034 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1533}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e8 14 08 6a df 6b |'........!...j.k|
00000010 11 28 00 09 2c 40 00 01 fd d8 0a c2 d7 5a 1c 2b |.(..,@.......Z.+|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.395086 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4753 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1567}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ea 15 08 6a df e3 |'........!...j..|
00000010 06 f8 55 72 3f 3e 7b e1 05 8f a0 00 00 c5 28 ad |..Ur?>{.......(.|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.425369 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3083 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1600}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ec 16 08 6a df 10 |'........!...j..|
00000010 39 96 e0 c1 00 00 04 2b be 12 33 21 49 56 ec e5 |9......+..3!IV..|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.455490 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3094 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1633}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ee 17 08 6a df 11 |'........!...j..|
00000010 b5 30 46 47 1b c2 c2 cf 8f c6 36 a0 ef 95 66 0a |.0FG......6...f.|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.485627 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3139 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1667}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f0 18 08 6a df d0 |'........!...j..|
00000010 36 80 c5 53 66 7b 02 ba c7 77 69 a3 44 36 95 d0 |6..Sf{...wi.D6..|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.550726 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:69442 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1700}, payload=00000000 27 01 00 00 00 00 00 00 01 27 64 00 2a ac ce 80 |'........'d.*...|
00000010 78 02 27 e5 c0 5a 80 81 01 78 00 00 03 00 08 00 |x.'..Z...x......|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.563223 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:9396 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1733}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e2 01 08 6a df b2 |'........!...j..|
00000010 26 d9 3b 22 37 74 22 df 47 76 95 7e e3 5c 18 24 |&.;"7t".Gv.~.\.$|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.595411 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:7704 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1767}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e4 02 08 6a df b9 |'........!...j..|
00000010 4b b9 60 51 60 83 1b 21 8e 7f 85 15 cc 12 f0 b4 |K.`Q`..!........|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.628331 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:10505 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1800}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e6 03 08 62 df 71 |'........!...b.q|
00000010 fe d3 92 58 d5 60 72 ac eb 24 d2 f3 34 2e 66 33 |...X.`r..$..4.f3|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.662995 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:14741 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1833}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e8 04 08 5a df f2 |'........!...Z..|
00000010 f0 19 a3 56 db 8c a9 1e 44 e4 ae c4 9f 15 39 f3 |...V....D.....9.|
- rtmp2mpegts.go:182
```
|
1.0
|
After starting via Docker, pushing an RTMP stream reports ERROR [lal: buffer too short(avc.go:532)] - Following the README.md, I started the server via Docker and pushed an RTMP stream; it keeps printing the errors below (pushing an RTSP stream produces no such errors):
```
2022/05/18 06:51:03.701809 INFO initial log succ. - config.go:235
2022/05/18 06:51:03.701894 INFO
__ ___ __
/ / / | / /
/ / / /| | / /
/ /___/ ___ |/ /___
/_____/_/ |_/_____/
- config.go:238
2022/05/18 06:51:03.702392 INFO load conf file succ. filename=conf/lalserver.conf.json, raw content={ "# doc of config": "https://pengrl.com/lal/#/ConfigBrief", "conf_version": "v0.3.1", "rtmp": { "enable": true, "addr": ":1935", "gop_num": 0, "merge_write_size": 0, "add_dummy_audio_enable": false, "add_dummy_audio_wait_audio_ms": 150 }, "default_http": { "http_listen_addr": ":8080", "https_listen_addr": ":4433", "https_cert_file": "./conf/cert.pem", "https_key_file": "./conf/key.pem" }, "httpflv": { "enable": true, "enable_https": true, "url_pattern": "/", "gop_num": 0 }, "hls": { "enable": true, "enable_https": true, "url_pattern": "/hls/", "out_path": "./lal_record/hls/", "fragment_duration_ms": 3000, "fragment_num": 6, "delete_threshold": 6, "cleanup_mode": 1, "use_memory_as_disk_flag": false }, "httpts": { "enable": true, "enable_https": true, "url_pattern": "/", "gop_num": 0 }, "rtsp": { "enable": true, "addr": ":5544", "out_wait_key_frame_flag": true }, "record": { "enable_flv": false, "flv_out_path": "./lal_record/flv/", "enable_mpegts": false, "mpegts_out_path": "./lal_record/mpegts" }, "relay_push": { "enable": false, "addr_list":[ ] }, "static_relay_pull": { "enable": false, "addr": "" }, "http_api": { "enable": true, "addr": ":8083" }, "server_id": "1", "http_notify": { "enable": false, "update_interval_sec": 5, "on_update": "http://127.0.0.1:10101/on_update", "on_pub_start": "http://127.0.0.1:10101/on_pub_start", "on_pub_stop": "http://127.0.0.1:10101/on_pub_stop", "on_sub_start": "http://127.0.0.1:10101/on_sub_start", "on_sub_stop": "http://127.0.0.1:10101/on_sub_stop", "on_relay_pull_start": "http://127.0.0.1:10101/on_relay_pull_start", "on_relay_pull_stop": "http://127.0.0.1:10101/on_relay_pull_stop", "on_rtmp_connect": "http://127.0.0.1:10101/on_rtmp_connect", "on_server_start": "http://127.0.0.1:10101/on_server_start" }, "simple_auth": { "key": "q191201771", "dangerous_lal_secret": "pengrl", "pub_rtmp_enable": false, "sub_rtmp_enable": false, 
"sub_httpflv_enable": false, "sub_httpts_enable": false, "pub_rtsp_enable": false, "sub_rtsp_enable": false, "hls_m3u8_enable": false }, "pprof": { "enable": true, "addr": ":8084" }, "log": { "level": 1, "filename": "./logs/lalserver.log", "is_to_stdout": true, "is_rotate_daily": true, "short_file_flag": true, "timestamp_flag": true, "timestamp_with_ms_flag": true, "level_flag": true, "assert_behavior": 1 }, "debug": { "log_group_interval_sec": 30, "log_group_max_group_num": 10, "log_group_max_sub_num_per_group": 10 } } parsed=&{ConfVersion:v0.3.1 RtmpConfig:{Enable:true Addr::1935 GopNum:0 MergeWriteSize:0 AddDummyAudioEnable:false AddDummyAudioWaitAudioMs:150} DefaultHttpConfig:{CommonHttpAddrConfig:{HttpListenAddr::8080 HttpsListenAddr::4433 HttpsCertFile:./conf/cert.pem HttpsKeyFile:./conf/key.pem}} HttpflvConfig:{CommonHttpServerConfig:{CommonHttpAddrConfig:{HttpListenAddr::8080 HttpsListenAddr::4433 HttpsCertFile:./conf/cert.pem HttpsKeyFile:./conf/key.pem} Enable:true EnableHttps:true UrlPattern:/} GopNum:0} HlsConfig:{CommonHttpServerConfig:{CommonHttpAddrConfig:{HttpListenAddr::8080 HttpsListenAddr::4433 HttpsCertFile:./conf/cert.pem HttpsKeyFile:./conf/key.pem} Enable:true EnableHttps:true UrlPattern:/hls/} UseMemoryAsDiskFlag:false MuxerConfig:{OutPath:./lal_record/hls/ FragmentDurationMs:3000 FragmentNum:6 DeleteThreshold:6 CleanupMode:1}} HttptsConfig:{CommonHttpServerConfig:{CommonHttpAddrConfig:{HttpListenAddr::8080 HttpsListenAddr::4433 HttpsCertFile:./conf/cert.pem HttpsKeyFile:./conf/key.pem} Enable:true EnableHttps:true UrlPattern:/} GopNum:0} RtspConfig:{Enable:true Addr::5544 OutWaitKeyFrameFlag:true} RecordConfig:{EnableFlv:false FlvOutPath:./lal_record/flv/ EnableMpegts:false MpegtsOutPath:./lal_record/mpegts} RelayPushConfig:{Enable:false AddrList:[]} StaticRelayPullConfig:{Enable:false Addr:} HttpApiConfig:{Enable:true Addr::8083} ServerId:1 HttpNotifyConfig:{Enable:false UpdateIntervalSec:5 
OnServerStart:http://127.0.0.1:10101/on_server_start OnUpdate:http://127.0.0.1:10101/on_update OnPubStart:http://127.0.0.1:10101/on_pub_start OnPubStop:http://127.0.0.1:10101/on_pub_stop OnSubStart:http://127.0.0.1:10101/on_sub_start OnSubStop:http://127.0.0.1:10101/on_sub_stop OnRelayPullStart:http://127.0.0.1:10101/on_relay_pull_start OnRelayPullStop:http://127.0.0.1:10101/on_relay_pull_stop OnRtmpConnect:http://127.0.0.1:10101/on_rtmp_connect} SimpleAuthConfig:{Key:q191201771 DangerousLalSecret:pengrl PubRtmpEnable:false SubRtmpEnable:false SubHttpflvEnable:false SubHttptsEnable:false PubRtspEnable:false SubRtspEnable:false HlsM3u8Enable:false} PprofConfig:{Enable:true Addr::8084} LogConfig:{Level:1 Filename:./logs/lalserver.log IsToStdout:true IsRotateDaily:true ShortFileFlag:true TimestampFlag:true TimestampWithMsFlag:true LevelFlag:true AssertBehavior:1} DebugConfig:{LogGroupIntervalSec:30 LogGroupMaxGroupNum:10 LogGroupMaxSubNumPerGroup:10}} - config.go:326
2022/05/18 06:51:03.702609 INFO start: 2022-05-18 06:51:03.7 - base.go:33
2022/05/18 06:51:03.702645 INFO wd: /lal - base.go:34
2022/05/18 06:51:03.702659 INFO args: ./bin/lalserver -c conf/lalserver.conf.json - base.go:35
2022/05/18 06:51:03.702676 INFO bininfo: GitTag=. GitCommitLog=. GitStatus=cleanly. BuildTime=2022.05.17.123652. GoVersion=go version go1.16.4 linux/amd64. runtime=linux/amd64. - base.go:36
2022/05/18 06:51:03.702689 INFO version: lal v0.29.1 (github.com/q191201771/lal) - base.go:37
2022/05/18 06:51:03.702706 INFO github: https://github.com/q191201771/lal - base.go:38
2022/05/18 06:51:03.702720 INFO doc: https://pengrl.com/lal - base.go:39
2022/05/18 06:51:03.702844 INFO start web pprof listen. addr=:8084 - server_manager.go:154
2022/05/18 06:51:03.702974 INFO add http listen for httpflv. addr=:8080, pattern=/ - server_manager.go:176
2022/05/18 06:51:03.703462 INFO add https listen for httpflv. addr=:4433, pattern=/ - server_manager.go:187
2022/05/18 06:51:03.703485 INFO add http listen for httpts. addr=:8080, pattern=/ - server_manager.go:176
2022/05/18 06:51:03.703496 INFO add https listen for httpts. addr=:4433, pattern=/ - server_manager.go:187
2022/05/18 06:51:03.703508 INFO add http listen for hls. addr=:8080, pattern=/hls/ - server_manager.go:176
2022/05/18 06:51:03.703521 INFO add https listen for hls. addr=:4433, pattern=/hls/ - server_manager.go:187
2022/05/18 06:51:03.703561 INFO start rtmp server listen. addr=:1935 - server.go:53
2022/05/18 06:51:03.703590 INFO start rtsp server listen. addr=:5544 - server.go:71
2022/05/18 06:51:03.703676 INFO start http-api server listen. addr=:8083 - http_api.go:41
2022/05/18 06:51:22.039064 INFO accept a rtmp connection. remoteAddr=10.100.105.46:35018 - server.go:77
2022/05/18 06:51:22.039162 DEBUG [NAZACONN1] lifecycle new connection. net.Conn=0xc0003a8000, naza.Connection=0xc0003b0000 - connection.go:192
00000010 80 ff 54 cf 62 03 d0 ee 91 c0 4a ac f4 74 8e 64 |..T.b.....J..t.d|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445751 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5583 MsgTypeId:9 MsgStreamId:1 TimestampAbs:233}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ec 06 08 6a df 77 |'........!...j.w|
00000010 f9 46 7e f7 1d c2 4c 70 6f 81 00 69 3b ea f9 c1 |.F~...Lpo..i;...|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445824 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3711 MsgTypeId:9 MsgStreamId:1 TimestampAbs:267}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ee 07 08 72 df f5 |'........!...r..|
00000010 1c 28 11 5e 29 a0 40 ad 45 5e 4b 09 ac 00 00 03 |.(.^).@.E^K.....|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445877 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4791 MsgTypeId:9 MsgStreamId:1 TimestampAbs:300}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f0 08 08 72 df f5 |'........!...r..|
00000010 b9 d4 6b 34 fb c2 a1 9e 7e 59 83 bd c8 00 05 19 |..k4....~Y......|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445926 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3818 MsgTypeId:9 MsgStreamId:1 TimestampAbs:333}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f2 09 08 72 df 1c |'........!...r..|
00000010 69 19 7c 06 ed ea af ac 81 18 33 69 a2 4a eb 1c |i.|.......3i.J..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.445974 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3965 MsgTypeId:9 MsgStreamId:1 TimestampAbs:367}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f4 0a 08 72 df d1 |'........!...r..|
00000010 70 c8 b5 e4 97 11 e5 87 17 3a f0 00 00 0c 7f 39 |p........:.....9|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.446045 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3640 MsgTypeId:9 MsgStreamId:1 TimestampAbs:400}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f6 0b 08 72 df 80 |'........!...r..|
00000010 11 2c 00 00 03 00 00 03 02 0a c2 24 5a b8 7e 81 |.,.........$Z.~.|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.446082 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3537 MsgTypeId:9 MsgStreamId:1 TimestampAbs:433}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f8 0c 08 72 df b6 |'........!...r..|
00000010 f8 1a 6d 68 e7 a0 84 f3 b9 91 32 08 ee 75 0d 4d |..mh......2..u.M|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.446128 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3704 MsgTypeId:9 MsgStreamId:1 TimestampAbs:467}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fa 0d 08 72 df a1 |'........!...r..|
00000010 e0 82 c6 d2 8d 3a 00 02 84 00 0b 9a a3 d4 09 84 |.....:..........|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.455809 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4723 MsgTypeId:9 MsgStreamId:1 TimestampAbs:500}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fc 0e 08 72 df 08 |'........!...r..|
00000010 4f cd 44 3e 30 10 79 1c 58 d7 de 5c a3 e4 08 d2 |O.D>0.y.X..\....|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.468230 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:9017 MsgTypeId:9 MsgStreamId:1 TimestampAbs:533}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fe 0f 08 72 df 9c |'........!...r..|
00000010 b9 23 cb 64 9c 5a 59 fd ac 5c bf d7 1c f4 b3 49 |.#.d.ZY..\.....I|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.468282 ERROR assert failed. excepted=<nil>, but actual=lal.sdp: fxxk(pack.go:38) - rtmp2rtsp.go:145
2022/05/18 06:51:22.478600 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4009 MsgTypeId:9 MsgStreamId:1 TimestampAbs:567}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e0 10 08 7a df b7 |'........!...z..|
00000010 4e 8e 48 f0 82 85 00 0b 12 56 00 25 96 39 65 85 |N.H......V.%.9e.|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.488330 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:1908 MsgTypeId:9 MsgStreamId:1 TimestampAbs:600}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e2 11 08 7a df 71 |'........!...z.q|
00000010 c0 e3 32 0c 99 0e ce 10 6e e4 b4 6d f6 dc c6 b8 |..2.....n..m....|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.497646 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:1746 MsgTypeId:9 MsgStreamId:1 TimestampAbs:633}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e4 12 08 7a df 2a |'........!...z.*|
00000010 b9 66 ad bc 36 cb 2a 83 43 39 b1 8d 23 d9 1a f9 |.f..6.*.C9..#...|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.507055 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:1482 MsgTypeId:9 MsgStreamId:1 TimestampAbs:667}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e6 13 08 7a df 0a |'........!...z..|
00000010 13 55 8d 7c df 6b 84 11 0d 3e 06 ee 6a 60 f4 c2 |.U.|.k...>..j`..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.516260 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:1391 MsgTypeId:9 MsgStreamId:1 TimestampAbs:700}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e8 14 08 7a df 23 |'........!...z.#|
00000010 9b 37 3a 35 f5 5b b1 67 9d 8e 07 3c b1 b9 b6 b4 |.7:5.[.g...<....|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.525094 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:428 MsgTypeId:9 MsgStreamId:1 TimestampAbs:733}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ea 15 08 7a df 0d |'........!...z..|
00000010 4f 9b 09 9a 9a 1e ab f9 ce 83 57 c1 44 eb e1 3c |O.........W.D..<|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.554435 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:840 MsgTypeId:9 MsgStreamId:1 TimestampAbs:767}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ec 16 08 7a df a1 |'........!...z..|
00000010 64 3e a3 00 07 e2 9d 05 bf 40 00 3e cf 82 58 89 |d>.......@.>..X.|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.594061 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:958 MsgTypeId:9 MsgStreamId:1 TimestampAbs:800}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ee 17 08 7a df 4e |'........!...z.N|
00000010 f6 49 01 e0 5f 3e f9 db 47 c7 8b 3b 5c 35 8b d2 |.I.._>..G..;\5..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.623322 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:842 MsgTypeId:9 MsgStreamId:1 TimestampAbs:833}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f0 18 08 7a df 18 |'........!...z..|
00000010 b7 ff 0f ca 8f 66 21 69 b9 4b 6b 43 fb aa dc 6f |.....f!i.KkC...o|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.668516 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:43011 MsgTypeId:9 MsgStreamId:1 TimestampAbs:867}, payload=00000000 27 01 00 00 00 00 00 00 01 27 64 00 2a ac ce 80 |'........'d.*...|
00000010 78 02 27 e5 c0 5a 80 81 01 78 00 00 03 00 08 00 |x.'..Z...x......|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.689528 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4968 MsgTypeId:9 MsgStreamId:1 TimestampAbs:900}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e2 01 08 7a df d1 |'........!...z..|
00000010 2f 44 0f 35 c7 f8 6e 05 d0 3b 84 25 26 28 00 00 |/D.5..n..;.%&(..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.720182 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4163 MsgTypeId:9 MsgStreamId:1 TimestampAbs:933}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e4 02 08 7a df 38 |'........!...z.8|
00000010 15 07 7c 76 fb fb 09 78 c2 f2 0f dd a9 a1 cb 03 |..|v...x........|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.760553 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3542 MsgTypeId:9 MsgStreamId:1 TimestampAbs:967}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e6 03 08 7a df 1f |'........!...z..|
00000010 7a c9 52 ed fc 37 46 01 ea 58 73 0c 10 23 1e 76 |z.R..7F..Xs..#.v|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.790974 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3707 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1000}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e8 04 08 7a df 15 |'........!...z..|
00000010 d6 37 aa d8 02 44 73 00 4a 73 30 2b 6b c2 03 83 |.7...Ds.Js0+k...|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.821293 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3789 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1033}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ea 05 08 7a df 69 |'........!...z.i|
00000010 f0 b4 06 cd b1 49 6c bb 20 00 00 03 02 ca 0f fc |.....Il. .......|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.862143 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4416 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1067}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ec 06 08 7a df 21 |'........!...z.!|
00000010 77 ef 3f 00 09 a8 b1 90 33 07 96 fd 3d 80 07 dd |w.?.....3...=...|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.891919 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:2058 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1100}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ee 07 08 7a df 22 |'........!...z."|
00000010 e7 24 68 00 04 16 90 f6 24 dc 64 00 03 54 4b 31 |.$h.....$.d..TK1|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.922509 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4640 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1133}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f0 08 08 6a df f1 |'........!...j..|
00000010 16 9e 69 bc 80 81 a7 23 e3 dd c8 54 0e 47 ae fd |..i....#...T.G..|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.953150 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3984 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1167}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f2 09 08 6a df ce |'........!...j..|
00000010 b4 87 23 a6 9d 5c 80 0f 64 bf 03 02 00 00 03 00 |..#..\..d.......|
- rtmp2mpegts.go:182
2022/05/18 06:51:22.994122 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5067 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1200}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f4 0a 08 6a df 9d |'........!...j..|
00000010 06 60 01 0b 3f 76 4d 80 00 02 12 9b 8c 32 d4 56 |.`..?vM......2.V|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.025574 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:6174 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1233}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f6 0b 08 6a df cd |'........!...j..|
00000010 00 09 bc 69 07 29 10 a0 00 00 62 ea 56 41 dc 71 |...i.)....b.VA.q|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.087334 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5705 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1267}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f8 0c 08 6a df d0 |'........!...j..|
00000010 2b e7 ea fa d7 87 db 90 cc af fd af 5f c0 d6 58 |+..........._..X|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.128447 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5575 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1300}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fa 0d 08 6a df 55 |'........!...j.U|
00000010 92 81 bd 19 e5 ef 6b 9d 93 fb 76 ba 2b 40 00 00 |......k...v.+@..|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.159408 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5442 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1333}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fc 0e 08 6a df 6a |'........!...j.j|
00000010 d3 57 1c 04 6a 7c 12 50 00 cd 60 4b b3 91 9c f9 |.W..j|.P..`K....|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.190616 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5710 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1367}, payload=00000000 27 01 00 00 00 00 00 00 01 21 fe 0f 08 6a df 6a |'........!...j.j|
00000010 88 40 25 e6 91 80 00 00 f8 47 a0 a4 91 9f 48 68 |.@%......G....Hh|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.221367 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4913 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1400}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e0 10 08 6a df 4c |'........!...j.L|
00000010 97 a8 34 ef ef 47 f8 6f 15 a3 7f bf 00 54 e5 7d |..4..G.o.....T.}|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.252091 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4373 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1433}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e2 11 08 6a df 54 |'........!...j.T|
00000010 2a 46 01 4b e0 5e 48 00 00 03 01 95 4e eb 2c 31 |*F.K.^H.....N.,1|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.292656 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3598 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1467}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e4 12 08 6a df 5f |'........!...j._|
00000010 35 7b 76 c9 b9 95 84 47 dd 26 00 00 af a4 0e 2e |5{v....G.&......|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.323350 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4017 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1500}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e6 13 08 6a df 6a |'........!...j.j|
00000010 c2 00 27 0f 44 22 0e f5 52 53 52 c2 30 57 02 1e |..'.D"..RSR.0W..|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.354243 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:5034 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1533}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e8 14 08 6a df 6b |'........!...j.k|
00000010 11 28 00 09 2c 40 00 01 fd d8 0a c2 d7 5a 1c 2b |.(..,@.......Z.+|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.395086 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:4753 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1567}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ea 15 08 6a df e3 |'........!...j..|
00000010 06 f8 55 72 3f 3e 7b e1 05 8f a0 00 00 c5 28 ad |..Ur?>{.......(.|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.425369 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3083 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1600}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ec 16 08 6a df 10 |'........!...j..|
00000010 39 96 e0 c1 00 00 04 2b be 12 33 21 49 56 ec e5 |9......+..3!IV..|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.455490 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3094 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1633}, payload=00000000 27 01 00 00 00 00 00 00 01 21 ee 17 08 6a df 11 |'........!...j..|
00000010 b5 30 46 47 1b c2 c2 cf 8f c6 36 a0 ef 95 66 0a |.0FG......6...f.|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.485627 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:3139 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1667}, payload=00000000 27 01 00 00 00 00 00 00 01 21 f0 18 08 6a df d0 |'........!...j..|
00000010 36 80 c5 53 66 7b 02 ba c7 77 69 a3 44 36 95 d0 |6..Sf{...wi.D6..|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.550726 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:69442 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1700}, payload=00000000 27 01 00 00 00 00 00 00 01 27 64 00 2a ac ce 80 |'........'d.*...|
00000010 78 02 27 e5 c0 5a 80 81 01 78 00 00 03 00 08 00 |x.'..Z...x......|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.563223 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:9396 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1733}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e2 01 08 6a df b2 |'........!...j..|
00000010 26 d9 3b 22 37 74 22 df 47 76 95 7e e3 5c 18 24 |&.;"7t".Gv.~.\.$|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.595411 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:7704 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1767}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e4 02 08 6a df b9 |'........!...j..|
00000010 4b b9 60 51 60 83 1b 21 8e 7f 85 15 cc 12 f0 b4 |K.`Q`..!........|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.628331 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:10505 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1800}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e6 03 08 62 df 71 |'........!...b.q|
00000010 fe d3 92 58 d5 60 72 ac eb 24 d2 f3 34 2e 66 33 |...X.`r..$..4.f3|
- rtmp2mpegts.go:182
2022/05/18 06:51:23.662995 ERROR [lal: buffer too short(avc.go:532)] iterate nalu failed. err=RTMP2MPEGTS1, header={Csid:4 MsgLen:14741 MsgTypeId:9 MsgStreamId:1 TimestampAbs:1833}, payload=00000000 27 01 00 00 00 00 00 00 01 21 e8 04 08 5a df f2 |'........!...Z..|
00000010 f0 19 a3 56 db 8c a9 1e 44 e4 ae c4 9f 15 39 f3 |...V....D.....9.|
- rtmp2mpegts.go:182
```
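The repeated `iterate nalu failed ... buffer too short` errors come from walking the AVCC-packed NALUs inside each RTMP video message. The following is a minimal Python sketch of that kind of iteration — not lal's actual `avc.go` code; the function name and the 5-byte FLV video-tag header / 4-byte big-endian length-prefix layout are assumptions here — showing how a declared NALU length that overruns the remaining bytes produces exactly this failure:

```python
def iterate_avcc_nalus(payload: bytes, length_size: int = 4):
    """Yield NALUs from an RTMP/FLV AVCC video payload (sketch).

    Assumed layout: a 5-byte FLV video tag header (frame type/codec id,
    AVC packet type, composition time), then repeated
    [length_size-byte big-endian NALU length][NALU bytes].
    Raises ValueError("buffer too short") when a declared NALU length
    runs past the end of the buffer.
    """
    body = payload[5:]  # skip the 5-byte FLV video tag header
    pos = 0
    nalus = []
    while pos < len(body):
        if pos + length_size > len(body):
            raise ValueError("buffer too short")
        nalu_len = int.from_bytes(body[pos:pos + length_size], "big")
        pos += length_size
        if pos + nalu_len > len(body):
            # declared length exceeds what is actually in the buffer
            raise ValueError("buffer too short")
        nalus.append(body[pos:pos + nalu_len])
        pos += nalu_len
    return nalus
```

If a message is cut short relative to its declared lengths, the parse aborts with "buffer too short" rather than emitting partial NALUs.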
|
process
|
After starting via Docker and pushing an RTMP stream, errors are reported. Following the README, I started it with Docker and pushed an RTMP stream, and the errors below were printed repeatedly (pushing an RTSP stream produced no errors).
| 1
|
808,848
| 30,113,561,606
|
IssuesEvent
|
2023-06-30 09:38:50
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Adding filter of summarized field grouped by latitude and longitude on a map causes table visualization error
|
Type:Bug Priority:P2 Visualization/Maps .Reproduced .Team/42 :milky_way:
|
**Describe the bug**
- Filtering on an aggregate column after binning on a map breaks the map and causes an unexpected UI Table visualization
**Logs**
<img width="741" alt="image" src="https://user-images.githubusercontent.com/8808703/231587277-a9817d6b-9527-4ba2-b2c9-6c291e385fba.png">
**To Reproduce**
Steps to reproduce the behavior (if you can reproduce the bug using the Sample Database, we will find the issue faster):
1. Use People from Sample Dataset to create a map with count of rows summary grouped by latitude and longitude (binned)
<img width="1276" alt="image" src="https://user-images.githubusercontent.com/8808703/231585380-95ef33f9-8b5d-4b7c-8f20-0396b4742496.png">
2. Add a filter on Count > 2
<img width="1270" alt="image" src="https://user-images.githubusercontent.com/8808703/231585467-30090bda-6133-44ca-80e5-39081f464527.png">
3. Map disappears, Source Table name disappears (top left), and clicking gear to edit visualization shows 3 columns to add that already exist.
<img width="1265" alt="image" src="https://user-images.githubusercontent.com/8808703/231585687-bed1d30b-7634-4a22-bf60-a2fbd24b098a.png">
**Expected behavior**
- Expect More Columns in Table view not to show the 3 columns that have already been added.
**Information about your Metabase Installation:**
You can get this information by going to Admin -> Troubleshooting, or simply post the JSON you see in that page.
- Metabase version: 45.3.1, 46.1, and master (as of 2023-04-12)
**Severity**
- customer reported issue
|
1.0
|
Adding filter of summarized field grouped by latitude and longitude on a map causes table visualization error - **Describe the bug**
- Filtering on an aggregate column after binning on a map breaks the map and causes an unexpected UI Table visualization
**Logs**
<img width="741" alt="image" src="https://user-images.githubusercontent.com/8808703/231587277-a9817d6b-9527-4ba2-b2c9-6c291e385fba.png">
**To Reproduce**
Steps to reproduce the behavior (if you can reproduce the bug using the Sample Database, we will find the issue faster):
1. Use People from Sample Dataset to create a map with count of rows summary grouped by latitude and longitude (binned)
<img width="1276" alt="image" src="https://user-images.githubusercontent.com/8808703/231585380-95ef33f9-8b5d-4b7c-8f20-0396b4742496.png">
2. Add a filter on Count > 2
<img width="1270" alt="image" src="https://user-images.githubusercontent.com/8808703/231585467-30090bda-6133-44ca-80e5-39081f464527.png">
3. Map disappears, Source Table name disappears (top left), and clicking gear to edit visualization shows 3 columns to add that already exist.
<img width="1265" alt="image" src="https://user-images.githubusercontent.com/8808703/231585687-bed1d30b-7634-4a22-bf60-a2fbd24b098a.png">
**Expected behavior**
- Expect More Columns in Table view not to show the 3 columns that have already been added.
**Information about your Metabase Installation:**
You can get this information by going to Admin -> Troubleshooting, or simply post the JSON you see in that page.
- Metabase version: 45.3.1, 46.1, and master (as of 2023-04-12)
**Severity**
- customer reported issue
|
non_process
|
adding filter of summarized field grouped by latitude and longitude on a map causes table visualization error describe the bug filtering on aggregate column after binning on a map breaks the map and causes unexpected ui table visualization logs img width alt image src to reproduce steps to reproduce the behavior if you can reproduce the bug using the sample database we will find the issue faster use people from sample dataset to create a map with count of rows summary grouped by latitude and longitude binned img width alt image src add a filter on count img width alt image src map disappears source table name disappears top left and clicking gear to edit visualization shows columns to add that already exist img width alt image src expected behavior expect more columns in table view not to show the columns that have already been added information about your metabase installation you can get this information by going to admin troubleshooting or simply post the json you see in that page metabase version and master as of severity customer reported issue
| 0
|
1,400
| 3,967,585,376
|
IssuesEvent
|
2016-05-03 16:42:26
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Empty map causes odd mapref error
|
bug P2 preprocess
|
I have an empty map that is pulled into my primary map. The map is generated, and in this case, happened to have no content, just
```xml
<map title="sample title">
</map>
```
I've added a reference to this in hierarchy.ditamap:
```xml
<topicref href="copy.ditamap" format="ditamap"/>
```
With the map as shown above, I get this error:
> [mapref] [DOTX031E][ERROR]: The file copy.ditamap is not available to resolve link information. The location of this problem was at (File = C:\DITA-OT1.8\samples\hierarchy.ditamap, Element = topicref:20;31:49)
If I add anything into copy.ditamap -- even just a topicref with a navtitle, and no topic -- the error goes away.
|
1.0
|
Empty map causes odd mapref error - I have an empty map that is pulled into my primary map. The map is generated, and in this case, happened to have no content, just
```xml
<map title="sample title">
</map>
```
I've added a reference to this in hierarchy.ditamap:
```xml
<topicref href="copy.ditamap" format="ditamap"/>
```
With the map as shown above, I get this error:
> [mapref] [DOTX031E][ERROR]: The file copy.ditamap is not available to resolve link information. The location of this problem was at (File = C:\DITA-OT1.8\samples\hierarchy.ditamap, Element = topicref:20;31:49)
If I add anything into copy.ditamap -- even just a topicref with a navtitle, and no topic -- the error goes away.
|
process
|
empty map causes odd mapref error i have an empty map that is pulled into my primary map the map is generated and in this case happened to have no content just xml i ve added a reference to this in hierarchy ditamap xml with the map as shown above i get this error the file copy ditamap is not available to resolve link information the location of this problem was at file c dita samples hierarchy ditamap element topicref if i add anything into copy ditamap even just a topicref with a navtitle and no topic the error goes away
| 1
|
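The DITA row above reports that an empty `copy.ditamap` triggers the DOTX031E error, and that adding any child — even a topicref with only a navtitle and no topic — makes it go away. That manual workaround can be sketched as a small preprocessing check (a hypothetical helper, not part of DITA-OT; the placement of `navtitle` inside `topicmeta` is an assumption based on common DITA conventions):

```python
import xml.etree.ElementTree as ET

def ensure_nonempty_map(map_xml: str) -> str:
    """If the <map> has no child elements, insert a placeholder topicref
    (navtitle only, no topic) so the mapref step has something to resolve.
    Hypothetical workaround sketch, not an official DITA-OT feature."""
    root = ET.fromstring(map_xml)
    if len(root) == 0:  # no topicrefs at all -> the error case above
        topicref = ET.SubElement(root, "topicref")
        topicmeta = ET.SubElement(topicref, "topicmeta")
        navtitle = ET.SubElement(topicmeta, "navtitle")
        navtitle.text = "placeholder"
    return ET.tostring(root, encoding="unicode")
```

Running this over a generated map before the build would mimic the fix the reporter applied by hand.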
232,383
| 25,577,299,559
|
IssuesEvent
|
2022-11-30 23:38:14
|
pactflow/example-bi-directional-consumer-wiremock
|
https://api.github.com/repos/pactflow/example-bi-directional-consumer-wiremock
|
opened
|
CVE-2022-42003 (High) detected in jackson-databind-2.10.2.jar
|
security vulnerability
|
## CVE-2022-42003 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.10.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.10.2/528de95f198afafbcfb0c09d2e43b6e0ea663ec/jackson-databind-2.10.2.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.2.5.RELEASE.jar
- :x: **jackson-databind-2.10.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/pactflow/example-bi-directional-consumer-wiremock/commit/db1210550e283da70ff46b65309a89709f71e7a1">db1210550e283da70ff46b65309a89709f71e7a1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. Additional fix version in 2.13.4.1 and 2.12.17.1
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42003>CVE-2022-42003</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.13.0-rc1</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.6.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
True
|
CVE-2022-42003 (High) detected in jackson-databind-2.10.2.jar - ## CVE-2022-42003 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.10.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.10.2/528de95f198afafbcfb0c09d2e43b6e0ea663ec/jackson-databind-2.10.2.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.2.5.RELEASE.jar
- :x: **jackson-databind-2.10.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/pactflow/example-bi-directional-consumer-wiremock/commit/db1210550e283da70ff46b65309a89709f71e7a1">db1210550e283da70ff46b65309a89709f71e7a1</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. Additional fix version in 2.13.4.1 and 2.12.17.1
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42003>CVE-2022-42003</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.13.0-rc1</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.6.0</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
|
non_process
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting when the unwrap single value arrays feature is enabled additional fix version in and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind direct dependency fix resolution org springframework boot spring boot starter web check this box to open an automated fix pr
| 0
|
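The advisory in the row above says jackson-databind is affected before 2.14.0-rc1 unless a backported fix (2.13.4.1 or 2.12.17.1, per the row's text) is present. A rough sketch of that version logic, useful for flagging pinned dependencies like the 2.10.2 jar in this report (not an official checker; pre-release suffixes such as `-rc1` are stripped for a coarse comparison, which is an assumption):

```python
def parse(version: str) -> tuple:
    # Strip a pre-release suffix like "-rc1" and compare numerically.
    core = version.split("-")[0]
    return tuple(int(part) for part in core.split("."))

def is_vulnerable(version: str) -> bool:
    """True if the version falls in the affected range described above."""
    v = parse(version)
    if v >= (2, 14, 0):          # fixed from 2.14.0-rc1 onward
        return False
    for fix in [(2, 13, 4, 1), (2, 12, 17, 1)]:  # backported patch lines
        if v[:2] == fix[:2] and v >= fix:
            return False
    return True
```

Under this sketch, the 2.10.2 jar from the dependency hierarchy is flagged, while the patched 2.13.4.1 line is not.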
19,858
| 26,269,138,176
|
IssuesEvent
|
2023-01-06 15:20:56
|
apache/arrow-datafusion
|
https://api.github.com/repos/apache/arrow-datafusion
|
closed
|
Release DataFusion 15.0.0
|
enhancement development-process
|
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
I plan on creating DataFusion 15.0.0-rc1 on Friday 2nd December (4 weeks since the previous release).
**Describe the solution you'd like**
- [ ] [Update version & generate changelog](https://github.com/apache/arrow-datafusion/pull/4470)
- [ ] Cut RC and start vote
- [ ] Vote passes
- [ ] Release to crates.io
- [ ] Publish documentation
**Describe alternatives you've considered**
**Additional context**
|
1.0
|
Release DataFusion 15.0.0 - **Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
I plan on creating DataFusion 15.0.0-rc1 on Friday 2nd December (4 weeks since the previous release).
**Describe the solution you'd like**
- [ ] [Update version & generate changelog](https://github.com/apache/arrow-datafusion/pull/4470)
- [ ] Cut RC and start vote
- [ ] Vote passes
- [ ] Release to crates.io
- [ ] Publish documentation
**Describe alternatives you've considered**
**Additional context**
|
process
|
release datafusion is your feature request related to a problem or challenge please describe what you are trying to do i plan on creating datafusion on friday december weeks since the previous release describe the solution you d like cut rc and start vote vote passes release to crates io publish documentation describe alternatives you ve considered additional context
| 1
|
7,246
| 10,412,869,857
|
IssuesEvent
|
2019-09-13 17:03:06
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Merge terms - 'response to phytoalexin production by other organism involved in symbiotic interaction' and children (was:obsolete...)
|
Other term-related request multi-species process obsoletion
|
While working on #17441
I came across this branch:
- GO:0052549 response to phytoalexin production by other organism involved in symbiotic interaction
- - GO:0052566 response to host phytoalexin production
- - - GO:0052378 evasion or tolerance by organism of phytoalexins produced by other organism involved in symbiotic interaction
- - - - GO:0052061 evasion or tolerance by symbiont of host-produced phytoalexins
- - - GO:0052304 modulation by organism of phytoalexin production in other organism involved in symbiotic interaction
- - - GO:0052329 positive regulation by organism of phytoalexin production in other organism involved in symbiotic interaction
- - - - GO:0052165 modulation by symbiont of host phytoalexin production
- - - - - GO:1990217 negative regulation by symbiont of host phytoalexin production
- - - - - GO:0052344 positive regulation by symbiont of host phytoalexin production
- - - - - - GO:0052062 induction by symbiont of host phytoalexin production
The only term with annotations is 'GO:0052062 induction by symbiont of host phytoalexin production'; 5 annotations by PAMGO_VMD. The process described in the annotated papers would fit better under 'GO:0080185 effector-dependent induction by symbiont of host immune response'.
I propose we obsolete the terms listed above, and add 'replace by' or 'consider' 'GO:0080185 effector-dependent induction by symbiont of host immune response'.
@mgiglio99 @ValWood what do you think ?
Thanks, Pascale
|
1.0
|
Merge terms - 'response to phytoalexin production by other organism involved in symbiotic interaction' and children (was:obsolete...) - While working on #17441
I came across this branch:
- GO:0052549 response to phytoalexin production by other organism involved in symbiotic interaction
- - GO:0052566 response to host phytoalexin production
- - - GO:0052378 evasion or tolerance by organism of phytoalexins produced by other organism involved in symbiotic interaction
- - - - GO:0052061 evasion or tolerance by symbiont of host-produced phytoalexins
- - - GO:0052304 modulation by organism of phytoalexin production in other organism involved in symbiotic interaction
- - - GO:0052329 positive regulation by organism of phytoalexin production in other organism involved in symbiotic interaction
- - - - GO:0052165 modulation by symbiont of host phytoalexin production
- - - - - GO:1990217 negative regulation by symbiont of host phytoalexin production
- - - - - GO:0052344 positive regulation by symbiont of host phytoalexin production
- - - - - - GO:0052062 induction by symbiont of host phytoalexin production
The only term with annotations is 'GO:0052062 induction by symbiont of host phytoalexin production'; 5 annotations by PAMGO_VMD. The process described in the annotated papers would fit better under 'GO:0080185 effector-dependent induction by symbiont of host immune response'.
I propose we obsolete the terms listed above, and add 'replace by' or 'consider' 'GO:0080185 effector-dependent induction by symbiont of host immune response'.
@mgiglio99 @ValWood what do you think ?
Thanks, Pascale
|
process
|
merge terms response to phytoalexin production by other organism involved in symbiotic interaction and children was obsolete while working on i came across this branch go response to phytoalexin production by other organism involved in symbiotic interaction go response to host phytoalexin production go evasion or tolerance by organism of phytoalexins produced by other organism involved in symbiotic interaction go evasion or tolerance by symbiont of host produced phytoalexins go modulation by organism of phytoalexin production in other organism involved in symbiotic interaction go positive regulation by organism of phytoalexin production in other organism involved in symbiotic interaction go modulation by symbiont of host phytoalexin production go negative regulation by symbiont of host phytoalexin production go positive regulation by symbiont of host phytoalexin production go induction by symbiont of host phytoalexin production the only term with annotations is go induction by symbiont of host phytoalexin production annotations by pamgo vmd the process described in the annotated papers would fit better under go effector dependent induction by symbiont of host immune response i propose we obsolete the terms listed above and add replace by or consider go effector dependent induction by symbiont of host immune response valwood what do you think thanks pascale
| 1
|
365,521
| 25,540,171,875
|
IssuesEvent
|
2022-11-29 14:48:33
|
qiboteam/qibo
|
https://api.github.com/repos/qiboteam/qibo
|
closed
|
Remove qibo logo from doc when site is online
|
documentation
|
For now we are leaving the logo in the documentation compiled with `sphinx`. When the site is available the logo will have to be removed because it will already be in the navbar.
|
1.0
|
Remove qibo logo from doc when site is online - For now we are leaving the logo in the documentation compiled with `sphinx`. When the site is available the logo will have to be removed because it will already be in the navbar.
|
non_process
|
remove qibo logo from doc when site is online for now we are leaving the logo in the documentation compiled with sphinx when the site is available the logo will have to be removed because it will already be in the navbar
| 0
|
250,911
| 21,388,607,594
|
IssuesEvent
|
2022-04-21 03:24:09
|
Nithin-Kamineni/peekNshop
|
https://api.github.com/repos/Nithin-Kamineni/peekNshop
|
closed
|
Backend testing of other user functionalities
|
back-end Sprint-4 testing
|
Backend testing of other user functionalities like the favourite stores of a particular user, change Address, change User details, and so on.
|
1.0
|
Backend testing of other user functionalities - Backend testing of other user functionalities like the favourite stores of a particular user, change Address, change User details, and so on.
|
non_process
|
backend testing of other user functionalities backend testing of other user functionalities like the favourite stores of particular user change address change user detailsand so on
| 0
|
16,100
| 5,214,884,511
|
IssuesEvent
|
2017-01-26 01:45:00
|
serde-rs/serde
|
https://api.github.com/repos/serde-rs/serde
|
opened
|
Struct fields and variant tags both go through deserialize_struct_field
|
bug codegen
|
In `#[derive(Deserialize)]` we are generating effectively the same Deserialize implementation for deciding which struct field is next vs which variant we are looking at. As a result, both implementations go through Deserializer::deserialize_struct_field which is unexpected for Deserializer authors trying to implement `EnumVisitor::visit_variant`.
|
1.0
|
Struct fields and variant tags both go through deserialize_struct_field - In `#[derive(Deserialize)]` we are generating effectively the same Deserialize implementation for deciding which struct field is next vs which variant we are looking at. As a result, both implementations go through Deserializer::deserialize_struct_field which is unexpected for Deserializer authors trying to implement `EnumVisitor::visit_variant`.
|
non_process
|
struct fields and variant tags both go through deserialize struct field in we are generating effectively the same deserialize implementation for deciding which struct field is next vs which variant we are looking at as a result both implementations go through deserializer deserialize struct field which is unexpected for deserializer authors trying to implement enumvisitor visit variant
| 0
|
77,398
| 7,573,453,036
|
IssuesEvent
|
2018-04-23 17:49:29
|
apache/incubator-openwhisk-wskdeploy
|
https://api.github.com/repos/apache/incubator-openwhisk-wskdeploy
|
closed
|
manifest_basic_tar_grammar.yaml also tests feeds and api grammar; split them out
|
priority: low tests: unit
|
Originally, this test file was supposed to test basic **Trigger**-**Action**-**Rule** (TAR) grammars. Over time, **feeds** and now **api** grammars were added to the same test file.
As we add more "entities" like **feeds** and **apis** that build on more basic ones like **triggers**, we should look to test them in their own manifest files.
This will improve testing granularity; please split out **feeds** grammar unit tests and **api** grammar unit tests into separate manifests.
|
1.0
|
manifest_basic_tar_grammar.yaml also tests feeds and api grammar; split them out - Originally, this test file was supposed to test basic **Trigger**-**Action**-**Rule** (TAR) grammars. Over time, **feeds** and now **api** grammars were added to the same test file.
As we add more "entities" like **feeds** and **apis** that build on more basic ones like **triggers**, we should look to test them in their own manifest files.
This will improve testing granularity; please split out **feeds** grammar unit tests and **api** grammar unit tests into separate manifests.
|
non_process
|
manifest basic tar grammar yaml also tests feeds and api grammar split them out originally this test file was supposed to test basic trigger action rule tar grammars over time feeds and now api grammars were added to the same test file as we add more entities like feeds and apis that build on more basic ones like triggers etc we should look to test them in their own manifest files this will improve testing granularity please split out feeds grammar unit tests and api grammar unit tests into separate manifests
| 0
|
117,475
| 11,947,744,420
|
IssuesEvent
|
2020-04-03 10:29:15
|
thoughtbot/administrate
|
https://api.github.com/repos/thoughtbot/administrate
|
opened
|
We should bring community plugins into the docs
|
documentation
|
In #1535, @sedubois writes:
> - people can create their own plugins and share them on the [list of plugins wiki page](https://github.com/thoughtbot/administrate/wiki/List-of-Plugins), but that information tends to be outdated or not verified and so the quality information gets diluted;
> - external plugins might not be as stable as Administrate itself, because people are basically on their own and e.g. cannot ask for their PRs to be reviewed by other Administrate users (and the wiki page itself is not subject to a review process), and conversely the maintainer him/herself might not be available any more (as my example above shows);
> - various users (such as myself) might have functional custom administrate fields in their own private systems but have no clue how to turn that into a gem (although I'm not a good example because I could just have forked the outdated one; but then what should I put on the wiki page? Keep the outdated one, replace with my own, keep both?);
> - even if one finds an external plugin that is functional, it does not benefit from a clear API documentation like the ["official" API documentation](http://administrate-prototype.herokuapp.com/);
> - the documentation of these external plugins is fragmented (not available from the same central place);
> - we might also end up in situations where an external plugin already addresses a need, but due to lack of visibility then we partially try to address the need internally (as in this PR), which defeats a bit the purpose of supporting external plugins.
>
> I don't know what is the solution. Maybe the documentation system could integrate external plugins as well...
The [intention of the wiki page was to reduce friction for people adding their plugins][1], but in practice that's just split the source of information.
I now think it'd be better to have a specific plugin listing page in the docs, with a section for Community Plugins _plus_ an invitation for people to add theirs. This should add a bit of feedback too, which hopefully answers the other points.
[1]: https://github.com/thoughtbot/administrate/issues/1015
|
1.0
|
We should bring community plugins into the docs - In #1535, @sedubois writes:
> - people can create their own plugins and share them on the [list of plugins wiki page](https://github.com/thoughtbot/administrate/wiki/List-of-Plugins), but that information tends to be outdated or not verified and so the quality information gets diluted;
> - external plugins might not be as stable as Administrate itself, because people are basically on their own and e.g. cannot ask for their PRs to be reviewed by other Administrate users (and the wiki page itself is not subject to a review process), and conversely the maintainer him/herself might not be available any more (as my example above shows);
> - various users (such as myself) might have functional custom administrate fields in their own private systems but have no clue how to turn that into a gem (although I'm not a good example because I could just have forked the outdated one; but then what should I put on the wiki page? Keep the outdated one, replace with my own, keep both?);
> - even if one finds an external plugin that is functional, it does not benefit from a clear API documentation like the ["official" API documentation](http://administrate-prototype.herokuapp.com/);
> - the documentation of these external plugins is fragmented (not available from the same central place);
> - we might also end up in situations where an external plugin already addresses a need, but due to lack of visibility then we partially try to address the need internally (as in this PR), which defeats a bit the purpose of supporting external plugins.
>
> I don't know what is the solution. Maybe the documentation system could integrate external plugins as well...
The [intention of the wiki page was to reduce friction for people adding their plugins][1], but in practice that's just split the source of information.
I now think it'd be better to have a specific plugin listing page in the docs, with a section for Community Plugins _plus_ an invitation for people to add theirs. This should add a bit of feedback too, which hopefully answers the other points.
[1]: https://github.com/thoughtbot/administrate/issues/1015
|
non_process
|
we should bring community plugins into the docs in sedubois writes people can create their own plugins and share them on the but that information tends to be outdated or not verified and so the quality information gets diluted external plugins might not be as stable as administrate itself because people are basically on their own and e g cannot ask for their prs to be reviewed by other administrate users and the wiki page itself is not subject to a review process and conversely the maintainer him herself might not be available any more as my example above shows various users such as myself might have functional custom administrate fields in their own private systems but have no clue how to turn that into a gem although i m not a good example because i could just have forked the outdated one but then what should i put on the wiki page keep the outdated one replace with my own keep both even if one finds an external plugin that is functional it does not benefit from a clear api documentation like the the documentation of these external plugins is fragmented not available from the same central place we might also end up in situations where an external plugin already addresses a need but due to lack of visibility then we partially try to address the need internally as in this pr which defeats a bit the purpose of supporting external plugins i don t know what is the solution maybe the documentation system could integrate external plugins as well the but in practice that s just split the source of information i now think it d be better to have a specific plugin listing page in the docs with a section for community plugins plus an invitation for people to add theirs this should add a bit of feedback too which hopefully answers the other points
| 0
|
5,585
| 8,442,070,399
|
IssuesEvent
|
2018-10-18 12:14:21
|
kiwicom/orbit-components
|
https://api.github.com/repos/kiwicom/orbit-components
|
closed
|
Radio - value covering info on smaller width
|
bug processing
|
As the title says. Here's a screenshot: https://monosnap.com/file/nueREAGeNt3Wj4SCy3I9blT1ddGRsJ
## Expected Behavior
The label span should wrap the text
## Current Behavior
It doesnt wrap it, the text gets pushed down and the label consumes the same space in one block
## Possible Solution
Add styles below to the radio label
```css
display: inline-block;
height: 100%;
```
## Steps to Reproduce
Give the radio component container a set width and lorem ipsum the value and info.
## Context (Environment)
Several radio button groups in a row
|
1.0
|
Radio - value covering info on smaller width - As the title says. Here's a screenshot: https://monosnap.com/file/nueREAGeNt3Wj4SCy3I9blT1ddGRsJ
## Expected Behavior
The label span should wrap the text
## Current Behavior
It doesnt wrap it, the text gets pushed down and the label consumes the same space in one block
## Possible Solution
Add styles below to the radio label
```css
display: inline-block;
height: 100%;
```
## Steps to Reproduce
Give the radio component container a set width and lorem ipsum the value and info.
## Context (Environment)
Several radio button groups in a row
|
process
|
radio value covering info on smaller width as the title says here s a screenshot expected behavior the label span should wrap the text current behavior it doesnt wrap it the text gets pushed down and the label consumes the same space in one block possible solution add styles below to the radio label css display inline block height steps to reproduce give the radio component container a set width and lorem ipsum the value and info context environment several radio button groups in a row
| 1
|
351,087
| 31,934,099,402
|
IssuesEvent
|
2023-09-19 09:20:09
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix Array API linalg.test_cholesky
|
Array API Sub Task Failing Test ToDo_internal
|
| | |
|---|---|
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5132836219/jobs/9234645598"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5132836219/jobs/9234645598"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5132836219/jobs/9234645598"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5132836219/jobs/9234645598"><img src=https://img.shields.io/badge/-failure-red></a>
|
1.0
|
Fix Array API linalg.test_cholesky - | | |
|---|---|
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5132836219/jobs/9234645598"><img src=https://img.shields.io/badge/-failure-red></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5132836219/jobs/9234645598"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5132836219/jobs/9234645598"><img src=https://img.shields.io/badge/-failure-red></a>
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5132836219/jobs/9234645598"><img src=https://img.shields.io/badge/-failure-red></a>
|
non_process
|
fix array api linalg test cholesky torch a href src jax a href src numpy a href src tensorflow a href src
| 0
|
50,261
| 13,187,407,143
|
IssuesEvent
|
2020-08-13 03:19:03
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
Update operator docs for pnf system (Trac #405)
|
Migrated from Trac defect jeb + pnf
|
PnF operator docs need to be updated:
- New location for DB on fpslave01
- New location for "start/stop" scripts and that these should auto-start on boot
- 1 writer -> 2 writers
- Expanded client pool.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/405
, reported by blaufuss and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T17:32:24",
"description": "PnF operator docs need to be updated:\n- New location for DB on fpslave01\n- New location for \"start/stop\" scripts and that these should auto-start on boot\n- 1 writer -> 2 writers\n- Expanded client pool.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1351704744000000",
"component": "jeb + pnf",
"summary": "Update operator docs for pnf system",
"priority": "normal",
"keywords": "",
"time": "2012-05-25T14:11:38",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Update operator docs for pnf system (Trac #405) - PnF operator docs need to be updated:
- New location for DB on fpslave01
- New location for "start/stop" scripts and that these should auto-start on boot
- 1 writer -> 2 writers
- Expanded client pool.
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/405
, reported by blaufuss and owned by blaufuss_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-10-31T17:32:24",
"description": "PnF operator docs need to be updated:\n- New location for DB on fpslave01\n- New location for \"start/stop\" scripts and that these should auto-start on boot\n- 1 writer -> 2 writers\n- Expanded client pool.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1351704744000000",
"component": "jeb + pnf",
"summary": "Update operator docs for pnf system",
"priority": "normal",
"keywords": "",
"time": "2012-05-25T14:11:38",
"milestone": "",
"owner": "blaufuss",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
update operator docs for pnf system trac pnf operator docs need to be updated new location for db on new location for start stop scripts and that these should auto start on boot writer writers expanded client pool migrated from reported by blaufuss and owned by blaufuss json status closed changetime description pnf operator docs need to be updated n new location for db on n new location for start stop scripts and that these should auto start on boot n writer writers n expanded client pool reporter blaufuss cc resolution fixed ts component jeb pnf summary update operator docs for pnf system priority normal keywords time milestone owner blaufuss type defect
| 0
|
774
| 3,257,730,206
|
IssuesEvent
|
2015-10-20 19:07:05
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
Broken test SigchildEnabledProcessTest::testPTYCommand
|
Process Unconfirmed
|
```
There was 1 failure:
1) Symfony\Component\Process\Tests\SigchildEnabledProcessTest::testPTYCommand
Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
'foo
+sh: 1: 3: Bad file descriptor
'
```
```
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.2 LTS"
```
```
uname -a
Linux ewgra-Inspiron-5720 3.13.0-58-generic #97-Ubuntu SMP Wed Jul 8 02:56:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
```
At Process::start proc_open have arguments:
$commandLine : string(61) "(echo "foo") 3>/dev/null; code=$?; echo $code >&3; exit $code"
$descriptors =
array(4) {
[0]=>
array(1) {
[0]=>
string(3) "pty"
}
[1]=>
array(1) {
[0]=>
string(3) "pty"
}
[2]=>
array(1) {
[0]=>
string(3) "pty"
}
[3]=>
array(2) {
[0]=>
string(4) "pipe"
[1]=>
string(1) "w"
}
}
$this->processPipes->pipes = array(0) {}
|
1.0
|
Broken test SigchildEnabledProcessTest::testPTYCommand - ```
There was 1 failure:
1) Symfony\Component\Process\Tests\SigchildEnabledProcessTest::testPTYCommand
Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
'foo
+sh: 1: 3: Bad file descriptor
'
```
```
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.2 LTS"
```
```
uname -a
Linux ewgra-Inspiron-5720 3.13.0-58-generic #97-Ubuntu SMP Wed Jul 8 02:56:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
```
At Process::start proc_open have arguments:
$commandLine : string(61) "(echo "foo") 3>/dev/null; code=$?; echo $code >&3; exit $code"
$descriptors =
array(4) {
[0]=>
array(1) {
[0]=>
string(3) "pty"
}
[1]=>
array(1) {
[0]=>
string(3) "pty"
}
[2]=>
array(1) {
[0]=>
string(3) "pty"
}
[3]=>
array(2) {
[0]=>
string(4) "pipe"
[1]=>
string(1) "w"
}
}
$this->processPipes->pipes = array(0) {}
|
process
|
broken test sigchildenabledprocesstest testptycommand there was failure symfony component process tests sigchildenabledprocesstest testptycommand failed asserting that two strings are equal expected actual foo sh bad file descriptor cat etc lsb release distrib id ubuntu distrib release distrib codename trusty distrib description ubuntu lts uname a linux ewgra inspiron generic ubuntu smp wed jul utc gnu linux at process start proc open have arguments commandline string echo foo dev null code echo code exit code descriptors array array string pty array string pty array string pty array string pipe string w this processpipes pipes array
| 1
|
24,582
| 2,669,238,972
|
IssuesEvent
|
2015-03-23 14:31:51
|
Connexions/webview
|
https://api.github.com/repos/Connexions/webview
|
closed
|
Changing text format in editor is not triggering save button
|
bug High Priority
|
1. Create a page.
2. Add some text and save.
3. Highlight the text and choose a format from the dropdown. Save is not triggered.
|
1.0
|
Changing text format in editor is not triggering save button - 1. Create a page.
2. Add some text and save.
3. Highlight the text and choose a format from the dropdown. Save is not triggered.
|
non_process
|
changing text format in editor is not triggering save button create a page add some text and save highlight the text and choose a format from the dropdown save is not triggered
| 0
|
358,792
| 10,640,963,303
|
IssuesEvent
|
2019-10-16 08:30:03
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.audible.co.uk - site is not usable
|
ML Correct ML ON browser-firefox engine-gecko priority-normal
|
<!-- @browser: Firefox 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.audible.co.uk/ep/acx-redemption?bp_o=true
**Browser / Version**: Firefox 70.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Sometimes it showing this
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/10/62f95914-5737-4e4d-8cc9-d89b4859e256.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191010142853</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
<p>Console Messages:</p>
<pre>
[{'level': 'warn', 'log': ['Loading failed for the <script> with source chrome-extension://team@livestartpage.com/pages/client/livestartpage-message-add.js.'], 'uri': 'https://www.audible.co.uk/ep/acx-redemption?bp_o=true', 'pos': '1:1'}, {'level': 'warn', 'log': ['onmozfullscreenchange is deprecated.'], 'uri': 'https://www.audible.co.uk/ep/acx-redemption?bp_o=true', 'pos': '0:0'}, {'level': 'warn', 'log': ['onmozfullscreenerror is deprecated.'], 'uri': 'https://www.audible.co.uk/ep/acx-redemption?bp_o=true', 'pos': '0:0'}]
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.audible.co.uk - site is not usable - <!-- @browser: Firefox 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.audible.co.uk/ep/acx-redemption?bp_o=true
**Browser / Version**: Firefox 70.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Sometimes it showing this
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/10/62f95914-5737-4e4d-8cc9-d89b4859e256.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191010142853</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
<p>Console Messages:</p>
<pre>
[{'level': 'warn', 'log': ['Loading failed for the <script> with source chrome-extension://team@livestartpage.com/pages/client/livestartpage-message-add.js.'], 'uri': 'https://www.audible.co.uk/ep/acx-redemption?bp_o=true', 'pos': '1:1'}, {'level': 'warn', 'log': ['onmozfullscreenchange is deprecated.'], 'uri': 'https://www.audible.co.uk/ep/acx-redemption?bp_o=true', 'pos': '0:0'}, {'level': 'warn', 'log': ['onmozfullscreenerror is deprecated.'], 'uri': 'https://www.audible.co.uk/ep/acx-redemption?bp_o=true', 'pos': '0:0'}]
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
site is not usable url browser version firefox operating system windows tested another browser yes problem type site is not usable description sometimes it showing this steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false console messages uri pos level warn log uri pos level warn log uri pos from with ❤️
| 0
|
25,236
| 18,292,930,621
|
IssuesEvent
|
2021-10-05 17:10:39
|
crystal-lang/crystal
|
https://api.github.com/repos/crystal-lang/crystal
|
opened
|
`test_macos` is broken
|
kind:bug topic:infrastructure
|
Just as #11275 has been fixed, there's another CI issue on macos: https://github.com/crystal-lang/crystal/runs/3805485097
```
nix-shell --pure --run 'TZ=America/New_York make std_spec clean threads=1 junit_output=.junit/std_spec.xml'
warning: file 'nixpkgs' was not found in the Nix search path (add it using $NIX_PATH or -I), at (string):1:9; will use bash from your environment
Using /nix/store/n55dgnwhnc8a6m2q9qhcymvyc0lx3wkg-llvm-10.0.0/bin/llvm-config [version=10.0.0]
clang++ -c -o src/llvm/ext/llvm_ext.o src/llvm/ext/llvm_ext.cc -I/nix/store/n55dgnwhnc8a6m2q9qhcymvyc0lx3wkg-llvm-10.0.0/include -std=c++14 -stdlib=libc++ -fno-exceptions -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS
./bin/crystal build --threads 1 --exclude-warnings spec/std --exclude-warnings spec/compiler -o .build/std_spec spec/std_spec.cr
ld: file not found: /usr/lib/system/libcache.dylib for architecture x86_64
clang-7: error: linker command failed with exit code 1 (use -v to see invocation)
Error: execution of command failed with code: 1: `clang "${@}" -o /Users/runner/.cache/crystal/Users-runner-work-crystal-crystal-src-ecr-process.cr/macro_run -rdynamic -lpcre -lgc -lpthread -L/nix/store/gbmlv7mbs2hybxakxn9qd4piycsgigmq-libevent-2.1.11/lib -levent -liconv -ldl`
```
`file 'nixpkgs' was not found in the Nix search path` sounds suspicious. It's only a warning, though. So not sure if that's at fault.
|
1.0
|
`test_macos` is broken - Just as #11275 has been fixed, there's another CI issue on macos: https://github.com/crystal-lang/crystal/runs/3805485097
```
nix-shell --pure --run 'TZ=America/New_York make std_spec clean threads=1 junit_output=.junit/std_spec.xml'
warning: file 'nixpkgs' was not found in the Nix search path (add it using $NIX_PATH or -I), at (string):1:9; will use bash from your environment
Using /nix/store/n55dgnwhnc8a6m2q9qhcymvyc0lx3wkg-llvm-10.0.0/bin/llvm-config [version=10.0.0]
clang++ -c -o src/llvm/ext/llvm_ext.o src/llvm/ext/llvm_ext.cc -I/nix/store/n55dgnwhnc8a6m2q9qhcymvyc0lx3wkg-llvm-10.0.0/include -std=c++14 -stdlib=libc++ -fno-exceptions -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS
./bin/crystal build --threads 1 --exclude-warnings spec/std --exclude-warnings spec/compiler -o .build/std_spec spec/std_spec.cr
ld: file not found: /usr/lib/system/libcache.dylib for architecture x86_64
clang-7: error: linker command failed with exit code 1 (use -v to see invocation)
Error: execution of command failed with code: 1: `clang "${@}" -o /Users/runner/.cache/crystal/Users-runner-work-crystal-crystal-src-ecr-process.cr/macro_run -rdynamic -lpcre -lgc -lpthread -L/nix/store/gbmlv7mbs2hybxakxn9qd4piycsgigmq-libevent-2.1.11/lib -levent -liconv -ldl`
```
`file 'nixpkgs' was not found in the Nix search path` sounds suspicious. It's only a warning, though. So not sure if that's at fault.
|
non_process
|
test macos is broken just as has been fixed there s another ci issue on macos nix shell pure run tz america new york make std spec clean threads junit output junit std spec xml warning file nixpkgs was not found in the nix search path add it using nix path or i at string will use bash from your environment using nix store llvm bin llvm config clang c o src llvm ext llvm ext o src llvm ext llvm ext cc i nix store llvm include std c stdlib libc fno exceptions d stdc constant macros d stdc format macros d stdc limit macros bin crystal build threads exclude warnings spec std exclude warnings spec compiler o build std spec spec std spec cr ld file not found usr lib system libcache dylib for architecture clang error linker command failed with exit code use v to see invocation error execution of command failed with code clang o users runner cache crystal users runner work crystal crystal src ecr process cr macro run rdynamic lpcre lgc lpthread l nix store libevent lib levent liconv ldl file nixpkgs was not found in the nix search path sounds suspicious it s only a warning though so not sure if that s at fault
| 0
|
173,674
| 27,511,175,756
|
IssuesEvent
|
2023-03-06 08:56:45
|
starplanter93/The_Garden_of_Musicsheet
|
https://api.github.com/repos/starplanter93/The_Garden_of_Musicsheet
|
closed
|
Design: 마이페이지 및 작성페이지 반응형 구현
|
Design
|
## Description
마이페이지 및 작성페이지 반응형 구현
## Todo
- [x] 마이페이지 반응형
- [x] 작성페이지 반응형
## ETC
기타사항
|
1.0
|
Design: 마이페이지 및 작성페이지 반응형 구현 - ## Description
마이페이지 및 작성페이지 반응형 구현
## Todo
- [x] 마이페이지 반응형
- [x] 작성페이지 반응형
## ETC
기타사항
|
non_process
|
design 마이페이지 및 작성페이지 반응형 구현 description 마이페이지 및 작성페이지 반응형 구현 todo 마이페이지 반응형 작성페이지 반응형 etc 기타사항
| 0
|
120,270
| 25,771,160,212
|
IssuesEvent
|
2022-12-09 08:06:02
|
pulumi/pulumi-yaml
|
https://api.github.com/repos/pulumi/pulumi-yaml
|
closed
|
Invalid generated yaml for example
|
kind/bug impact/usability area/docs language/yaml area/codegen
|
### What happened?
This doc page:
https://www.pulumi.com/registry/packages/azure-native/api-docs/containerservice/managedcluster/
Includes an example with this snippet:
```
- availabilityZones:
- 1
- 2
- 3
```
### Steps to reproduce
https://www.pulumi.com/registry/packages/azure-native/api-docs/containerservice/managedcluster/
### Expected Behavior
```
- availabilityZones:
- '1'
- '2'
- '3'
```
or
```
- availabilityZones: ['1', '2', '3']
```
### Actual Behavior
```
- availabilityZones:
- 1
- 2
- 3
```
### Versions used
_No response_
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
|
1.0
|
Invalid generated yaml for example - ### What happened?
This doc page:
https://www.pulumi.com/registry/packages/azure-native/api-docs/containerservice/managedcluster/
Includes an example with this snippet:
```
- availabilityZones:
- 1
- 2
- 3
```
### Steps to reproduce
https://www.pulumi.com/registry/packages/azure-native/api-docs/containerservice/managedcluster/
### Expected Behavior
```
- availabilityZones:
- '1'
- '2'
- '3'
```
or
```
- availabilityZones: ['1', '2', '3']
```
### Actual Behavior
```
- availabilityZones:
- 1
- 2
- 3
```
### Versions used
_No response_
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
|
non_process
|
invalid generated yaml for example what happened this doc page includes an example with this snippet availabilityzones steps to reproduce expected behavior availabilityzones or availabilityzones actual behavior availabilityzones versions used no response additional context no response contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already
| 0
|
5,932
| 8,755,260,880
|
IssuesEvent
|
2018-12-14 14:22:46
|
u-root/u-bmc
|
https://api.github.com/repos/u-root/u-bmc
|
closed
|
Remove u-boot and use Linux as bootloader
|
process
|
Seeing as we don't need u-boot, and the philosophy we try to follow is the lesser amount of attack surfaces - having Linux as the bootloader makes sense.
This would make #6 (possibly, it would make it easier to solve anyhow), #75, #76 obsolete and would allow us to insert early boot code (like LPC disable) on platforms to make the whole system function better.
|
1.0
|
Remove u-boot and use Linux as bootloader - Seeing as we don't need u-boot, and the philosophy we try to follow is the lesser amount of attack surfaces - having Linux as the bootloader makes sense.
This would make #6 (possibly, it would make it easier to solve anyhow), #75, #76 obsolete and would allow us to insert early boot code (like LPC disable) on platforms to make the whole system function better.
|
process
|
remove u boot and use linux as bootloader seeing as we don t need u boot and the philosophy we try to follow is the lesser amount of attack surfaces having linux as the bootloader makes sense this would make possibly it would make it easier to solve anyhow obsolete and would allow us to insert early boot code like lpc disable on platforms to make the whole system function better
| 1
|
222,249
| 17,401,459,409
|
IssuesEvent
|
2021-08-02 20:20:07
|
RFD-FHEM/RFFHEM
|
https://api.github.com/repos/RFD-FHEM/RFFHEM
|
closed
|
Github checks are failing if job is skipped
|
fixed unittest
|
## Expected Behavior
Status checks are taken from the last successfull commit if there is a skipped one
## Actual Behavior
Required checks are failing, if the last commit job is skipped:

|
1.0
|
Github checks are failing if job is skipped - ## Expected Behavior
Status checks are taken from the last successfull commit if there is a skipped one
## Actual Behavior
Required checks are failing, if the last commit job is skipped:

|
non_process
|
github checks are failing if job is skipped expected behavior status checks are taken from the last successfull commit if there is a skipped one actual behavior required checks are failing if the last commit job is skipped
| 0
|
13,080
| 15,420,959,662
|
IssuesEvent
|
2021-03-05 12:21:49
|
threefoldtech/zos
|
https://api.github.com/repos/threefoldtech/zos
|
closed
|
execute the update of network resource in a transaction
|
priority_minor process_wontfix type_feature
|
At the moment when a network reservation comes into a node to update an existing network resource. If something wrong happens during the update of the network resource, the full network resource is pretty much dead. I wonder if we could not try to execute these change into a transaction that can be rolledback. So if for some reason the new network reservation is wrong, we rollback the transaction and bing the network back to a working state.
|
1.0
|
execute the update of network resource in a transaction - At the moment when a network reservation comes into a node to update an existing network resource. If something wrong happens during the update of the network resource, the full network resource is pretty much dead. I wonder if we could not try to execute these change into a transaction that can be rolledback. So if for some reason the new network reservation is wrong, we rollback the transaction and bing the network back to a working state.
|
process
|
execute the update of network resource in a transaction at the moment when a network reservation comes into a node to update an existing network resource if something wrong happens during the update of the network resource the full network resource is pretty much dead i wonder if we could not try to execute these change into a transaction that can be rolledback so if for some reason the new network reservation is wrong we rollback the transaction and bing the network back to a working state
| 1
|
353,184
| 10,549,671,547
|
IssuesEvent
|
2019-10-03 09:15:57
|
RADAR-base/radar-upload-source-connector
|
https://api.github.com/repos/RADAR-base/radar-upload-source-connector
|
closed
|
Pagination of Participants and Records
|
high-priority upload-backend upload-frontend
|
To allow pagination, the records response should return a `lastId` and a `limit`
The front-end can use this information to issue request to query next page.
`GET /records?project-id=<projectname>&limit=<limit>&lastId=<lastIdreturnedfrompreviospage>`
|
1.0
|
Pagination of Participants and Records - To allow pagination, the records response should return a `lastId` and a `limit`
The front-end can use this information to issue request to query next page.
`GET /records?project-id=<projectname>&limit=<limit>&lastId=<lastIdreturnedfrompreviospage>`
|
non_process
|
pagination of participants and records to allow pagination the records response should return a lastid and a limit the front end can use this information to issue request to query next page get records project id limit lastid
| 0
|
15,665
| 19,847,142,891
|
IssuesEvent
|
2022-01-21 08:07:08
|
ooi-data/RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered
|
https://api.github.com/repos/ooi-data/RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:07:07.625068.
## Details
Flow name: `RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:07:07.625068.
## Details
Flow name: `RS01SBPD-DP01A-04-FLNTUA102-recovered_wfp-dpc_flnturtd_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered wfp dpc flnturtd instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages 
dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
|
3,110
| 6,130,841,991
|
IssuesEvent
|
2017-06-24 09:40:14
|
kmycode/storycanvas-csharp
|
https://api.github.com/repos/kmycode/storycanvas-csharp
|
closed
|
Removal of the entity Order property
|
correction priority-middle processing
|
This property was introduced to enforce strict ordering, but now it just gets in the way.
Simply changing the order within the list/collection is sufficient, and there is no longer any reason to maintain a dedicated property.
|
1.0
|
Removal of the entity Order property - This property was introduced to enforce strict ordering, but now it just gets in the way.
Simply changing the order within the list/collection is sufficient, and there is no longer any reason to maintain a dedicated property.
|
process
|
removal of the entity order property this property was introduced to enforce strict ordering but now it just gets in the way simply changing the order within the list collection is sufficient and there is no longer any reason to maintain a dedicated property
| 1
|
128,422
| 17,534,343,788
|
IssuesEvent
|
2021-08-12 03:44:48
|
valentinavolgina2/sunny-hikes
|
https://api.github.com/repos/valentinavolgina2/sunny-hikes
|
closed
|
[Login/Registration] Email address should be removed from confirmation notification.
|
design sign in/sign up
|
Steps:
1. Go to new prod https://www.seattlesunseeker.com/login
2. Type Username and Password into the corresponding fields
3. Click on Login button
**Actual result:** The user gets the following message: "Your registration has not been confirmed. A link to activate your account has been sent to XXXX@XXXX (the real email address is hidden) and should be arriving shortly. If it doesn't arrive in your inbox, check your spam folder."
**Expected result:** It's really bad practice to display an email like this. It should not be displayed at all, or it might be displayed partially.

|
1.0
|
[Login/Registration] Email address should be removed from confirmation notification. - Steps:
1. Go to new prod https://www.seattlesunseeker.com/login
2. Type Username and Password into the corresponding fields
3. Click on Login button
**Actual result:** The user gets the following message: "Your registration has not been confirmed. A link to activate your account has been sent to XXXX@XXXX (the real email address is hidden) and should be arriving shortly. If it doesn't arrive in your inbox, check your spam folder."
**Expected result:** It's really bad practice to display an email like this. It should not be displayed at all, or it might be displayed partially.

|
non_process
|
email address should be removed from confirmation notification steps go to new prod type username and password into the corresponding fields click on login button actual result the user gets the following message your registration has not been confirmed a link to activate your account has been sent to xxxx xxxx the real email address is hidden and should be arriving shortly if it doesn t arrive in your inbox check your spam folder expected result it s really bad practice to display an email like this it should not be displayed at all or it might be displayed partially
| 0
|
5,489
| 8,359,512,815
|
IssuesEvent
|
2018-10-03 08:30:24
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
In every entity can't add watcher with permission
|
Process bug bug
|
@abrahamos
open a few things in some entity.
choose them with multiple select and click on watchers.
add some friends.
click one of them and set them as an editor.
click on update.
the friend is added, but without permission.
|
1.0
|
In every entity can't add watcher with permission - @abrahamos
open a few things in some entity.
choose them with multiple select and click on watchers.
add some friends.
click one of them and set them as an editor.
click on update.
the friend is added, but without permission.
|
process
|
in every entity can t add watcher with permission abrahamos open some few things in some entity choose them with multiple select and click on watchers add some friends click one of them and set them as a editor click on update the friend add but without permission
| 1
|
84,973
| 10,423,768,841
|
IssuesEvent
|
2019-09-16 12:15:38
|
dotnet/winforms
|
https://api.github.com/repos/dotnet/winforms
|
closed
|
New VB default font needs to be highlighted in Read Me/Whats New
|
documentation: breaking
|
* .NET Core Version: 3.0 Preview6
* Have you experienced this same ISSUE with .NET Framework?: No
**Problem description:**
The default font for VB Forms has changed and is larger causing layouts to look different
**Actual behavior:**
Controls are not in the same place between Framework and Core
**Expected behavior:**
Things would look the same, but since that is not happening some customer facing visible document of this change
**Minimal repro:**

|
1.0
|
New VB default font needs to be highlighted in Read Me/Whats New - * .NET Core Version: 3.0 Preview6
* Have you experienced this same ISSUE with .NET Framework?: No
**Problem description:**
The default font for VB Forms has changed and is larger causing layouts to look different
**Actual behavior:**
Controls are not in the same place between Framework and Core
**Expected behavior:**
Things would look the same, but since that is not happening some customer facing visible document of this change
**Minimal repro:**

|
non_process
|
new vb default font needs to be highlighted in read me whats new net core version have you experienced this same issue with net framework no problem description the default font for vb forms has changed and is larger causing layouts to look different actual behavior controls are not in the same place between framework and core expected behavior things would look the same but since that is not happening some customer facing visible document of this change minimal repro
| 0
|
3,352
| 6,486,746,810
|
IssuesEvent
|
2017-08-19 22:51:03
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
The basenode should store 'timestamp' for lastModified and dateCreated in all classes.
|
libs-utillib status-inprocess type-enhancement
|
It would be very helpful if every piece of data had a timestamp of when it was written. This would help immensely in debugging weird problems because we could see when the error was written.
|
1.0
|
The basenode should store 'timestamp' for lastModified and dateCreated in all classes. - It would be very helpful if every piece of data had a timestamp of when it was written. This would help immensely in debugging weird problems because we could see when the error was written.
|
process
|
the basenode should store timestamp for lastmodified and datecreated in all classes it would be very helpful if every piece of data had a timestamp of when it was written this would help immensely in debugging weird problems because we could see when the error was written
| 1
|
3,072
| 6,066,565,099
|
IssuesEvent
|
2017-06-14 18:46:57
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Concurrent sessions are not logged out immediately even after a new login occur
|
kind/bug process/cherry-pick process/cherry-picked status/resolved status/to-test version/1.6
|
**Rancher versions:**
rancher/server: master build
**Steps to Reproduce:**
- Enable `auth.limit.concurrent.sessions`.
- Open one user session and then open another new session for the same user.
**Results:**
In the first session you will not be logged out immediately, and you will still see the resources created until one of the following is done:
1. Manual refresh
2. request a new url
3. After 5 minutes.
|
2.0
|
Concurrent sessions are not logged out immediately even after a new login occur - **Rancher versions:**
rancher/server: master build
**Steps to Reproduce:**
- Enable `auth.limit.concurrent.sessions`.
- Open one user session and then open another new session for the same user.
**Results:**
In the first session you will not be logged out immediately, and you will still see the resources created until one of the following is done:
1. Manual refresh
2. request a new url
3. After 5 minutes.
|
process
|
concurrent sessions are not logged out immediately even after a new login occur rancher versions rancher server master build steps to reproduce enable auth limit concurrent sessions open one user session and then open another new session for the same user results in the first session you will not be logged out immediately and you will still see the resources created until one of the following is done manual refresh request a new url after minutes
| 1
|
2,296
| 5,115,692,780
|
IssuesEvent
|
2017-01-06 22:40:28
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Option for downloading CSV with no rows limit
|
Bug Priority/P1 Query Processor
|
It would be great to facilitate data extraction through Metabase.
A few potential enhancements to do so:
- Choose columns to display/export (already covered by #1445)
- Have a LIMIT to the query for visualisation update, and choose to remove this LIMIT when exporting
=> this is possible today (but not so user-friendly) by setting the limit, hit "get answer" to check data, and remove the LIMIT options before clicking on the export to CSV icon
- Allow additional options to CSV export (remove LIMIT condition)
|
1.0
|
Option for downloading CSV with no rows limit - It would be great to facilitate data extraction through Metabase.
A few potential enhancements to do so:
- Choose columns to display/export (already covered by #1445)
- Have a LIMIT to the query for visualisation update, and choose to remove this LIMIT when exporting
=> this is possible today (but not so user-friendly) by setting the limit, hit "get answer" to check data, and remove the LIMIT options before clicking on the export to CSV icon
- Allow additional options to CSV export (remove LIMIT condition)
|
process
|
option for downloading csv with no rows limit it would be great to facilitate data extraction through metabase a few potential enhancements to do so choose columns to display export already covered by have a limit to the query for visualisation update and choose to remove this limit when exporting this is possible today but not so user friendly by setting the limit hit get answer to check data and remove the limit options before clicking on the export to csv icon allow additional options to csv export remove limit condition
| 1
|
135,143
| 12,675,959,591
|
IssuesEvent
|
2020-06-19 03:28:28
|
cashapp/sqldelight
|
https://api.github.com/repos/cashapp/sqldelight
|
closed
|
Comprehension questions: initial table creation and migration
|
component: sqlite-migrations documentation
|
Hi,
I've read through the documentation but there are still some open questions regarding initial table creation and migration. I'm working on the JVM, no Android or iOS.
The `.sq` files should both create the tables and define queries. Is it recommended to use `IF NOT EXISTS` here? Should `Database.Schema.create()` be called during every startup or is there a way to detect that the tables have already been created (besides using `IF NOT EXISTS`)?
The documentation states that the `.sq` files define the latest table / query versions and that migrations should be used to update a previously created table to the latest version. So in case of a database change I need to both update the `.sq` files **and** create a new migration `.sqm` file that applies the changes I just made inside the `.sq` files to an older database version? This seems redundant and potentially error-prone as I need to figure out the steps to arrive at the table version defined inside the `.sq` files.
According to the documentation, before the first migration a `.db` should be created which, I think, represents the (at the time) latest database version. During the build the migrations get applied to this "snapshot" and sqldelight checks if the migrated database is equal to the latest version. So basically if the `.db` file represents the database at version `v1` and I change the database afterwards to version `v2` and write migration files, sqldelight can check whether my migrations applied to the `v1` database result in the `v2` database?
My background is Flyway, so I'm a little confused how to perform the initial table creation and subsequent migrations in sqldelight. I tried to use `migrate()` for the initial table creation but this did not work.
Maybe an example step by step guide on how to perform these steps would be helpful (start with an initial database and the perform several changes).
|
1.0
|
Comprehension questions: initial table creation and migration - Hi,
I've read through the documentation but there are still some open questions regarding initial table creation and migration. I'm working on the JVM, no Android or iOS.
The `.sq` files should both create the tables and define queries. Is it recommended to use `IF NOT EXISTS` here? Should `Database.Schema.create()` be called during every startup or is there a way to detect that the tables have already been created (besides using `IF NOT EXISTS`)?
The documentation states that the `.sq` files define the latest table / query versions and that migrations should be used to update a previously created table to the latest version. So in case of a database change I need to both update the `.sq` files **and** create a new migration `.sqm` file that applies the changes I just made inside the `.sq` files to an older database version? This seems redundant and potentially error-prone as I need to figure out the steps to arrive at the table version defined inside the `.sq` files.
According to the documentation, before the first migration a `.db` should be created which, I think, represents the (at the time) latest database version. During the build the migrations get applied to this "snapshot" and sqldelight checks if the migrated database is equal to the latest version. So basically if the `.db` file represents the database at version `v1` and I change the database afterwards to version `v2` and write migration files, sqldelight can check whether my migrations applied to the `v1` database result in the `v2` database?
My background is Flyway, so I'm a little confused how to perform the initial table creation and subsequent migrations in sqldelight. I tried to use `migrate()` for the initial table creation but this did not work.
Maybe an example step by step guide on how to perform these steps would be helpful (start with an initial database and the perform several changes).
|
non_process
|
comprehension questions initial table creation and migration hi i ve read through the documentation but there are still some open questions regarding initial table creation and migration i m working on the jvm no android or ios the sq files should both create the tables and define queries is it recommended to use if not exists here should database schema create be called during every startup or is there a way to detect that the tables have already been created besides using if not exists the documentations states that the sq files define the latest table query versions and that migrations should be used to update a previously created table to the latest version so in case of a database change i need to do both update the sq files and create a new migration sqm file that applies the changes i just made inside the sq files to an older database version this seems redundant and potentially error prone as i need to figure out the steps to arrive at the table version defined inside the sq files according to the documentation before the first migration a db should be created which i think represents the at the time latest database version during the build the migrations get applied to this snapshot and sqldelight checks if the migrated database is equal to the latest version so basically if the db file represents the database at version and i change the database afterwards to version and write migration files sqldelight can check whether my migrations applied to the database result in the database my background is flyway so i m a little confused how to perform the initial table creation and subsequent migrations in sqldelight i tried to use migrate for the initial table creation but this did not work maybe an example step by step guide on how to perform these steps would be helpful start with an initial database and the perform several changes
| 0
|
5,025
| 7,846,657,773
|
IssuesEvent
|
2018-06-19 16:03:23
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
any automatic process to address processing doc?
|
Processing
|
I note that there are some processing tools that have help in doc but not in desktop:
- `Point Displacement`, `Execute SQL`...
- `symmetrical difference` has help when opened from vector menu but not from toolbox (need an issue report?)
Is there any process to automatically fill help (or list of algs) between doc and app or is that done manually?
@volaya @alexbruy ?
|
1.0
|
any automatic process to address processing doc? - I note that there are some processing tools that have help in doc but not in desktop:
- `Point Displacement`, `Execute SQL`...
- `symmetrical difference` has help when opened from vector menu but not from toolbox (need an issue report?)
Is there any process to automatically fill help (or list of algs) between doc and app or is that done manually?
@volaya @alexbruy ?
|
process
|
any automatic process to address processing doc i note that there are some processing tools that have help in doc but not in desktop point displacement execute sql symmetrical difference has help when opened from vector menu but not from toolbox need an issue report is there any process to automatically fill help or list of algs between doc and app or is that done manually volaya alexbruy
| 1
|
76,765
| 21,568,419,469
|
IssuesEvent
|
2022-05-02 03:56:03
|
seek-oss/vanilla-extract
|
https://api.github.com/repos/seek-oss/vanilla-extract
|
closed
|
url( ) local imports do not work with esbuild and the esbuild-plugin
|
bug/integration bug/esbuild
|
### Describe the bug
Cannot set up a local url( ) import in css.ts files
### Link to reproduction
https://github.com/dmytro-shpak/esbuild-vanilla-extract-bug/
```
npm install
npm test
```
> Could not resolve "./x.svg" (the plugin "vanilla-extract" didn't set a resolve directory)
### System Info
Output of `npx envinfo --system --npmPackages @vanilla-extract/css,@vanilla-extract/webpack-plugin,@vanilla-extract/esbuild-plugin,@vanilla-extract/vite-plugin,@vanilla-extract/sprinkles,webpack,esbuild,vite --binaries --browsers`:
```node
System:
OS: Linux 5.4 Ubuntu 20.04.3 LTS (Focal Fossa)
CPU: (16) x64 AMD Ryzen 7 3700X 8-Core Processor
Memory: 58.36 GB / 62.74 GB
Container: Yes
Shell: 5.0.17 - /bin/bash
Binaries:
Node: 16.13.0 - ~/.nvm/versions/node/v16.13.0/bin/node
npm: 8.1.0 - ~/.nvm/versions/node/v16.13.0/bin/npm
npmPackages:
@vanilla-extract/css: ^1.6.3 => 1.6.3
@vanilla-extract/esbuild-plugin: ^2.0.0 => 2.0.0
esbuild: ^0.13.13 => 0.13.13
```
### The workaround
Use babel for *.css.ts files
```
import babel from 'esbuild-plugin-babel';
```
```
plugins: [
babel({
filter: /.*.css.ts/,
config: {
presets: ['@babel/preset-typescript'],
plugins: ['@vanilla-extract/babel-plugin'],
},
}),
],
```
|
1.0
|
url( ) local imports do not work with esbuild and the esbuild-plugin - ### Describe the bug
Cannot set up a local url( ) import in css.ts files
### Link to reproduction
https://github.com/dmytro-shpak/esbuild-vanilla-extract-bug/
```
npm install
npm test
```
> Could not resolve "./x.svg" (the plugin "vanilla-extract" didn't set a resolve directory)
### System Info
Output of `npx envinfo --system --npmPackages @vanilla-extract/css,@vanilla-extract/webpack-plugin,@vanilla-extract/esbuild-plugin,@vanilla-extract/vite-plugin,@vanilla-extract/sprinkles,webpack,esbuild,vite --binaries --browsers`:
```node
System:
OS: Linux 5.4 Ubuntu 20.04.3 LTS (Focal Fossa)
CPU: (16) x64 AMD Ryzen 7 3700X 8-Core Processor
Memory: 58.36 GB / 62.74 GB
Container: Yes
Shell: 5.0.17 - /bin/bash
Binaries:
Node: 16.13.0 - ~/.nvm/versions/node/v16.13.0/bin/node
npm: 8.1.0 - ~/.nvm/versions/node/v16.13.0/bin/npm
npmPackages:
@vanilla-extract/css: ^1.6.3 => 1.6.3
@vanilla-extract/esbuild-plugin: ^2.0.0 => 2.0.0
esbuild: ^0.13.13 => 0.13.13
```
### The workaround
Use babel for *.css.ts files
```
import babel from 'esbuild-plugin-babel';
```
```
plugins: [
babel({
filter: /.*.css.ts/,
config: {
presets: ['@babel/preset-typescript'],
plugins: ['@vanilla-extract/babel-plugin'],
},
}),
],
```
|
non_process
|
url local imports does not works with esbuild and esbuild plugin plugin describe the bug can not setup local url import in css ts files link to reproduction npm install npm test could not resolve x svg the plugin vanilla extract didn t set a resolve directory system info output of npx envinfo system npmpackages vanilla extract css vanilla extract webpack plugin vanilla extract esbuild plugin vanilla extract vite plugin vanilla extract sprinkles webpack esbuild vite binaries browsers node system os linux ubuntu lts focal fossa cpu amd ryzen core processor memory gb gb container yes shell bin bash binaries node nvm versions node bin node npm nvm versions node bin npm npmpackages vanilla extract css vanilla extract esbuild plugin esbuild the workaround use babel for css ts files import babel from esbuild plugin babel plugins babel filter css ts config presets plugins
| 0
|
153,019
| 13,491,096,031
|
IssuesEvent
|
2020-09-11 16:00:39
|
COSC481W-2020Fall/cosc481w-581-2020-fall-convoluted-classifiers
|
https://api.github.com/repos/COSC481W-2020Fall/cosc481w-581-2020-fall-convoluted-classifiers
|
closed
|
Research Python Modules
|
documentation
|
We need to research and create a list of Python modules that can be used for our project.
|
1.0
|
Research Python Modules - We need to research and create a list of Python modules that can be used for our project.
|
non_process
|
research python modules we need to research and create a list of python modules that can be used for our project
| 0
|
12,220
| 8,643,752,555
|
IssuesEvent
|
2018-11-25 20:53:06
|
patrickfav/armadillo
|
https://api.github.com/repos/patrickfav/armadillo
|
closed
|
Password is not being used for deriving the encryption key
|
bug security
|
The user-provided password is supposed to be used to derive the encryption key. However, it seems that it is currently not being used.
How to reproduce:
1. Instantiate Armadillo with password A.
2. Save some data
3. Instantiate Armadillo with password B (without deleting data).
4. Try to retrieve the data stored with password A.
-> You are able to retrieve the plain data
|
True
|
Password is not being used for deriving the encryption key - The user-provided password is supposed to be used to derive the encryption key. However, it seems that it is currently not being used.
How to reproduce:
1. Instantiate Armadillo with password A.
2. Save some data
3. Instantiate Armadillo with password B (without deleting data).
4. Try to retrieve the data stored with password A.
-> You are able to retrieve the plain data
|
non_process
|
password is not being used for deriving encryption key the user provided password is supposed to be used to derive the encryption key however it seems that it is currently not being used how to reproduce instantiate armadillo with password a save some data instantiate armadillo with password b without deleting data try to retrieve data stored with password a you are able to retrieve the plain data
| 0
|
6,403
| 5,411,211,678
|
IssuesEvent
|
2017-03-01 10:59:02
|
devtools-html/debugger.html
|
https://api.github.com/repos/devtools-html/debugger.html
|
reopened
|
Lots of "Not responding" / Spinning wheels with large js file
|
bug bug: P1 performance Release: Commitment
|
I know this isn't the most useful bug report, but I am getting lots of "Not responding" issues on Windows 10. That is, the Firefox window regularly hangs for about 5-10secs. Happens:
- When I simply Ctrl+F in a JS File which is 2MB
- When I have breakpoints in this file and reload
My system isn't "weak" either:
* Intel(R) Core(TM) i5-3570 CPU @ 3.40GHz, 3401 MHz, 4 Kern(e), 4 logische(r) Prozessor(en)
* 12 GB RAM
Really hard getting any work done with the current Firefox Developer Edition I am afraid to say :-/ Let me know if I could / should run any diagnostics/profiling or whatever that might help you...
|
True
|
Lots of "Not responding" / Spinning wheels with large js file - I know this isn't the most useful bug report, but I am getting lots of "Not responding" issues on Windows 10. That is, the Firefox window regularly hangs for about 5-10secs. Happens:
- When I simply Ctrl+F in a JS File which is 2MB
- When I have breakpoints in this file and reload
My system isn't "weak" either:
* Intel(R) Core(TM) i5-3570 CPU @ 3.40GHz, 3401 MHz, 4 Kern(e), 4 logische(r) Prozessor(en)
* 12 GB RAM
Really hard getting any work done with the current Firefox Developer Edition I am afraid to say :-/ Let me know if I could / should run any diagnostics/profiling or whatever that might help you...
|
non_process
|
lots of not responding spinning wheels with large js file i know this isn t the most useful bug report but i am getting lots of not responding issues on windows that is the firefox window regularly hangs for about happens when i simply ctrl f in a js file which is when i have breakpoints in this file and reload my system isn t weak either intel r core tm cpu mhz kern e logische r prozessor en gb ram really hard getting any work done with current firefox developer editon i am afraid to say let me know if i could should run any diagnostics profiling or whatever that might help you
| 0
|
579,239
| 17,186,253,182
|
IssuesEvent
|
2021-07-16 02:43:09
|
TeamDooRiBon/DooRi-iOS
|
https://api.github.com/repos/TeamDooRiBon/DooRi-iOS
|
closed
|
[FEAT] 살펴보기뷰 API 연결
|
Feat Minjae 🐻❄️ Network P1 / Priority High Sangjin 🐨
|
# 👀 이슈 (issue)
살펴보기뷰에 대한 API를 연결합니다.
<img width="250" alt="스크린샷 2021-07-16 오전 2 01 58" src="https://user-images.githubusercontent.com/61109660/125828324-6c2d2d0f-471c-420d-85d9-3b5bcf653b55.png">
# 🚀 to-do
<!-- 진행할 작업에 대해 적어주세요 -->
- [ ] Postman 테스트
- [ ] 데이터 모델 생성
- [ ] 데이터 서비스 구현
- [ ] API 연결
|
1.0
|
[FEAT] 살펴보기뷰 API 연결 - # 👀 이슈 (issue)
살펴보기뷰에 대한 API를 연결합니다.
<img width="250" alt="스크린샷 2021-07-16 오전 2 01 58" src="https://user-images.githubusercontent.com/61109660/125828324-6c2d2d0f-471c-420d-85d9-3b5bcf653b55.png">
# 🚀 to-do
<!-- 진행할 작업에 대해 적어주세요 -->
- [ ] Postman 테스트
- [ ] 데이터 모델 생성
- [ ] 데이터 서비스 구현
- [ ] API 연결
|
non_process
|
살펴보기뷰 api 연결 👀 이슈 issue 살펴보기뷰에 대한 api를 연결합니다 img width alt 스크린샷 오전 src 🚀 to do postman 테스트 데이터 모델 생성 데이터 서비스 구현 api 연결
| 0
|
8,446
| 11,614,671,832
|
IssuesEvent
|
2020-02-26 13:00:24
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
sklearn.preprocessing.StandardScaler gets NaN variance when partial_fit with sparse data
|
Bug module:preprocessing
|
#### Describe the bug
When I feed a specific dataset (which is sparse) to sklearn.preprocessing.StandardScaler.partial_fit in a specific order, I get variance which is NaN although the data does **NOT** contain any NaNs and is very small.
When I convert the sparse arrays to dense, it works. When I change the order to feed the data, it works too.
#### Steps/Code to Reproduce
Please work with the data I attached. [sparse_data.tar.gz](https://github.com/scikit-learn/scikit-learn/files/4208684/sparse_data.tar.gz)
```python
import scipy.sparse as sp
from sklearn import preprocessing
s0 = sp.load_npz('0.npz')
s1 = sp.load_npz('1.npz')
# Buggy behavior
ss0 = preprocessing.StandardScaler(with_mean=False)
ss0.partial_fit(s0)
print(ss0.var_)
ss0.partial_fit(s1)
print(ss0.var_) # => gets NaN
# When using a dense array, it works
ss1 = preprocessing.StandardScaler(with_mean=False)
ss1.partial_fit(s0.toarray())
print(ss1.var_)
ss1.partial_fit(s1.toarray())
print(ss1.var_)
# When changing the order of the data, it works
ss2 = preprocessing.StandardScaler(with_mean=False)
ss2.partial_fit(s1)
print(ss2.var_)
ss2.partial_fit(s0)
print(ss2.var_)
```
EDIT: Fix sample code around ss2
#### Expected Results
```python
ss0.var_ # => [0.15896542]
ss1.var_ # => [0.15896542]
ss2.var_ # => [0.15896542]
```
#### Actual Results
```python
ss0.var_ # => [nan]
ss1.var_ # => [0.15896542]
ss2.var_ # => [0.15896542]
```
#### Versions
I confirmed this issue in two different environments.
```
System:
python: 3.7.3 (default, Apr 22 2019, 02:40:09) [Clang 10.0.1 (clang-1001.0.46.4)]
executable: /usr/local/var/pyenv/versions/3.7.3/bin/python3
machine: Darwin-19.3.0-x86_64-i386-64bit
Python dependencies:
pip: 20.0.2
setuptools: 40.8.0
sklearn: 0.22
numpy: 1.18.0
scipy: 1.4.1
Cython: None
pandas: 0.25.3
matplotlib: 3.1.2
joblib: 0.14.1
Built with OpenMP: True
```
```
System:
python: 3.7.6 (default, Feb 14 2020, 16:41:52) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
executable: /home/***/ws/siml/.venv/bin/python3
machine: Linux-4.18.0-147.5.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
Python dependencies:
pip: 19.2.3
setuptools: 41.2.0
sklearn: 0.22.1
numpy: 1.18.1
scipy: 1.4.1
Cython: None
pandas: 0.25.3
matplotlib: 3.1.3
joblib: 0.14.1
Built with OpenMP: True
```
|
1.0
|
sklearn.preprocessing.StandardScaler gets NaN variance when partial_fit with sparse data - #### Describe the bug
When I feed a specific dataset (which is sparse) to sklearn.preprocessing.StandardScaler.partial_fit in a specific order, I get variance which is NaN although the data does **NOT** contain any NaNs and is very small.
When I convert the sparse arrays to dense, it works. When I change the order to feed the data, it works too.
#### Steps/Code to Reproduce
Please work with the data I attached. [sparse_data.tar.gz](https://github.com/scikit-learn/scikit-learn/files/4208684/sparse_data.tar.gz)
```python
import scipy.sparse as sp
from sklearn import preprocessing
s0 = sp.load_npz('0.npz')
s1 = sp.load_npz('1.npz')
# Buggy behavior
ss0 = preprocessing.StandardScaler(with_mean=False)
ss0.partial_fit(s0)
print(ss0.var_)
ss0.partial_fit(s1)
print(ss0.var_) # => gets NaN
# When using a dense array, it works
ss1 = preprocessing.StandardScaler(with_mean=False)
ss1.partial_fit(s0.toarray())
print(ss1.var_)
ss1.partial_fit(s1.toarray())
print(ss1.var_)
# When changing the order of the data, it works
ss2 = preprocessing.StandardScaler(with_mean=False)
ss2.partial_fit(s1)
print(ss2.var_)
ss2.partial_fit(s0)
print(ss2.var_)
```
EDIT: Fix sample code around ss2
#### Expected Results
```python
ss0.var_ # => [0.15896542]
ss1.var_ # => [0.15896542]
ss2.var_ # => [0.15896542]
```
#### Actual Results
```python
ss0.var_ # => [nan]
ss1.var_ # => [0.15896542]
ss2.var_ # => [0.15896542]
```
#### Versions
I confirmed this issue in two different environments.
```
System:
python: 3.7.3 (default, Apr 22 2019, 02:40:09) [Clang 10.0.1 (clang-1001.0.46.4)]
executable: /usr/local/var/pyenv/versions/3.7.3/bin/python3
machine: Darwin-19.3.0-x86_64-i386-64bit
Python dependencies:
pip: 20.0.2
setuptools: 40.8.0
sklearn: 0.22
numpy: 1.18.0
scipy: 1.4.1
Cython: None
pandas: 0.25.3
matplotlib: 3.1.2
joblib: 0.14.1
Built with OpenMP: True
```
```
System:
python: 3.7.6 (default, Feb 14 2020, 16:41:52) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
executable: /home/***/ws/siml/.venv/bin/python3
machine: Linux-4.18.0-147.5.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
Python dependencies:
pip: 19.2.3
setuptools: 41.2.0
sklearn: 0.22.1
numpy: 1.18.1
scipy: 1.4.1
Cython: None
pandas: 0.25.3
matplotlib: 3.1.3
joblib: 0.14.1
Built with OpenMP: True
```
|
process
|
sklearn preprocessing standardscaler gets nan variance when partial fit with sparse data describe the bug when i feed a specific dataset which is sparse to sklearn preprocessing standardscaler partial fit in a specific order i get variance which is nan although data does not contains any nans and is very small when i convert the sparse arrays to dense it works when i change the order to feed the data it works too steps code to reproduce please work with the data i attached python import scipy sparse as sp from sklearn import preprocessing sp load npz npz sp load npz npz buggy behavior preprocessing standardscaler with mean false partial fit print var partial fit print var gets nan when use dence array it works preprocessing standardscaler with mean false partial fit toarray print var partial fit toarray print var when change the order of data it works preprocessing standardscaler with mean false partial fit print var partial fit print var edit fix sample code around expected results python var var var actual results python var var var versions i confirmed this issue in two different environments system python default apr executable usr local var pyenv versions bin machine darwin python dependencies pip setuptools sklearn numpy scipy cython none pandas matplotlib joblib built with openmp true system python default feb executable home ws siml venv bin machine linux with centos core python dependencies pip setuptools sklearn numpy scipy cython none pandas matplotlib joblib built with openmp true
| 1
|
17,352
| 23,174,902,473
|
IssuesEvent
|
2022-07-31 09:14:02
|
droidyuecom/comments_droidyuecom
|
https://api.github.com/repos/droidyuecom/comments_droidyuecom
|
opened
|
使用 flutter attach 实现代码与应用进程关联 - 技术小黑屋
|
Gitalk 2022/07/31/flutter-attach-process/
|
https://droidyue.com/blog/2022/07/31/flutter-attach-process/
使用 Flutter Attach 实现代码与应用进程关联 Jul 31st, 2022 当我们使用 flutter run 调试 App 时,假如数据线接触不良或者断开,当我们想要继续调试的时候,可能就需要再次执行 flutter run。 但其实,还有一个命令叫做 flutter …
|
1.0
|
使用 flutter attach 实现代码与应用进程关联 - 技术小黑屋 - https://droidyue.com/blog/2022/07/31/flutter-attach-process/
使用 Flutter Attach 实现代码与应用进程关联 Jul 31st, 2022 当我们使用 flutter run 调试 App 时,假如数据线接触不良或者断开,当我们想要继续调试的时候,可能就需要再次执行 flutter run。 但其实,还有一个命令叫做 flutter …
|
process
|
使用 flutter attach 实现代码与应用进程关联 技术小黑屋 使用 flutter attach 实现代码与应用进程关联 jul 当我们使用 flutter run 调试 app 时,假如数据线接触不良或者断开,当我们想要继续调试的时候,可能就需要再次执行 flutter run。 但其实,还有一个命令叫做 flutter …
| 1
|
319,313
| 23,765,120,749
|
IssuesEvent
|
2022-09-01 12:11:55
|
actions/cache
|
https://api.github.com/repos/actions/cache
|
closed
|
using cache on tags does not work
|
documentation area:tags
|
hi!
I have a cache created on a non-default branch, all works fine except when the workflow is run on **a tag**, the cache isn't hit
when it's not a tagged commit, and my github.ref is "...heads" it starts working again.
I guess it's related to https://github.com/actions/cache#cache-scopes
Essentially, it looks like there is no way to use cache in runs on tagged revisions, or is it? (I don't have any caches on my master yet). Is it an intended behavior?
At least, it'd be good if it was described in readme in that "scopes" section.
|
1.0
|
using cache on tags does not work - hi!
I have a cache created on a non-default branch, all works fine except when the workflow is run on **a tag**, the cache isn't hit
when it's not a tagged commit, and my github.ref is "...heads" it starts working again.
I guess it's related to https://github.com/actions/cache#cache-scopes
Essentially, it looks like there is no way to use cache in runs on tagged revisions, or is it? (I don't have any caches on my master yet). Is it an intended behavior?
At least, it'd be good if it was described in readme in that "scopes" section.
|
non_process
|
using cache on tags does not work hi i have a cache created on a non default branch all works fine except when the workflow is run on a tag the cache isn t hit when it s not a tagged commit and my github ref is heads it starts working again i guess it s related to essentially it looks like there is no way to use cache in runs on tagged revisions or is it i don t have any caches on my master yet is it an intended behavior at least it d be good if it was described in readme in that scopes section
| 0
|
13,944
| 16,720,307,525
|
IssuesEvent
|
2021-06-10 06:19:23
|
aodn/imos-toolbox
|
https://api.github.com/repos/aodn/imos-toolbox
|
closed
|
WorkhorseParser - wrong assignment of velocity components
|
Type:Reprocessing Type:bug Unit:Instrument Reader Unit:TimeSeries
|
This bug affects v2.6.11 & v2.6.12
When reading ENU datasets with the newly refactored workhorse Parser, the variable mappings are reversed and a wrong assignment is being done. The current bug lies in assigning `velocity1->VCUR` and `velocity2->UCUR`, while the correct is the reverse.
This only occurs for ENU datasets.
The problem is located in the import_mappings on the recently refactored Workhorse parser (v2.6.11+). It went undetected because even the tests got the typo (see `+Workhorse/import_mappings.m`). We are also missing a content regression test against ENU files, which would have picked up the problem.
The origin is likely related to a wrong copy/paste/edit since the original workhorseParser firstly defined a VCUR variable, then a UCUR variable, but with the correct assignments.
Thanks to Tim Austin for reporting
|
1.0
|
WorkhorseParser - wrong assignment of velocity components - This bug affects v2.6.11 & v2.6.12
When reading ENU datasets with the newly refactored workhorse Parser, the variable mappings are reversed and a wrong assignment is being done. The current bug lies in assigning `velocity1->VCUR` and `velocity2->UCUR`, while the correct is the reverse.
This only occurs for ENU datasets.
The problem is located in the import_mappings on the recently refactored Workhorse parser (v2.6.11+). It went undetected because even the tests got the typo (see `+Workhorse/import_mappings.m`). We are also missing a content regression test against ENU files, which would have picked up the problem.
The origin is likely related to a wrong copy/paste/edit since the original workhorseParser firstly defined a VCUR variable, then a UCUR variable, but with the correct assignments.
Thanks to Tim Austin for reporting
|
process
|
workhorseparser wrong assignment of velocity components this bug affects when reading enu datasets with the newly refactored workhorse parser the variable mappings are reversed and a wrong assignment is being done the current bug lies in assigning vcur and ucur while the correct is the reverse this only occurs for enu datasets the problem is located in the import mappings on the recently refactored workhorse parser it went undetected because even the tests got the typo see workhorse import mappings m we are also missing a content regression test against enu files which would have picked the problem the origin is likely related to a wrong copy paste edit since the original workhorseparser firstly defined a vcur variable then a ucur variable but with the correct assignments thanks to tim austin for reporting
| 1
|
31,686
| 26,005,889,061
|
IssuesEvent
|
2022-12-20 19:19:06
|
FuelLabs/infrastructure
|
https://api.github.com/repos/FuelLabs/infrastructure
|
closed
|
Upgrade fuel-dev & fuel-prod to v1.24
|
infrastructure
|
Update terraform code to deploy v1.24 for fuel-dev and fuel-prod:
-fuel-dev
v1.21 -> 1.22 -> 1.23 -> 1.24
- fuel-prod
v1.23 -> 1.24
|
1.0
|
Upgrade fuel-dev & fuel-prod to v1.24 - Update terraform code to deploy v1.24 for fuel-dev and fuel-prod:
-fuel-dev
v1.21 -> 1.22 -> 1.23 -> 1.24
- fuel-prod
v1.23 -> 1.24
|
non_process
|
upgrade fuel dev fuel prod to update terraform code to deploy for fuel dev and fuel prod fuel dev fuel prod
| 0
|
115,079
| 4,651,652,172
|
IssuesEvent
|
2016-10-03 11:01:08
|
TheScienceMuseum/collectionsonline
|
https://api.github.com/repos/TheScienceMuseum/collectionsonline
|
closed
|
Use the right property (name) of the category object for the facet
|
enhancement priority-4
|
Use the right property (name) of the category object for the facet (category.name) to just show the category name (without the museum name).
~~This update should ideally be made to the index,which I will chase up.~~
~~But in the interim would it be possible to strip the Museum name from the front of Category field. Either at the point we extract it from the index, or before we display it (both in the facets and on the object page and ideally the API as well).~~
~~So basically strip everything before the first - (dash) to leave just the category name.~~
|
1.0
|
Use the right property (name) of the category object for the facet - Use the right property (name) of the category object for the facet (category.name) to just show the category name (without the museum name).
~~This update should ideally be made to the index,which I will chase up.~~
~~But in the interim would it be possible to strip the Museum name from the front of Category field. Either at the point we extract it from the index, or before we display it (both in the facets and on the object page and ideally the API as well).~~
~~So basically strip everything before the first - (dash) to leave just the category name.~~
|
non_process
|
use the right property name of the category object for the facet use the right property name of the category object for the facet category name to just show the category name without the museum name this update should ideally be made to the index which i will chase up but in the interim would it be possible to strip the museum name from the front of category field either at the point we extract it from the index or before we display it both in the facets and on the object page and ideally the api as well so basically strip everything before the first dash to leave just the category name
| 0
|
3,268
| 6,344,377,706
|
IssuesEvent
|
2017-07-27 19:46:34
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Resolver Error when method call has 0 or >1 args in parentheses on continued line
|
bug parse-tree-processing
|
This would appear to be a regression, and/or not covered by tests.
```vb
Sub test _
() 'These are fine
Debug.Print Now _
() 'These are not
End Sub
```
Refer #2888
|
1.0
|
Resolver Error when method call has 0 or >1 args in parentheses on continued line - This would appear to be a regression, and/or not covered by tests.
```vb
Sub test _
() 'These are fine
Debug.Print Now _
() 'These are not
End Sub
```
Refer #2888
|
process
|
resolver error when method call has or args in parentheses on continued line this would appear to be a regression and or not covered by tests vb sub test these are fine debug print now these are not end sub refer
| 1
|
5,579
| 8,432,465,562
|
IssuesEvent
|
2018-10-17 02:12:41
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
opened
|
Kokoro skipping jobs inappropriately
|
priority: p1 testing type: process
|
E.g., for PR #6202, which scribbles all over `bigquery/`, the [Kokoro - BigQuery job](https://source.cloud.google.com/results/invocations/de8b3c7d-1952-4e13-acf2-d0491c739608/log) logs (line 95)
> bigquery was not modified, returning.
And returns successfully without running any tests.
|
1.0
|
Kokoro skipping jobs inappropriately - E.g., for PR #6202, which scribbles all over `bigquery/`, the [Kokoro - BigQuery job](https://source.cloud.google.com/results/invocations/de8b3c7d-1952-4e13-acf2-d0491c739608/log) logs (line 95)
> bigquery was not modified, returning.
And returns successfully without running any tests.
|
process
|
kokoro skipping jobs inappropriately e g for pr which scribbles all over bigquery the logs line bigquery was not modified returning and returns successfully without running any tests
| 1
|
102,935
| 11,310,267,803
|
IssuesEvent
|
2020-01-19 18:27:19
|
simensrostad/TTK4235
|
https://api.github.com/repos/simensrostad/TTK4235
|
opened
|
Construct and generate documentation
|
documentation
|
Make description and update header files. Use doxygen to generate documentation.
|
1.0
|
Construct and generate documentation - Make description and update header files. Use doxygen to generate documentation.
|
non_process
|
construct and generate documentation make description and update header files use doxygen to generate documentation
| 0
|
283,502
| 30,913,322,493
|
IssuesEvent
|
2023-08-05 01:39:50
|
hshivhare67/kernel_v4.19.72_CVE-2022-42896_new
|
https://api.github.com/repos/hshivhare67/kernel_v4.19.72_CVE-2022-42896_new
|
reopened
|
CVE-2021-28952 (High) detected in linuxlinux-4.19.279
|
Mend: dependency security vulnerability
|
## CVE-2021-28952 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/soc/qcom/sdm845.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/soc/qcom/sdm845.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel through 5.11.8. The sound/soc/qcom/sdm845.c soundwire device driver has a buffer overflow when an unexpected port ID number is encountered, aka CID-1c668e1c0a0f. (This has been fixed in 5.12-rc4.)
<p>Publish Date: 2021-03-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28952>CVE-2021-28952</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-28952">https://nvd.nist.gov/vuln/detail/CVE-2021-28952</a></p>
<p>Release Date: 2021-03-20</p>
<p>Fix Resolution: linux-libc-headers - 5.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-28952 (High) detected in linuxlinux-4.19.279 - ## CVE-2021-28952 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/soc/qcom/sdm845.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/sound/soc/qcom/sdm845.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel through 5.11.8. The sound/soc/qcom/sdm845.c soundwire device driver has a buffer overflow when an unexpected port ID number is encountered, aka CID-1c668e1c0a0f. (This has been fixed in 5.12-rc4.)
<p>Publish Date: 2021-03-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28952>CVE-2021-28952</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-28952">https://nvd.nist.gov/vuln/detail/CVE-2021-28952</a></p>
<p>Release Date: 2021-03-20</p>
<p>Fix Resolution: linux-libc-headers - 5.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files sound soc qcom c sound soc qcom c vulnerability details an issue was discovered in the linux kernel through the sound soc qcom c soundwire device driver has a buffer overflow when an unexpected port id number is encountered aka cid this has been fixed in publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers step up your open source security game with mend
| 0
|
20,508
| 27,167,377,885
|
IssuesEvent
|
2023-02-17 16:21:38
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Example in "Use a template parameter as part of a condition" is not clear
|
devops/prod doc-bug Pri1 devops-cicd-process/tech
|
The example in the *Use a template parameter as part of a condition* section is not clear. It uses two code snippets, both with default value `true` for the parameter, then clarifies that the expected result is `true` - the lack of differentiation here makes the user liable to miss the point.
<details>
<summary>code snippets in question</summary>
```
# parameters.yml
parameters:
- name: doThing
default: true # value passed to the condition
type: boolean
jobs:
- job: B
steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', 'true'))
```
```
# azure-pipeline.yml
parameters:
- name: doThing
default: true
type: boolean
trigger:
- none
extends:
template: parameters.yml
```
</details>
I recommend changing the default value in `azure-pipelines.yml` snippet to `false` to demonstrate the difference.
Furthermore, I would also add a new example clarifying that this does not affect *passing* parameters down from azure-pipelines.yml, rather only extending. The following code *does* work as expected, and would cause the job in the child template to be skipped:
<details>
<summary>recommended additional code snippet</summary>
```
# job-with-parameters.yml
parameters:
- name: doThing
default: true # defaults to true but value from calling pipeline is used
type: boolean
jobs:
- job: B
steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', 'true'))
```
```
# azure-pipeline.yml
parameters:
- name: doThing
default: false
type: boolean
trigger:
- none
jobs:
- template: 'job-with-parameters.yml'
parameters:
doThing: ${{ parameters.doThing }}
```
</details>
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 21e5cee4-eaae-3a96-db91-540ac759e83a
* Version Independent ID: 9bdc837c-ffe0-d999-f922-f3a5debc7f92
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Example in "Use a template parameter as part of a condition" is not clear - The example in the *Use a template parameter as part of a condition* section is not clear. It uses two code snippets, both with default value `true` for the parameter, then clarifies that the expected result is `true` - the lack of differentiation here makes the user liable to miss the point.
<details>
<summary>code snippets in question</summary>
```
# parameters.yml
parameters:
- name: doThing
default: true # value passed to the condition
type: boolean
jobs:
- job: B
steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', 'true'))
```
```
# azure-pipeline.yml
parameters:
- name: doThing
default: true
type: boolean
trigger:
- none
extends:
template: parameters.yml
```
</details>
I recommend changing the default value in `azure-pipelines.yml` snippet to `false` to demonstrate the difference.
Furthermore, I would also add a new example clarifying that this does not affect *passing* parameters down from azure-pipelines.yml, rather only extending. The following code *does* work as expected, and would cause the job in the child template to be skipped:
<details>
<summary>recommended additional code snippet</summary>
```
# job-with-parameters.yml
parameters:
- name: doThing
default: true # defaults to true but value from calling pipeline is used
type: boolean
jobs:
- job: B
steps:
- script: echo I did a thing
condition: and(succeeded(), eq('${{ parameters.doThing }}', 'true'))
```
```
# azure-pipeline.yml
parameters:
- name: doThing
default: false
type: boolean
trigger:
- none
jobs:
- template: 'job-with-parameters.yml'
parameters:
doThing: ${{ parameters.doThing }}
```
</details>
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 21e5cee4-eaae-3a96-db91-540ac759e83a
* Version Independent ID: 9bdc837c-ffe0-d999-f922-f3a5debc7f92
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/main/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
example in use a template parameter as part of a condition is not clear the example in the use a template parameter as part of a condition section is not clear it uses two code snippets both with default value true for the parameter then clarifies that the expected result is true the lack of differentiation here makes the user liable to miss the point code snippets in question parameters yml parameters name dothing default true value passed to the condition type boolean jobs job b steps script echo i did a thing condition and succeeded eq parameters dothing true azure pipeline yml parameters name dothing default true type boolean trigger none extends template parameters yml i recommend changing the default value in azure pipelines yml snippet to false to demonstrate the difference furthermore i would also add a new example clarifying that this does not affect passing parameters down from azure pipelines yml rather only extending the following code does work as expected and would cause the job in the child template to be skipped recommended additional code snippet job with parameters yml parameters name dothing default true defaults to true but value from calling pipeline is used type boolean jobs job b steps script echo i did a thing condition and succeeded eq parameters dothing true azure pipeline yml parameters name dothing default false type boolean trigger none jobs template job with parameters yml parameters dothing parameters dothing document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id eaae version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
161,693
| 25,384,202,834
|
IssuesEvent
|
2022-11-21 20:18:35
|
MozillaFoundation/Design
|
https://api.github.com/repos/MozillaFoundation/Design
|
opened
|
Design recap 2022
|
design DesignOps
|
Add to the existing deck [here](https://docs.google.com/presentation/d/1MGbETMOGXuUCVAufeafS99fcXmHwBWpTC-m7njOedNk/edit#slide=id.g139fb3a49ae_0_0).
### Goals & Audience
- Show the impact the design team has had
- Celebrate and reminisce about all the work we've done this year
- Learn from our previous work and use it to guide planning (what types of work do we do, what did we enjoy working on the most, where can we have the most impact)
- Share during the A&E team reflections meeting on **Tues Dec 6**
- Share with new contractors and new hires to onboard them to our work and impact
### Process
- Everyone add the projects they worked on
- Add goal of the project, highlights, and impact of the work
- Add relevant data (analytics, user testing numbers & quotes, staff quotes, press, etc)
- Link to deep dive retro decks or blog posts
### Check off when you're done
- [ ] Sabrina
- [ ] Rebecca
- [ ] Nancy
- [ ] Kristina
|
2.0
|
Design recap 2022 - Add to the existing deck [here](https://docs.google.com/presentation/d/1MGbETMOGXuUCVAufeafS99fcXmHwBWpTC-m7njOedNk/edit#slide=id.g139fb3a49ae_0_0).
### Goals & Audience
- Show the impact the design team has had
- Celebrate and reminisce about all the work we've done this year
- Learn from our previous work and use it to guide planning (what types of work do we do, what did we enjoy working on the most, where can we have the most impact)
- Share during the A&E team reflections meeting on **Tues Dec 6**
- Share with new contractors and new hires to onboard them to our work and impact
### Process
- Everyone add the projects they worked on
- Add goal of the project, highlights, and impact of the work
- Add relevant data (analytics, user testing numbers & quotes, staff quotes, press, etc)
- Link to deep dive retro decks or blog posts
### Check off when you're done
- [ ] Sabrina
- [ ] Rebecca
- [ ] Nancy
- [ ] Kristina
|
non_process
|
design recap add to the existing deck goals audience show the impact the design team has had celebrate and reminisce about all the work we ve done this year learn from our previous work and use it to guide planning what types of work do we do what did we enjoy working on the most where can we have the most impact share during the a e team reflections meeting on tues dec share with new contractors and new hires to onboard them to our work and impact process everyone add the projects they worked on add goal of the project highlights and impact of the work add relevant data analytics user testing numbers quotes staff quotes press etc link to deep dive retro decks or blog posts check off when you re done sabrina rebecca nancy kristina
| 0
|
414,067
| 12,098,351,384
|
IssuesEvent
|
2020-04-20 10:09:36
|
teamforus/forus
|
https://api.github.com/repos/teamforus/forus
|
closed
|
Provider is accepted two times for the same fund.
|
Difficulty: Medium Priority: Must have Scope: Small Topic: Backend bug
|
@GerbenBosschieter commented on [Thu Feb 20 2020](https://github.com/teamforus/development/issues/403)
# Description
We found a provider that is accepted for the same funds two times.
Couldn't find any reason why this happened.
The provider didn't accept the invitation and applied manually for the new fund.
## Task
Please research this bug. Can't reproduce this.
### Additional information
**Console**:
<img width="735" alt="Screen Shot 2020-02-20 at 09 39 54" src="https://user-images.githubusercontent.com/38419514/74916227-c52dd780-53c5-11ea-94b0-adddaac4c566.png">
**Database**:
<img width="822" alt="Screen Shot 2020-02-20 at 09 40 59" src="https://user-images.githubusercontent.com/38419514/74916250-cbbc4f00-53c5-11ea-8208-25fa840ae410.png">
---
@maxvisser commented on [Thu Feb 20 2020](https://github.com/teamforus/development/issues/403#issuecomment-588926588)
As the request is created within the same minute of the previous one, can it be plausible that he pressed a button twice, which resulted in a double fund_provider record? @dev-rminds
|
1.0
|
Provider is accepted two times for the same fund. - @GerbenBosschieter commented on [Thu Feb 20 2020](https://github.com/teamforus/development/issues/403)
# Description
We found a provider that is accepted for the same funds two times.
Couldn't find any reason why this happened.
The provider didn't accept the invitation and applied manually for the new fund.
## Task
Please research this bug. Can't reproduce this.
### Additional information
**Console**:
<img width="735" alt="Screen Shot 2020-02-20 at 09 39 54" src="https://user-images.githubusercontent.com/38419514/74916227-c52dd780-53c5-11ea-94b0-adddaac4c566.png">
**Database**:
<img width="822" alt="Screen Shot 2020-02-20 at 09 40 59" src="https://user-images.githubusercontent.com/38419514/74916250-cbbc4f00-53c5-11ea-8208-25fa840ae410.png">
---
@maxvisser commented on [Thu Feb 20 2020](https://github.com/teamforus/development/issues/403#issuecomment-588926588)
As the request is created within the same minute of the previous one, can it be plausible that he pressed a button twice, which resulted in a double fund_provider record? @dev-rminds
|
non_process
|
provider is accepted two times for the same fund gerbenbosschieter commented on description we found a provider that is accepted for the same funds two times couldn t find any reason why this happened the provider didn t accept the invitation and applied manually for the new fund task please research this bug can t reproduce this additional information console img width alt screen shot at src database img width alt screen shot at src maxvisser commented on as the request is created within the same minute of the previous one can it be plausible that he pressed a button twice which resulted in a double fund provider record dev rminds
| 0
|
18,579
| 24,562,623,672
|
IssuesEvent
|
2022-10-12 21:58:00
|
NEARWEEK/NEWS
|
https://api.github.com/repos/NEARWEEK/NEWS
|
closed
|
Get input, finalize, drive & evaluate OKRs for Q4
|
Process
|
## 🎉 Subtasks
- [x] Get input from all of team
- [x] Provide first version, get feedback
- [x] Finalize OKRs
- [x] Make sure all team members translate their OKRs into Github milestones & issues
- [x] Set up a monthly evaluation call to evaluate progress on OKRs and all things process
## 🤼♂️ Reviewer
@P3ter-NEARWEEK & rest of the team
## 🔗 Work doc(s) / inspirational links
|
1.0
|
Get input, finalize, drive & evaluate OKRs for Q4 - ## 🎉 Subtasks
- [x] Get input from all of team
- [x] Provide first version, get feedback
- [x] Finalize OKRs
- [x] Make sure all team members translate their OKRs into Github milestones & issues
- [x] Set up a monthly evaluation call to evaluate progress on OKRs and all things process
## 🤼♂️ Reviewer
@P3ter-NEARWEEK & rest of the team
## 🔗 Work doc(s) / inspirational links
|
process
|
get input finalize drive evaluate okrs for 🎉 subtasks get input from all of team provide first version get feedback finalize okrs make sure all team members translate their okrs into github milestones issues set up a monthly evaluation call to evaluate progress on okrs and all things process 🤼♂️ reviewer nearweek rest of the team 🔗 work doc s inspirational links
| 1
|
67,636
| 17,024,410,878
|
IssuesEvent
|
2021-07-03 07:06:40
|
apache/shardingsphere
|
https://api.github.com/repos/apache/shardingsphere
|
closed
|
Calcite always can not download when mvn install in windows env
|
status: volunteer wanted type: build
|
Calcite lib always can not download when mvn install in windows env, please investigate the reason.
error log:
```
Error: Failed to execute goal on project shardingsphere-infra-optimize: Could not resolve dependencies for project org.apache.shardingsphere:shardingsphere-infra-optimize:jar:5.0.0-RC1-SNAPSHOT: Failed to collect dependencies at org.apache.calcite:calcite-core:jar:1.26.0: Failed to read artifact descriptor for org.apache.calcite:calcite-core:jar:1.26.0: Could not transfer artifact org.apache.calcite:calcite-core:pom:1.26.0 from/to central (https://repo.maven.apache.org/maven2): Transfer failed for https://repo.maven.apache.org/maven2/org/apache/calcite/calcite-core/1.26.0/calcite-core-1.26.0.pom: Connection reset -> [Help 1]
Error:
Error: To see the full stack trace of the errors, re-run Maven with the -e switch.
Error: Re-run Maven using the -X switch to enable full debug logging.
Error:
Error: For more information about the errors and possible solutions, please read the following articles:
Error: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
Error:
Error: After correcting the problems, you can resume the build with the command
Error: mvn <args> -rf :shardingsphere-infra-optimize
Error: Process completed with exit code 1.
```
|
1.0
|
Calcite always can not download when mvn install in windows env - Calcite lib always can not download when mvn install in windows env, please investigate the reason.
error log:
```
Error: Failed to execute goal on project shardingsphere-infra-optimize: Could not resolve dependencies for project org.apache.shardingsphere:shardingsphere-infra-optimize:jar:5.0.0-RC1-SNAPSHOT: Failed to collect dependencies at org.apache.calcite:calcite-core:jar:1.26.0: Failed to read artifact descriptor for org.apache.calcite:calcite-core:jar:1.26.0: Could not transfer artifact org.apache.calcite:calcite-core:pom:1.26.0 from/to central (https://repo.maven.apache.org/maven2): Transfer failed for https://repo.maven.apache.org/maven2/org/apache/calcite/calcite-core/1.26.0/calcite-core-1.26.0.pom: Connection reset -> [Help 1]
Error:
Error: To see the full stack trace of the errors, re-run Maven with the -e switch.
Error: Re-run Maven using the -X switch to enable full debug logging.
Error:
Error: For more information about the errors and possible solutions, please read the following articles:
Error: [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
Error:
Error: After correcting the problems, you can resume the build with the command
Error: mvn <args> -rf :shardingsphere-infra-optimize
Error: Process completed with exit code 1.
```
|
non_process
|
calcite always can not download when mvn install in windows env calcite lib always can not download when mvn install in windows env please investigate the reason error log error failed to execute goal on project shardingsphere infra optimize could not resolve dependencies for project org apache shardingsphere shardingsphere infra optimize jar snapshot failed to collect dependencies at org apache calcite calcite core jar failed to read artifact descriptor for org apache calcite calcite core jar could not transfer artifact org apache calcite calcite core pom from to central transfer failed for connection reset error error to see the full stack trace of the errors re run maven with the e switch error re run maven using the x switch to enable full debug logging error error for more information about the errors and possible solutions please read the following articles error error error after correcting the problems you can resume the build with the command error mvn rf shardingsphere infra optimize error process completed with exit code
| 0
|
7,461
| 10,562,766,774
|
IssuesEvent
|
2019-10-04 19:10:07
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Datastore: 'test_empty_array_put' systest flakes with 503 Deadline Exceeded
|
api: datastore backend flaky testing type: process
|
See: https://source.cloud.google.com/results/invocations/d630b05c-174f-49aa-97be-630fa44bd814/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fdatastore/log
```python
________________ TestDatastoreTransaction.test_empty_array_put _________________
self = <tests.system.test_system.TestDatastoreTransaction testMethod=test_empty_array_put>
def test_empty_array_put(self):
local_client = clone_client(Config.CLIENT)
key = local_client.key("EmptyArray", 1234)
local_client = datastore.Client()
entity = datastore.Entity(key=key)
entity["children"] = []
> local_client.put(entity)
tests/system/test_system.py:551:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/datastore/client.py:421: in put
self.put_multi(entities=[entity])
google/cloud/datastore/client.py:448: in put_multi
current.commit()
google/cloud/datastore/batch.py:273: in commit
self._commit()
google/cloud/datastore/batch.py:249: in _commit
self.project, mode, self._mutations, transaction=self._id
google/cloud/datastore_v1/gapic/datastore_client.py:501: in commit
request, retry=retry, timeout=timeout, metadata=metadata
../api_core/google/api_core/gapic_v1/method.py:143: in __call__
return wrapped_func(*args, **kwargs)
../api_core/google/api_core/retry.py:270: in retry_wrapped_func
on_error=on_error,
../api_core/google/api_core/retry.py:179: in retry_target
return target()
../api_core/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = ServiceUnavailable(u'Deadline Exceeded',)
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
de...re/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":14}"
>
def raise_from(value, from_value):
> raise value
E ServiceUnavailable: 503 Deadline Exceeded
.nox/system-2-7/lib/python2.7/site-packages/six.py:737: ServiceUnavailable
```
|
1.0
|
Datastore: 'test_empty_array_put' systest flakes with 503 Deadline Exceeded - See: https://source.cloud.google.com/results/invocations/d630b05c-174f-49aa-97be-630fa44bd814/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fdatastore/log
```python
________________ TestDatastoreTransaction.test_empty_array_put _________________
self = <tests.system.test_system.TestDatastoreTransaction testMethod=test_empty_array_put>
def test_empty_array_put(self):
local_client = clone_client(Config.CLIENT)
key = local_client.key("EmptyArray", 1234)
local_client = datastore.Client()
entity = datastore.Entity(key=key)
entity["children"] = []
> local_client.put(entity)
tests/system/test_system.py:551:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/datastore/client.py:421: in put
self.put_multi(entities=[entity])
google/cloud/datastore/client.py:448: in put_multi
current.commit()
google/cloud/datastore/batch.py:273: in commit
self._commit()
google/cloud/datastore/batch.py:249: in _commit
self.project, mode, self._mutations, transaction=self._id
google/cloud/datastore_v1/gapic/datastore_client.py:501: in commit
request, retry=retry, timeout=timeout, metadata=metadata
../api_core/google/api_core/gapic_v1/method.py:143: in __call__
return wrapped_func(*args, **kwargs)
../api_core/google/api_core/retry.py:270: in retry_wrapped_func
on_error=on_error,
../api_core/google/api_core/retry.py:179: in retry_target
return target()
../api_core/google/api_core/timeout.py:214: in func_with_timeout
return func(*args, **kwargs)
../api_core/google/api_core/grpc_helpers.py:59: in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = ServiceUnavailable(u'Deadline Exceeded',)
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
de...re/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":14}"
>
def raise_from(value, from_value):
> raise value
E ServiceUnavailable: 503 Deadline Exceeded
.nox/system-2-7/lib/python2.7/site-packages/six.py:737: ServiceUnavailable
```
|
process
|
datastore test empty array put systest flakes with deadline exceeded see python testdatastoretransaction test empty array put self def test empty array put self local client clone client config client key local client key emptyarray local client datastore client entity datastore entity key key entity local client put entity tests system test system py google cloud datastore client py in put self put multi entities google cloud datastore client py in put multi current commit google cloud datastore batch py in commit self commit google cloud datastore batch py in commit self project mode self mutations transaction self id google cloud datastore gapic datastore client py in commit request retry retry timeout timeout metadata metadata api core google api core gapic method py in call return wrapped func args kwargs api core google api core retry py in retry wrapped func on error on error api core google api core retry py in retry target return target api core google api core timeout py in func with timeout return func args kwargs api core google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc value serviceunavailable u deadline exceeded from value rendezvous of rpc that terminated with status statuscode unavailable de re ext filters deadline deadline filter cc file line grpc status def raise from value from value raise value e serviceunavailable deadline exceeded nox system lib site packages six py serviceunavailable
| 1
|
139,261
| 12,840,303,167
|
IssuesEvent
|
2020-07-07 20:48:33
|
decentralized-identity/sidetree
|
https://api.github.com/repos/decentralized-identity/sidetree
|
closed
|
Add ability to fetch receipts for a DID in REST interface
|
documentation feature
|
Right now the REST interface will return the complete new DID doc when requesting the DID doc for a specific DID. This makes it hard to independently verify the correctness of the DID doc.
If we had the ability to also fetch the complete history of DID operations in the form of receipts then the end user could validate that the correct keys were used to sign all updates from the original DID document. The end user could also verify that the operations are included in past batches on the blockchain.
This receipt data could be included in the main API that fetches a DID doc or alternatively in a separate interface.
|
1.0
|
Add ability to fetch receipts for a DID in REST interface - Right now the REST interface will return the complete new DID doc when requesting the DID doc for a specific DID. This makes it hard to independently verify the correctness of the DID doc.
If we had the ability to also fetch the complete history of DID operations in the form of receipts then the end user could validate that the correct keys were used to sign all updates from the original DID document. The end user could also verify that the operations are included in past batches on the blockchain.
This receipt data could be included in the main API that fetches a DID doc or alternatively in a separate interface.
|
non_process
|
add ability to fetch receipts for a did in rest interface right now the rest interface will return the complete new did doc when requesting the did doc for a specific did this makes it hard to independently verify the correctness of the did doc if we had the ability to also fetch the complete history of did operations in the form of receipts then the end user could validate that the correct keys were used to sign all updates from the original did document the end user could also verify that the operations are included in past batches on the blockchain this receipt data could be included in the main api that fetches a did doc or alternatively in a separate interface
| 0
|
829,483
| 31,880,917,044
|
IssuesEvent
|
2023-09-16 11:13:39
|
uli/dragonbasic
|
https://api.github.com/repos/uli/dragonbasic
|
closed
|
Memory corruption occurs even when Strings allocate a maximum size of 64 words (256 bytes)
|
Bug Severity:High Priority:High
|
Bug found using Dragon Basic (commit ID: d2ce042366068083a5fe3089873a22221fffbc26) with memory corruption fix applied (commit ID: 901c0e5e88963df44aff28d3124ca842234dad3c)
After applying the original String memory corruption fix, I was still able to observe some edge cases where memory gets corrupted with strings. Because of the complexity of the project I am working on, I'm not sure if I'm able to exactly reproduce the same issue using a reduced test case, but I was still able to at least demonstrate some kind of memory issue with one. For example:
#title "TestCase"
#include <gba.dbc>
#Constant arraySize 100
#Constant initValue (-1)
Dim integer(arraySize)
Dim string$
Sub initializeArray
Local currentIndex
Log " "
For currentIndex = 0 To arraySize
integer[currentIndex] = initValue
Next
End Sub
Sub verifyArray
Local currentIndex
Log "Verifying array..." + Chr$(13) + Chr$(10)
For currentIndex = 0 To arraySize
If Not integer[currentIndex] = initValue
Log "Memory Corrupted: " + Str$(integer[currentIndex]) + " (Index: " + Str$(currentIndex) + ")" + Chr$(13) + Chr$(10)
End If
Next
Log "Verification complete!" + Chr$(13) + Chr$(10)
End Sub
Sub runTest
Log " " ;
initializeArray
verifyArray
string$ = "SOME STRING TO CORRUPT MEMORY"
verifyArray
End Sub
start:
runTest
while
loop
In this test we simply initialize an array of integers to an obvious number (-1) and then we check the array to make sure every element returns the same value. Then we initialize our string with some value and check the array again. While the first check passes just fine, the second check (after a string is set to a value) fails. The last index appears to have changed. The following is an example of the result in VBA:
Verifying array...
Verification complete!
Verifying array...
Memory Corrupted: 1297044253 (Index: 100)
Verification complete!
As you can see, something has corrupted our last index. In my personal project I am trying to develop, I noted that this corruption happens earlier in my array, but that could be due to it being so complex (across multiple DBC files, arrays, strings, etc - We're talking a whole DBC framework library + game on top of it).
Attached to this issue is the original test case files
[Testcase.zip](https://github.com/uli/dragonbasic/files/1670764/Testcase.zip)
EDIT: It should be noted that I added empty log statements to avoid `!currently_naked' failed.' errors. The fixes that try to address that bug causes other issues with arrays (specifically, comparing string arrays produce ' unknown word compare' errors), so I did not apply those fixes to my build of Dragon Basic.
|
1.0
|
Memory corruption occurs even when Strings allocate a maximum size of 64 words (256 bytes) - Bug found using Dragon Basic (commit ID: d2ce042366068083a5fe3089873a22221fffbc26) with memory corruption fix applied (commit ID: 901c0e5e88963df44aff28d3124ca842234dad3c)
After applying the original String memory corruption fix, I was still able to observe some edge cases where memory gets corrupted with strings. Because of the complexity of the project I am working on, I'm not sure if I'm able to exactly reproduce the same issue using a reduced test case, but I was still able to at least demonstrate some kind of memory issue with one. For example:
#title "TestCase"
#include <gba.dbc>
#Constant arraySize 100
#Constant initValue (-1)
Dim integer(arraySize)
Dim string$
Sub initializeArray
Local currentIndex
Log " "
For currentIndex = 0 To arraySize
integer[currentIndex] = initValue
Next
End Sub
Sub verifyArray
Local currentIndex
Log "Verifying array..." + Chr$(13) + Chr$(10)
For currentIndex = 0 To arraySize
If Not integer[currentIndex] = initValue
Log "Memory Corrupted: " + Str$(integer[currentIndex]) + " (Index: " + Str$(currentIndex) + ")" + Chr$(13) + Chr$(10)
End If
Next
Log "Verification complete!" + Chr$(13) + Chr$(10)
End Sub
Sub runTest
Log " " ;
initializeArray
verifyArray
string$ = "SOME STRING TO CORRUPT MEMORY"
verifyArray
End Sub
start:
runTest
while
loop
In this test we simply initialize an array of integers to an obvious number (-1) and then we check the array to make sure every element returns the same value. Then we initialize our string with some value and check the array again. While the first check passes just fine, the second check (after a string is set to a value) fails. The last index appears to have changed. The following is an example of the result in VBA:
Verifying array...
Verification complete!
Verifying array...
Memory Corrupted: 1297044253 (Index: 100)
Verification complete!
As you can see, something has corrupted our last index. In my personal project I am trying to develop, I noted that this corruption happens earlier in my array, but that could be due to it being so complex (across multiple DBC files, arrays, strings, etc - We're talking a whole DBC framework library + game on top of it).
Attached to this issue is the original test case files
[Testcase.zip](https://github.com/uli/dragonbasic/files/1670764/Testcase.zip)
EDIT: It should be noted that I added empty log statements to avoid `!currently_naked' failed.' errors. The fixes that try to address that bug causes other issues with arrays (specifically, comparing string arrays produce ' unknown word compare' errors), so I did not apply those fixes to my build of Dragon Basic.
|
non_process
|
memory corruption occurs even when strings allocate a maximum size of words bytes bug found using dragon basic commit id with memory corruption fix applied commit id after applying the original string memory corruption fix i was still able to observe some edge cases where memory gets corrupted with strings because of the complexity of the project i am working on i m not sure if i m able to exactly reproduce the same issue using a reduced test case but i was able to still at least demonstrate some kind of memory issue with one for example title testcase include constant arraysize constant initvalue dim integer arraysize dim string sub initializearray local currentindex log for currentindex to arraysize integer initvalue next end sub sub verifyarray local currentindex log verifying array chr chr for currentindex to arraysize if not integer initvalue log memory corrupted str integer index str currentindex chr chr end if next log verification complete chr chr end sub sub runtest log initializearray verifyarray string some string to corrupt memory verifyarray end sub start runtest while loop in this test we simply initialize an array of integers to an obvious number and then we check the array to make sure every element returns the same value then we initialize our string with some value and check the array again while the first check passes just fine the second check after a string is set to a value fails the last index appears to have changed the following is an example of the result in vba verifying array verification complete verifying array memory corrupted index verification complete as you can see something has corrupted our last index in my personal project i am trying to develop i noted that this corruption happens earlier in my array but that could be due to it being so complex across multiple dbc files arrays strings etc we re talking a whole dbc framework library game on top of it attached to this issue is the original test case files edit it should be noted that i added empty log statements to avoid currently naked failed errors the fixes that try to address that bug causes other issues with arrays specifically comparing string arrays produce unknown word compare errors so i did not apply those fixes to my build of dragon basic
| 0
|
4,834
| 7,726,286,857
|
IssuesEvent
|
2018-05-24 20:42:14
|
kaching-hq/Privacy-and-Security
|
https://api.github.com/repos/kaching-hq/Privacy-and-Security
|
opened
|
Logging processing operations
|
Processes Processing Control STORY
|
- [ ] Investigate if we log processing operations
- [ ] Which operations do we not log properly?
- [ ] Do we log manual operations properly?
- [ ] How can we log manual operations securely and efficiently?
- [ ] Can we extract logs in a human-readable form easily?
|
2.0
|
Logging processing operations - - [ ] Investigate if we log processing operations
- [ ] Which operations do we not log properly?
- [ ] Do we log manual operations properly?
- [ ] How can we log manual operations securely and efficiently?
- [ ] Can we extract logs in a human-readable form easily?
|
process
|
logging processing operations investigate if we log processing operations which operations do we not log properly do we log manual operations properly how can we log manual operations securely and efficiently can we extract logs in a human readable form easily
| 1
|
20,834
| 27,601,563,022
|
IssuesEvent
|
2023-03-09 10:17:38
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
[Mirror] protobuf v22.1
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
https://github.com/protocolbuffers/protobuf/archive/v22.1.tar.gz
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-aarch_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-ppcle_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-s390_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-x86_32.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-x86_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-osx-aarch_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-osx-x86_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-win32.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-win64.zip
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-java/3.22.1/protobuf-java-3.22.1.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-java/3.22.1/protobuf-java-3.22.1-sources.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-java-util/3.22.1/protobuf-java-util-3.22.1.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-java-util/3.22.1/protobuf-java-util-3.22.1-sources.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-javalite/3.22.1/protobuf-javalite-3.22.1.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-javalite/3.22.1/protobuf-javalite-3.22.1-sources.jar
|
1.0
|
[Mirror] protobuf v22.1 - ### Please list the URLs of the archives you'd like to mirror:
https://github.com/protocolbuffers/protobuf/archive/v22.1.tar.gz
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-aarch_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-ppcle_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-s390_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-x86_32.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-linux-x86_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-osx-aarch_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-osx-x86_64.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-win32.zip
https://github.com/protocolbuffers/protobuf/releases/download/v22.1/protoc-22.1-win64.zip
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-java/3.22.1/protobuf-java-3.22.1.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-java/3.22.1/protobuf-java-3.22.1-sources.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-java-util/3.22.1/protobuf-java-util-3.22.1.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-java-util/3.22.1/protobuf-java-util-3.22.1-sources.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-javalite/3.22.1/protobuf-javalite-3.22.1.jar
https://repo1.maven.org/maven2/com/google/protobuf/protobuf-javalite/3.22.1/protobuf-javalite-3.22.1-sources.jar
|
process
|
protobuf please list the urls of the archives you d like to mirror
| 1
|
1,159
| 3,641,264,183
|
IssuesEvent
|
2016-02-13 14:07:45
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Stack overflow in KeydefReader
|
bug P1 preprocess/keyref
|
If my DITA Map contains a construct in which the keyref refers to the same key defined on the topicref like:
```xml
<topicref href="images/Chrysanthemums.jpg" format="jpg" navtitle="Chrysanthemum"
keys="chrysanthemums"
keyref="chrysanthemums"/>
```
the publishing process will overflow the stack with infinite recursion:
```
D:\projects\eXml\frameworks\dita\DITA-OT2.x\build.xml:41: The following error occurred while executing this line:
D:\projects\eXml\frameworks\dita\DITA-OT2.x\plugins\org.dita.base\build_preprocess.xml:270: java.lang.StackOverflowError
at org.apache.xerces.dom.NamedNodeMapImpl.getNamedItem(Unknown Source)
at org.apache.xerces.dom.ElementImpl.getAttribute(Unknown Source)
at org.dita.dost.reader.KeyrefReader.resolveIntermediate(KeyrefReader.java:255)
at org.dita.dost.reader.KeyrefReader.resolveIntermediate(KeyrefReader.java:261)
at org.dita.dost.reader.KeyrefReader.resolveIntermediate(KeyrefReader.java:261)
at org.dita.dost.reader.KeyrefReader.resolveIntermediate(KeyrefReader.java:261)
```
|
1.0
|
Stack overflow in KeydefReader - If my DITA Map contains a construct in which the keyref refers to the same key defined on the topicref like:
```xml
<topicref href="images/Chrysanthemums.jpg" format="jpg" navtitle="Chrysanthemum"
keys="chrysanthemums"
keyref="chrysanthemums"/>
```
the publishing process will overflow the stack with infinite recursion:
```
D:\projects\eXml\frameworks\dita\DITA-OT2.x\build.xml:41: The following error occurred while executing this line:
D:\projects\eXml\frameworks\dita\DITA-OT2.x\plugins\org.dita.base\build_preprocess.xml:270: java.lang.StackOverflowError
at org.apache.xerces.dom.NamedNodeMapImpl.getNamedItem(Unknown Source)
at org.apache.xerces.dom.ElementImpl.getAttribute(Unknown Source)
at org.dita.dost.reader.KeyrefReader.resolveIntermediate(KeyrefReader.java:255)
at org.dita.dost.reader.KeyrefReader.resolveIntermediate(KeyrefReader.java:261)
at org.dita.dost.reader.KeyrefReader.resolveIntermediate(KeyrefReader.java:261)
at org.dita.dost.reader.KeyrefReader.resolveIntermediate(KeyrefReader.java:261)
```
|
process
|
stack overflow in keydefreader if my dita map contains a construct in which the keyref refers to the same key defined on the topicref like xml topicref href images chrysanthemums jpg format jpg navtitle chrysanthemum keys chrysanthemums keyref chrysanthemums the publishing process will stack overflow with an infinite recursivity d projects exml frameworks dita dita x build xml the following error occurred while executing this line d projects exml frameworks dita dita x plugins org dita base build preprocess xml java lang stackoverflowerror at org apache xerces dom namednodemapimpl getnameditem unknown source at org apache xerces dom elementimpl getattribute unknown source at org dita dost reader keyrefreader resolveintermediate keyrefreader java at org dita dost reader keyrefreader resolveintermediate keyrefreader java at org dita dost reader keyrefreader resolveintermediate keyrefreader java at org dita dost reader keyrefreader resolveintermediate keyrefreader java
| 1
|
17,186
| 22,768,704,925
|
IssuesEvent
|
2022-07-08 07:55:38
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
Obsoletion notice: GO:0039612 modulation by virus of host protein phosphorylation
|
obsoletion multi-species process
|
Dear all,
The proposal has been made to obsolete GO:0039612 modulation by virus of host protein phosphorylation & children
GO:0039612 modulation by virus of host protein phosphorylation
* GO:0039614 induction by virus of host protein phosphorylation
* GO:0039613 suppression by virus of host protein phosphorylation
* GO:0039584 suppression by virus of host protein kinase activity
GO:0039512 suppression by virus of host protein tyrosine kinase activity
There are 3 EXP annotations to these terms, see https://github.com/geneontology/go-annotation/issues/4233
There are no mappings to these terms, these terms are not present in any subsets.
You can comment on the ticket: https://github.com/geneontology/go-ontology/issues/23640
Thanks, Pascale
|
1.0
|
Obsoletion notice: GO:0039612 modulation by virus of host protein phosphorylation - Dear all,
The proposal has been made to obsolete GO:0039612 modulation by virus of host protein phosphorylation & children
GO:0039612 modulation by virus of host protein phosphorylation
* GO:0039614 induction by virus of host protein phosphorylation
* GO:0039613 suppression by virus of host protein phosphorylation
* GO:0039584 suppression by virus of host protein kinase activity
GO:0039512 suppression by virus of host protein tyrosine kinase activity
There are 3 EXP annotations to these terms, see https://github.com/geneontology/go-annotation/issues/4233
There are no mappings to these terms, these terms are not present in any subsets.
You can comment on the ticket: https://github.com/geneontology/go-ontology/issues/23640
Thanks, Pascale
|
process
|
obsoletion notice go modulation by virus of host protein phosphorylation dear all the proposal has been made to obsolete go modulation by virus of host protein phosphorylation children go modulation by virus of host protein phosphorylation go induction by virus of host protein phosphorylation go suppression by virus of host protein phosphorylation go suppression by virus of host protein kinase activity go suppression by virus of host protein tyrosine kinase activity there are exp annotations to these terms see there are no mappings to these terms these terms are not present in any subsets you can comment on the ticket thanks pascale
| 1
|
14,555
| 17,672,983,019
|
IssuesEvent
|
2021-08-23 08:48:35
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Our GitHub Actions should always include a "failure" mechanism with an alert system
|
process/candidate topic: internal topic: ci/cd kind/tech team/migrations
|
In some but not all our GitHub Actions workflows we have a failure mechanism which posts to Slack.
We should list where it's not yet implemented and implement where it makes sense.
Why? So we don't miss a failure that blocks a workflow for many days 😅
|
1.0
|
Our GitHub Actions should always include a "failure" mechanism with an alert system - In some but not all our GitHub Actions workflows we have a failure mechanism which posts to Slack.
We should list where it's not yet implemented and implement where it makes sense.
Why? So we don't miss a failure that blocks a workflow for many days 😅
|
process
|
our github actions should always include a failure mechanism with an alert system in some but not all our github actions workflows we have a failure mechanism which posts to slack we should list where it s not yet implemented and implement where it makes sense why so we don t miss a failure that blocks a workflow for many days 😅
| 1
|
5,895
| 8,710,284,336
|
IssuesEvent
|
2018-12-06 16:03:28
|
Open-EO/openeo-api
|
https://api.github.com/repos/Open-EO/openeo-api
|
closed
|
What if a process is not available? (Core Profiles)
|
processes vote
|
In the proof of concept API calls, there's always a check whether the needed processes are available in the endpoint. This is entirely similar to how OGC standards do things, but in practice, this is hardly ever used. Mainly because:
- in a client script, the developer does not 'declare' what functions he will be using later on
- The endpoint will return an error anyway when an unsupported function is used, so better to do error handling there?
Another way to allow optional implementations in backends, is to group functionality into profiles. For instance, a 'core' profile that is obligatory, and then optional stuff for specific use cases?
|
1.0
|
What if a process is not available? (Core Profiles) - In the proof of concept API calls, there's always a check whether the needed processes are available in the endpoint. This is entirely similar to how OGC standards do things, but in practice, this is hardly ever used. Mainly because:
- in a client script, the developer does not 'declare' what functions he will be using later on
- The endpoint will return an error anyway when an unsupported function is used, so better to do error handling there?
Another way to allow optional implementations in backends, is to group functionality into profiles. For instance, a 'core' profile that is obligatory, and then optional stuff for specific use cases?
|
process
|
what if a process is not available core profiles in the proof of concept api calls there s always a check if the needed processes are available in the endpoint this is entirely similar how ogc standards do things but in practice this is hardly ever used mainly because in a client script the developer does not declare what functions he will be using later on the endpoint will return an error anyway when an unsupported function is used so better to do error handling there another way to allow optional implementations in backends is to group functionality into profiles for instance a core profile that is obligatory and then optional stuff for specific use cases
| 1
|
7,573
| 10,685,477,662
|
IssuesEvent
|
2019-10-22 12:46:17
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Bug report] Setting a value in data via this.xxx = 'xxxx' is not reflected in the template
|
bug processing
|
**Problem description**
When building a Baidu mini program from mpx single-file components,
setting a value in data via this.xxx = 'xxxx' is not reflected in the template.
The official docs say this should work, but my tests did not confirm it.
**Code demo**
```html
<template>
<view>{{ message }}</view>
<!-- This always shows "my test message." -->
</template>
```
```javascript
import { createPage } from '@mpxjs/core'
createPage({
data: {
message: 'my test message.'
},
onLoad () {
console.log(this.message) // output => undefined
console.log(this.data.message) // output => 'my test message.'
this.message = 'you test message.'
console.log(this.message) // output => 'you test message.'
console.log(this.data.message) // output => 'my test message.'
this.data.message = 'hhh message.'
console.log(this.data.message) // output => 'hhh message.'
// This also fails to change the value of message in the template
this.setData({
message: 'xxxxxx'
})
console.log(this.data.message) // output => 'xxxxxx'
console.log(this.message) // output => 'you test message.'
}
})
```
|
1.0
|
[Bug report] Setting a value in data via this.xxx = 'xxxx' is not reflected in the template - **Problem description**
When building a Baidu mini program from mpx single-file components,
setting a value in data via this.xxx = 'xxxx' is not reflected in the template.
The official docs say this should work, but my tests did not confirm it.
**Code demo**
```html
<template>
<view>{{ message }}</view>
<!-- This always shows "my test message." -->
</template>
```
```javascript
import { createPage } from '@mpxjs/core'
createPage({
data: {
message: 'my test message.'
},
onLoad () {
console.log(this.message) // output => undefined
console.log(this.data.message) // output => 'my test message.'
this.message = 'you test message.'
console.log(this.message) // output => 'you test message.'
console.log(this.data.message) // output => 'my test message.'
this.data.message = 'hhh message.'
console.log(this.data.message) // output => 'hhh message.'
// This also fails to change the value of message in the template
this.setData({
message: 'xxxxxx'
})
console.log(this.data.message) // output => 'xxxxxx'
console.log(this.message) // output => 'you test message.'
}
})
```
|
process
|
通过this xxx xxxx 设置data里边的值,没法响应到模板上面 问题描述 通过mpx单文件构建为百度小程序的时候。 通过this xxx xxxx 设置data里边的值,没法响应到模板上面。 官方文档上介绍说这样是可以的,但我没测试通过 代码demo html message javascript import createpage from mpxjs core createpage data message my test message onload console log this message output undefined console log this data message output my test message this message you test message console log this message output you test message console log this data message output my test message this data message hhh message console log this data message output hhh message 这样写也不能改变模板里边message的值 this setdata message xxxxxx console log this data message output xxxxxx console log this message output you test message
| 1
|
273,523
| 20,796,005,567
|
IssuesEvent
|
2022-03-17 09:22:31
|
devonfw/devon4ts
|
https://api.github.com/repos/devonfw/devon4ts
|
opened
|
Angular and NestJS libraries docs should be located in angular and nest sections
|
documentation enhancement
|
Although the library documents `guide-angular-libraries.asciidoc` and `guide-nest-libraries.asciidoc` are a very good idea, I suggest that, since they are not **generic to TypeScript**, we move them to their respective section.
I will include a PR with my suggestion.
|
1.0
|
Angular and NestJS libraries docs should be located in angular and nest sections - Although the library documents `guide-angular-libraries.asciidoc` and `guide-nest-libraries.asciidoc` are a very good idea, I suggest that, since they are not **generic to TypeScript**, we move them to their respective section.
I will include a PR with my suggestion.
|
non_process
|
angular and nestjs libraries docs should be located in angular and nest sections although the library documents guide angular libraries asciidoc and guide nest libraries asciidoc are a very good idea i suggest that since they are not generic to typescript we move them to their respective section i will include a pr with my suggestion
| 0
|
48,901
| 20,359,056,959
|
IssuesEvent
|
2022-02-20 12:11:04
|
PreMiD/Presences
|
https://api.github.com/repos/PreMiD/Presences
|
opened
|
Masterclass | masterclass.com
|
Service Request
|
### Website name
Masterclass
### Website URL
https://www.masterclass.com/
### Website logo
https://www.logo-designer.co/wp-content/uploads/2020/10/2020-masterclass-new-logo-design-identity-by-gretel-3.png
### Prerequisites
- [X] It is a paid service
- [ ] It displays NSFW content
- [ ] It is region restricted
### Description
Masterclass is a paid service that has 100s of professionals doing online videos teaching you how to do specific things (e.g Gordon Ramsay teaches cooking, Samuel L Jackson teaches acting, etc).
|
1.0
|
Masterclass | masterclass.com - ### Website name
Masterclass
### Website URL
https://www.masterclass.com/
### Website logo
https://www.logo-designer.co/wp-content/uploads/2020/10/2020-masterclass-new-logo-design-identity-by-gretel-3.png
### Prerequisites
- [X] It is a paid service
- [ ] It displays NSFW content
- [ ] It is region restricted
### Description
Masterclass is a paid service that has 100s of professionals doing online videos teaching you how to do specific things (e.g Gordon Ramsay teaches cooking, Samuel L Jackson teaches acting, etc).
|
non_process
|
masterclass masterclass com website name masterclass website url website logo prerequisites it is a paid service it displays nsfw content it is region restricted description masterclass is a paid service that has of professionals doing online videos teaching you how to do specific things e g gordon ramsay teaches cooking samuel l jackson teaches acting etc
| 0
|
48,036
| 25,318,171,171
|
IssuesEvent
|
2022-11-18 00:00:21
|
OctopusDeploy/Issues
|
https://api.github.com/repos/OctopusDeploy/Issues
|
closed
|
Switching Between Project Steps Calls ActionTemplate/Search Unnecessarily
|
kind/bug feature/performance team/deploy-fnm
|
# Prerequisites
- [x] I have verified the problem exists in the latest version
- [x] I have searched [open](https://github.com/OctopusDeploy/Issues/issues) and [closed](https://github.com/OctopusDeploy/Issues/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aclosed) issues to make sure it isn't already reported
- [x] I have written a descriptive issue title
- [x] I have linked the original source of this report
- [x] I have tagged the issue appropriately (area/*, kind/bug, tag/regression?)
# The bug
On the new project process screen, each time I switch between steps I am seeing calls to the ActionTemplate/Search endpoint (/api/[Space-ID]/actiontemplates/search).

In and of itself, that isn't a big deal. However, the responder (source/Octopus.Server/Web/Api/Actions/ActionTemplatesSearchResponder.cs) for that endpoint is doing the following:
- Pulls back all the built-in steps as the ActionTemplate class (source/Octopus.Core/Model/Projects/ActionTemplate.cs)
- Pulls back all the community steps as the CommunityActionTemplate class (source/Octopus.Core/Model/Projects/CommunityActionTemplate.cs)
- Calculates which community template has an update
- Returns a subset of all the data returned (source/Octopus.Core/Resources/ActionTemplateSearchResource.cs)
A lot of extra data is being returned from the database (step properties, packages, scripts, etc.) which is then discarded. For users with any sort of latency between the server and their database, the API will take a while to return data, which, in turn, causes the step screen to take a long time to render.
## Affected versions
**Octopus Server:** 2019.8.x and newer
## Workarounds
Work on eliminating latency between the database and the Octopus Server. Though in some cases that is not possible.
## Links
Reported from customer: https://secure.helpscout.net/conversation/1071297953/56665/
> Update: the above issue can be viewed on ZenDesk via this link: https://octopus.zendesk.com/agent/tickets/4686
|
True
|
Switching Between Project Steps Calls ActionTemplate/Search Unnecessarily - # Prerequisites
- [x] I have verified the problem exists in the latest version
- [x] I have searched [open](https://github.com/OctopusDeploy/Issues/issues) and [closed](https://github.com/OctopusDeploy/Issues/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aclosed) issues to make sure it isn't already reported
- [x] I have written a descriptive issue title
- [x] I have linked the original source of this report
- [x] I have tagged the issue appropriately (area/*, kind/bug, tag/regression?)
# The bug
On the new project process screen, each time I switch between steps I am seeing calls to the ActionTemplate/Search endpoint (/api/[Space-ID]/actiontemplates/search).

In and of itself, that isn't a big deal. However, the responder (source/Octopus.Server/Web/Api/Actions/ActionTemplatesSearchResponder.cs) for that endpoint is doing the following:
- Pulls back all the built-in steps as the ActionTemplate class (source/Octopus.Core/Model/Projects/ActionTemplate.cs)
- Pulls back all the community steps as the CommunityActionTemplate class (source/Octopus.Core/Model/Projects/CommunityActionTemplate.cs)
- Calculates which community template has an update
- Returns a subset of all the data returned (source/Octopus.Core/Resources/ActionTemplateSearchResource.cs)
A lot of extra data is being returned from the database (step properties, packages, scripts, etc.) which is then discarded. For users with any sort of latency between the server and their database, the API will take a while to return data, which, in turn, causes the step screen to take a long time to render.
## Affected versions
**Octopus Server:** 2019.8.x and newer
## Workarounds
Work on eliminating latency between the database and the Octopus Server. Though in some cases that is not possible.
## Links
Reported from customer: https://secure.helpscout.net/conversation/1071297953/56665/
> Update: the above issue can be viewed on ZenDesk via this link: https://octopus.zendesk.com/agent/tickets/4686
|
non_process
|
switching between project steps calls actiontemplate search unnecessarily prerequisites i have verified the problem exists in the latest version i have searched and issues to make sure it isn t already reported i have written a descriptive issue title i have linked the original source of this report i have tagged the issue appropriately area kind bug tag regression the bug on the new project process screen each time i switch between steps i am seeing calls to the actiontemplate search endpoint api actiontemplates search in an of itself that isn t a big deal however the responder source octopus server web api actions actiontemplatessearchresponder cs for that endpoint is doing the following pulls back all the built in steps as the actiontemplate class source octopus core model projects actiontemplate cs pulls back all the community steps as the communityactiontemplate class source octopus core model projects communityactiontemplate cs calculates which community template has an update returns a subset of all the data returned source octopus core resources actiontemplatesearchresource cs a lot of extra data is being returned from the database step properties packages scripts etc which is then disgarded for users with any sort of latency between the server and their database the api will take a while return data which in turn causes the step screen to take a long time to render affected versions octopus server x and newer workarounds work on eliminating latency between the database and the octopus server though in some cases that is not possible links reported from customer update the above issue can be viewed on zendesk via this link
| 0
|
107,181
| 23,363,680,087
|
IssuesEvent
|
2022-08-10 13:46:43
|
gitpod-io/gitpod
|
https://api.github.com/repos/gitpod-io/gitpod
|
closed
|
Epic: Integrate SSH Gateway into VS Code Desktop (SSH key upload & remove Local Companion)
|
team: IDE editor: code (desktop) type: epic component: ssh gateway aspect: connections
|
| Context | Integrate SSH Gateway into VS Code Desktop to bring Gitpod users a more stable and reliable desktop experience . |
|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Owner | Product - @loujaybee / Tech - @akosyakov |
| Value | More stable connections, no longer a reliance on local companion process or generated keys |
| Acceptance Criteria | - Current SSH method is removed from VS Code and replaced with SSH Gateway </br> - No impact on users on self-hosted who require the current SSH implementation |
| Growth Area | Activation & Retention / Expansion |
| Persona(s) | N/A |
| Hypothesis | Implementing the new SSH method will lead to a more reliable SSH approach in VS Code. |
| Measurement | N/A |
| In scope | N/A |
| Out of scope | - Any workspace management from VS Code (e.g. creating / updating workspaces) |
| Complexities | |
| Latest Update | 04.05.2022 - Descoped API's (moved to issue: https://github.com/gitpod-io/gitpod/issues/9757) and reduced to SSH Gateway within scope. Work has now started. |
**Related issues:**
* https://github.com/gitpod-io/gitpod/issues/5889
* https://github.com/gitpod-io/gitpod/issues/6127
* https://github.com/gitpod-io/gitpod/issues/6615
* https://github.com/gitpod-io/gitpod/issues/9619
|
1.0
|
Epic: Integrate SSH Gateway into VS Code Desktop (SSH key upload & remove Local Companion) - | Context | Integrate SSH Gateway into VS Code Desktop to bring Gitpod users a more stable and reliable desktop experience . |
|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Owner | Product - @loujaybee / Tech - @akosyakov |
| Value | More stable connections, no longer a reliance on local companion process or generated keys |
| Acceptance Criteria | - Current SSH method is removed from VS Code and replaced with SSH Gateway </br> - No impact on users on self-hosted who require the current SSH implementation |
| Growth Area | Activation & Retention / Expansion |
| Persona(s) | N/A |
| Hypothesis | Implementing the new SSH method will lead to a more reliable SSH approach in VS Code. |
| Measurement | N/A |
| In scope | N/A |
| Out of scope | - Any workspace management from VS Code (e.g. creating / updating workspaces) |
| Complexities | |
| Latest Update | 04.05.2022 - Descoped API's (moved to issue: https://github.com/gitpod-io/gitpod/issues/9757) and reduced to SSH Gateway within scope. Work has now started. |
**Related issues:**
* https://github.com/gitpod-io/gitpod/issues/5889
* https://github.com/gitpod-io/gitpod/issues/6127
* https://github.com/gitpod-io/gitpod/issues/6615
* https://github.com/gitpod-io/gitpod/issues/9619
|
non_process
|
epic integrate ssh gateway into vs code desktop ssh key upload remove local companion context integrate ssh gateway into vs code desktop to bring gitpod users a more stable and reliable desktop experience owner product loujaybee tech akosyakov value more stable connections no longer a reliance on local companion process or generated keys acceptance criteria current ssh method is removed from vs code and replaced with ssh gateway no impact on users on self hosted who require the current ssh implementation growth area activation retention expansion persona s n a hypothesis implementing the new ssh method will lead to a more reliable ssh approach in vs code measurement n a in scope n a out of scope any workspace management from vs code e g creating updating workspaces complexities latest update descoped api s moved to issue and reduced to ssh gateway within scope work has now started related issues
| 0
|
274,784
| 23,867,382,824
|
IssuesEvent
|
2022-09-07 12:15:28
|
Uuvana-Studios/longvinter-windows-client
|
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
|
opened
|
Cannot pickup containers in house/outside of house
|
Bug Not Tested
|
**Describe the bug**:
When I clear out my containers (yellow and grey ones) I cannot pick up the bag that shows, because the containers do not disappear and become ready for pickup. I make sure the container, my backpack, and my hands are fully empty, and when right-clicking, the container shows the bag inside of it, but it cannot be picked up. I've even tried other ways to pick it up that I saw in other answers online, like holding left Shift/left Alt while left-clicking and pressing the spacebar, and nothing is working.
**Expected behavior**:
Would really love for this to be fixed so I can further customize my house after upgrading it. I was lucky enough to have saved a space so I can go upstairs, but others don't seem to be as lucky. So all in all, I would like to pick up and replace containers as many times as I can for organization.
**Screenshots**:
The first picture is of the yellow container showing the top of the bag in the middle showing (it is hard to see but I promise it is there).
Second photo is of the grey container with the bottom of the bag showing.


**Desktop (please complete the following information):**:
- OS: Windows 10 Version 21H2 for x64-based Systems
- Game Version 1.0.8b
- Steam Version v020 or whatever is the most updated version that is currently out
**Additional context**:
I also saw a post about this same issue #1149 from August 6th and wanted to make another report as it has been a month since that post has been made.
|
1.0
|
Cannot pickup containers in house/outside of house - **Describe the bug**:
When I clear out my containers (yellow and grey ones) I cannot pick up the bag that shows, because the containers do not disappear and become ready for pickup. I make sure the container, my backpack, and my hands are fully empty, and when right-clicking, the container shows the bag inside of it, but it cannot be picked up. I've even tried other ways to pick it up that I saw in other answers online, like holding left Shift/left Alt while left-clicking and pressing the spacebar, and nothing is working.
**Expected behavior**:
Would really love for this to be fixed so I can further customize my house after upgrading it. I was lucky enough to have saved a space so I can go upstairs, but others don't seem to be as lucky. So all in all, I would like to pick up and replace containers as many times as I can for organization.
**Screenshots**:
The first picture is of the yellow container showing the top of the bag in the middle showing (it is hard to see but I promise it is there).
Second photo is of the grey container with the bottom of the bag showing.


**Desktop (please complete the following information):**:
- OS: Windows 10 Version 21H2 for x64-based Systems
- Game Version 1.0.8b
- Steam Version v020 or whatever is the most updated version that is currently out
**Additional context**:
I also saw a post about this same issue #1149 from August 6th and wanted to make another report as it has been a month since that post has been made.
|
non_process
|
cannot pickup containers in house outside of house describe the bug when i clear out my containers yellow and grey ones i cannot pick up the bag that shows due to the containers not disappearing and actually being ready for pickup i make sure the conatiner my backpack and hands are fully empty and when right clicking the container shows the bag inside of it but cannot be picked up i ve even tried other ways to pick it up that i saw from other answers online like holding left shift holding left alt while left clicking and clicking spacebar and nothing is working expected behavior would really love for this to be fixed so i can further customize my house after upgrading it i was lucky enough to have saved a space so i can go upstairs but others don t seems to be as lucky so all in all would like to pick up and replace containers as many times as i can for organization screenshots the first picture is of the yellow container showing the top of the bag in the middle showing it is hard to see but i promise it is there second photo is of the grey container with the bottom of the bag showing desktop please complete the following information os windows version for based systems game version steam version or whatever is the most updated version that is currently out additional context i also saw a post about this same issue from august and wanted to make another report as it has been a month since that post has been made
| 0
|
1,984
| 4,816,107,624
|
IssuesEvent
|
2016-11-04 08:51:42
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Clone vDisk fails
|
priority_critical process_wontfix state_question type_bug
|
Clone vDisk fails on the OVH setup (probably an issue with the create snapshot step in the clone flow)
|
1.0
|
Clone vDisk fails - Clone vDisk fails on the OVH setup (probably an issue with the create snapshot step in the clone flow)
|
process
|
clone vdisk fails clone vdisk fails on the ovh setup probably an issue with the create snapshot step in the clone flow
| 1
|
132,286
| 18,675,871,407
|
IssuesEvent
|
2021-10-31 14:53:56
|
pandas-dev/pandas
|
https://api.github.com/repos/pandas-dev/pandas
|
closed
|
API: require tz equality or just tzawareness-compat in setitem-like methods?
|
Enhancement Indexing API Design Timezones
|
There are a handful of methods that do casting/validation in DatetimeArray (and indirectly, Series and DatetimeIndex) methods:
`__setitem__`
shift
insert
fillna
take
searchsorted
`__cmp__`
In all cases, we upcast datetime64/datetime to Timestamp, try to parse strings, and check for tzawareness compat (i.e. raise if `(self.tz is None) ^ (other.tz is None)`)
In the cases of searchsorted and cmp, we stop there, and don't require the actual timezones to match. In the other cases, we are stricter and raise if `not tz_compare(self.tz, other.tz)`
In #37299 we discussed the idea of making these uniform. I am generally positive on this, as it would make things simpler both for users and in the code. cc @jorisvandenbossche @jreback
Side-notes
1) `__sub__` doesn't do the same casting, but also requires tz match but _could_ get by with only tzawareness compat, xref #37329. BTW the stdlib datetime only requires tzawareness compat.
2) In `Series/DataFrame.__setitem__` we cast to object instead of raising on tz mismatch. For brevity I lump this in with "we raise on tz mismatch".
|
1.0
|
API: require tz equality or just tzawareness-compat in setitem-like methods? - There are a handful of methods that do casting/validation in DatetimeArray (and indirectly, Series and DatetimeIndex) methods:
`__setitem__`
shift
insert
fillna
take
searchsorted
`__cmp__`
In all cases, we upcast datetime64/datetime to Timestamp, try to parse strings, and check for tzawareness compat (i.e. raise if `(self.tz is None) ^ (other.tz is None)`)
In the cases of searchsorted and cmp, we stop there, and don't require the actual timezones to match. In the other cases, we are stricter and raise if `not tz_compare(self.tz, other.tz)`
In #37299 we discussed the idea of making these uniform. I am generally positive on this, as it would make things simpler both for users and in the code. cc @jorisvandenbossche @jreback
Side-notes
1) `__sub__` doesn't do the same casting, but also requires tz match but _could_ get by with only tzawareness compat, xref #37329. BTW the stdlib datetime only requires tzawareness compat.
2) In `Series/DataFrame.__setitem__` we cast to object instead of raising on tz mismatch. For brevity I lump this in with "we raise on tz mismatch".
label: non_process
text:
api require tz equality or just tzawareness compat in setitem like methods there are a handful of methods that do casting validation in datetimearray and indirectly series and datetimeindex methods setitem shift insert fillna take searchsorted cmp in all cases we upcast datetime to timestamp try to parse strings and check for tzawareness compat i e raise if self tz is none other tz is none in the cases of searchsorted and cmp we stop there and don t require the actual timezones to match in the other cases we are stricter and raise if not tz compare self tz other tz in we discussed the idea of making these uniform i am generally positive on this as it would make things simpler both for users and in the code cc jorisvandenbossche jreback side notes sub doesn t do the same casting but also requires tz match but could get by with only tzawareness compat xref btw the stdlib datetime only requires tzawareness compat in series dataframe setitem we cast to object instead of raising on tz mismatch for brevity i lump this in with we raise on tz mismatch
binary_label: 0

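The tz-awareness compatibility rule quoted in this row (mixed naive/aware raises, while two aware values in different zones compare fine) can be sketched with the stdlib `datetime`, which the issue itself notes only requires tzawareness compat; the helper name below is ours, not pandas API:

```python
from datetime import datetime, timezone, timedelta

def tzawareness_compat(a: datetime, b: datetime) -> bool:
    # Mirrors the check described in the issue body: incompatible exactly
    # when one side is tz-aware and the other is naive, i.e.
    # (a.tzinfo is None) ^ (b.tzinfo is None).
    return (a.tzinfo is None) == (b.tzinfo is None)

naive = datetime(2021, 10, 31)
utc = datetime(2021, 10, 31, tzinfo=timezone.utc)
est = datetime(2021, 10, 31, tzinfo=timezone(timedelta(hours=-5)))

print(tzawareness_compat(naive, utc))  # mixed awareness -> incompatible
print(tzawareness_compat(utc, est))    # both aware, zones differ -> compatible
print(utc < est)                       # comparison across zones is allowed
```

This matches the looser behavior the issue proposes for `searchsorted` and comparisons, where only awareness must agree, not the actual zone.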
Unnamed: 0: 857
id: 3,316,902,096
type: IssuesEvent
created_at: 2015-11-06 19:04:38
repo: dotnet/corefx
repo_url: https://api.github.com/repos/dotnet/corefx
action: closed
title: On OSX, launching a non existent process looks for Windows specific DllImport
labels: Mac OSX packaging System.Diagnostics.Process System.IO
body:
```
Unhandled Exception: System.DllNotFoundException: Unable to load DLL 'api-ms-win-core-errorhandling-l1-1-0.dll': The specified module could not be found.
(Exception from HRESULT: 0x8007007E)
at Interop.mincore.SetErrorMode(UInt32 newMode)
at System.IO.Win32FileSystem.FillAttributeInfo(String path, WIN32_FILE_ATTRIBUTE_DATA& data, Boolean tryagain, Boolean returnErrorOnNotFound)
at System.IO.Win32FileSystem.FileExists(String fullPath)
at System.IO.File.Exists(String path)
at System.Diagnostics.Process.ResolvePath(String filename)
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)
at Program.Main()
```
/cc @stephentoub
index: 1.0
text_combine:
On OSX, launching a non existent process looks for Windows specific DllImport - ```
Unhandled Exception: System.DllNotFoundException: Unable to load DLL 'api-ms-win-core-errorhandling-l1-1-0.dll': The specified module could not be found.
(Exception from HRESULT: 0x8007007E)
at Interop.mincore.SetErrorMode(UInt32 newMode)
at System.IO.Win32FileSystem.FillAttributeInfo(String path, WIN32_FILE_ATTRIBUTE_DATA& data, Boolean tryagain, Boolean returnErrorOnNotFound)
at System.IO.Win32FileSystem.FileExists(String fullPath)
at System.IO.File.Exists(String path)
at System.Diagnostics.Process.ResolvePath(String filename)
at System.Diagnostics.Process.StartCore(ProcessStartInfo startInfo)
at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)
at Program.Main()
```
/cc @stephentoub
label: process
text:
on osx launching a non existent process looks for windows specific dllimport unhandled exception system dllnotfoundexception unable to load dll api ms win core errorhandling dll the specified module could not be found exception from hresult at interop mincore seterrormode newmode at system io fillattributeinfo string path file attribute data data boolean tryagain boolean returnerroronnotfound at system io fileexists string fullpath at system io file exists string path at system diagnostics process resolvepath string filename at system diagnostics process startcore processstartinfo startinfo at system diagnostics process start processstartinfo startinfo at program main cc stephentoub
binary_label: 1

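The `DllNotFoundException` in the trace above comes from native-library resolution happening lazily at call time, per platform. A rough Python analogue of that failure mode uses `ctypes` (the library name below is hypothetical and deliberately nonexistent):

```python
import ctypes

# Loading a native library resolves at runtime, much like the
# DllImport lookup in the trace above; a Windows-only name simply
# does not exist on macOS/Linux and loading it fails.
try:
    ctypes.CDLL("libno-such-library-example.so")  # hypothetical name
    raised = False
except OSError:
    raised = True

print("load failed:", raised)
```

The fix on the .NET side was to route the `File.Exists` path check through the Unix file system implementation instead of the Windows-specific interop.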
Unnamed: 0: 175,449
id: 13,552,281,416
type: IssuesEvent
created_at: 2020-09-17 12:23:30
repo: SwissClinicalTrialOrganisation/secuTrialR_validation
repo_url: https://api.github.com/repos/SwissClinicalTrialOrganisation/secuTrialR_validation
action: opened
title: Retrieve participants present in secuTrialdata
labels: Test
body:
**Test objectives:**
Check if participant lists are generated correctly.
**Test type 1:**
Check if errors are triggered.
**Test type 2:**
Check if participants are selected accordingly.
**Technical test source:** [test-get.R](https://github.com/SwissClinicalTrialOrganisation/secuTrialR/blob/master/tests/testthat/test-get.R)
**Linkage to related documents:**
User requirement: #39
Functional specification: #40
index: 1.0
text_combine:
Retrieve participants present in secuTrialdata - **Test objectives:**
Check if participant lists are generated correctly.
**Test type 1:**
Check if errors are triggered.
**Test type 2:**
Check if participants are selected accordingly.
**Technical test source:** [test-get.R](https://github.com/SwissClinicalTrialOrganisation/secuTrialR/blob/master/tests/testthat/test-get.R)
**Linkage to related documents:**
User requirement: #39
Functional specification: #40
label: non_process
text:
retrieve participants present in secutrialdata test objectives check if participant lists are generated correctly test type check if errors are triggered test type check if participants are selected accordingly technical test source linkage to related documents user requirement functional specification
binary_label: 0

Unnamed: 0: 17,182
id: 22,764,337,407
type: IssuesEvent
created_at: 2022-07-08 01:42:10
repo: turnkeylinux/tracker
repo_url: https://api.github.com/repos/turnkeylinux/tracker
action: opened
title: Processmaker can't login (to Processmaker web UI) with password set at firstboot
labels: bug workaround processmaker
body:
It's been brought to my attention that the 'admin' user password (set via inithooks - either pre-set within the [Hub](https://hub.turnkeylinux.org/) or set interactively at firstboot) for our [Processmaker appliance](https://www.turnkeylinux.org/processmaker) does not allow login to the Processmaker web UI.
FWIW, it's because the user has "expired".
To work around the issue, please log in via SSH (either using an SSH client such as OpenSSH or PuTTY; or Webshell) and run the following commands (if not running as `root`; prefix with `sudo`):
```
mysql -e "UPDATE wf_workflow.USERS SET USR_DUE_DATE='2040-01-01' where USR_USERNAME='admin';"
mysql -e "UPDATE wf_workflow.RBAC_USERS set USR_DUE_DATE='2040-01-01' where USR_USERNAME='admin';"
```
You should now be able to login with the password that you set. If you would like to reset the password, please just re-run the inithook:
```
/usr/lib/inithooks/bin/processmaker.py
```
index: 1.0
text_combine:
Processmaker can't login (to Processmaker web UI) with password set at firstboot - It's been brought to my attention that the 'admin' user password (set via inithooks - either pre-set within the [Hub](https://hub.turnkeylinux.org/) or set interactively at firstboot) for our [Processmaker appliance](https://www.turnkeylinux.org/processmaker) does not allow login to the Processmaker web UI.
FWIW, it's because the user has "expired".
To work around the issue, please log in via SSH (either using an SSH client such as OpenSSH or PuTTY; or Webshell) and run the following commands (if not running as `root`; prefix with `sudo`):
```
mysql -e "UPDATE wf_workflow.USERS SET USR_DUE_DATE='2040-01-01' where USR_USERNAME='admin';"
mysql -e "UPDATE wf_workflow.RBAC_USERS set USR_DUE_DATE='2040-01-01' where USR_USERNAME='admin';"
```
You should now be able to login with the password that you set. If you would like to reset the password, please just re-run the inithook:
```
/usr/lib/inithooks/bin/processmaker.py
```
label: process
text:
processmaker can t login to processmaker web ui with password set at firstboot it s been brought to my attention that the admin user password set via inithooks either pre set within the or set interactively at firstboot for our does not allow login to the processmaker web ui fwiw it s because the user has expired to work around the issue please log in via ssh either using an ssh client such as openssh or putty or webshell and run the following commands if not running as root prefix with sudo mysql e update wf workflow users set usr due date where usr username admin mysql e update wf workflow rbac users set usr due date where usr username admin you should now be able to login with the password that you set if you would like to reset the password please just re run the inithook usr lib inithooks bin processmaker py
binary_label: 1

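The workaround in the row above resets the expired user's due date with two near-identical `UPDATE` statements. A small sketch composing them (table and column names are taken from the issue body; the helper name is ours):

```python
def build_unexpire_sql(username: str, due_date: str = "2040-01-01") -> list:
    # Composes the two UPDATE statements from the workaround; in real use
    # these values should be passed as parameterized query arguments rather
    # than interpolated into the SQL string.
    tables = ["wf_workflow.USERS", "wf_workflow.RBAC_USERS"]
    return [
        f"UPDATE {t} SET USR_DUE_DATE='{due_date}' WHERE USR_USERNAME='{username}';"
        for t in tables
    ]

stmts = build_unexpire_sql("admin")
print(stmts[0])
```

Each statement would then be run via `mysql -e "…"` exactly as the issue describes.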