Dataset Preview
The full dataset viewer is not available. Only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'file_id'}) and 2 missing columns ({'repo_name', 'issue_id'}).
This happened while the json dataset builder was generating data using
hf://datasets/JoTeqtheFirstAI/issues-kaggle-notebooks315k/kaggle/issues-kaggle-notebooks315k-kaggle-shard-0.jsonl (at revision 70e70a9f8b59f26db56ddedab437c0805117f745). The builder pooled the following shard files into a single config (local /tmp cache paths omitted; all files at the same revision):
- hf://datasets/JoTeqtheFirstAI/issues-kaggle-notebooks315k@70e70a9f8b59f26db56ddedab437c0805117f745/issues/issues-kaggle-notebooks315k-issues-shard-0.jsonl (and shards 1 through 5 in the same folder)
- hf://datasets/JoTeqtheFirstAI/issues-kaggle-notebooks315k@70e70a9f8b59f26db56ddedab437c0805117f745/kaggle/issues-kaggle-notebooks315k-kaggle-shard-0.jsonl (and shard 1 in the same folder)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
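For illustration, here is a minimal, self-contained sketch of the mismatch the error reports, using plain standard-library Python rather than the `datasets` builder, and hypothetical shard file names: the `issues` shards carry `repo_name`/`issue_id`/`text`, while the `kaggle` shards carry `file_id`/`text`, so no single Arrow schema can cast both.

```python
# Sketch (assumed/minimal records) of why pooling the two folders into one
# config fails: the shards disagree on their column sets.
import json
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())

# "issues" shards use repo_name / issue_id / text ...
(tmp / "issues-shard-0.jsonl").write_text(
    json.dumps({"repo_name": "octo/demo", "issue_id": "1", "text": "..."}) + "\n"
)
# ... while "kaggle" shards use file_id / text.
(tmp / "kaggle-shard-0.jsonl").write_text(
    json.dumps({"file_id": "abc123", "text": "..."}) + "\n"
)

def columns(path: Path) -> set[str]:
    """Union of keys over all JSONL records in a file."""
    return {k for line in path.read_text().splitlines() for k in json.loads(line)}

issues_cols = columns(tmp / "issues-shard-0.jsonl")
kaggle_cols = columns(tmp / "kaggle-shard-0.jsonl")

print(sorted(kaggle_cols - issues_cols))  # the "1 new columns": ['file_id']
print(sorted(issues_cols - kaggle_cols))  # the "2 missing columns": ['issue_id', 'repo_name']
```

This reproduces exactly the "1 new columns ({'file_id'}) and 2 missing columns ({'repo_name', 'issue_id'})" reported by the viewer.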
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 675, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
file_id: string
text: string
to
{'repo_name': Value('string'), 'issue_id': Value('string'), 'text': Value('string')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
builder.download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'file_id'}) and 2 missing columns ({'repo_name', 'issue_id'}).
This happened while the json dataset builder was generating data using the same revision and shard files listed in the error message above.
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
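One way to apply the "separate configurations" fix suggested above is a `configs` section in the YAML header of the dataset's README, mapping each folder to its own config. This is a sketch assuming the folder layout shown in the shard paths (`issues/` and `kaggle/`); see the linked docs for the authoritative syntax:

```yaml
configs:
  - config_name: issues
    data_files: "issues/*.jsonl"
  - config_name: kaggle
    data_files: "kaggle/*.jsonl"
```

Each config then has a uniform schema and can be loaded on its own, e.g. `load_dataset("JoTeqtheFirstAI/issues-kaggle-notebooks315k", "issues")`.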
| repo_name (string) | issue_id (string) | text (string) |
|---|---|---|
datatheorem/TrustKit | 656820209 | Title: Bump OS support versions in September?
Question:
username_0: As iOS 14 is being released in several weeks, it would be nice to bump minimum support from the iOS 10 generation to the iOS 11 generation (+tv, watch, mac). This would maintain N-3 major OS version support (iOS 11, 12, 13, 14).
The main benefit of this change would be removal of the two insecure coding branches:
https://github.com/datatheorem/TrustKit/blob/master/TrustKit/Pinning/TSKSPKIHashCache.m#L201
https://github.com/datatheorem/TrustKit/blob/master/TrustKit/Pinning/TSKSPKIHashCache.m#L226
which are periodically flagged by Yahoo security (even though we don't support iOS 10; we're 12+).
Answers:
username_1: Yeah, agreed and thanks for the notice!
username_1: Released as v1.7.0.
Status: Issue closed
|
NervJS/taro-ui | 816975517 | Title: Problem with Taro.initPxTransform
Question:
username_0: **Problem description**
<!--- Problem description: describe the problem as clearly and concisely as possible, from another reader's point of view --->
In H5 mode, `Taro.initPxTransform({ designWidth: 750, deviceRatio: {} })` overwrites the `designWidth` and `deviceRatio` values configured in the project, so `Taro.pxTransform` returns values that don't match expectations.
**Steps to reproduce**
<!--- Steps to reproduce the problem --->
<!---
1. Create a new project and set designWidth to 375
2. Import taro-ui (imported outside of app.jsx)
3. Call `Taro.pxTransform`
4. Get an incorrect value
--->
**Expected behavior**
`Taro.initPxTransform` should not be called, so that the project's configuration is left unchanged
**System information**
<!--- Taro v1.2 and later include the `taro info` command for inspecting system and dependency info; run it and paste the result below --->
Taro v2.2.14
Browserslist: caniuse-lite is outdated. Please run:
npx browserslist@latest --update-db
Taro CLI 2.2.14 environment info:
System:
OS: Windows 10
Binaries:
Node: 12.16.3 - D:\Program Files\nodejs\node.EXE
Yarn: 1.22.4 - D:\Program Files\Yarn\bin\yarn.CMD
npm: 6.14.4 - D:\Program Files\nodejs\npm.CMD
- Taro version [v2.2.14]
- Taro UI version [v2.3.4]
- Affected platform [h5] |
beatcracker/toptout | 876030255 | Title: Add Flagsmith
Question:
username_0: https://docs.flagsmith.com/deployment-overview/#api-telemetry
https://github.com/Flagsmith/flagsmith-api/blob/f5c61de73dacf9ed08416f18274541ef9d3de5aa/readme.md#api-telemetry
Flagsmith collects information about self hosted installations. This helps us understand how the platform is being used. This data is *never* shared outside of the organisation, and is anonymous by design. You can opt out of sending this telemetry on startup by setting the `ENABLE_TELEMETRY` environment variable to `False`.<issue_closed>
Status: Issue closed |
tensorflow/tensorflow | 319272932 | Title: Windows build fails with unresolved externals
Question:
username_0: I followed the instructions to build on Windows (Windows 10, Visual Studio 2017 version 15.6.7, CPU only), but the following projects fail to build:
tf_python_api
grpc_tensorflow_server
benchmark_model
tf_tutorials_example_trainer
tf_label_image_example
compare_graphs
transform_graph
summarize_graph
With the same four unresolved externals:
LNK2019 unresolved external symbol "public: class tensorflow::AttrBuilder & __cdecl tensorflow::AttrBuilder::NumInputs(int)" (?NumInputs@AttrBuilder@tensorflow@@QEAAAEAV12@H@Z) referenced in function "public: void __cdecl tensorflow::EagerOperation::AddInput(class tensorflow::TensorHandle *)" (?AddInput@EagerOperation@tensorflow@@QEAAXPEAVTensorHandle@2@@Z)
LNK2019 unresolved external symbol "class tensorflow::Status __cdecl tensorflow::OpDefForOp(char const *,class tensorflow::OpDef const * *)" (?OpDefForOp@tensorflow@@YA?AVStatus@1@PEBDPEAPEBVOpDef@1@@Z) referenced in function "class tensorflow::Status __cdecl tensorflow::EagerExecute(class tensorflow::EagerOperation *,class tensorflow::gtl::InlinedVector<class tensorflow::TensorHandle *,2> *,int *)" (?EagerExecute@tensorflow@@YA?AVStatus@1@PEAVEagerOperation@1@PEAV?$InlinedVector@PEAVTensorHandle@tensorflow@@$01@gtl@1@PEAH@Z)
LNK2019 unresolved external symbol "public: struct tensorflow::Fprint128 __cdecl tensorflow::AttrBuilder::CacheKey(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)const " (?CacheKey@AttrBuilder@tensorflow@@QEBA?AUFprint128@2@AEBV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@@Z) referenced in function "class tensorflow::Status __cdecl tensorflow::EagerExecute(class tensorflow::EagerOperation *,class tensorflow::gtl::InlinedVector<class tensorflow::TensorHandle *,2> *,int *)" (?EagerExecute@tensorflow@@YA?AVStatus@1@PEAVEagerOperation@1@PEAV?$InlinedVector@PEAVTensorHandle@tensorflow@@$01@gtl@1@PEAH@Z)
LNK2019 unresolved external symbol "public: class tensorflow::NodeDef const & __cdecl tensorflow::AttrBuilder::BuildNodeDef(void)" (?BuildNodeDef@AttrBuilder@tensorflow@@QEAAAEBVNodeDef@2@XZ) referenced in function "class tensorflow::Status __cdecl tensorflow::EagerExecute(class tensorflow::EagerOperation *,class tensorflow::gtl::InlinedVector<class tensorflow::TensorHandle *,2> *,int *)" (?EagerExecute@tensorflow@@YA?AVStatus@1@PEAVEagerOperation@1@PEAV?$InlinedVector@PEAVTensorHandle@tensorflow@@$01@gtl@1@PEAH@Z)
Answers:
username_1: I also got exactly the same error on the same day lol
OS:Windows 10
Install MKL from Intel official website
Tensorflow install from:git
Tensorflow version: this github
no GPU, CPU only
Build by cmake and compile by VS 14
Install command:
C:\...\build> cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
More? -DSWIG_EXECUTABLE=C:/tools/swigwin-3.0.12/swig.exe ^
More? -DPYTHON_EXECUTABLE=C:/Users/dear9/Anaconda3/envs/py35/python.exe ^
More? -DPYTHON_LIBRARIES=C:/Users/dear9/Anaconda3/envs/py35/libs/python35.lib ^
More? -Dtensorflow_ENABLE_MKL_SUPPORT=ON ^
More? -DMKL_HOME="C:\Program Files (x86)\IntelSWTools\compilers_and_libraries"
More? -Dtensorflow_WIN_CPU_SIMD_OPTIONS=/arch:AVX
the configuration has no error. Then
C:\...\build>MSBuild /p:Configuration=Release tf_tutorials_example_trainer.vcxproj
the error comes out
Do you need more information?
username_2: I have got same issue using
-Have I written custom code (as opposed to using a stock example script provided in TensorFlow) : NO
- OS Platform and Distribution : WINDOWS 10
-TensorFlow installed from (source or binary) : git clone
- TensorFlow version (use command below) :
last commit:
commit 8cb6e535f6ea14b380aa86d425169adad682cb8c (HEAD -> master, origin/master, origin/HEAD)
Merge: d0f5bc1756 29cd3f9632
Author: <NAME> <<EMAIL>>
Date: Wed May 2 03:08:57 2018 +0300
- Python version : 3.6.4
- Bazel version (if compiling from source) : NO using CMAKE 3.11.0
- GCC/Compiler version (if compiling from source): VS 2017 version 15.6.3
- CUDA/cuDNN version: no only cpu version
- GPU model and memory : NOT USE
- Exact command to reproduce :
Use cmake : cmakecache is
[CMakeCache.txt](https://github.com/tensorflow/tensorflow/files/1967505/CMakeCache.txt)
In vs 2017 errors are :
```
1>------ Build started: Project: zlib, Configuration: Release x64 ------
2>------ Build started: Project: nsync, Configuration: Release x64 ------
3>------ Build started: Project: farmhash, Configuration: Release x64 ------
4>------ Build started: Project: highwayhash, Configuration: Release x64 ------
5>------ Build started: Project: jpeg, Configuration: Release x64 ------
6>------ Build started: Project: gif, Configuration: Release x64 ------
7>------ Build started: Project: sqlite, Configuration: Release x64 ------
8>------ Build started: Project: lmdb, Configuration: Release x64 ------
9>------ Build started: Project: fft2d, Configuration: Release x64 ------
10>------ Build started: Project: snappy, Configuration: Release x64 ------
1>Performing update step for 'zlib'
2>Performing update step for 'nsync'
4>Performing update step for 'highwayhash'
11>------ Build started: Project: farmhash_create_destination_dir, Configuration: Release x64 ------
12>------ Build started: Project: jpeg_create_destination_dir, Configuration: Release x64 ------
13>------ Build started: Project: gif_create_destination_dir, Configuration: Release x64 ------
14>------ Build started: Project: sqlite_create_destination_dir, Configuration: Release x64 ------
15>------ Build started: Project: lmdb_create_destination_dir, Configuration: Release x64 ------
16>------ Build started: Project: double_conversion, Configuration: Release x64 ------
17>------ Build started: Project: png, Configuration: Release x64 ------
18>------ Build started: Project: protobuf, Configuration: Release x64 ------
19>------ Build started: Project: nsync_create_destination_dir, Configuration: Release x64 ------
20>------ Build started: Project: zlib_create_destination_dir, Configuration: Release x64 ------
21>------ Build started: Project: highwayhash_create_destination_dir, Configuration: Release x64 ------
22>------ Build started: Project: gemmlowp, Configuration: Release x64 ------
16>Performing update step for 'double_conversion'
23>------ Build started: Project: lmdb_copy_headers_to_destination, Configuration: Release x64 ------
24>------ Build started: Project: sqlite_copy_headers_to_destination, Configuration: Release x64 ------
18>Performing update step for 'protobuf'
25>------ Build started: Project: png_create_destination_dir, Configuration: Release x64 ------
26>------ Build started: Project: eigen, Configuration: Release x64 ------
27>------ Build started: Project: gif_copy_headers_to_destination, Configuration: Release x64 ------
28>------ Build started: Project: zlib_copy_headers_to_destination, Configuration: Release x64 ------
29>------ Build started: Project: highwayhash_copy_headers_to_destination, Configuration: Release x64 ------
30>------ Build started: Project: grpc, Configuration: Release x64 ------
31>------ Build started: Project: farmhash_copy_headers_to_destination, Configuration: Release x64 ------
[Truncated]
182>Done building project "_nearest_neighbor_ops.vcxproj" -- FAILED.
184>LINK : fatal error LNK1181: cannot open input file '\pywrap_tensorflow_internal.lib'
184>Done building project "_gru_ops.vcxproj" -- FAILED.
185>LINK : fatal error LNK1181: cannot open input file '\pywrap_tensorflow_internal.lib'
185>Done building project "_beam_search_ops.vcxproj" -- FAILED.
183>LINK : fatal error LNK1181: cannot open input file '\pywrap_tensorflow_internal.lib'
183>Done building project "_lstm_ops.vcxproj" -- FAILED.
186>------ Build started: Project: tf_python_api, Configuration: Release x64 ------
187>------ Skipped Build: Project: INSTALL, Configuration: Release x64 ------
187>Project not selected to build for this solution configuration
186>Generating __init__.py files for Python API.
188>------ Skipped Build: Project: tf_python_build_pip_package, Configuration: Release x64 ------
188>Project not selected to build for this solution configuration
========== Build: 174 succeeded, 12 failed, 82 up-to-date, 2 skipped ==========
```
username_0: Pretty much the same as the other two, but since you asked:
Have I written custom code: No
OS Platform and Distribution: Windows 10
TensorFlow installed from: git
TensorFlow version: latest in github (as of 4/27/2018)
Bazel version: don't know what this is
CUDA/cuDNN version: CPU only
GPU model and memory: N/A
Exact command to reproduce: cmake-gui to generate the Visual Studio solution/projects from CMakeLists.txt and then Build Solution in Visual Studio 2017 (Release)
username_2: Not for me |
ballerina-platform/ballerina-lang | 1085457584 | Title: Change return type code action does not change the signature of the correct method/function
Question:
username_0: **Description:**
Consider the following capture.

**Steps to reproduce:**
```ballerina
public function test(){
var var1 = object {
int id = 10;
function getId() {
return self.id;
}
};
}
```
**Affected Versions:**
slbeta6
Status: Issue closed
Answers:
username_1: Closing the issue, since the fix has been merged. |
bassmaster187/TeslaLogger | 803739844 | Title: Log "Wait for MFA code !!!" repeated every 10 seconds
Question:
username_0: Since last update, logs are polluted with repeating messages every 10 seconds
08.02.2021 17:32:53 : #1: Charging Complete
08.02.2021 17:32:53 : #1: change TeslaLogger state: Charge -> Start
08.02.2021 17:32:54 : #1: try to get new Token
08.02.2021 17:32:54 : #1: No Refresh Token
08.02.2021 17:32:54 : #1: Login with : 'XatXiXX.bXXtX<EMAIL>' / 'xxxxxxxxx'
08.02.2021 17:32:56 : #1: Start waiting for MFA code !!!
08.02.2021 17:33:06 : #1: Wait for MFA code !!!
08.02.2021 17:33:16 : #1: Wait for MFA code !!!
08.02.2021 17:33:23 : CopyChargePrice at '🔌 🏠 Domicile'
08.02.2021 17:33:23 : CopyChargePrice: reference charging session found for '🔌 🏠 Domicile', ID 244 - cost_per_kwh:0.2158 cost_per_session:0 cost_per_minute:0 started: 2/8/2021 4:50:54 PM
08.02.2021 17:33:26 : #1: Wait for MFA code !!!
08.02.2021 17:33:36 : #1: Wait for MFA code !!!
08.02.2021 17:33:46 : #1: Wait for MFA code !!!
08.02.2021 17:33:56 : #1: Wait for MFA code !!!
08.02.2021 17:34:06 : #1: Wait for MFA code !!!
08.02.2021 17:34:16 : #1: Wait for MFA code !!!
08.02.2021 17:34:26 : #1: Wait for MFA code !!!
08.02.2021 17:34:36 : #1: Wait for MFA code !!!
08.02.2021 17:34:46 : #1: Wait for MFA code !!!
08.02.2021 17:34:56 : #1: Wait for MFA code !!!
I use the docker version, completely upgraded today afternoon with :
docker-compose stop
git fetch
git reset --hard origin/master
git checkout origin/master -- docker-compose.yml
docker-compose build
docker-compose up -d
I upgraded because the server was not seeing the car as asleep anymore but I don't know if it's related
Answers:
username_1: go to: Settings / Credentials / Edit / Reconnect
Enter your MFA code from your authentificator app
username_0: OK
Solved the issue indeed.
Will I have to redo this periodically?
Status: Issue closed
username_1: No. Teslalogger should be able to get a new access token via refresh token. |
Difegue/LANraragi | 438616793 | Title: [Suggestion] Usability Improvements
Question:
username_0: LRR has a very nice web-interface and I've been impressed overall, however there are some QoL changes I'd like to see to make it more usable.
**Major**
- [ ] Option to scale images to browser Width or Height (whichever is smaller for that image)
- [ ] Option to Limit how much an image gets resized e.g don't scale above native image resolution, or limit it to only scaling up by 200%
- [ ] Go back to the library page you were on and retain search terms after editing metadata, or change it into an overlay/popup window rather than taking you to a whole new page (preferred)
**Minor**
- [ ] Delete archive button (with confirmation) in the library view
- [ ] Improve how LRR finds cover images, by e.g trying to find portrait oriented images first
- [ ] Remove "hover to show tags" and just make it show tags whenever hovering over the gallery. I can see how the tool-tip might get in the way, so perhaps a 0.5sec hover delay would fix that?
- [ ] Add view random gallery button to the bottom of page navigation.
When hovering to show tags, I think it would be nice to have the gallery title above the tags too, similar to how x-links does it.
Answers:
username_1: Thanks for the detailed breakdown! Github doesn't allow me to comment easily on checklist items, so I went ahead and checked off the ones I don't plan on doing.
Your first option sounds like what Scale to View already does -- not sure what else you need from it ?
For editing, the easiest solution is to move it to a new window/tab. I'm a personal middle-click abuser so I open most of my stuff in new tabs by default, but I agree it'd make for a better UX.
I'm mixed on the delete button - I don't mind adding it, but I'd like avoiding cluttering the gallery view with icons. An option I could try out is to move save/edit to a custom context menu ala google drive, and add delete there.

I agree that landscape covers showing up atm is a bit weird, but I'm not sure how to improve on that without making the thumbnailing a mess. Cropping landscape thumbnails to fit portrait might work better here.
I think moving the tag overlay to the entire gallery area is a bad idea as it hinders navigation, and adding hover delay just makes it harder to find out the tag overlay actually exists. I don't think it's that much of a QoL improvement here either. (You get more space to put your mouse over, at the cost of having to wait longer for the overlay to appear)
I'm not big on duplicating the random button either -- It makes sense on the reader, less so for a button that's usually not used when you already did some scrolling. I could make it a floating button, not sure yet. Giving that one a meh.
For the automatic bookmarking, I kinda see the point (not saving progress on small, 20-page doujins which you're not going to resume most of the time), but it's a lot of extra complexity for both me and end-users whereas you could just click on the first page button.
I'm ok with the rest - not sure if there's a "standard" color code for namespaces somewhere?
username_0: **Image Scaling**
The current "Scale to View" makes the image fit entirely into the browser window; what I was trying to suggest was a new option where, instead, the image's smallest dimension (which would be width for 99% of pages) is scaled to the width of the window. This would mean as a viewer you would just scroll downwards. And for double pages, which have more width than height, it would scale the height to the window so you only had to scroll left or right to view the whole image for those pages.
The second task would expand upon that by adding options to limit that functionality by e.g not allowing it to scale above the original resolution, or not allowing it to scale the width above a certain %. I hope that makes sense. The current implementation is especially not that great for double-pages, you need to turn scale to view on and off all the time or zoom in manually to make the images take up more of your browser.
**Metadata & Delete button**
Fair enough, they aren't huge issues anyway. I think the delete button being added wouldn't be "cluttering the gallery view" much if e.g it didn't show up unless you were logged in with proper privileges though it isn't a huge deal.
**Landscape covers**
I found that there are a number of galleries with images named like 000a. 000b. for the landscape images, maybe just ignoring files with certain names and looking for images with "cover" in the name etc would help without having to check image dimensions is too hard.
**Tags on hover**
Personally I'd find it a lot better having them appear on hovering gallery thumbnails rather than having to hover over the "hover here for tags" then you could remove the "hover here for tags" and it would suddenly free up space for more buttons or perhaps showing the artist or some other info like image count at the bottom. I only suggested adding a delay in-case you thought people would constantly accidentally have the over-lay get in the way. But it's definitely a more subjective thing.
**Random Archive Button**
To me, it just seems a bit silly having to go back to the library page then click on the random gallery button rather than be able to do it directly from the image viewing page. It could just be a small icon at the bottom navigation with a question-mark or something. I don't think it would be in the way.
**Namespace Colors**
This is what HPX uses for Artist, Circle and Parody. IIRC it is a standard because I remember seeing it used elsewhere. I'll do some more digging when I get unbanned from exH for too many page visits lol.
Artist - Hex: #22a7f0 / Red 34 Green 167 Blue 240
Circle - Hex: #36d7b7 / Red 54 Green 215 Blue 183
Parody - Hex: #d2527f / Red 210 Green 82 Blue 127
username_0: I've added some more suggestions at the bottom of the list, would be curious on your thoughts.
username_1: Haven't had time to review the new ones until now; Sounds like good QoL improvements to me, I'll try working them in for beta 3 alongside regex searches and the like.
username_2: I would also suggest adding [this](https://stackoverflow.com/a/4407335) to the image container css in reading mode because it gets selected every time I quickly click through pages.
And ideally it would be great to add this on other pages or elements that are not supposed to be highlighted.
username_3: A small idea to extend the "Retain search terms and page number" part, since the total page count has been mentioned ther as well. How about showing the gallery page count (maybe only in thumbnail view). The only way to neatly add this to the list view, might be an extended kinda list view with a double/multi lined list with additional information. The (latter) idea came from e-h (obviously) and their different views.
username_1: I've had a few requests for pages read/total count in the survey; For the list view I wager a simple xx/yy number could fit. (likely replacing the New! icon)
username_4: - Have clicking on tag fill search box with namespace:tag format.
- Wrap 'Archive Overview' in some kind of JavaScript onclick action so that opening archive with 500 pages does not cause all 500 pages to be loaded by browser.
username_5: Please add sort by date added
username_1: Related to #195.
username_6: Can I ask for a way to change LANraragi's log time zone?
I tried adding TZ=mytimezone in my OMV, but it doesn't seem to affect LANraragi.
username_1: The Docker containers are alpine-based and therefore [don't support timezones](https://www.grainger.xyz/timezone-in-docker-alpine-not-using-environment-variable-tz/) by default.
If this is really a blocker for you please open a new issue so it can be tracked properly! 😀
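For anyone who lands here first: a common workaround on Alpine-based images, purely a suggestion on my part rather than an official image change, is to install `tzdata` so the TZ variable takes effect:

```dockerfile
FROM alpine:3.12
# tzdata provides the zoneinfo database that Alpine omits by default.
RUN apk add --no-cache tzdata
# Example zone; substitute your own.
ENV TZ=Europe/Copenhagen
```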
username_1: Closing, see https://github.com/username_1/LANraragi/issues/274#issuecomment-617350732.
This issue was growing a bit too messy with multiple comments from different persons anyway -- A lot of it has been tackled so I'd like separate issues for the few remaining improvements. 😃
Status: Issue closed
|
dotnet/roslyn | 450366903 | Title: Scrollbar is nebulous in the diff window of vs2019
Question:
username_0: It's the same project.
vs2017:

vs2019:

vs2017 is clear, but vs2019 is nebulous.
Answers:
username_1: @username_2 where does this one belong?
username_2: route it to the editor please. thanks! |
ember-cli/ember-cli | 67534297 | Title: build fails with ENOENT, no such file or directory
Question:
username_0: I'm migrating an existing app to ember-cli, so I ran `ember init` in the directory. Now I'm trying to run `ember build` just to get the base files working before I refactor my app, but the command keeps failing-
```
Build failed.
ENOENT, no such file or directory '/Users/sgunturi/Projects/bhajan-db/tmp/funnel-dest_noIJv8.tmp/'
```
The exact filename is different each time, but it's the same error.
Running `node@v0.10.36`, `npm@2.7.4`, `ember-cli@0.2.3`.
Answers:
username_1: It sounds like you have a unique project, what with upgrading and all. Without an example for me to debug, I will likely not be able to help.
username_0: That's what I thought too, but I overwrote all the files during the init - `package.json` specifically. I didn't have an `app/` dir before either, so anything `ember init` created should be boilerplate, right? At this part of the process, there *theoretically* should be nothing different from my project and a brand new project created with `ember new` - that's what I need help with. Would there be anything different?
username_1: a diff tool may help you
Status: Issue closed
username_0: Will look into that - thanks
username_2: @username_0 I recently had a similar error message, which was caused by not having an `app/styles` directory, since I had removed it manually as the styles are handled by the parent project. Maybe this helps ...
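If you removed the directory the same way, recreating it (path as in a default ember-cli layout) is a one-liner:

```shell
# Recreate the empty styles directory that the ember-cli build expects.
mkdir -p app/styles
```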
username_3: @username_2 I just ran into this problem, and adding an empty `app/styles` directory solved it. (All of my styles are in the main Rails app, so I had removed the Ember styles app entirely.) |
flutter/flutter | 551007485 | Title: Update all API constructions to use keyHelper reference defined in cocoon config
Question:
username_0: We have added a keyHelper reference in cocoon config in https://github.com/flutter/cocoon/pull/624
Existing APIs have separate keyHelper definitions in their own scope.
To make it consistent, we will update all APIs to use the keyHelper defined in the cocoon config.<issue_closed>
Status: Issue closed |
react-navigation/react-navigation | 742251453 | Title: Screen with WebView crashes App during navigation
Question:
username_0: **Current Behavior**
- What code are you running and what is happening?
Navigating to a screen which contains a WebView crashes the whole app with no stack trace. Disabling animations on the screen fixes the problem e.g. <Stack.Screen ... options={{animationEnabled: false}} />
**Expected Behavior**
- What do you expect should be happening?
It should navigate
**Your Environment**
| software | version |
| ------------------------------ | ------- |
| iOS or Android | Android |
| @react-navigation/native | 5.8.9 |
| @react-navigation/stack | 5.12.16 |
| react-native-gesture-handler | 1.7.0 |
| react-native-safe-area-context | 3.1.4 |
| react-native-screens | 2.10.1 |
| react-native | 5.8.9 |
| expo | 39.0.3 |
| node | 12.19.0 |
| npm or yarn | npm |
Answers:
username_0: Sorry that was a typo. It is version 5.12.6
username_1: Do you use `enableScreens()`? If so, the issue with `WebView` should have been resolved in https://github.com/software-mansion/react-native-screens/pull/607 and should be available from `react-native-screens` v. 2.12.0.
username_0: @username_1 No I never used `enableScreens()` unless it is on by default?
username_1: @username_0 it depends on how you created the project. You can check if `enableScreens()` is present in your index.js or somewhere else in your code.
Status: Issue closed
username_2: See the workaround here https://github.com/react-navigation/react-navigation/issues/9067#issuecomment-728074223
username_3: Hi Matthew(username_0),
I was getting an app crash while navigating, and after applying the property {{animationEnabled: false}}, the issue got resolved.
Thanks for the solution.
Regards,
Avinash
username_4: @username_0
<Stack.Screen ... options={{animationEnabled: false}} /> -> That solved my problem, thanks.
In react-navigation >= 6.x use ` <Stack.Screen ... options={{animation:'none'}} />`
username_5: Thanks @username_4, that worked on iOS and Android! |
apiato/apiato | 270031620 | Title: Social Authentication: [social_token] String data, right truncated: Data too long for column
Question:
username_0: When trying to social authenticate, the token from Facebook cannot be added to the database, so the whole operation fails because the token is too long for the 'social_token' column. So, I had to modify the design of the users table in order to be able to store the token generated from Facebook.
I don't know if the actual tokens are shorter, as I generated mine from the Graph API Explorer for testing purposes. The initial column was 199 characters long. I set it to 255. The Facebook token I received from the Facebook Graph API Explorer was 211 characters long.<issue_closed>
Status: Issue closed |
FACN4/BSN_week1-group-project | 338468726 | Title: Document your code with comments
Question:
username_0: Again, this is more a question of best practices than an actual issue, but it can be useful to document your code by writing instructive comments so that the code is easily understood and can easily be modified by other members of the team. It can also be helpful to break the HTML down into sections by adding a few section comments.
For example:
- This js function could use a small descriptive comment:
https://github.com/FACN4/BSN_week1-group-project/blob/c172a719a4c08412cc99693de93d2d4e8da87d2a/script.js#L1 |
OGGM/oggm | 409421975 | Title: Download Verification Failed Exception
Question:
username_0: I'm new to OGGM. After playing with the getting-started tutorial a few times, and running through the set-up and an OGGM run with the default Rofental region, I am trying to run through these steps with a subset of glaciers I am studying in British Columbia. However, during the "pre-processing a subset" phase, I get the following error:
2019-02-12 09:39:52: oggm.core.gis: DownloadVerificationFailedException occurred during task define_glacier_region on RGI60-02.07780: /home/pelto/OGGM/download_cache/srtm.csi.cgiar.org/wp-content/uploads/files/srtm_5x5/TIFF/srtm_13_02.zip failed to verify!
is: 69080116 d60e3f58
expected: 42210356 1f0dbe8d
I have tried to pinpoint the cause of this error, but am at a loss!
Answers:
username_1: The file in question was downloaded incompletely on the machine used as source of the hashes.
So the hash served its purpose, however the other way around than intended.
That specific file is fixed now, there might be more, given the sheer amount of files there are.
username_0: Thanks much Timo. That allowed almost all of my glaciers to properly verify the SRTM, however a few are in srtm_13_03, which appears to have the same incomplete download issue:
/home/pelto/OGGM/download_cache/srtm.csi.cgiar.org/wp-content/uploads/files/srtm_5x5/TIFF/srtm_13_03.zip failed to verify!
is: 83973605 b6a34104
expected: 43667597 080d5fea2019-02-12 12:35:38: oggm.core.gis: (RGI60-02.07003) define_glacier_region
username_1: I will need to find a way to evaluate all the srtm files we have stored. For now you can just turn off the dl_verify param and it will run.
username_2: Note that I will release a new way to initialize glacier directories very shortly, which won't require to download topo data anymore - I'll let you know
username_2: @username_0 I've finally managed to get this into the docs. Please have a look at these links:
https://docs.oggm.org/en/latest/input-data.html#pre-processed-directories
https://docs.oggm.org/en/latest/run.html
It is now the recommended way to initialize OGGM runs. You'll need to update OGGM to the latest repository version for this to run.
The documentation might still be unclear at times. I'm happy with any feedback you might have!
Status: Issue closed
|
open-austin/BASTA-tfwa | 815077617 | Title: Improve validation on Add Property page
Question:
username_0: Current behavior: On the Add Property page, the only validation that occurs is to ensure that the required fields aren't blank.
Recommended behavior: It would be nice if the address could be searched with the USPS address validation service, and the user then given one or more options of a validated/standardized address. Or, if the user doesn't use the USPS tool, maybe there could still be validation to ensure that the ZIP code is 5 or 9 digits, for example. |
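As a rough sketch of that fallback check (assuming US ZIP / ZIP+4 formats, with the hyphen optional; the function name is hypothetical):

```javascript
// Accept a 5-digit ZIP, optionally followed by the 4-digit ZIP+4
// extension, with or without a hyphen ("12345", "12345-6789", "123456789").
function isValidZip(zip) {
  return /^\d{5}(?:-?\d{4})?$/.test(zip);
}
```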
magma/magma | 872023919 | Title: [NMS] AGW Health in NMS is not accurate
Question:
username_0: - **Affected Component:** NMS
Health in NMS is not accurate (it sometimes shows "Bad" even though the AGW check-in is OK)

Answers:
username_1: We rely on checkin_status for this value. Can you check if the page gets refreshed eventually and turns good, or if it stays bad all the time?
Status: Issue closed
|
TypesettingTools/DependencyControl | 287290939 | Title: Error installing DependencyControl scripts
Question:
username_0: Since there aren't any Aegisub builds for OSX with built-in DependencyControl available, I tried to install it myself following the installation instructions, but I can't get the script to work. Can anyone help me solve this problem?
Here is the log:
02:34:55: A script in the Automation autoload directory failed to load.
Please review the errors, fix them and use the Rescan Autoload Dir button in Automation Manager to load the scripts again.
02:34:55: Failed to load Automation script '/Users/toshiacunacortez/Library/Application Support/Aegisub/automation/autoload/l0.DependencyControl.Toolbox.moon':
Error initialising Lua script "l0.DependencyControl.Toolbox.moon":
[string "/Users/toshiacunacortez/Library/Application S..."]:13: FFI could not load "PT.PreciseTimer.PreciseTimer". Search paths:
- "/Users/toshiacunacortez/Library/Application Support/Aegisub/automation/autoload/PT/PreciseTimer/libPreciseTimer.dylib" (dlopen(/Users/toshiacunacortez/Library/Application Support/Aegisub/automation/autoload/PT/PreciseTimer/libPreciseTimer.dylib, 5): image not found)
- "/Users/toshiacunacortez/Library/Application Support/Aegisub/automation/include/PT/PreciseTimer/libPreciseTimer.dylib" (dlopen(/Users/toshiacunacortez/Library/Application Support/Aegisub/automation/include/PT/PreciseTimer/libPreciseTimer.dylib, 5): image not found)
- "/Applications/Aegisub.app/Contents/SharedSupport/automation/include/PT/PreciseTimer/libPreciseTimer.dylib" (dlopen(/Applications/Aegisub.app/Contents/SharedSupport/automation/include/PT/PreciseTimer/libPreciseTimer.dylib, 5): image not found)
- "./PT/PreciseTimer/libPreciseTimer.dylib" (dlopen(./PT/PreciseTimer/libPreciseTimer.dylib, 5): image not found)
- "/usr/local/share/luajit-2.0.3/PT/PreciseTimer/libPreciseTimer.dylib" (dlopen(/usr/local/share/luajit-2.0.3/PT/PreciseTimer/libPreciseTimer.dylib, 5): image not found)
- "/usr/local/share/lua/5.1/PT/PreciseTimer/libPreciseTimer.dylib" (dlopen(/usr/local/share/lua/5.1/PT/PreciseTimer/libPreciseTimer.dylib, 5): image not found)
- "PreciseTimer" (dlopen(libPreciseTimer.dylib, 5): image not found)
Answers:
username_1: Unfortunately, the release archive for macOS has some misnamed files in it. In the following file tree, I have emphasized the two files that need to be renamed, using the syntax `original name --> new name`. I have shown these in the folder structure as bundled in the 7zip file, but if you have installed them correctly, the files in question should be in `/Users/<user>/Library/Application Support/Aegisub/automation/include/DM/DownloadManager/` and `/Users/<user>/Library/Application Support/Aegisub/automation/include/PT/PreciseTimer/`.
<pre>
DependencyControl-v0.6.3-OSX-x86_64
├── LICENSE
├── README.md
├── autoload
│ └── l0.DependencyControl.Toolbox.moon
├── include
│ ├── BM
│ │ ├── BadMutex
│ │ │ └── libBadMutex.dylib
│ │ └── BadMutex.lua
│ ├── DM
│ │ ├── DownloadManager
│ │ │ └── <b><i>libDownloadManager-osx64.dylib --> libDownloadManager.dylib</i></b>
│ │ └── DownloadManager.lua
│ ├── PT
│ │ ├── PreciseTimer
│ │ │ └── <b><i>libPreciseTimer-osx64.dylib --> libPreciseTimer.dylib</i></b>
│ │ └── PreciseTimer.lua
</pre>
username_0: Thank you very much! It works now!
username_2: Thanks for this great tool.
You have built Windows and OSX packages. Could you please build it for Linux too?
username_3: Updated the release to use newer builds and fix the naming issue. Closing.
Status: Issue closed
|
webbukkit/dynmap | 554901815 | Title: Redirect from "/" to "/index.html" does not properly use the domain from the host header
Question:
username_0: Affected versions: beta-10
**Previous and expected behavior:**
If the root path on the builtin webserver is accessed, it will return a 302 redirect to "/index.html". The full URL passed in location header for the 302 response was set to `http://<requests-host-header>:<configured-port>/index.html`.
**Current incorrect behavior:**
If the root path on the builtin webserver is accessed, it will return a 302 redirect to "/index.html". The full URL passed in location header for the 302 response was set to `http://<configured-ip-address>:<configured-port>/index.html`.
**Notes:**
I use Cloudflare in front of the webserver. I rewrite the request so that from the user perspective they are accessing over HTTPS on my domain. On the backend it accesses http, but on a different domain. No direct access is allowed to the IP address itself, except through Cloudflare, which requires requests (even re-written) have a domain name and not only an IP address.
**Workaround:**
I manually configured a 302 at the Cloudflare level to intercept requests to the "/" path and 302 redirect them to "/index.html".
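For clarity, the expected behavior boils down to preferring the request's Host header when building the Location value, and falling back to the configured address only when no Host header is present. A rough sketch of that logic (a hypothetical helper, not Dynmap's actual code):

```java
public class RedirectLocation {
    // Build the 302 Location value, preferring the Host header over the
    // configured bind address. All names here are illustrative only.
    static String locationFor(String hostHeader, String configuredIp, int configuredPort) {
        String host = (hostHeader != null && !hostHeader.isEmpty())
                ? hostHeader
                : configuredIp + ":" + configuredPort;
        return "http://" + host + "/index.html";
    }

    public static void main(String[] args) {
        // A proxied request carries the public name in its Host header.
        System.out.println(locationFor("map.example.com", "10.0.0.5", 8123));
    }
}
```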
Answers:
username_1: The same for me; beta-10 has breaking changes for my reverse proxy.
username_2: Same issue here
username_0: I believe this might work for mod_rewrite to accomplish what I did with Cloudflare:
```
# Only rewrite when the request does not map to an existing file.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^/?$ "/index.html" [L]
```
Status: Issue closed
username_4: Check out dev build of PR - https://dynmap.us/builds/dynmap/Dynmap-3.0-SNAPSHOT-spigot.jar |
knative/serving | 1077505808 | Title: Why does one Dockerfile give "Readiness probe failed" while another does not?
Question:
username_0: kubectl version --short
```
Client Version: v1.22.2
Server Version: v1.21.1
```
This Dockerfile gives "Readiness probe failed":
```
# syntax=docker/dockerfile:1
FROM golang:1.17.2-alpine as builder
WORKDIR /source
COPY ./src/go.mod ./
# COPY ./src/go.sum ./
RUN go mod download
COPY ./src ./
# COPY src/*.go ./
# output app in app folder
RUN go build -o /app
FROM golang:1.16-alpine
COPY --from=builder /app /app
EXPOSE 80
# execute app
CMD [ "/app" ]
```
Deploying a simple knative service:
```
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: helloworld-go
namespace: default
spec:
template:
spec:
containers:
- image: gcr.io/<projid>/dockerfile-micro-depend:latest
```
## Error
kubectl describe pod dockerfile-micro-depend-dhv8n-deployment-796484d788-hwpmm
```
Name: dockerfile-micro-depend-dhv8n-deployment-796484d788-hwpmm
Namespace: default
Priority: 0
Node: kind-control-plane/172.18.0.2
Start Time: Fri, 10 Dec 2021 17:37:26 +0100
Labels: app=dockerfile-micro-depend-dhv8n
pod-template-hash=796484d788
serving.knative.dev/configuration=dockerfile-micro-depend
serving.knative.dev/configurationGeneration=1
serving.knative.dev/revision=dockerfile-micro-depend-dhv8n
serving.knative.dev/revisionUID=053a72d2-c506-43cd-9c3c-bbd3e7296295
serving.knative.dev/service=dockerfile-micro-depend
[Truncated]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 82s default-scheduler Successfully assigned default/dockerfile-micro-depend-dhv8n-deployment-796484d788-hwpmm to kind-control-plane
Normal Pulling 81s kubelet Pulling image "gcr.io/<projid>/dockerfile-micro-depend@sha256:dc9b331e5e63af3067578652c8822af91c73c19226541c3495080d0049328452"
Normal Pulled 77s kubelet Successfully pulled image "gcr.io/<projid>/dockerfile-micro-depend@sha256:dc9b331e5e63af3067578652c8822af91c73c19226541c3495080d0049328452" in 4.6859265s
Normal Created 76s kubelet Created container user-container
Normal Started 76s kubelet Started container user-container
Normal Pulled 76s kubelet Container image "gcr.io/knative-releases/knative.dev/serving/cmd/queue@sha256:713bd548700bf7fe5452969611d1cc987051bd607d67a4e7623e140f06c209b2" already present on machine
Normal Created 76s kubelet Created container queue-proxy
Normal Started 76s kubelet Started container queue-proxy
Warning Unhealthy 5s (x7 over 66s) kubelet Readiness probe failed:
```
## Working:
Then I found [this "official" example](https://github.com/knative/docs/tree/main/docs/serving/samples/hello-world/helloworld-go) and used the Dockerfile, and it worked. Do you know why?
Clearly it has to do with some health checking on the pod, but I have no idea why the first Dockerfile is not working.<issue_closed>
Status: Issue closed |
bumptech/glide | 203792559 | Title: Stop GIF animation after 1 or 2 loops
Question:
username_0: I couldn't find anything in the Glide API to stop a GIF animation after 1 or 2 loops, similar to the GIF animation in Google Allo, in order to reduce processing and power consumption. Is there any method available?
Answers:
username_1: Look around `GifDrawable`, I think you can do something like `.into(new GlideImageViewTarget(iv, 2))` to limit repeats. If you use explicit targets, don't forget to add `.fitCenter()`/`.centerCrop()`.
username_0: Ya thanks for that but I'm using Glide 4.0.0 Snapshot. How to implement this?
username_1: *Heh... that's exactly why we have the [issue template](https://raw.githubusercontent.com/bumptech/glide/master/ISSUE_TEMPLATE.md) ;)*
It looks like the target constructor param was removed when Sam simplified the drawable hierarchy (3be40cf73c2330a414d3e9c3097b2b5ce7fb89ef). The only ways I found to do it right now are these:
```java
.listener(new RequestListener<Drawable>() {
@Override public boolean onResourceReady(Drawable resource, Object model, Target<Drawable> target, DataSource dataSource, boolean isFirstResource) {
if (resource instanceof GifDrawable) {
((GifDrawable)resource).setLoopCount(2);
}
return false;
}
})
```
or the equivalent custom Target override if you want it more reusable (extract anon class to reuse):
```java
.into(new DrawableImageViewTarget(imageView) {
@Override public void onResourceReady(Drawable resource, @Nullable Transition<? super Drawable> transition) {
if (resource instanceof GifDrawable) {
((GifDrawable)resource).setLoopCount(2);
}
super.onResourceReady(resource, transition);
}
});
```
Status: Issue closed
username_2: I wrote a [GifDrawableImageViewTarget](https://gist.github.com/username_2/1934d8ef796a20f242eb82ab58f920bb) to implement this requirement.
Usage:
``` java
Glide.with(getContext())
.load(R.drawable.some_gif)
.into(new GifDrawableImageViewTarget(mGifView, 1));
```
username_3: You can try it this way:
```
Glide.with(getContext())
.asGif()
.load("URL_HERE") // Replace with a valid url
.addListener(new RequestListener<GifDrawable>() {
@Override
public boolean onLoadFailed(@Nullable GlideException e, Object model, Target<GifDrawable> target, boolean isFirstResource) {
return false;
}
@Override
public boolean onResourceReady(GifDrawable resource, Object model, Target<GifDrawable> target, DataSource dataSource, boolean isFirstResource) {
resource.setLoopCount(1); // Place your loop count here.
return false;
}
})
.into(findViewById(R.id.your_image_view)); // Replace with your ImageView id.
``` |
SonarSonic/Sonar-Core | 459272180 | Title: Using breakBlock to drop items causes issues when moving blocks
Question:
username_0: Using `Block.breakBlock` like [here](https://github.com/SonarSonic/Sonar-Core/blob/1.12.2/src/main/java/sonar/core/common/block/SonarBlock.java#L77) to drop the item can cause issues with mods that need to move or break blocks without drops. |
ocadotechnology/hexagonjs | 140953296 | Title: Autocomplete: does not autocomplete...
Question:
username_0: The autocomplete no longer autocompletes, an error is thrown and the dropdown is not updated:
`Uncaught TypeError: _.menu.dropdown._.dropdownContent is not a function`
This should probably be using either a setter/getter to get the dropdown content function or use the renamed variant.<issue_closed>
Status: Issue closed |
grails/grails-doc | 760391898 | Title: Broken Link In Docs
Question:
username_0: The docs at https://docs.grails.org/4.0.5/guide/gettingStarted.html#deployingAnApplication have an anchor tag with the text "list of known deployment issues" that points to https://grails.org/wiki/version/Deployment/92, which is a broken link. The source for that page is at https://github.com/grails/grails-doc/blob/dbb11ad4039ca606e37e0d498fd56db7b5684e8c/src/en/guide/gettingStarted/supportedJavaEEContainers.adoc. |
dma-ais/AisLib | 407623375 | Title: Another error : AisLib building
Question:
username_0: I tried your solution from the closed issue "AisLib is not building". A different error came up.
[INFO] Scanning for projects...
Downloading from dma-releases: http://repository-dma.forge.cloudbees.com/release/org/apache/maven/wagon/wagon-webdav-jackrabbit/2.10/wagon-webdav-jackrabbit-2.10.pom
Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/wagon/wagon-webdav-jackrabbit/2.10/wagon-webdav-jackrabbit-2.10.pom
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] Unresolveable build extension: Plugin org.apache.maven.wagon:wagon-webdav-jackrabbit:2.10 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.apache.maven.wagon:wagon-webdav-jackrabbit:jar:2.10 @
@
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR] The project dk.dma:dma-root-pom:24 (C:\Users\Kafilah\dma-developers\rootpom\pom.xml) has 1 error
[ERROR] Unresolveable build extension: Plugin org.apache.maven.wagon:wagon-webdav-jackrabbit:2.10 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.apache.maven.wagon:wagon-webdav-jackrabbit:jar:2.10: Could not transfer artifact org.apache.maven.wagon:wagon-webdav-jackrabbit:pom:2.10 from/to dma-releases (http://repository-dma.forge.cloudbees.com/release/): repository-dma.forge.cloudbees.com: Unknown host repository-dma.forge.cloudbees.com -> [Help 2]
Thanks
Regards,
Oki
Answers:
username_1: version 25 of the root pom is available in the central repository now [https://search.maven.org/artifact/dk.dma/dma-root-pom/25/pom](https://search.maven.org/artifact/dk.dma/dma-root-pom/25/pom)
Status: Issue closed
|
easy-swoole/pay | 961335705 | Title: [Alipay] Documentation error for "transfer to Alipay account"
Question:
username_0: ```
$order = new \EasySwoole\Pay\AliPay\RequestBean\Transfer();
$order->setSubject('测试');
$order->setAmount('0.01');
/*
Payee account type. Possible values:
1. ALIPAY_USERID: the unique Alipay user number tied to the Alipay account, a 16-digit number starting with 2088.
2. ALIPAY_LOGONID: the Alipay login ID, in email or mobile phone number format.
*/
$order->setPayeeType('ALIPAY_LOGONID');
$order->setPayeeAccount('<EMAIL>');
```
username_0: None of the setAmount / setPayeeType / setPayeeAccount methods actually exist.<issue_closed>
Status: Issue closed |
neo-one-suite/neo-one | 489917961 | Title: Playground's NEO tracker instance doesn't work
Question:
username_0: ### Description
The playground's local instance of NEO tracker doesn't seem to work.
### Steps to Reproduce
1. Go to the playground
2. `yarn neo-one build`
3. `yarn start`
4. Press a button like `console.log` on the homepage and see the iframe produce a link to the transaction on NEO tracker
5. Click the link and see that the NEO tracker instance is not up
**Expected behavior:** The link to go to a local NEO tracker instance
**Actual behavior:** Nothing is up
**Reproduces how often:** 100%
### Additional Information
None.
Answers:
username_1: I think we should do the following:
1. Fully open-source neotracker and eliminate the private repo. Then we can run it on the neotracker redesign infra.
2. Eliminate @neo-one/monitor since it doesn't exist anymore.
3. Re-integrate it with the neo-one cli <- I've already done this on a local branch, just need 2/ to be done.
Status: Issue closed
|
moleculerjs/moleculer | 988446209 | Title: Error when creating a project using the NATS Streaming transporter via moleculer init
Question:
username_0: ## Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [yes] I am running the latest version
- [yes] I checked the documentation and found no answer
- [yes] I checked to make sure that this issue has not already been filed
- [yes] I'm reporting the issue to the correct repository
## Current Behavior
Hi @icebob, I'm getting an error when creating a project (via npx) with `NATS Streaming` as the transporter.
## Expected Behavior
Creating a project must succeed when `NATS Streaming` is selected as the transporter.
## Failure Information
```
Rendra@MacBookPro2019 ~/Projects/Moleculer % npx moleculer-cli -c moleculer init project moleculer-nats-streaming
Template repo: moleculerjs/moleculer-template-project
Downloading template...
? Add API Gateway (moleculer-web) service? Yes
? Would you like to communicate with other nodes? Yes
? Select a transporter NATS Streaming
? Would you like to use cache? Yes
? Select a cacher solution Redis
? Add DB sample service? Yes
? Would you like to enable metrics? Yes
? Would you like to enable tracing? Yes
? Add Docker & Kubernetes sample files? Yes
? Use ESLint to lint your code? No
Create 'moleculer-nats-streaming' folder...
? Would you like to run 'npm install'? Yes
Running 'npm install'...
npm ERR! code ERESOLVE
npm ERR! ERESOLVE could not resolve
npm ERR!
npm ERR! While resolving: moleculer@0.14.16
npm ERR! Found: node-nats-streaming@0.2.6
npm ERR! node_modules/node-nats-streaming
npm ERR! node-nats-streaming@"^0.2.6" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peerOptional node-nats-streaming@"^0.0.51 || ^0.3.0" from moleculer@0.14.16
npm ERR! node_modules/moleculer
npm ERR! moleculer@"^0.14.13" from the root project
npm ERR! peer moleculer@"^0.12.0 || ^0.13.0 || ^0.14.0" from moleculer-db@0.8.15
npm ERR! node_modules/moleculer-db
npm ERR! moleculer-db@"^0.8.4" from the root project
npm ERR! 2 more (moleculer-db-adapter-mongo, moleculer-web)
npm ERR!
npm ERR! Conflicting peer dependency: node-nats-streaming@0.3.2
npm ERR! node_modules/node-nats-streaming
npm ERR! peerOptional node-nats-streaming@"^0.0.51 || ^0.3.0" from moleculer@0.14.16
npm ERR! node_modules/moleculer
npm ERR! moleculer@"^0.14.13" from the root project
npm ERR! peer moleculer@"^0.12.0 || ^0.13.0 || ^0.14.0" from moleculer-db@0.8.15
[Truncated]
1351 error moleculer-db@"^0.8.4" from the root project
1351 error 2 more (moleculer-db-adapter-mongo, moleculer-web)
1351 error
1351 error Conflicting peer dependency: node-nats-streaming@0.3.2
1351 error node_modules/node-nats-streaming
1351 error peerOptional node-nats-streaming@"^0.0.51 || ^0.3.0" from moleculer@0.14.16
1351 error node_modules/moleculer
1351 error moleculer@"^0.14.13" from the root project
1351 error peer moleculer@"^0.12.0 || ^0.13.0 || ^0.14.0" from moleculer-db@0.8.15
1351 error node_modules/moleculer-db
1351 error moleculer-db@"^0.8.4" from the root project
1351 error 2 more (moleculer-db-adapter-mongo, moleculer-web)
1351 error
1351 error Fix the upstream dependency conflict, or retry
1351 error this command with --force, or --legacy-peer-deps
1351 error to accept an incorrect (and potentially broken) dependency resolution.
1351 error
1351 error See /Users/username_0/.npm/eresolve-report.txt for a full report.
1352 verbose exit 1
```
Answers:
username_0: It succeeds after changing `node-nats-streaming` to the latest version (`^0.3.2`).
This is the new package.json:
```
{
"name": "moleculer-nats-streaming",
"version": "1.0.0",
"description": "My Moleculer-based microservices project",
"scripts": {
"dev": "moleculer-runner --repl --hot services/**/*.service.js",
"start": "moleculer-runner",
"cli": "moleculer connect STAN",
"ci": "jest --watch",
"test": "jest --coverage",
"dc:up": "docker-compose up --build -d",
"dc:logs": "docker-compose logs -f",
"dc:down": "docker-compose down"
},
"keywords": [
"microservices",
"moleculer"
],
"author": "",
"devDependencies": {
"jest": "^26.6.3",
"jest-cli": "^26.6.3",
"moleculer-repl": "^0.6.4"
},
"dependencies": {
"moleculer-web": "^0.9.1",
"moleculer-db": "^0.8.4",
"moleculer-db-adapter-mongo": "^0.4.7",
"node-nats-streaming": "^0.3.2",
"ioredis": "^4.17.3",
"moleculer": "^0.14.13"
},
"engines": {
"node": ">= 10.x.x"
},
"jest": {
"coverageDirectory": "../coverage",
"testEnvironment": "node",
"rootDir": "./services",
"roots": [
"../test"
]
}
}
``` |
kristijanhusak/laravel-form-builder | 1063910537 | Title: How to add data attribute to form element
Question:
username_0: I want a form input like this.
`<input data-toggle="touchspin" type="text" >`
I am new to this package, but I haven't seen anything related to this in the docs.
Answers:
username_1: use the `attr` option on the field, like this:
```
->add ('tst', 'text', [
'attr' => [
'data-toggle' => 'touchspin'
]
])
```
username_0: Thanks
Status: Issue closed
|
apache/trafficcontrol | 1098537270 | Title: Update Server Capability Delivery Services table to use AG-Grid instead of jQuery "dataTables"
Question:
username_0: ## This Improvement request (usability, performance, tech debt, etc.) affects these Traffic Control components:
- Traffic Portal
## Current behavior:
The `traffic_portal/app/src/common/modules/table/serverCapabilityDeliveryServices/TableServerCapabilityDeliveryServicesController.js` table uses the jQuery "dataTables" plugin.
## New behavior:
The `traffic_portal/app/src/common/modules/table/serverCapabilityDeliveryServices/TableServerCapabilityDeliveryServicesController.js` table uses the faster, actively maintained AG-Grid component. |
PurpleI2P/i2pd | 188264067 | Title: Error while trying to install i2pd in ubuntu using sudo dpkg -i i2pd_2.10.0-1( )1_i386.deb
Question:
username_0: dpkg: error processing package i2pd (--install):
dependency problems - leaving unconfigured
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (231-9ubuntu1) ...
Errors were encountered while processing:
i2pd
Answers:
username_1: have you tried:
apt-get -f install
username_2: dpkg is only the package manager; apt downloads and maintains the dependencies. The command above will install all missing dependencies or remove problematic packages (in this case, the former).
username_0: @username_1 @username_2 Thanks. Should I use jessie, wheezy, trusty or xenial?
username_3: @username_0 What version of Ubuntu are you on?
username_0: @username_3 16.10 32bits.
username_1: have you tried installing the packages that it says are missing?
username_3: @username_0 install the packages with this command:
```
sudo apt-get install libboost-date-time1.58.0 libboost-filesystem1.58.0 \
libboost-program-options1.58.0 libboost-system1.58.0
```
username_0: I solved the problem using i2pd ppa packages for Ubuntu https://i2p.rocks/blog/ppa-repository-for-i2pd-is-available.html
@username_4 You can mark this as closed
Status: Issue closed
|
enactjs/agate-apps | 361926003 | Title: Valet Mode
Question:
username_0: Add valet mode option. This option would lock down certain apps and settings to prevent an unauthorized user from performing certain functions. Radio (and AC?) settings would revert to pre-valet mode when exiting valet mode.
Answers:
username_0: Going to downplay this for right now based on conversations about demo.
username_1: blocked by https://github.com/enactjs/agate-apps/pull/49 |
denoland/deno | 549211237 | Title: Transpile JS Compiler Flag
Question:
username_0: While we have been working on richer Node.js compatibility, like all the built-in modules and a fuller-featured `require()`, one of the common use cases is just simple conversion from CommonJS to ESM.
This is something that Deno supports, but the ability to get access to it is a bit complicated. You have to use a `tsconfig.json` on the command line with the `"checkJs": true` compiler option set. This also causes some side effects, because a lot of CommonJS modules aren't type safe, and then you get all sorts of random errors that aren't useful.
With the implementation of the userland `Deno.transpileOnly()` API, which accepts all sorts of stuff and outputs JS ESM, we don't quite have the equivalent on the command line.
I think it would make sense to have a flag which would do a transpile only on JS modules, but a full compile on TS modules, without the need for a `tsconfig.json`. This would mean that we would be able to consume CommonJS JS modules and cache the results while throwing "caution to the wind" as far as ensuring the type safety of the files. This means a whole ecosystem of CommonJS modules could be supported in the runtime. It also would mean that AMD JS modules would be consumable as well.
The biggest risk I see is that it wouldn't be a magic wand, though end users might think it is. For example, if there is a dependency on a built-in module in Node.js somewhere in the dependency chain. No good. Also it wouldn't solve resolving other modules, where the specifier needs to be modified in order to resolve to a resource, which would be very common. We don't want to go down the rabbit hole of magical module resolution. The only thing we could possibly do is adjust the things that look like an extension-less specifier on transpile to an extensioned specifier.
Answers:
username_1: @username_0 do you still want to pursue this issue?
Status: Issue closed
username_0: I think we can leave it for now and see if a compelling use case re-emerges. |
epam/cloud-pipeline | 427807157 | Title: All the installation assets shall be updated with cloud-specific parameters
Question:
username_0: The following items are being registered in the Cloud Pipeline during the *fresh* installation:
* Docker images (`deploy/docker/cp-tools`)
* Folder template (`deploy/docker/cp-api-srv/folder-templates`)
* Pipelines templates (`workflows/pipe-templates`)
* Demo pipelines (`workflows/pipe-demo`)
All of them have an option to specify default compute environment in terms of the `VM size`/`Instance type`
Current installation routines use the value from the `spec.json` files as is. Those files contain a mixture of AWS/Azure options.
This shall be fixed to use the `current` cloud provider's default `VM size`/`Instance type`. The current provider is the one that is set as the default region during installation.<issue_closed>
Status: Issue closed |
bcgov/entity | 592824496 | Title: TBD
Question:
username_0: ## Description:
Acceptance for a Task:
- [ ] Requires deployments
- [ ] Add/ maintain selectors for QA purposes
- [ ] Test coverage acceptable
- [ ] Linters passed
- [ ] Peer Reviewed
- [ ] PR Accepted
- [ ] Production burn in completed
Answers:
username_0: ### Test Notes
As per the description, the following:
- update NR query to use new legal-api endpoint
- set debugging expiration date to tomorrow (NRs that expire today can't be consumed)
- add handling in case of tasks/filings 404 (ie, NR data not yet in db)
- use only approved NR name (not necessarily the first one)
- don't display loading message when signing out ("Loading Dashboard" doesn't make sense)
- fix NR display that can sometimes be "NR%201234567"
Status: Issue closed
username_0: ## Description:
This ticket is related to #2903 and #3103.
A few things have changed/evolved, requiring some refactoring. The following need to be updated:
- [x] update NR query to use new legal-api endpoint
- [x] set debugging expiration date to tomorrow (NRs that expire today can't be consumed)
- [x] add handling in case of tasks/filings 404 (ie, NR data not yet in db)
- [x] use only approved NR name (not necessarily the first one)
- [x] don't display loading message when signing out ("Loading Dashboard" doesn't make sense)
- [x] fix NR display that can sometimes be "NR%201234567"
Acceptance for a Task:
- [ ] Requires deployments
- [ ] Add/ maintain selectors for QA purposes
- [ ] Test coverage acceptable
- [ ] Linters passed
- [ ] Peer Reviewed
- [ ] PR Accepted
- [ ] Production burn in completed
username_0: Oops, disregard previous comment and close/reopen. I blame the Close and Comment button being too close to the Comment button. :P
username_0: ### Test Notes
- As a result of this ticket, the NR Number in the URL no longer supports the underscore (_). Just use a space (which most browsers convert to %20). I tested various URLs on various browsers and it looks good to me.
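The "NR%201234567" display issue mentioned in the checklist is a percent-encoding artifact. The actual fix lives in the TypeScript UI, but the decoding step can be sketched in Python (hypothetical helper name, for illustration only):

```python
from urllib.parse import unquote


def normalize_nr_display(nr: str) -> str:
    # Percent-encoded NR numbers such as "NR%201234567" come from the URL;
    # decode them before display so the user sees "NR 1234567".
    return unquote(nr)
```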
username_1: NR 3023504 says expires tomorrow in the filings ui, but says expired in the create ui.
Will follow-up tomorrow.
username_0: The NR data from Namex says the NR expired some time in 2019, so the Create UI is correct. In the Filings UI I currently -- temporarily -- override the date to "tomorrow" just so we can see stuff and work with the NR. This temporary code will have to be removed some time. BTW, soon the Create UI will override the date similarly so they can show stuff and work with the NR.
Ideally, we should get Lorna to reset the NR data so they're not expired :) |
aws-amplify/amplify-cli | 741159876 | Title: NullPointer in CognitoUser.sendMFACode when using CUSTOM_AUTH flow
Question:
username_0: **Describe the bug**
I'm using `authenticationFlowType: 'CUSTOM_AUTH'` and have the 3 lambda triggers set up. I guess the important lambda would be the `DefineAuthChallenge`, which I've included the code below.
When the user enables MFA via phone number, it seems that there is a null pointer being thrown at [CognitoUser.js#L890](https://github.com/aws-amplify/amplify-js/blob/master/packages/amazon-cognito-identity-js/src/CognitoUser.js#L890).
I get the error:
```
Cannot read property 'NewDeviceMetadata' of undefined
```
The `dataAuthenticate` object that I guess is expected to have the `AuthenticationResult ` property instead looks like the JSON block below, which is the response of the next challenge:
```json
{
"ChallengeName": "CUSTOM_CHALLENGE",
"ChallengeParameters": {
"trigger": "true"
},
"Session": "B95sgt_NM93OJPx8Uc47....."
}
```
The `DefineAuthChallenge` lambda:
```python
def define_auth_challenge(event, context):
"""
https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-define-auth-challenge.html#aws-lambda-triggers-define-auth-challenge-example
:param event:
:return:
"""
session = event['request']['session']
session_len = len(session)
response = event['response']
if session_len == 1 and \
session[0]['challengeName'] == 'SRP_A':
response['issueTokens'] = False
response['failAuthentication'] = False
response['challengeName'] = 'PASSWORD_VERIFIER'
elif session_len == 2 and \
session[1]['challengeName'] == 'PASSWORD_VERIFIER' and \
session[1]['challengeResult'] is True:
response['issueTokens'] = False
response['failAuthentication'] = False
response['challengeName'] = 'CUSTOM_CHALLENGE'
elif session_len == 3 and \
session[2]['challengeName'] == 'CUSTOM_CHALLENGE' and \
session[2]['challengeResult'] is True:
response['issueTokens'] = True
response['failAuthentication'] = False
else:
response['issueTokens'] = False
response['failAuthentication'] = True
return event
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create a User Pool
2. Add a CUSTOM_AUTH flow with a CUSTOM_CHALLENGE lambdas
[Truncated]
npm: 6.13.7
stylelint-config-rational-order: 0.1.2
stylelint-config-sass-guidelines: 7.0.0
stylelint: 13.1.0
tslint-to-eslint-config: 0.5.1
typescript: 3.7.5
```
</details>
**Smartphone (please complete the following information):**
**Additional context**
Add any other context about the problem here.
**Sample code**
Include additional sample code or a sample repository to help us reproduce the issue. (Be sure to remove any sensitive data)
**_You can turn on the debug mode to provide more info for us by setting window.LOG_LEVEL = 'DEBUG'; in your app._**
Answers:
username_1: Got it, thanks for the information on this @username_0 and getting back to me quickly. I went ahead and transferred this issue to the Amplify CLI repo to see if they have recommendations in regards to getting CUSTOM_AUTH working with captcha support. |
andrewlock/NetEscapades.AspNetCore.SecurityHeaders | 288428182 | Title: `AddCustomHeader` throws misleading exception
Question:
username_0: ```csharp
var collection = new HeaderPolicyCollection();
collection.AddCustomHeader(null, "asdf");
```
The snippet above throws an `ArgumentNullException` with the message "Value cannot be null.". This is misleading because it is actually the `header` parameter that is null, not the `value` parameter.<issue_closed>
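The suggested fix is to name the offending parameter in the exception. The library is C#, but the same principle can be sketched in Python (hypothetical `add_custom_header`, not the library's actual API):

```python
def add_custom_header(header, value):
    # Name the parameter explicitly in the error so the caller knows
    # which argument was None, instead of a generic "Value cannot be null."
    if header is None:
        raise TypeError("header must not be None")
    if value is None:
        raise TypeError("value must not be None")
    return (header, value)
```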
Status: Issue closed |
unlock-protocol/unlock | 415378894 | Title: Adding `view` only functions such as `name()` increase gas costs
Question:
username_0: **Describe the bug**
Adding `external view` or `view` functions to the PublicLock contract seems to increase the gas costs for transactions such as `purchaseFor` and `transferFrom` by about 22 gas per view function.
**To Reproduce**
Steps to reproduce the behavior:
1. Call `purchaseFor` and record gas costs
2. Add a new `view` function such as
```
function getSomething() public view {}
```
3. Call `purchaseFor` and record gas costs
**Expected behavior**
Gas should not be impacted by adding another read-only method.
**Additional context**
I tried investigating for a bit already... no leads...
Answers:
username_1: I'd be curious to hear the rationale behind that. Also, arguably (at least for us), these functions are more likely to be used not in transactions but when "reading" the contract (so no gas involved?)
username_0: Yea... I don't have a clue what's going on here. I'm hoping that by walking away from this for a bit and returning maybe I'll discover I was making a silly mistake somewhere.
username_0: Confirmed the issue with this simple example:
```
contract Test {
function testWrite() public {}
function get() public pure returns (bytes memory) {
return bytes('yo');
}
}
```
Commenting out the `get()` function makes calling `testWrite` 22 gas cheaper. Copy-pasting a `get2()` makes calling `testWrite` cost 44 more. This trend seems to usually continue: each new view function adds 22 gas to the write function, although sometimes adding one more actually reduces costs. So for this simple example we see:
- no get: 122
- 1 get: 144
- 2 gets: 166
- 3 gets: 188
- 4 gets: 210
- 5 gets: 232
- 6 gets: 210 ! (diverge from pattern)
- 7 gets: 210
- 8 gets: 232
- 9 gets: 232
If you look at the assembly executed, there is significantly more going on when `get` is added. WHY?! I don't understand.
username_0: Posted this question for help https://ethereum.stackexchange.com/questions/68098/why-does-adding-a-view-function-make-write-functions-cost-more-gas
username_0: An answer! See https://ethereum.stackexchange.com/a/68148/51260
CC @username_1
Begs the question, do we attempt to optimize for this? I'll give this some thought.
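The linked answer attributes the overhead to the function-selector dispatch sequence. A toy Python model of the observed numbers (assuming a flat 22 gas per extra selector comparison, which only holds until the compiler changes dispatch strategy around 6 extra functions):

```python
def toy_dispatch_gas(base: int, extra_view_functions: int, per_selector: int = 22) -> int:
    # Toy model of EVM function dispatch: the compiled contract compares
    # the call's selector against each function selector in turn, so each
    # extra function adds a roughly constant overhead to other calls.
    return base + per_selector * extra_view_functions
```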
username_0: Every function we add (even read-only functions) potentially makes other functions more expensive. We could:
- Limit the API where we can. For example, I added 3 ways of grantingKeys, maybe we should only have 1.
- Move non-critical APIs to a different contract. For example, `getTokenIdFor(account)`. I'm not sure the best pattern here, but maybe the main contract has a fallback function which uses delegatecall to then try the non-critical contract instead. The non-critical contract has a fallback function which reverts.
Status: Issue closed
username_0: Closing this as I don't believe there is any more action.
Interesting lesson about Solidity though. Check out the stack exchange answer and the github issue they link for even more discussion.
CC @nfurfaro @username_1 |
onigetoc/m3u8-PHP-Parser | 273243595 | Title: fichier.php not found
Question:
username_0: I found a jsfiddle http://jsfiddle.net/username_1/d2ntn6x2/ of yours which appears to be the same as this but self-hosted. I just copied all the code and pasted it into separate files. After running the code and clicking the button I saw that error (fichier.php not found). Can you help me?
Thanks in advance.
Status: Issue closed
Answers:
username_1: Hi, I saw the question in my email; you should have an HLS m3u8 video player like this one:
http://www.scriptsmashup.com/product/video-pro-skin-builder-wordpress-html5-youtube-plugin
username_0: Okay, thanks for the answer. I have a question. Why, when I use [your parser](http://codesniff.com/scripts/GC-m3u-parser/GC-m3u-parser.php), is everything okay, but when I use [this](https://github.com/username_1/m3u8-PHP-Parser/blob/master/m3u-parser-simple.php) everything is undefined, and when I use [this](https://github.com/username_1/m3u8-PHP-Parser/blob/master/m3u-parser.php) I get only one channel, named iptv? (The php files are on my server.)
username_1: The js file is only an example and it may not be up to date; you should modify it to your needs. Load the php file or see your console.log to parse it with JavaScript or jQuery.
username_0: but [this thing](http://codesniff.com/scripts/GC-m3u-parser/GC-m3u-parser.php) works. Could you please send me that file?
username_1: I added this jQuery codes for simple parser.
https://github.com/username_1/m3u8-PHP-Parser/blob/master/getm3upls-simple.js
username_0: Thank you so much. I'll try it tomorrow (here it's 11:00 pm).
DynamoDS/DynamoRevit | 320600758 | Title: Rhythm failed to load. Error message
Question:
username_0: Hello everyone
Could you help me solve this problem?
Thank you very much
Best regards to the community
username_0
Error message
System.IO.FileLoadException
When I load the Rhythm assembly, Version=2018.3.15.0, I get 4 warnings with this information:
While loading the assembly Rhythm, Version=2018.3.15.0, Culture=neutral, PublicKeyToken=null, Dynamo detected that the dependency RevitAPI, Version=18.0.0.0, Culture=neutral, PublicKeyToken=null had already been loaded with an incompatible version. Another Revit add-in has probably loaded this assembly. Try uninstalling other add-ins and starting Dynamo again. Dynamo may be
unstable in this state.
One of the following assemblies has probably loaded the incompatible version:
UIFrameworkInterop, UIFrameworkInterop, UIFrameworkInterop, UIFramework, UIFramework, UIFrameworkServices, AddInManagerUI, AddInManagerUI, APIInterop, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI,
RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, RevitAPI, GeomUtilAPI, GeomUtilAPI, GeomUtilAPI, GeomUtilAPI, UtilityAPI, RevitDBAPI, RevitDBAPI, RevitDBAPI, RevitDBAPI, RevitDBAPI, GraphicsAPI, GraphicsAPI, FamilyDBAPI, FamilyDBAPI, FamilyDBAPI, FamilyDBAPI, FamilyDBAPI, FamilyDBAPI, EssentialsDBAPI, EssentialsDBAPI, EssentialsDBAPI, EssentialsDBAPI, EssentialsDBAPI, EssentialsDBAPI, RoomAreaPlanDBAPI, RoomAreaPlanDBAPI, RoomAreaPlanDBAPI, RoomAreaPlanDBAPI, RoomAreaPlanDBAPI, ArrayElemsDBAPI, ArrayElemsDBAPI, ArrayElemsDBAPI, ArrayElemsDBAPI,
ArrayElemsDBAPI, StructuralDBAPI, StructuralDBAPI, StructuralDBAPI, StructuralDBAPI, StructuralDBAPI, StructuralDBAPI, HostObjDBAPI, HostObjDBAPI, HostObjDBAPI, HostObjDBAPI, HostObjDBAPI, HostObjDBAPI, SculptingDBAPI, SculptingDBAPI, SculptingDBAPI, SculptingDBAPI, SculptingDBAPI, ElementGroupDBAPI, ElementGroupDBAPI, ElementGroupDBAPI, ElementGroupDBAPI, CurtainGridFamilyDBAPI, CurtainGridFamilyDBAPI, CurtainGridFamilyDBAPI, CurtainGridFamilyDBAPI, SiteDBAPI, SiteDBAPI, SiteDBAPI, SiteDBAPI, SiteDBAPI, DetailDBAPI, DetailDBAPI, DetailDBAPI,
DetailDBAPI, DetailDBAPI, BuildingSystemsDBAPI, BuildingSystemsDBAPI, BuildingSystemsDBAPI, BuildingSystemsDBAPI, BuildingSystemsDBAPI, BuildingSystemsDBAPI, BuildingSystemsDBAPI, EnergyAnalysisDBAPI, EnergyAnalysisDBAPI, EnergyAnalysisDBAPI, AnalysisAppsDBAPI, AnalysisAppsDBAPI, AnalysisAppsDBAPI, AnalysisAppsDBAPI, StructuralAnalysisDBAPI, StructuralAnalysisDBAPI, StructuralAnalysisDBAPI, StructuralAnalysisDBAPI, StructuralAnalysisDBAPI, StructuralAnalysisDBAPI, RebarDBAPI, RebarDBAPI, RebarDBAPI, RebarDBAPI, RebarDBAPI,
AssemblyDBAPI, AssemblyDBAPI, AssemblyDBAPI, AssemblyDBAPI, APIDBAPI, APIDBAPI, APIDBAPI, APIDBAPI, APIDBAPI, APIDBAPI, DPartDBAPI, DPartDBAPI, DPartDBAPI, DPartDBAPI, StairRampDBAPI, StairRampDBAPI, StairRampDBAPI, StairRampDBAPI, StairRampDBAPI, InterfaceUtilAPI, InterfaceUtilAPI, InterfaceUtilAPI, InterfaceUtilAPI, InterfaceUtilAPI, InterfaceUtilAPI, InterfaceUtilAPI, InterfaceAPI, InterfaceAPI, PointCloudAccessAPI, PointCloudAccessAPI, NumberingDBAPI, NumberingDBAPI, NumberingDBAPI, NumberingDBAPI, MassingDBAPI, MassingDBAPI, MassingDBAPI, MassingDBAPI,
RevitAPIIFC, RevitAPIIFC, RevitAPIIFC, RevitAPIIFC, RevitAPIIFC, RevitAPIIFC, RevitAPIIFC, RevitAPIIFC, RevitAPIIFC, RevitAPIIFC, RevitAPIIFC, IntfIFCAPI, IntfIFCAPI, IntfIFCAPI, IntfIFCAPI, IntfIFCAPI, IntfIFCAPI, IntfIFCAPI, IntfIFCAPI, IntfIFCAPI, AddInManager, AddInManager, AddInManager, AddInManager, AddInManager, KeynoteDBServer, StraightSegmentCalculationServers, FittingAndAccessoryCalculationServers, RvtSteelConnectionsDB, RvtSteelConnectionsDB, RvtSteelConnectionsDB, RvtSteelConnectionsDB, CollaborateDB, CollaborateDB, CollaborateDB,
CollaborateDB, RSCloudClient, RSCloudClient, RSCloudClient, AddInJournaling, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitAPIUI, RevitUIAPI, RevitUIAPI, RevitUIAPI, RevitUIAPI, RevitUIAPI, RevitUIAPI, DesktopMFCAPI, DesktopMFCAPI, DesktopMFCAPI, RevitMFCAPI, RevitMFCAPI, RevitMFCAPI, BuildingSystemsUIAPI, BuildingSystemsUIAPI, BuildingSystemsUIAPI, BuildingSystemsUIAPI, APIUIAPI, APIUIAPI, APIUIAPI,
APIUIAPI, APIUIAPI, APIUIAPI, EssentialsUIAPI, EssentialsUIAPI, EssentialsUIAPI, EssentialsUIAPI, EssentialsUIAPI, EnergyAnalysisUtilitiesAPI, DetailUIAPI, RaaSApplication, RaaSApplication, RaaSApplication, RaaSApplication, RaaSApplication, RaaSApplication, RaaSApplication, RaaSApplication, AutoLoader, AutoLoader, EnergyAnalysis, EnergyAnalysis, EnergyAnalysis, EnergyAnalysis, FabricationPartBrowser, FabricationPartBrowser, FabricationPartBrowser, FabricationPartBrowser, FabricationPartBrowser, FittingAndAccessoryCalculationUIServers,
FittingAndAccessoryCalculationUIServers, FittingAndAccessoryCalculationUIServers, IFCExportUI, IFCExportUI, IFCExportUI, IFCExportUI, IFCExportUI, ImportShape, ImportShape, KeynoteUIServer, KeynoteUIServer, MemberForces, MemberForces, MemberForces, MemberForces, ObjectNumberingUI, ObjectNumberingUI, ObjectNumberingUI, PointCloudSnappingServer, PointCloudSnappingServer, PressureLossReport, PressureLossReport, SectionProperties, SectionProperties, SectionProperties, SectionProperties, SectionProperties, SpaceNaming, SpaceNaming, BatchPrint, BatchPrint, BatchPrint,
Collaborate, Collaborate, Collaborate, Collaborate, Collaborate, Collaborate, Collaborate, CollaborateBrowser, CollaborateBrowser, CollaborateBrowser, CollaborateBrowser, CollaborateBrowser, SkyscraperClientHost, SkyscraperClientHost, SkyscraperClientHost, SkyscraperClientHost, C4RNET, C4RNET, C4RNET, eTransmitForRevit, eTransmitForRevit, eTransmitForRevit, ModelReview, ModelReview, ModelReview, ModelReview, Microsoft.GeneratedCode, Microsoft.GeneratedCode, RevitDBLink, RevitDBLink, SiteDesigner, SiteDesigner, SiteDesigner, WorksharingLib, WorksharingLib,
WorksharingCommand, WorksharingCommand, SSONETUI, SSONETUI, SSONETUI, SSONETUI, SSONETUI, SSONET, SSONET, SSONET, Elementos intersectantes, Elementos intersectantes, Repositorio común, Repositorio común, Elementos vinculados, Elementos vinculados, Estructura de carpetas, Estructura de carpetas, Identificación de usuario, Identificación de usuario, Información ampliada, Información ampliada, Mediciones y presupuesto, Mediciones y presupuesto, Mediciones y presupuesto, Utilidades de emplazamiento, Utilidades de emplazamiento, CoinsSectionBoxApp2017,
CoinsSectionBoxApp2017, SmartMonkey, SmartMonkey, SmartMonkey, Cost-It, Cost-It, Cost-It, cyrevit, cyrevit, cyrvtcom, cyrvtcom, DynamoRevitVersionSelector, DynamoRevitVersionSelector, DynamoRevitDS, DynamoRevitDS, RevitServices, RevitServices, ExportadorHULC2017, ExportadorHULC2017, ExportViewSelectorAddin, ExportViewSelectorAddin, FormItConverterRibbon, FormItConverterRibbon, AXMImporter, AXMImporter, RMC2017, RMC2017, RvtSteelConnectionsUI, RvtSteelConnectionsUI, RvtSteelConnectionsUI, RvtSteelConnectionsUI, RvtSteelConnectionsUI, RevitAPIUIMacrosInterop,
RevitAPIUIMacrosInterop, RevitAPIMacrosInterop, RevitAPIMacrosInterop, RevitAPIMacrosInterop, AddInJournalClient, RSEnterpriseClientInterop, RSEnterpriseClientInterop, RSEnterpriseClientInterop, RSEnterpriseClientInterop, RSEnterpriseClientInterop, RSEnterpriseClientInterop, RSEnterpriseClient, RSEnterpriseClient, RSEnterpriseClient, RSEnterpriseClient, RSEnterpriseClient, RSEnterpriseClient, RevitAPIBrowserUtils, RevitAPIBrowserUtils, RevitAPIBrowserUtils, RevitAPIBrowserUtils, RevitAPIBrowserUtils, RevitAPIBrowserUtils, RevitAPIBrowserUtils, RevitAPIBrowserUtils, RebarUIStartUpAPI, RebarUIStartUpAPI, RebarUIStartUpAPI, RevitNodes, RevitNodes, RevitRaaS, RevitRaaS, RevitRaaS, DSRevitNodesUI, DSRevitNodesUI, Beaker, DynaTools, DynaTools
## Dynamo version
2.0.0.4665
## Revit version
2017.2.3
Windows 10
## What did you do?
Load Rhythm package
## What did you expect to see?
Load of package without errors
## What did you see instead?
4 error messages as the exposed
Answers:
username_1: Looks like you're loading a package built for 2018 on 2017. It's probably safe to ignore if you don't notice any failures of the nodes.
username_2: I'm referencing the Revit 2018 API, you should be able to simply dismiss the errors and carry on in Revit 2017. |
lizzieinvancouver/decsens | 593997956 | Title: title options
Question:
username_0: 1. The illusion of declining temperature sensitivity with warming
2. A simple explanation for declining temperature sensitivity with warming
3. As climate change accelerates biology, chasing statistical artifacts ensues
Status: Issue closed
Answers:
username_0: Seems we're going with: A simple explanation for declining temperature sensitivity with warming |
remyroy/CDDA-Game-Launcher | 439725991 | Title: Unhandled exception: [Enter a title]
Question:
username_0: * Description: [Enter what you did and what happened]
* Version: 1.3.16
* OS: Windows-8.1-6.3.9600 (64-bit)
* Type: `<class 'TypeError'>`
* Value: '<' not supported between instances of 'NoneType' and 'str'
* Traceback:
```
File "cddagl\ui.py", line 3078, in lb_http_finished
```
Status: Issue closed
Answers:
username_1: This is fixed in 1.3.19. See #281 for more details. |
TypeStrong/ts-node | 236000908 | Title: Cache doesn't work properly with multithreaded usage
Question:
username_0: My team runs its test pass by forking off multiple parallel mocha processes. When these run over the same code base, one process sometimes picks up an empty file from the cache that another process is in the middle of writing.
I believe a simple fix for this is to verify that the cached file size is non-zero before considering it valid. I'm happy to submit a PR to that effect.
Answers:
username_1: Maybe this is naive, but if it's picking up an empty file couldn't it pick up the file at any other (non-empty) point?
username_0: Typically for FS writes (at least in file systems I'm familiar with), the file is first allocated with zero length, then the content write occurs growing the metadata size > 0. There are streaming write systems which could potentially still run afoul of this problem, but this seems to solve the problem on my Mac OSX and CoreOS Linux machines.
username_0: This could be made more robust by saving some moniker in the file once it has been completely written and checking that before considering the file valid, or saving another file on disk to indicate the same. The first of these would probably slow things down a bit by requiring a file scan to look up the information, but the last might be faster at the expense of twice the file count.
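The non-zero-size check proposed earlier in the thread can be sketched in Python (ts-node itself is TypeScript; this is only an illustration of the idea, with a hypothetical helper name):

```python
import os


def cache_entry_is_valid(path: str) -> bool:
    # Treat a missing or zero-length cache file as invalid: a concurrent
    # writer typically creates the file empty before streaming content
    # into it, so a zero-length file is likely mid-write.
    try:
        return os.path.getsize(path) > 0
    except OSError:
        return False
```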
username_1: Makes sense. This might be a good first step. Does it seem reasonable to switch to a single `.json` file as the cache in the future and use a lock file to avoid duplicate read/write?
username_0: I'm not quite sure I understand what you're proposing. Do you mean store the state of the cache in a single `.json` file which is protected by a separate lock file?
If so, as long as that lock is only held when updating cache state, that seems worth a try.
username_1: Wouldn't it also need it on read, so that this situation we have now doesn't occur again?
username_0: Sorry, yes - I was assuming a reader/writer lock.
username_0: Can we get https://github.com/TypeStrong/ts-node/pull/356 checked in and pushed as a temporary improvement? That would really help my team out :)
username_0: Would you be ok taking a dependency on something like https://www.npmjs.com/package/fs-ext?
username_1: What's it for? I think if it can solve your problem and doesn't break anything backward compatible (or does in a big forward thinking way so it's an obvious v4) I'm happy to do so.
username_0: Sorry for the delay - I was on vacation. This would enable the use of cooperative file-system locking on the cache files as they're being written to.
username_1: How's that compare to something like `lockfile`? I'm happy to improve the situation, my only fear is decreasing the performance which has been an issue for some time (hence a cache and suggesting the locking solution combined with moving to a single file cache to mitigate any perf overhead).
username_0: `lockfile` works if you want a separate lock file rather than locking the cache file itself. Do you imagine one lockfile per cache file? I don't see how this would work well with a single cache file - parallel builds would serialize on each other as the single file gets locked.
username_1: So your thought is keeping a separate cache file for each file's content and locking each one on read/write? My reasoning was that it'd be a single cost at startup by using a single file, instead of a consistent cost throughout the entire runtime, but I might be thinking about this incorrectly.
username_0: Moving to a single file would block parallel compilers behind that single file. I believe we need to keep individual cache files for this to work properly.
username_0: I'll try putting together a PR for this and send it your way.
username_1: Absolutely. Would be curious on what your thoughts are for a "fast" default (no type checking), it might alleviate a need for caching by default.
username_0: RE: same project: the benefit is when a project runs a bunch of compile steps in parallel where they share some underlying shared code (as is the case for my repo). Specifically, we run our ~9500 mocha unit tests in batches through 8 parallel threads. Some of these parallel tests end up requiring the same files. Without some form of locking on the cache files, the tests clobber each other attempting to write to the cache. Without the cache being enabled, we pay the cost of pre-compiling the same files over and over across batches.
RE: fast default: we want type checking enabled in our testing.
username_1: Makes sense. As a default, would it be detrimental (e.g. instead of `--fast`, there'd be `--type-check` or similar)?
Another idea - what if we check that the file ends in a valid source map before using it? That might be the perfect compromise since it's always at the end of the file.
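The trailing-source-map idea could look like this in Python (illustrative only, assuming the compiled output ends with a `//# sourceMappingURL=` comment as the comment above suggests, so a truncated write would be missing it):

```python
def ends_with_source_map(compiled: str) -> bool:
    # If the inline source map comment is always the last line of a fully
    # written compiled file, its absence indicates a partial write.
    last_line = compiled.rstrip().rsplit("\n", 1)[-1]
    return last_line.startswith("//# sourceMappingURL=")
```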
username_0: https://github.com/TypeStrong/ts-node/pull/394
Status: Issue closed
username_1: I believe this has been solved 😄 Let me know if it wasn't and I'll re-open. |
Baseflow/flutter-permission-handler | 912711776 | Title: permissions.request() not called or not working
Question:
username_0: Hi ,
I'm using : permission_handler: ^8.0.1 ,
flutter doctor
[✓] Flutter (Channel beta, 2.2.0, on macOS 11.3.1 20E241 darwin-x64, locale en-EG)
[✓] Android toolchain - develop for Android devices (Android SDK version 30.0.2)
[✓] Xcode - develop for iOS and macOS
[✓] Chrome - develop for the web
[✓] Android Studio (version 4.2)
[✓] VS Code (version 1.56.2)
[✓] Connected device (2 available)
=====
My problem is on iOS 14.5, iPhone 12 Pro Max simulator.
The app never gives me microphone or storage permission; the status is always denied,
and `permissions.request()` is not called or does not do anything. I opened my app settings but there is no option for microphone.
```
var permissions = <Permission>[];
permissions.add(Permission.microphone);
permissions.add(Permission.storage);
/// ask for the permissions.
await permissions.request();
```
In my Info.plist file I added:
```
<key>NSMicrophoneUsageDescription</key>
<string>My app uses the microphone to record your speech and convert it to text.</string>
```
Status: Issue closed
Answers:
username_1: Hi!
What was the solution?
username_2: I meet the same error. Is there any solution, please?
username_3: Check the Setup section in the docs (https://github.com/Baseflow/flutter-permission-handler/tree/master/permission_handler#setup)
Make sure you declared
```
<key>NSMicrophoneUsageDescription</key>
<string>... your description ...</string>
```
in `ios/Runner/Info.plist`
and
```
post_install do |installer|
installer.pods_project.targets.each do |target|
flutter_additional_ios_build_settings(target)
target.build_configurations.each do |config|
# Preprocessor definitions can be found in:
# https://github.com/Baseflow/flutter-permission-handler/blob/master/permission_handler/ios/Classes/PermissionHandlerEnums.h
config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
'$(inherited)',
'PERMISSION_MICROPHONE=1',
# ... other permissions
]
end
end
end
```
in `ios/Podfile`
Same thing for other permissions you might need. |
wolfpet/kitchen | 97158921 | Title: ENH: Add syntax highlighting for [code] tags
Question:
username_0: add syntax *[code="language"]* that would get translated into
```html
<pre><code class="language">...</code></pre>
```
Use https://highlightjs.org/usage/ library
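A minimal Python sketch of the proposed tag translation (hypothetical helper; highlight.js would then style the resulting `<code class="...">` blocks client-side):

```python
import re

# Matches the opening tag, e.g. [code="python"], capturing the language.
_CODE_OPEN = re.compile(r'\[code="([^"]+)"\]')


def render_code_tags(text: str) -> str:
    # Translate [code="lang"]...[/code] into the <pre><code class="lang">
    # markup that highlight.js expects.
    text = _CODE_OPEN.sub(r'<pre><code class="\1">', text)
    return text.replace("[/code]", "</code></pre>")
```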
Status: Issue closed
Answers:
username_0: Fixed (hosted version):
https://github.com/wolfpet/kitchen/commit/6d9c540b8afffffadca0c5de8e639754c35c38f1 |
MicrosoftDocs/azure-docs | 742847182 | Title: Why don’t you recommend using an existing Log Analytics Workspace?
Question:
username_0: Can you explain why you don't recommend you select an existing Log Analytics workspace if you have one already?
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e97882de-c266-edf0-65c1-a9a6e5563971
* Version Independent ID: f9788a1b-4e57-56b3-7987-f9bab45df304
* Content: [Tutorial - Monitor a hybrid machine with Azure Monitor for VMs - Azure Arc](https://docs.microsoft.com/en-us/azure/azure-arc/servers/learn/tutorial-enable-vm-insights?WT.mc_id=modinfra-0000-thmaure)
* Content Source: [articles/azure-arc/servers/learn/tutorial-enable-vm-insights.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-arc/servers/learn/tutorial-enable-vm-insights.md)
* Service: **azure-arc**
* Sub-service: **azure-arc-servers**
* GitHub Login: @username_2
* Microsoft Alias: **magoedte**
Answers:
username_1: @username_0 Thanks for your comment! We will review and provide an update as appropriate.
username_2: @username_0 - Our tutorials are meant to help a customer learn a service as part of a proof of concept or ramp-up on the technology before moving forward and using it in a production manner. We especially don't want you to learn something in production, so that's why it suggests using a new workspace (but certainly you can use a dev or test workspace that may be available in your subscription). At least when you are finished with your evaluation following what's prescribed, you can burn it down without impacting anything else.
Hope that helps. #please-close.
Status: Issue closed
|
Intera/typo3-extension-authcode | 561541446 | Title: Extension is no longer compatible with TYPO3 v7
Question:
username_0: The following commit:
https://github.com/Intera/typo3-extension-authcode/commit/55da32abbabc897e8f30d8710215884fd7d7cb37?diff=split
causes an Exception in TYPO3 v7:
Uncaught TYPO3 Exception
Class 'TYPO3\CMS\Core\Crypto\Random' not found
the change in line 399:
$authCodeString = GeneralUtility::getRandomHexString(16);
to
$authCodeString = GeneralUtility::makeInstance(Random::class)->generateRandomHexString(16);
raises this version of EXT:authcode's minimum requirement to TYPO3 v8!
Possible solutions:
new Version-Number in ext_emconf with min requirement v8
or revert change in Classes/Domain/Repository/AuthCodeRepository.php
Best regards!
Tim
Status: Issue closed
Answers:
username_1: Hi Tim,
thank you for taking the time to create an issue.
I cleaned up the requirements now and released two new versions:
v0.3.0 for compatibility from TYPO3 6.2 to 8.7.
v9.0.0 for compatibility with TYPO3 9.5
I also added some functional tests to make sure the code from your example works for every version as expected.
Please let me know when you find any issues.
Kind regards
Alex |
tensorflow/tensorflow | 455443093 | Title: Polynomial Decay Document Page question
Question:
username_0: ## URL(s) with the issue:
https://www.tensorflow.org/api_docs/python/tf/train/polynomial_decay
Please provide a link to the documentation entry, for example:
https://www.tensorflow.org/api_docs/python/tf/train/polynomial_decay
## Description of issue (what needs changing):
A clarification of the example and why it actually works.
### Clear description
In the example mentioned, if the global step is 0, it is unclear how the learning rate will actually change.
The formula can be rewritten as follows:
d = (l - e) * (1 - g/s)^p + e
In the example, g = 0, which means the formula becomes (l - e) * 1 + e = l - e + e = l
So, I'm very unsure of why/how the learning rate in this example is actually going to decrease.
Answers:
username_1: ```global_step``` is an iteration. Thus when we set it to zero there should be no change since we didn't iterate over the batch. The formula to compute ```polynomial_decay``` does not accept negative values for ```global_step``` which makes sense since we cannot have negative iterations.
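The formula from the question can be checked directly. Below is a plain-Python sketch (an assumption: it restates the documented formula, it is not TensorFlow's code) showing that at `global_step = 0` the rate equals the initial rate, and it decreases as the step counter advances per batch:

```python
def polynomial_decay(learning_rate, end_learning_rate, global_step, decay_steps, power):
    # d = (l - e) * (1 - g/s)^p + e, with g clipped to s as in the docs
    global_step = min(global_step, decay_steps)
    frac = 1 - global_step / decay_steps
    return (learning_rate - end_learning_rate) * frac ** power + end_learning_rate

# Documentation example: decay from 0.1 to 0.01 in 10000 steps with power=0.5
for step in (0, 2500, 10000):
    print(step, polynomial_decay(0.1, 0.01, step, 10000, 0.5))
```

At step 0 the output is the initial 0.1; only once training increments `global_step` does the rate move toward 0.01.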
Status: Issue closed
username_0: If your explanation is correct, I am not sure how the documentation example reflects this:
Example: decay from 0.1 to 0.01 in 10000 steps using sqrt (i.e. power=0.5):
However, when you inspect the code segment, it is very clear that the global step is 0 and remains 0, which implies that you're not actually changing the learning rate. Then how is the example correct? |
naglepuff/NagleChess | 852846419 | Title: Move class lacks constructors
Question:
username_0: Write constructors for the Move class.
Choose sensible defaults for certain properties. By default a move should be a reversible one. Maybe a constructor for each type of move would be useful? (can a move be both a pawn advancement and a capture?)
Status: Issue closed
Answers:
username_0: Added constructors covering each type of move.
See commit `768453a` |
vtex-apps/store-discussion | 717430672 | Title: How can I get the promotion highlight with cluster client active?
Question:
username_0: **What are you trying to accomplish? Please describe.**
I need to bring in the price with my promotion. This promotion has a client cluster, but I need to show it on the product page and in the summary, not just in the checkout.
I'm using this app: https://github.com/vtex-apps/product-highlights/tree/master/react . But I don't know where I can change things to bring in the promotion if it has a client cluster.
**What have you tried so far?**
I just extended the product-price app; the values come to the page automatically with the promotion (via product-context) when it doesn't have a client cluster. But where (in product-context or elsewhere) can I change the rule to bring in the promotion with a client cluster as well?
| Account | Workspace |
|---|---|
|`portaldolojistahomo`|`gipol`|
Answers:
username_1: How can I test this? Which product and can you add me to the cluster? <EMAIL>
username_0: Sorry!
The link of the product:
https://gipol--portaldolojistahomo.myvtex.com/renault-logan-expression-2020-C89321-42/p
The promotion is: Promoção Flat 1.0
Actually I removed the check of "Cluster de clientes" here:

But the other promotion is: Promoção Flat Acima 1.0

And the car with this promotion:
https://gipol--portaldolojistahomo.myvtex.com/audi-a4-attraction-2018-A48707-42/p
I add you on the clusters! <3
username_1: OK, in the first product there's a promotion without restrictions available, so it shows up in the `discountHighlights` array:

On the second, there's nothing because the promotion does not fit in those cases:
https://help.vtex.com/en/tutorial/configuring-promotions-with-a-highlightflag--tutorials_2295?locale=en
For the first you can use product-highlights with type `promotion` to show that this product is available to this promotion.
Something like this:
```json
{
"vtex.product-highlights@2.x:product-highlights#promotion": {
"props": {
"type": "promotion"
},
"children": ["product-highlight-wrapper"]
},
"product-highlight-wrapper": {
"children": ["product-highlight-text"]
},
"product-highlight-text": {
"props": {
"message": "{highlightName}"
}
}
}
```
There is no way to show the calculated price with the promotion applied though
username_0: So, there is no way to show the price/promotion on the second product with a customer cluster, like on the first product? There's no way I can develop this option on my own? ):
username_1: Sorry, I don't know how to achieve this.
Btw, if you are developing a B2B store there are some B2B apps that might be helpful:
https://github.com/search?q=topic%3Ab2b+org%3Avtex-apps+fork%3Atrue
username_0: Ok, thanks!
Now I don't know why the promotion doesn't appear in the shelf, same with the price. This link has products with the promotion without a customer cluster:
https://gipol--portaldolojistahomo.myvtex.com/AUDI
Do I need to open a new issue, @username_1?
Status: Issue closed
|
lushen124/Universal-FE-Randomizer | 435940751 | Title: Settings result in negative base stats that wrap around
Question:
username_0: I'm using the new 0.8.4 version and using the "Retain Personal Bases" option on FE7. The Chapter 2 boss Glass has negative stats in the log, but they show up in game as very high stats. Here are screenshots of Glass and my settings when I randomized this one, in case it's not just the "Retain Personal Bases" option.


Answers:
username_1: Weird. I know Glass has negative offsets, so I'll take a look. Thanks for reporting!
username_1: Oh, I see what it is. It's the reason why I didn't retain personal bases to begin with.
Glass has a -4 in Skill and -3 in Speed. This is to make him not too hard in Ch. 2 as a Mercenary, since at 0, he would have 8 SKL and 8 SPD, which is faster than Sain and Kent (and probably shatters the weapon triangle concept). But since he changed into a Knight, with a base 2 SKL and 0 SPD, those values underflowed and wrapped around.
I'll cap it to 0 on the lower side. The other two options should handle this appropriately since they adjust things based on final stats.
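The wraparound described above is ordinary unsigned 8-bit arithmetic. A hypothetical sketch (not the randomizer's actual code) of the bug and the proposed lower-side cap:

```python
def apply_offset_wrapping(base, offset):
    # Stat bytes are unsigned 8-bit, so a negative result wraps around
    return (base + offset) & 0xFF

def apply_offset_clamped(base, offset):
    # Proposed fix: cap on the lower side at 0
    return max(0, base + offset) & 0xFF

# Glass as a Knight: base 0 SPD with his personal -3 SPD offset
print(apply_offset_wrapping(0, -3))  # → 253, the "very high stats" seen in game
print(apply_offset_clamped(0, -3))   # → 0
```

This is why the negative stats in the log show up in game as huge values: -3 stored in a byte reads back as 253.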
Status: Issue closed
|
tensorflow/tensorflow | 240575628 | Title: argument --learning_rate: conflicting option string: --learning_rate
Question:
username_0: Hi everyone,
I'm currently learning how to use TensorFlow, but when I'm running the fully_connected_feed.py file from the tensorflow website, it returns the following error:
argument --learning_rate: conflicting option string: --learning_rate
Does someone know how to fix the issue?
This is the entire Error message:
File "<ipython-input-8-db6c9214eb7b>", line 1, in <module>
debugfile('C:/Users/X188068/Desktop/Merck/Vevey Deviation/fully connected feed__Mechanics 101.py', wdir='C:/Users/X188068/Desktop/Merck/Vevey Deviation')
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 888, in debugfile
debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\bdb.py", line 431, in run
exec(cmd, globals, locals)
File "<string>", line 1, in <module>
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "c:/users/x188068/desktop/merck/vevey deviation/fully connected feed__mechanics 101.py", line 25, in <module>
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.')
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\platform\flags.py", line 132, in DEFINE_float
_define_helper(flag_name, default_value, docstring, float)
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\platform\flags.py", line 65, in _define_helper
type=flagtype)
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\argparse.py", line 1348, in add_argument
return self._add_action(action)
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\argparse.py", line 1711, in _add_action
self._optionals._add_action(action)
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\argparse.py", line 1552, in _add_action
action = super(_ArgumentGroup, self)._add_action(action)
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\argparse.py", line 1362, in _add_action
self._check_conflict(action)
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\argparse.py", line 1501, in _check_conflict
conflict_handler(action, confl_optionals)
File "C:\Users\X188068\AppData\Local\Continuum\Anaconda3\lib\argparse.py", line 1510, in _handle_conflict_error
raise ArgumentError(action, message % conflict_string)
ArgumentError: argument --learning_rate: conflicting option string: --learning_rate
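The traceback is argparse refusing to register the same flag twice. This typically happens when the defining code runs a second time inside a long-lived interpreter (here, Spyder's `debugfile`), so `flags.DEFINE_float('learning_rate', ...)` hits a parser that already has the flag. A minimal stdlib reproduction (hypothetical, not TensorFlow code):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--learning_rate', type=float, default=0.01)

try:
    # Running the defining code a second time against the same parser
    # reproduces the error from the traceback
    parser.add_argument('--learning_rate', type=float, default=0.01)
except argparse.ArgumentError as e:
    print(e)  # argument --learning_rate: conflicting option string: --learning_rate
```

Restarting the console/kernel, or running the script in a fresh Python process, avoids the second definition.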
Answers:
username_1: This question is better asked on [StackOverflow](http://stackoverflow.com/questions/tagged/tensorflow) since it is not a bug or feature request. There is also a larger community that reads questions there. Thanks!
Status: Issue closed
|
RENCI/ctmd | 733142430 | Title: Explain expected behavior for true/false and yes/no in Study profile tab to Dawn
Question:
username_0: I was able to upload study profile data that include YES/NO or TRUE/FALSE data. However, it doesn't seem to display that data when you click on the little green button to the right (STUDIES/clipboard tab), which expands the data.
Occasionally it will say false or true for a particular field, but not at all consistently. Most of the time, the data being displayed was blank.
Is this a bug or expected behavior? THANK YOU.
Answers:
username_0: This has been fixed. All data from the study profiles CSV upload (true/false, etc.) is now correctly showing on the expanded data listing. (when you click the little green box for each study on the STUDIES tab).
Status: Issue closed
|
FX-Examples/FX-SaaS-Example-Project-1 | 366289097 | Title: FX-Examples : ApiV1UsersPutAnonymousInvalid
Question:
username_0: Project : username_0
Job : Example_Project_1_Env
Env : Example_Project_1_Env
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 200
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Wed, 03 Oct 2018 11:17:38 GMT]}
Endpoint : http://192.168.3.11/api/v1/users
Request :
Response :
null
Logs :
Assertion [@StatusCode != 200] resolved-to [200 != 200] result [Failed]
--- FX Bot --- |
wri/gfw-mapbuilder | 474214258 | Title: Implement redesign for multiple selected features
Question:
username_0: - [ ] Dropdown -- display layer names (`count of features` e.g. 5)
- [ ] If a user closes out of a feature via "x" button and there is only 1 feature in that layer, then it should default to the next layer set
Answers:
username_1: @username_2 ,
Test Build: http://alpha.blueraster.io/gfw-mapbuilder/461-multiple-selected-features-redesign/
username_2: @username_1 & @username_0. Please take another look at the mockup provided by the design team. I think there is an inconsistency between the design and how this feature is implemented in the build above. In the build above, the dropdown lets you select between features.
However, in the mock-up the dropdown lets you toggle between **layers**, and clicking the next button lets the user switch between features if multiple features are selected in the same layer.
username_0: - [ ] Dropdown -- display layer names (`count of features` e.g. 5)
- [ ] If a user closes out of a feature via "x" button and there is only 1 feature in that layer, then it should default to the next layer set
username_1: @username_2 ,
I've updated the build: http://alpha.blueraster.io./gfw-mapbuilder/461-multiple-selected-features-redesign/
Hopefully, everything now works as expected according to the comp!
username_3: <img width="519" alt="Screen Shot 2019-10-03 at 12 05 44 PM" src="https://user-images.githubusercontent.com/5713523/66144208-66aebb00-e5d6-11e9-92f1-3074743ea729.png">
@username_1 See the `2/1` on this layer?! We need just the `1/1`!
Workflow: **X** out a feature that was the only feature in a layer, which took me to the first feature in the next layer (which had two features). I then selected another feature from a different layer (with only one feature), and instead of showing `1/1`, we saw `2/1`!
username_3: @username_1 On 'NEXT', our `index` remains the same:
<img width="556" alt="Screen Shot 2019-10-03 at 12 09 33 PM" src="https://user-images.githubusercontent.com/5713523/66144494-efc5f200-e5d6-11e9-867e-73734ee406fb.png">
username_2: @username_1 & @username_3. Moving this back to in progress until issue identified by Lucas is fixed!
username_1: @username_2 Could you test out this new build ?
http://alpha.blueraster.io/gfw-mapbuilder/461-multiple-selected-features-redesign/
Apologies for this ticket taking so long to complete. It has proven to be one of the most complex ones yet, besides the fires bug! We should make sure to thoroughly test it out since there are many ways the user can interact with this. For example:
1. Just clicking on different features on the map
2. Click on the map to select feature, then using the prev/next buttons
3. Click on the map to select feature, click prev/next buttons, click on map again
4. Click on the map to select feature, click prev/next buttons, select dropdown option, use prev/next buttons again
5. Click on the map to select feature, click prev/next buttons, select dropdown option, use prev/next buttons again, click on map again to select different feature
6. Repeat 4 -5, but use dropdown first before interacting with next/prev buttons.
I'm sure I'm missing a few workflows, but this is the general idea.
username_2: @username_1. I'll test on Monday as I am on leave this afternoon. @username_0, please test if you have time today,
username_2: @username_1. Looks good! Do you mind putting up another build which uses the configuration for our Georgia atlas. Attached is the JSON.
[GEO.zip](https://github.com/wri/gfw-mapbuilder/files/3697674/GEO.zip)
username_2: @username_1. Just found 1 small bug.
If I:
1) Click in an area which results in selecting **two** features from the same layer.
2) Toggle to the layer containing two features using the dropdown.
3) Click on the x icon, the layer panel will jump to a feature in a different layer instead of going to the next feature in the same layer.
Attached is a video of the workflow. In the video, two features get selected from the Concession Forestière layer. After toggling to the Concession Forestière layer and clicking the x, the layer panel goes to a different layer instead of going to the next feature in the Concession Forestière layer.
[Layerbug.zip](https://github.com/wri/gfw-mapbuilder/files/3697717/Layerbug.zip)
username_1: Hi @username_2 ,
I don't believe that is a bug. When you click the 'x', it causes the tracking index to go down by 1. If you are at the first feature in a layer and click the `x`, you then jump to the next layer group because you can't go down further in the list of features in the current layer (i.e. you can't go to feature 0 of 2, -1 of 2, etc).
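The close-out behavior described above can be modeled with a tiny sketch (a hypothetical helper, not the MapBuilder code): the tracking index only moves down within a layer, so closing the *first* feature always jumps to the next layer group, which is the workflow reported in the previous comment.

```python
def close_feature(layer_counts, layer, index):
    """Return the (layer, index) shown after closing feature `index` in `layer`."""
    layer_counts[layer] -= 1
    if index > 0:
        return (layer, index - 1)  # tracking index goes down by 1
    # At the first feature we can't go lower, so jump to the next layer group
    for nxt in range(layer + 1, len(layer_counts)):
        if layer_counts[nxt] > 0:
            return (nxt, 0)
    return None  # nothing left selected

# Two features selected in layer 0: closing the first one skips to layer 1,
# even though a feature remains in layer 0, matching the behavior discussed above.
print(close_feature([2, 1], 0, 0))  # → (1, 0)
```

Staying within the layer whenever a feature remains would mean special-casing the `index == 0` branch, which is the improvement deferred to the backlog below.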
username_2: @username_1. Ok. I see what you mean.
I was thinking that if your layer contained multiple active features you would always go to the next or preceding feature in that layer, unless the value was set to `1/1`. Would that significantly complicate things? If yes, I think we can stick with the current setup!
Richard
username_1: @username_2 , I'll see what I can do! But yes, this may significantly complicate things. I'll work on this now and can let you know at our meeting today how it goes.
username_1: @username_2 I asked Lucas about this and he thinks we should add this to the backlog for something we could improve upon later. Would that be ok with you?
username_2: @username_1. Sounds good!
username_2: Let's please put up a build for Georgia as well just to make sure we have rigorous testing on this!
username_1: @username_2 , sure, I'll make a Georgia build now!
username_1: @username_2 Georgia build: http://alpha.blueraster.io/gfw-mapbuilder/461-multiple-selected-features-georgia-build/
username_2: @username_1. Finding one bug which is present in both DRC & Georgia builds when you translate the app.
1) If I load: http://alpha.blueraster.io/gfw-mapbuilder/461-multiple-selected-features-redesign/
2) Use the toggle to change the language to the secondary language
3) Click on a shape in the map.
The application does not open up the data tab, and instead shows the prompt asking the user to click on an areas. Please see attached video.
[language_bug.zip](https://github.com/wri/gfw-mapbuilder/files/3698952/language_bug.zip)
username_1: @username_2 Builds have been updated again to fix the language toggle issue.
username_2: Looks good!
Status: Issue closed
|
ampproject/amphtml | 158402564 | Title: Make it easier to determine what went to prod when
Question:
username_0: We have two main issues right now we should fix:
1. When we patch a release, the release notes do not point to the original release. We should have a link to the release it is based on.
2. We should make a note in the release notes for when they go to 1%, full production, etc. including date and time.
This will make it easier to track bugs to specific releases.
Answers:
username_1: Did #3435 and #3439 completely resolve this, or still more to do? Is Current still a good milestone?
username_2: not yet, there's some internal work left to be done to put in the percentage when we transition from opt-in to percentage and change of percentage value.
Status: Issue closed
username_3: Hey,
The AMP community has been working nonstop to make AMP better, but somehow we've still managed to grow an enormous backlog of open issues. This has made it difficult for the community to prioritize what we should work on next.
A new process is on the way and to give it a chance for success we will be closing issues that have not been updated in a while.
If this issue still requires further attention, simply reopen it. Please try to reproduce it with the latest version to ensure it gets proper attention!
We really appreciate the contribution! Thank you for bearing with us as we drag ourselves out of the issue abyss. :)
username_0: @
username_0: We have two main issues right now we should fix:
1. When we patch a release, the release notes do not point to the original release. We should have a link to the release it is based on.
2. We should make a note in the release notes for when they go to 1%, full production, etc. including date and time.
This will make it easier to track bugs to specific releases.
username_0: Can we close this?
username_2: @username_0 dont think so. we don't do the 1% update yet
Status: Issue closed
username_2: closed by https://github.com/ampproject/amphtml/pull/8560 |
cybertec-postgresql/pg_timetable | 827926725 | Title: Error "x509: certificate signed by unknown authority" in SendMail task from Docker
Question:
username_0: **Describe the bug**
If a chain uses the `SendMail` task and `pg_timetable` is launched from Docker, an error occurs:
**To Reproduce**
Steps to reproduce the behavior:
1. Create the chain with SendMail task, e.g. using `samples/Mail.sql` file
2. Start `pg_timetable`:
```bash
$ docker run --rm cybertecpostgresql/pg_timetable:3.3 -u pasha -h 192.168.0.221 -d timetable -c worker001 --password=<PASSWORD> --debug
```
3. Wait for the chain to run
```
[ 2021-03-10 15:30:58.824 | LOG ]: Starting chain ID: 1; configuration ID: 1
```
4. See the error
```
[ 2021-03-10 15:30:59.823 | ERROR ]: Task execution failed: {"ChainConfig":1,"ChainID":1,"TaskID":4,"TaskName":"SendMail","Script":"SendMail","Kind":"BUILTIN","RunUID":{"String":"","Valid":false},"IgnoreError":false,"Autonomous":false,"DatabaseConnection":{"String":"","Valid":false},"ConnectString":{"String":"","Valid":false},"StartedAt":"2021-03-10T15:30:58.900399Z","Duration":915508}; Error: x509: certificate signed by unknown authority
[ 2021-03-10 15:30:59.825 | ERROR ]: Chain ID: 1 failed
```
**Additional context**
The error occurs on the latest Docker images where the `FROM SCRATCH` build stage was used.
Status: Issue closed |
cqframework/clinical_quality_language | 1149435117 | Title: Improve selector capability with FHIR
Question:
username_0: Consider the following expression:
```cql
define TestQuestionnaireResponse:
QuestionnaireResponse {
"id": id('phq-9-questionnaireresponse'),
"questionnaire": canonical('http://somewhere.org/fhir/uv/mycontentig/Questionnaire/phq-9-questionnaire'),
"status": QuestionnaireResponseStatus('completed'),
"subject": Reference {
"reference": string('Patient/example')
},
"authored": dateTime(@2021-09-13T16:29:00-07:00),
"item": {
FHIR.QuestionnaireResponse.Item {
"linkId": string('LittleInterest'),
"text": string('Little interest or pleasure in doing things'),
"answer": {
FHIR.QuestionnaireResponse.Item.Answer {
"value": Coding {
"system": uri('http://loinc.org'),
"code": code('LA6568-5'),
"display": string('Not at all')
}
}
}
},
FHIR.QuestionnaireResponse.Item {
"linkId": string('TotalScore'),
"text": string('Total score'),
"answer": {
FHIR.QuestionnaireResponse.Item.Answer {
"value": integer(3)
}
}
}
}
}
```
1. The helper functions should not be required; determine why the defined implicit conversions are not working as expected
2. Tuple compatibility rules should allow for the components to be constructed as anonymous tuples, rather than requiring the sub-component type name
Answers:
username_0: Full library for reference:
```cql
library QuestionnaireResponse
using FHIR version '4.0.1'
include FHIRHelpers version '4.0.1'
context Patient
define QuestionnaireResponse:
QuestionnaireResponse {
"id": id('phq-9-questionnaireresponse'),
"questionnaire": canonical('http://somewhere.org/fhir/uv/mycontentig/Questionnaire/phq-9-questionnaire'),
"status": QuestionnaireResponseStatus('completed'),
"subject": Reference {
"reference": string('Patient/example')
},
"authored": dateTime(@2021-09-13T16:29:00-07:00),
"item": {
FHIR.QuestionnaireResponse.Item {
"linkId": string('LittleInterest'),
"text": string('Little interest or pleasure in doing things'),
"answer": {
FHIR.QuestionnaireResponse.Item.Answer {
"value": Coding {
"system": uri('http://loinc.org'),
"code": code('LA6568-5'),
"display": string('Not at all')
}
}
}
},
FHIR.QuestionnaireResponse.Item {
"linkId": string('FeelingDown'),
"text": string('Feeling down, depressed, or hopeless'),
"answer": {
FHIR.QuestionnaireResponse.Item.Answer {
"value": Coding {
"system": uri('http://loinc.org'),
"code": code('LA6569-3'),
"display": string('Several days')
}
}
}
},
FHIR.QuestionnaireResponse.Item {
"linkId": string('TroubleSleeping'),
"text": string('Trouble falling or staying asleep'),
"answer": {
FHIR.QuestionnaireResponse.Item.Answer {
"value": Coding {
"system": uri('http://loinc.org'),
"code": code('LA6569-3'),
"display": string('Several days')
}
}
}
},
FHIR.QuestionnaireResponse.Item {
[Truncated]
define function string(value System.String):
string { value: value }
define function uri(value System.String):
uri { value: value }
define function code(value System.String):
code { value: value }
define function integer(value System.Integer):
integer { value: value }
define function QuestionnaireResponseStatus(value System.String):
QuestionnaireResponseStatus { value: value }
define function dateTime(value System.DateTime):
dateTime { value: value }
``` |
openSUSE/doc-ci | 926674322 | Title: The coupling of validation and list-images-missing is problematic
Question:
username_0: The gha-select-dcs optimizes the list of documents to validate within a repo, so we validate as few times as possible. That optimization is correct for `daps validate` itself, as XML validation validates all parts of the current MAIN irrespective of the ROOTID.
The optimization is not correct for `daps list-images-missing`. That target only checks for images needed to build the current document (i.e. within the scope of the current ROOTID). By and large, results will still be correct as long as we're preferring validation with DC-\*-all files (which have no ROOTID). Results will not be correct, though, for `l10n` dirs in `doc-sle` for languages where:
1. more than one guide is shipped and
2. no DC-\*-all file is included.
As an example, the broken state of SUSE/doc-sle@15f3b214 gives an a-OK result if you validate with the wrong DC file:
```
doc-sle :15f3b2148 > daps -d l10n/sles/de-de/DC-SLES-all list-images-missing
The following images are missing:
container_support_matrix
scc_eye_icon
doc-sle :15f3b2148 > daps -d l10n/sles/de-de/DC-SLES-deployment list-images-missing
All images for document "book-deployment" exist.
```
Status: Issue closed
Answers:
username_0: Fixed in DAPS 3.3, which is out now. |
Azure/azure-cli | 938620183 | Title: Support for global input param x-ms-correlation-request-id ...
Question:
username_0: We have a set of related az commands that need to be run to complete a user scenario. We are looking to track all those requests as a group, from a logging/debugging perspective, using "x-ms-correlation-request-id". This header can be passed by clients. As this is a common header, supporting the capability at the CLI level will help any extension pass the header if needed.
This can be achieved with either of the following options:
1) Support for a common param --header
2) Support for a common param --correlationid
Let me know if you need more information.
Following is the ARM RPC definition
**x-ms-correlation-request-id**: Optional. Caller-specified value identifying a set of related operations that the request belongs to, in the form of a GUID. If the caller does not specify this header, ARM will randomly generate a unique GUID. Used for tracing the correlation Id of the request; the resource provider must log this so that end-to-end requests can be correlated across Azure. Because this header can be client-generated or re-used for multiple requests, it should not be assumed to be unique by the RP implementation.
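As a sketch of what the requested option would do under the hood (hypothetical, not an existing az flag): the client generates one GUID for the whole scenario and attaches it to every related request. For individual calls, `az rest` already accepts custom headers via `--headers`, which may serve as a workaround.

```python
import uuid
import urllib.request

# One GUID for the whole scenario, reused across every related request
correlation_id = str(uuid.uuid4())

def arm_request(url):
    # Attach the header ARM logs for end-to-end correlation
    return urllib.request.Request(
        url, headers={'x-ms-correlation-request-id': correlation_id}
    )

req = arm_request('https://management.azure.com/subscriptions?api-version=2020-01-01')
```

ARM then logs the same correlation id for each call in the group, so all requests of the scenario can be traced together.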
Answers:
username_1: @username_2 for awareness
username_2: A [quick search on `x-ms-correlation-request-id`](https://github.com/search?q=x-ms-correlation-request-id+in%3Atitle&type=Issues) shows same feature requests on
- Azure PowerShell: https://github.com/Azure/azure-powershell/issues/13520
- Azure SDK: https://github.com/Azure/azure-sdk/issues/576 |
kubernetes-sigs/kind | 950426109 | Title: Cluster creation with more than 1 node fails on Mac with docker desktop
Question:
username_0: ...
✗ Joining worker nodes 🚜
ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged kind-prometheus-operator-worker kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6" failed with error: exit status 1
Command Output: I0722 08:23:18.994037 223 join.go:398] [preflight] found NodeName empty; using OS hostname as NodeName
I0722 08:23:18.994080 223 joinconfiguration.go:74] loading configuration from "/kind/kubeadm.conf"
I0722 08:23:18.995198 223 controlplaneprepare.go:214] [download-certs] Skipping certs download
I0722 08:23:18.995225 223 join.go:469] [preflight] Discovering cluster-info
I0722 08:23:18.995236 223 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "kind-prometheus-operator-control-plane:6443"
I0722 08:23:18.999653 223 round_trippers.go:444] GET https://kind-prometheus-operator-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s in 3 milliseconds
I0722 08:23:18.999708 223 token.go:215] [discovery] Failed to request cluster-info, will try again: Get "https://kind-prometheus-operator-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp: lookup kind-prometheus-operator-control-plane on 192.168.65.5:53: no such host
...
```
**Anything else we need to know?**:
**Environment:**
- kind version: (use `kind version`): `kind v0.11.1 go1.16.3 darwin/amd64`
- Kubernetes version: (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.11", GitCommit:"<PASSWORD>", GitTreeState:"clean", BuildDate:"2021-05-12T12:27:07Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"darwin/amd64"}
```
- Docker version: (use `docker info`):
```
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
compose: Docker Compose (Docker Inc., 2.0.0-beta.1)
scan: Docker Scan (Docker Inc., v0.8.0)
Server:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 53
Server Version: 20.10.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
init version: de40ad0
[Truncated]
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 9.732GiB
Name: docker-desktop
ID: MEJV:BBKC:AXXJ:PI4T:VBVT:4DM7:V45D:3DPO:OP2N:7XHY:YXFK:OBXJ
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
```
- OS (e.g. from `/etc/os-release`): MacOS Catalina
Answers:
username_1: Looks like the joining node cannot connect to the kube-apiserver. Can you
try debugging why that happens? There is a flag called --retain that leaves
the cluster / logs.
Given there is a single CP node there is no load balancer issue at play.
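In concrete terms, that debugging flow looks something like this (subcommand and flag names follow the kind CLI; the config path is an assumption, adjust it to yours):
```
# Keep the failed nodes around instead of deleting them on failure
kind create cluster --retain --config=kind-config.yaml -v 3

# Export node logs (kubelet, kubeadm, container runtime) for inspection
kind export logs ./kind-logs

# Clean up the retained cluster afterwards
kind delete cluster --name kind-prometheus-operator
```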
username_2: Does not reproduce for me.
1. Created config with:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kind-prometheus-operator
nodes:
- role: control-plane
image: kindest/node:v1.19.11@sha256:07db187ae84b4b7de440a73886f008cf903fcf5764ba8106a9fd5243d6f32729
- role: worker
image: kindest/node:v1.19.11@sha256:07db187ae84b4b7de440a73886f008cf903fcf5764ba8106a9fd5243d6f32729
```
2. `kind create cluster -v 3 --config=kind-config.yaml`
3. success
This was on macOS Big Sur, however, but with the same Docker:
```
$ docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
compose: Docker Compose (Docker Inc., 2.0.0-beta.1)
scan: Docker Scan (Docker Inc., v0.8.0)
Server:
Containers: 3
Running: 3
Paused: 0
Stopped: 0
Images: 20
Server Version: 20.10.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
```
You may not have enough CPU/RAM allocated to Docker, may be out of disk space, or may have some other resource-limit issue.
username_0: Thanks for the quick turnaround.
I had `export KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge` in my profile; once I removed that, cluster creation was successful.
Status: Issue closed
username_2: We use the embedded DNS server; otherwise we'd have kept using bridge (this is covered in more detail in the release notes from when we made the switch, IIRC). We could catch this, but `KIND_EXPERIMENTAL_*` variables are unsupported escape hatches to begin with 🙃 (IIRC they should all print warnings when used). |
citation-file-format/citation-file-format | 518631305 | Title: Regular expression for DOIs too narrow
Question:
username_0: I use the DOI `10.1093/llc/fqu057` in one of my citation files. The validator complains that the DOI does not match a regular expression and is invalid. This is not true; the DOI can be resolved without issues: http://doi.org/10.1093/llc/fqu057
Validating with cffconvert gives the following error message:
```
--- All found errors ---
validation.invalid
--- All found errors ---
["Value '10.1093/llc/fqu057' does not match pattern '^10\\.\\d{4,9}(\\.\\d+)?/[A-Za-z0-9-\\._;\\(\\)\\[\\]\\\\\\\\:]+$'. Path: '/references/0/doi'", "Value '1' is not of type 'str'. Path: '/references/0/issue'"]
```
CFF version is 1.0.3.
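The complaint is easy to reproduce: the pattern's character class after the first `/` does not include `/` itself, so any DOI whose suffix contains another slash is rejected. A quick check in Python (cffconvert is a Python tool; the pattern below is copied verbatim from the error message above):

```python
import re

# Pattern from the cffconvert error message. Note that the character class
# after the first "/" allows letters, digits and some punctuation, but no "/".
PATTERN = r'^10\.\d{4,9}(\.\d+)?/[A-Za-z0-9-\._;\(\)\[\]\\\\:]+$'

print(bool(re.match(PATTERN, '10.5281/zenodo.2563138')))  # True: single-segment suffix
print(bool(re.match(PATTERN, '10.1093/llc/fqu057')))      # False: second "/" is rejected
```

A fix would presumably be as small as adding `/` to that character class.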
Test file (with fewer authors than in the actual file)
```
# YAML 1.2
# Metadata for citation of this software according to the CFF format (https://citation-file-format.github.io/)
cff-version: 1.0.3
message: If you use this software, please cite it using these metadata.
title: 'ANNIS'
doi: 10.5281/zenodo.2563138
authors:
- given-names: Thomas
family-names: Krause
affiliation: Humboldt-Universität zu Berlin
orcid: https://orcid.org/0000-0003-3731-2422
version: annis-3.6.0
date-released: 2019-02-12
repository-code: https://github.com/korpling/ANNIS
license: Apache-2.0
references:
- type: article
title: "ANNIS3: A new architecture for generic corpus query and visualization"
year: 2016
doi: 10.1093/llc/fqu057
authors:
- family-names: Krause
given-names: Thomas
orcid: https://orcid.org/0000-0003-3731-2422
affiliation: Humboldt-Universität zu Berlin
- family-names: Zeldes
given-names: Amir
affiliation: Georgetown University
journal: Digital Scholarship in the Humanities
volume: 31
issue: 1
issn: 2055-7671
```<issue_closed>
Status: Issue closed |
skipperbent/tinder-php-sdk | 276168533 | Title: login with authToken
Question:
username_0: Hi, I know someone managed to log in to Tinder with an authToken (e.g. <PASSWORD>).
I can collect the authToken with the help of this SDK; now how can I authenticate with the authToken?
Thanks in advance.
Answers:
username_0: Removing the authenticate function and setting the authToken from the database works.
Status: Issue closed
|
nullforces/awful | 313132731 | Title: News items should link to more formal sources; add follow-ups where available
Question:
username_0: This is a very good page. Since some entries point directly at individuals, it would be worth finding more formal news sources.
For example, Qingdao Qiushi College:
- [The abnormal death of a female college student](http://zqb.cyol.com/html/2012-11/21/nw.D110000zgqnb_20121121_2-07.htm)
- [Responsible officials at Qingdao Qiushi College held accountable](http://zqb.cyol.com/html/2012-11/23/nw.D110000zgqnb_20121123_3-07.htm)
Also, follow-ups to an incident matter, for example the Wang Pan case:
- [Wuhan University of Technology responds to graduate student's fatal fall: supervisor Wang Pan suspended from recruiting students](http://www.chinanews.com/sh/2018/04-08/8485482.shtml)
Answers:
username_1: @username_0 Great suggestion! Could you submit a Pull Request?
username_0: I'm not very good with GitHub yet; let me learn it first~
Status: Issue closed
|
vlaksuga/chinyoung | 484417882 | Title: Change the About page body text
Question:
username_0: Delete the existing content and replace the body with the text below.
---------------------------
After decades of dramatic technological progress, 'tile' is no longer the tile we once knew. Beyond interior finishing materials, tile keeps expanding its uses: as exterior cladding for buildings, as countertops for tables and kitchen furniture, and as a furniture finishing material.
Jinyoung Korea Co., Ltd. is a specialist importer and supplier of Italian tile, leading this trend with carefully selected products including COTTO D'ESTE of the PANARIA GROUP. Top quality combining universality with trends and the latest technology, rich designs and diverse sizes, a reasonable and wide price range, and smooth, stable supply are Jinyoung's pride. By strengthening our expertise in spatial design, we aim to offer comprehensive consulting and product supply to both professionals and consumers.
Today, any building can be completed with 'tile' alone.
Visit our headquarters showroom or the Nonhyeon branch and experience the new world of tile for yourself.
CUSTOMER consumer inquiries / WHOLESALE distribution inquiries / PROJECT project inquiries
Answers:
username_1: Will confirm and then close.
Status: Issue closed
|
Chlorie/ChloroBlog | 479272937 | Title: Infinite sequences, lazy evaluation, and C++ (1) – ChloroBlog
Question:
username_0: https://chlorie.github.io/ChloroBlog/posts/2019-08-10/range-1.html
The realm of an expert procrastinator
Answers:
username_1:
```c++
using Intellichloire;
```
username_2: Infinite loops are such trash (
Defining a struct inside a function is such trash (
Not properly encapsulating the data with a class is such trash (
Leaving sentinel() != iterator() undefined is such trash (
**Typing struct iterator as struct iota_t in the sixth code block is such trash (seriously)**
Trash hydrochloric acid (hot take)
username_0: lol
The local struct is mainly there to hide the name; otherwise I'd have to make a `namespace impl`, which wouldn't present well.
The reversed inequality doesn't need to be defined, since it exists purely to make range-`for` work. If I had to implement a proper random access or bidirectional iterator, I'd rather just drop dead.
Boilerplate galore.
My bad on the typo (
These two blog posts on lazy ranges really are trash; I didn't think them through while writing either. Consider them my dark history ((
username_2: We all study C++, so we all understand why it's written this badly; no need to explain it in such detail (whispers)
I originally only meant to point out that typo, but I had too much free time, so I complained about everything else as well (whispers more quietly) |
annkissam/rummage_phoenix | 471490290 | Title: Live View support
Question:
username_0: Live View support would be a pretty nice addition for 2.0
A `scrivener_html` fork does this pretty simply: https://github.com/montebrown/scrivener_html/commit/5d403292c6c4bf07f2bf5b9bba3723d6eac05c8a#diff-4dfa7434880f68cff9ac0e604f73904aR399
Thoughts? |
connectivedx/fuzzy-chainsaw | 186343546 | Title: Release name patterns?
Question:
username_0: In an effort to have a conversation recklessly early, I propose types of fabrics for our release names: https://en.wikipedia.org/wiki/List_of_fabrics
Burlap, Leather, and Nylon, oh my!
Answers:
username_0: https://github.com/username_0/fuzzy-agnomen exists now; we will be the masters of name generation.
Status: Issue closed
|
diffusionkinetics/open | 650629351 | Title: Perform IO in eval block
Question:
username_0: I would like an eval block to display the result after some IO.
I tried using
````haskell
```haskell top hide
-- Required to run and display IO
instance (Show a) => AskInliterate (IO a) where
askInliterate = answerWith (show . unsafePerformIO)
```
````
But no cell is rendered after
````
```haskell top
someIO :: IO String
someIO = return "Hello"
```
```haskell eval
someIO
```
````
Answers:
username_0: Well, I didn't see the small `=> "Hello"` after the code. How can I turn it into a cell below?
Status: Issue closed
username_0: Got it with
````
```haskell top hide
-- Required to run and display IO
instance (Show a) => AskInliterate (IO a) where
askInliterate q cts io = do
putStrLn "<div class=\"row\">"
putStrLn "<div class=\"col-md-12\">"
putStr "<pre class=\"haskell\"><code>"
putStrLn $ q
putStrLn "</code></pre>"
putStrLn "</div>"
putStrLn "</div>"
putStrLn "<div class=\"row\">"
putStrLn "<div class=\"col-md-12\">"
putStrLn . show . unsafePerformIO $ io
putStrLn "</div>"
putStrLn "</div>"
```
```` |
statisticalbiotechnology/triqler | 418585660 | Title: What other search scores we can use for Triqler?
Question:
username_0: Hi Matt,
For the searchScore column in the input file, apart from the Percolator score, what other search scores can we use here? For example, if we use MaxQuant, can I use the Score column from the msms.txt file?
Best,
Weixian
Answers:
username_1: Hi Weixian,
In theory, you should be able to use any search engine score; just make sure that higher scores mean more confident identifications (or multiply by -1 otherwise). We have, however, not tested this with scores other than Percolator's, but I would be very interested in your results.
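Concretely, that sign convention can be sketched like this (the "Score" column name and tab-delimited layout are assumptions modelled on MaxQuant's msms.txt; this is not part of Triqler itself):

```python
import csv

def extract_search_scores(path, column="Score", higher_is_better=True):
    """Yield searchScore values for Triqler, flipping the sign for engines
    where lower scores mean more confident identifications (e.g. e-values)."""
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            score = float(row[column])
            yield score if higher_is_better else -score
```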
username_0: Hi Matthew,
Thanks for your reply; I'll let you know once I have some results. Since my first glance at your paper, I noticed that you do feature extraction at the very beginning and map to PSMs afterwards, which reminded me of MaxQuant. But it should also work for Skyline-based quantitation pipelines, right? As long as we build the spectral library with non-filtered PSMs and then extract the PSM-intensity pairs that passed the mProphet filter. Do you have any tests for this kind of pipeline?
username_1: I'm not too familiar with the Skyline pipeline so I can't help you too much with that at the moment. However, if you think this would really be a good idea, we can discuss it in some more detail by e-mail.
username_0: I failed to find the full decoy search results; I only found very few decoys from MaxQuant, even when searching with a concatenated decoy database. Do you have any idea whether Andromeda provides intermediate results?
username_1: Sorry for the very late reply. As far as I could tell there are no (readable) intermediate files from Andromeda that could provide enough decoy information. Unfortunately, neither the msms.txt nor the allPeptides.txt provided the needed information. I have not checked yet if one could change some search parameters to obtain this, e.g. setting the FDR threshold to 1.0.
username_1: As it turns out, there was a file called `evidence.txt` that seemed to contain sufficient information to run Triqler. I have added a converter for this file to the Triqler package: https://github.com/statisticalbiotechnology/triqler/wiki/Converters
We recommend setting the MaxQuant FDR threshold to 100% for best performance, though it usually also works with the default threshold of 1% FDR.
Status: Issue closed
|
barryvdh/laravel-httpcache | 501875044 | Title: Laravel 5.8 and Laravel 6.0 esi tag issue
Question:
username_0: Whenever I try to include an esi tag block like this
`<input type="hidden" name="_token" value="<esi:include src="{{ url('csrf') }}" />">`
or Like this
`<esi:include src="{{ url('csrf') }}" />`
it gives me a fatal error of **Maximum function nesting level of '256' reached, aborting!**
Any help, please?
Answers:
username_0: Please Answer |
clearlinux/distribution | 1174283094 | Title: Package request for CoreFreq
Question:
username_0: Official package name: **_CoreFreq_**
License (must be an OSI approved Open Source license): GPL2
Download URL of latest release: [`1.90.1`](https://github.com/username_0/CoreFreq/releases/tag/1.90.1)
Latest release date (must be recent): Mar 5, 2022
Description:
Hello,
May you please make [_CoreFreq_](https://github.com/username_0/CoreFreq) available through the `swupd` command ?
CoreFreq is CPU monitoring software with BIOS-like functionality.
Intel Processors from Core2 up to the latest Alder Lake architecture are supported.
For your information I'm providing a [wiki](https://github.com/username_0/CoreFreq/wiki/CoreFreq-for-Clear-Linux) to build it from source in Clear Linux
**Thank you**
CyrIng
Answers:
username_0: * Screenshot for your information

username_1: Hmm this seems to include a kernel module that's not upstream... we really really don't like doing those
(and it's surprising, since all this information is also just available from userspace .. for example the turbostat and powertop tools manage pretty much the same information without kernel module)
username_1: (I understand that likely the kernel module lets you have somewhat higher accuracy.. but from a distro integration perspective... if it can run without the module using the normal userspace MSR read stuff, and has the kernel module as an option to get improved accuracy... then we could ship the userspace agent in the distro.. and if the user really wants the improved accuracy, he/she can add the kernel module themselves)
username_0: Thank you for your answer.
As you have noticed, MSRs are no longer writable from userspace on recent kernels: one now needs a kernel boot parameter.
PCI CSRs, SMU & chipset, memory controller & timings, and various other registers are not available from userspace.
Think about the major manufacturers releasing closed-source kernel modules shipped with their GPU boards: their blobs are blindly accepted into most repos.
On the contrary, CoreFreq offers users a unique opportunity to investigate (and maybe contribute to) low-level software before giving it a shot.
I respect your point of view
Best regards,
Cyril
Status: Issue closed
username_0: Thank you very much for any future help.
I chose Clear Linux because, as far as I know, it is provided by Intel, and I am still hoping for contributions, advice, or help from their experts.
Linux users are the winners in all this: _surprisingly_, their feature requests push _CoreFreq_ even further every day. |
cypress-io/cypress | 699200041 | Title: CYPRESS_PROJECT_ID seems to be a reserved environment variable
Question:
username_0:
### Current behavior:
I expected that, in the same way I can pass `CYPRESS_FOO`, `CYPRESS_PROJECT_ID` would also automatically be added to Cypress's environment variables. It looks like this is a reserved env var, which is not mentioned in the docs. There is only a notice about [CYPRESS_INTERNAL_ENV being reserved](https://docs.cypress.io/guides/guides/environment-variables.html#Option-3-CYPRESS).
So I guess this is just a documentation issue. However, `PROJECT_ID` is commonly used, and therefore it might be better to use a different name, or even better, another "namespace" for such reserved env vars.
```
CYPRESS_FOO=bar CYPRESS_PROJECT_ID=123
```
```js
Cypress.env('FOO') // => bar
Cypress.env('CYPRESS_PROJECT_ID') // => undefined
```
### Desired behavior:
```
CYPRESS_FOO=bar CYPRESS_PROJECT_ID=123
```
```js
Cypress.env('FOO') // => bar
Cypress.env('CYPRESS_PROJECT_ID') // => 123
```
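A toy model of the prefix handling shows the collision (the reserved set below is illustrative only; it is not Cypress's actual implementation):

```javascript
// Illustrative sketch: variables with a reserved name are consumed as
// configuration instead of ending up in Cypress.env().
const RESERVED = new Set(['CYPRESS_INTERNAL_ENV', 'CYPRESS_PROJECT_ID']);

function collectEnv(environ) {
  const out = {};
  for (const [key, value] of Object.entries(environ)) {
    if (key.startsWith('CYPRESS_') && !RESERVED.has(key)) {
      out[key.slice('CYPRESS_'.length)] = value;
    }
  }
  return out;
}

console.log(collectEnv({ CYPRESS_FOO: 'bar', CYPRESS_PROJECT_ID: '123' }));
// { FOO: 'bar' }
```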
### Test code to reproduce
### Versions
Cypress 5.1.0
Node v12.16.1
Answers:
username_1: I did a little digging. Here is what I found -
There is a reference to `CYPRESS_PROJECT_ID` in the documentation [here](https://github.com/cypress-io/cypress-documentation/blame/develop/source/guides/dashboard/projects.md#L73)
Adding it to the "reserved" section on the [page mentioned above](https://docs.cypress.io/guides/guides/environment-variables.html#Option-3-CYPRESS) might be even more confusing.
Maybe adding a warning in the CLI, similar to the one for CYPRESS_INTERNAL_ENV [here](https://github.com/cypress-io/cypress/blob/develop/cli/lib/cli.js#L355), would help.
I like @username_0's idea on using another namespace for "internal" Cypress variables. I think it would be helpful |
koenbollen/jl | 727865689 | Title: Elastic 'common schema' support?
Question:
username_0: Elastic.co has a 'common schema' that they encourage, and it'd be nice if the format was understood by jl.
Spec: https://www.elastic.co/guide/en/ecs/current/index.html
Sample line:
```json
{
"service": { "name": "gunicorn" },
"@timestamp": "2020-10-23T03:35:49.324754+00:00",
"message": "10.244.1.180 - - [23/Oct/2020:03:35:49 +0000] \"GET /users/users/notices/ HTTP/1.1\" 200 4942 \"http://localhost:4200/\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.80 Safari/537.36\"",
"time": 1603424149.3247535,
"log": {
"level": "INFO",
"logger": "gunicorn.access",
"origin": {
"file": { "line": 570, "name": "/app/ticketing/utils/log.py" },
"function": "access"
}
},
"process": {
"pid": 17,
"name": "MainProcess",
"thread": { "name": "MainThread", "id": 140056871733056 }
},
"request": {
"scheme": "https",
"path": "/users/users/notices/",
"method": "GET",
"customer": "test",
"view": {
"args": [],
"app": "users",
"namespace": "users",
"name": "users:user-notices"
}
},
"customer": "test",
"event": { "duration": 78518000 },
"http": {
"request": { "method": "GET", "referrer": "http://localhost:4200/" },
"response": { "body": { "bytes": 4942 }, "status_code": "200" },
"version": "1.1"
},
"related": { "ip": ["10.244.1.180"] },
"source": { "address": "10.244.1.180" },
"url": { "path": "/users/users/notices/", "query": "" },
"user_agent": {
"original": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.80 Safari/537.36"
}
}
```
Answers:
username_1: Hey Ash,
Thank you for the suggestion. I agree: it would be nice if `jl` supported this format.
I'll try to make some time soon.
username_1: I've created the following pull request: https://github.com/username_1/jl/pull/20
This adds better support for nested fields and I've added some common fields of ECS in there.
It should now pickup the `log.level` correctly and it will allow you to select nested fields:
```bash
$ cat your-ecs.log | jl -f request.path
```
I did not touch the contents of the `"message"` field, since that would be too specific to these kinds of gunicorn logs.
@username_0 Could you try this branch on your logs? Let me know if you need any help with that or expect any different behaviour.
username_0: Looks to be working great. (And yeah, don't wanna mess with 'message', it's way too specific.) There's a screenshot on the PR of it in action. Thank you!
username_1: I've merged the pull request, released jl, and updated my homebrew tap (https://github.com/username_1/homebrew-public/commit/71c513286d95fbbecb282f076cd4233442b4e5ef).
Closing this issue now.
Thanks again for the suggestion 🎉
Status: Issue closed
|
pingcap/tidb | 623020092 | Title: Slow log missing cost time/backoff time in BatchGetChecker/IndexValueGet/PointGet
Question:
username_0: ## Bug Report
Please answer these questions before submitting your issue. Thanks!
### 1. Minimal reproduce step (Required)
### 2. What did you expect to see? (Required)
### 3. What did you see instead (Required)
### 4. Affected version (Required)
### 5. Root Cause Analysis
Status: Issue closed
Answers:
username_0: https://github.com/pingcap/tidb/pull/17591 can record the total backoff now, no matter how deep the backoff is
twpayne/chezmoi | 571052834 | Title: homebrew assets are missing completion files
Question:
username_0: ## Describe the bug
The homebrew assets are missing completion files
```
==> Upgrading username_1/taps/chezmoi
==> Downloading https://github.com/username_1/chezmoi/releases/download/v1.7.14/chezmoi_1.7.14_linux_amd64.tar.gz
==> Downloading from https://github-production-release-asset-2e65be.s3.amazonaws.com/157245200/2e1eb080-5807-11ea-8964-2951fe2b818c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<KEY>%2F20200226
######################################################################## 100.0%
Error: An exception occurred within a child process:
Errno::ENOENT: No such file or directory - assets/completions/chezmoi-completion.bash
```
## To reproduce
I also just downloaded the tarball and they're not there:
```
$ tar tvf chezmoi_1.7.14_linux_amd64.tar.gz
-rw-r--r-- runner/docker 1076 2020-02-25 10:35 LICENSE
-rw-r--r-- runner/docker 8413 2020-02-25 10:35 README.md
-rw-r--r-- runner/docker 507 2020-02-25 10:35 docs/CHANGES.md
-rw-r--r-- runner/docker 6277 2020-02-25 10:35 docs/CONTRIBUTING.md
-rw-r--r-- runner/docker 10430 2020-02-25 10:35 docs/FAQ.md
-rw-r--r-- runner/docker 26782 2020-02-25 10:35 docs/HOWTO.md
-rw-r--r-- runner/docker 4539 2020-02-25 10:35 docs/INSTALL.md
-rw-r--r-- runner/docker 2534 2020-02-25 10:35 docs/QUICKSTART.md
-rw-r--r-- runner/docker 36298 2020-02-25 10:35 docs/REFERENCE.md
-rwxr-xr-x runner/docker 20745888 2020-02-25 10:36 chezmoi
```
## Expected behavior
homebrew should be able to upgrade chezmoi and install the files it expects
Status: Issue closed
Answers:
username_1: Thanks very much for reporting this :) 1.7.15 has been released with the fix.
username_0: Thanks! :tada: |
scylladb/scylla-doc-issues | 622294459 | Title: Issue in page Kafka Sink Connector Quickstart
Question:
username_0: I would like to report an issue in page http://docs.scylladb.com/using-scylla/integrations/kafka-connector
### Problem and Solution
There's some small typos which have an easy fix.
In the standalone mode JSON, the configuration file has a typo as well as the sentence.

There's no **scylladb.class** in Kafka, so the correct way should be **connect.class**.
And then the producer has a wrong syntax.
`kafka-console-producer --broker-list localhost:9092 --topic sample-topic
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"department"},"payload":{"id":10,"name":"<NAME>","department":"engineering"}}`
Instead it should be:
`kafka-console-producer --broker-list localhost:9092 --topic sample-topic
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"department"}]},"payload":{"id":1,"name":"<NAME>","department":"stupid"}}`
Answers:
username_1: @username_3 can you please validate the above ?
username_2: @username_3 - ping
username_1: @username_3 can you please advise
username_2: no movement on the issue - closing
Status: Issue closed
username_3: not sure how I missed this -
@username_2 it is a bug/typo in the documentation - it needs to be updated, the config is ```connect.class```
username_2: @username_3 - thanks!
username_2: @username_3 - what about the other issue described above?
username_3: @username_2 The JSON is not structured correctly; the fields and payload need to be separated appropriately. Indeed, the structure for that would be
```
kafka-console-producer --broker-list localhost:9092 --topic sample-topic {"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"name"},{"type":"string","optional":true,"field":"department"}]},"payload":{"id":1,"name":"<NAME>","department":"engineering"}}
```
It is missing a `]}`.
Status: Issue closed
|
jrobie8385/codecademy-ColmarAcademy | 335633739 | Title: Great job adding controls on your video!
Question:
username_0: Seen here: https://github.com/jrobie8385/codecademy-ColmarAcademy/blob/1e58959a10d2c0a259fe772de6fbdca09d851081/index.html#L133-L135
I like how you chose to add these here. If you want to challenge yourself a little, I recommend incorporating autoplay and looping on the video! 👍 |
FriendsOfREDAXO/uploader | 695418486 | Title: Rotate image when exif orientation is set
Question:
username_0: Hallo Leute.
Ich schreibe einfach mal auf Deutsch, da ich das Problem dann besser beschreiben kann.
Ich habe in 2017 das Problem unter Redaxo 4 gehabt, dass die Bildorientierung bei einigen Bildern nicht korrekt war:
https://www.redaxo.org/forum/allgemeines-r4-f27/medienpool-zeigt-die-bilder-in-der-falschen-ausric-t21978.html
Dieses Problem habe ich nun auch. Die Bilder werden beim Upload nicht richtig orientiert. In den Metadaten steht die Orientation aber drin.
Leider funktioniert der Codesnippet aus dem Link leider nicht, wenn das Uploader AddOn aktiv ist. (Ohne gehts, dass habe ich getestet)
In der Tat wäre es aber cool, wenn das UploaderAddon dieses Codesnippet bzw. die Funktion der Orientierung des Bildes implementieren würde und selbst schon die Bilder drehen würde.
Wo könnte man denn das im Uploader Addon unterbringen? Ggfs. gibts ja auch clientseitig eine gute Stelle wo man die Orientierung unterbringen kann.
Answers:
username_1: The uploader does not strip EXIF data, and it does not rotate images either. I suggest you write a media manager effect that rotates the image when needed.
Status: Issue closed
|
ISA-tools/stato | 729987808 | Title: annotating a data science workflow with STATO
Question:
username_0: Hello,
Thanks for creating a great resource!
I am wondering whether STATO could be used to annotate data science workflows including a set of data objects and their transformations. To take a simple case, starting from gene expression samples:
1) Merge sample vectors into a matrix.
2) Perform some normalizations like column standardization.
3) Compute pairwise sample correlations.
4) Compute significance (FDR) and apply threshold.
5) A set of gene pairs associated via gene co-expression.
I'm having a little trouble finding some of the relevant terms, but I also want to ask this question more broadly: is workflow annotation a use case for STATO?
best,
marcin
Answers:
username_1: @username_0 thx for the kind words, much appreciated
STATO can definitely be used to do that. In fact, we have used it with collaborators to annotate Galaxy workflows.
We mainly used STATO to identify the statistical tests being used, to have users report the inputs (e.g. an alpha value), and then to annotate the resulting data matrices generated by these analytical workflows.
STATO can also be used to associate a particular data output / workflow with a suitable graphical rendering (e.g. a Manhattan plot for GWAS data).
If terms are needed, we can set up a robot template for that.
Happy to follow up the discussion and elaborate on the use case.
all the best. |
adventistchurch/alps | 219544060 | Title: News related templates have the same content
Question:
username_0: All 5 of the news related templates have the same content in them:
- https://alps.adventist.io/public/?p=pages-news
- https://alps.adventist.io/public/?p=pages-news-article-minimal
- https://alps.adventist.io/public/?p=pages-news-article-rtl
- https://alps.adventist.io/public/?p=pages-news-article
- https://alps.adventist.io/public/?p=pages-news-channel
Probably an error in the .mustache file, but this needs to be resolved so we can see the differences.
Status: Issue closed
|
dbeaver/dbeaver | 402536097 | Title: Export / Data Transfer to new Vertica table not working.
Question:
username_0: <!--
Thank you for reporting an issue.
*IMPORTANT* - *before* creating a new issue please look around:
- DBeaver documentation: https://github.com/dbeaver/dbeaver/wiki
and
- open issues in Github tracker: https://github.com/dbeaver/dbeaver/issues
If you cannot find a similar problem, then create a new issue. Short tips about new issues can be found here: https://github.com/dbeaver/dbeaver/wiki/Posting-issues
Please, do not create issue duplicates. If you find the same or similar issue, just add a comment or vote for this feature. It helps us to track the most popular requests and fix them faster.
Please fill in as much of the template as possible.
-->
#### System information:
Dbeaver CE 5.3.3
#### Connection specification:
Vertica 9.0
#### Describe the problem you're observing:
Export / data transfer to Vertica fails when creating a new target table with the 'create' mapping.
The new table is actually created, but I think the verification query in the metadata fails.
Looking at the query history, TABLE_NAME is not properly set:
select * from (select '..' as catalog_name, schema_name, table_name, table_type, remarks from v_catalog.all_tables order by table_type, catalog_name, schema_name, table_name) as vmd where
TABLE_NAME ilike '%' escape E'\\' and SCHEMA_NAME ilike '..' escape E'\\'
#### Steps to reproduce, if exist:
#### Include any warning/errors/backtraces from the logs
Error:
Can't start data transfer
Reason:
New table onus_offus not found in container " .. "
Answers:
username_1: I'm getting the same behavior in a different environment. I'm using DBeaver CE version 6.0.3.201904211926
with PostgreSQL 9.6.11
Error:
Can't start data transfer
Reason:
New table my_table_name not found in container "RDS us-west-1 - my_schema"
Despite the error, it successfully creates the table with the correct columns. So my workaround is to go back in the wizard, change the mapping from "create" to "existing", and then continue to the end of the wizard again.
In case the issue is linked to my settings, here's what I'm using for the CSV source and Database target:
Source settings:
Table settings:
Open new connection(s): No
Extract type: SINGLE_QUERY
Select row count: No
Selected rows only: No
Selected columns only: No
Target settings:
Database settings:
Open new connection(s): No
Use transactions: No
Truncate before load: No
(But I have tried variations on these settings, and I haven't found a variation that works correctly for me. NOTE: I have had success in the past, so I suspect that this is a regression somewhere.)
Status: Issue closed
username_2: cannot reproduce it for now.
If it is still an issue, feel free to reopen the ticket.
username_1: Confirmed: I cannot reproduce the issue either. It works nicely now! |
dherault/serverless-offline | 448879447 | Title: Serverless: Testing Strategies Spikes
Question:
username_0: 1. Write a framework over serverless to do integration testing by calling sls invoke local
2. Check solutions like running lambda locally with framework like serverless-offline /
3. Check framework which will take existing swagger file and run test cases automatically.<issue_closed>
Status: Issue closed |
FlowCrypt/flowcrypt-android | 469061794 | Title: There was a syncing problem
Question:
username_0: I opened the app after some time of inactivity.
Then I opened a message.

Retrying with the button didn't help. Had to kill the app.
Answers:
username_1: @username_0 Unfortunately, I can't reproduce this issue. If you have more details, add them, please. I think it's a rare bug.
username_0: I'll see if I can encounter it again
username_0: A customer just had this problem too. She did not know how to resolve it (she didn't know or think to kill the app).
What does the retry button do today?
Can you make the retry button for this screen completely disconnect from IMAP and reconnect? Maybe also refetch the token? That could give users a chance to fix it.
username_1: I'll try to do that. I think it'll help us.
username_1: @username_0
The next time you see this error, please note the following info (it is important for debugging):
- Is any connection available? (WiFi, 3G)
- Is access to the Internet available? (you must be able to get info from different sites, for example, www.google.com), because we can have a situation where there is a WiFi connection but no internet access
This error means that we receive [MailConnectException](https://javaee.github.io/javamail/docs/api/com/sun/mail/util/MailConnectException.html)
username_0: I suspect maybe it's related to an authentication failure, or a dropped tcp connection with some state left over. Fully renewing auth token + reconnecting should really solve it. I remember my internet access was fine, almost certainly on Wifi.
username_1: Got it. Thanks! I've added some changes. I hope they will resolve this issue.
Status: Issue closed
username_2: I opened the app after some time of inactivity.
Then I opened a message.

Retrying with the button didn't help. Had to kill the app.
username_2: Screenshot:

username_2: ref: https://mail.google.com/mail/u/<EMAIL>/#inbox/FMfcgxwKjdvMVZNBWnswFZVfXrhgZjnB
username_2: ref: https://mail.google.com/mail/u/<EMAIL>/#inbox/FMfcgxwKjfDHRpGgRcxtKSzRpfmNXbKr
Hopefully this will improve once https://github.com/FlowCrypt/flowcrypt-android/issues/993 is finished.
Is there anything these users can do in the meantime that is likely to improve the situation for them?
username_1: Usually, it means that the app has lost its connection to the remote server. But it seems JavaMail cannot always handle reconnection. Unfortunately, for now, a user has to kill the app (from `the settings - app settings - close` or from the `recent apps` window). The current implementation doesn't allow me to manage such a situation properly. But after #993 I'll be able to do that.
@username_2 How often do we receive such feedback?
username_2: I've received four in the last week, but prior to that much less often. And one of those users from this week I suspect was having a somewhat different problem (emails taking a long time to sync, but not getting this error message), so perhaps that doesn't count towards this specific issue.
I'll let users know that killing the app may temporarily abate the problem. Sorry to bug, and thanks for the tip!
username_2: ref: https://mail.google.com/mail/u/<EMAIL>/#inbox/FMfcgxwKjdzpWtDQSDTfGFpWHkKFdCJF
username_1: @username_2 Please tell me, do you still receive such feedback? It should have been fixed by #993
username_2: I haven't received any complaints about this since December 4th, so I think we can safely assume that your fix worked. I'll reopen this if any new support emails come in about this.
Status: Issue closed
|
ChristianRiesen/otp | 414149898 | Title: Composer now requires random_compat v99.99.99
Question:
username_0: Hey @username_1
I haven't yet looked into this for long enough to understand why, but the change in #32 appears to end up requiring random_compat v99.99.99.
I guess it's something to do with this?
https://github.com/paragonie/random_compat#version-99999
With that in mind, is this behaviour intended?
Chris
Answers:
username_0: It's ok, I think I should learn to read properly. That's exactly what the change was for. Never mind!
Status: Issue closed
username_1: No problem @username_0
The next version will not contain that anymore anyways. |
Azure/azure-xplat-cli | 114233897 | Title: ARM: `azure network traffic-manager profile endpoint create` throws Error
Question:
username_0: ```
azure network traffic-manager profile endpoint create test-roman-group MyTM MyEndpoint eastus -y externalEndpoint -e myendptdns.azure.com -u Enabled -w 100 -p 322
info: Executing command network traffic-manager profile endpoint create
+ Looking up the Traffic Manager profile "MyTM"
+ Updating Traffic Manager "MyTM"
error: Error
info: Error information has been recorded to C:\Users\roman.gromov\.azure\azure.err
error: network traffic-manager profile endpoint create command failed
```
Answers:
username_0: Will fix,
@username_1 please verify fix in our repo, `arm-fixes` branch
username_1: It is working fine now.
<pre>
<code>
azure network traffic-manager profile endpoint create armresgrpdnszone xplatTestTMPE xplatTestTMPEndPoint eastus -y externalEndpoint -e xplatTMPEndptdns1.azure.com -u Enabled -w 100 -p 200
info: Executing command network traffic-manager profile endpoint create
+ Looking up the Traffic Manager profile "xplatTestTMPE"
+ Updating Traffic Manager "xplatTestTMPE"
+ Looking up the Traffic Manager profile "xplatTestTMPE"
data: Id : /subscriptions/bfb5e0bf-124b-4d0c-9352-7c0a9f4d9948/resourceGroups/armresgrpdnszone/providers/Microsoft.Network/traff
icManagerProfiles/xplatTestTMPE
data: Name : xplatTestTMPE
data: Type : Microsoft.Network/trafficManagerProfiles
data: Location : global
data: Status : Enabled
data: Routing method : Weighted
data: DNS name : xplattmpendptdns1
data: Time to live : 300
data: Monitoring protoco : HTTP
data: Monitoring path : /index.html
data: Monitoring port : 80
data: Endpoints:
data: Name Location Target Status Weight Priority Type
data: -------------------- -------- --------------------------- ------- ------ -------- ----------------------------------------------------------
data: xplatTestTMPEndPoint East US xplattmpendptdns1.azure.com Enabled 100 200 Microsoft.Network/trafficManagerProfiles/externalEndpoints
info: network traffic-manager profile endpoint create command OK
</code>
</pre>
Status: Issue closed
|
massenergize/frontend-admin | 622156953 | Title: Put Cadmin Instruction Videos on Cadmin Interface
Question:
username_0: The cadmin instruction videos (currently found in the MEServices shared drive --> Training --> Cadmin instruction videos) should live in the information (?) icon at the top left of the interface, and also on the dashboard at the very bottom as the final item before "Signout"


Answers:
username_1: Could you elaborate on which content will be going where, and how it will be displayed? @username_0 The following is how I'm interpreting it at the moment, but please let me know if any part of it is inaccurate:
When you click the (?) icon, it opens a pop-up menu that contains the instructions video corresponding to the specific page you're on (when applicable). As for the menu item above "signout", that would bring you to a page that contains all of the videos.
username_0: Hi @username_1 yes, exactly how you are describing it is how I meant it. For the (?) it would be great to have each instruction video apply to the corresponding page and to have an instructions video "menu" (or Table of Contents) right above the "signout" button. Let me know if there is anything that is still unclear!
username_1: https://stackoverflow.com/questions/43880756/streaming-a-video-from-google-drive-using-html5-video-tag/52397246#52397246
https://stackoverflow.com/questions/40951504/how-to-embed-videos-from-google-drive-to-webpage
username_1: I've started to implement this, and I noticed that there are multiple instruction videos for a given piece of information (e.g. creating/editing/copying an action). The videos also span multiple pages of the portal. As a result, it's hard to find a one-to-one correspondence between the (?) icon on a given page and a single instruction video.
There are a few ways to address this:
- when on _any_ of the (for example) action-related pages, the (?) icon will open one dialogue box which contains _all_ of the action-related videos. You could click through them, in a similar way to clicking through the different sections of the current placeholder dialogue box that it opens.
- only create the all-videos page and make the (?) icon link to the all-videos page, automatically scrolled to the relevant section
- only create the all-videos page and do away with the (?) icon
@username_0 @kaatvds @ellentohn thoughts?
username_1: The plan that Ellen and I came up with:
- same popup on every page, get to it from a button that lives on the top
- two bullets in pop-up: written instructions drive link and video table of contents drive link
- rename “guide” to “need help”, maybe change icon, maybe remove tooltip
Status: Issue closed
|
Darinth/PetCommandOh | 957093540 | Title: Actions to set pet behavior (Passive, defensive, aggressive)
Question:
username_0: Unsure if this is doable within the level of effort I'd like to put into this, but at least look into how AI controls different behaviors and see if it's easy to manipulate some of this to control how pets will react. |
dgobbi/vtk-dicom | 703698300 | Title: Dealing with RescaleSlope in MRI
Question:
username_0: The modality LUT is not part of the MR IOD, but some manufacturers (e.g. Philips) include RescaleSlope and RescaleIntercept in quantitative MR images. Their purpose is not to add a modality LUT, but rather to achieve something similar to real-world value mapping. In other words, the RescaleSlope and RescaleIntercept in these images is not intended to be part of the display pipeline, and the VOI LUT applies directly to the stored values. This is different from CT, where the VOI LUT applies to the modality values (i.e. after rescale).
More info can be found by searching comp.protocols.dicom.
The issue here is that vtkDICOMReader is intended to produce modality values. This means that it should apply Rescale to PET and CT but not to MR (since the stored values and modality values are the same for MR). For MR, rescaling should be done separately by vtkDICOMApplyRescale (which uses RWVM if present, otherwise it uses RescaleSlope and RescaleIntercept).
Currently, it is the responsibility of the user to call `reader->AutoRescaleOff()` for MR images with RescaleSlope. Following the standard, rescaling should always be off for MR (and all other modalities where it isn't in the IOD).
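For reference, the linear rescale being discussed is the standard DICOM mapping `modality value = RescaleSlope * stored value + RescaleIntercept`. A minimal sketch in plain Python, for illustration only — this is not the vtk-dicom implementation, and the sample slope/intercept values are made up:

```python
def apply_rescale(stored_values, slope, intercept):
    """Standard DICOM modality rescale: m = RescaleSlope * SV + RescaleIntercept."""
    return [slope * sv + intercept for sv in stored_values]

# Typical CT case: an intercept of -1024 turns unsigned stored values into
# Hounsfield units. For MR, per the discussion above, this step should be
# skipped by the reader and applied separately (if at all).
hounsfield = apply_rescale([0, 1024, 2048], slope=1.0, intercept=-1024.0)
print(hounsfield)  # [-1024.0, 0.0, 1024.0]
```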
Answers:
username_0: There is an issue here for PET, as well. In PET, the VOI LUT applies to stored values (though this is reported to be inconsistent). However, rescaling must be done with PET since each slice generally has different rescale values (and a different VOI LUT).
For PET, it seems that the display pipeline should ignore the VOI LUT unless it can be made consistent across the image series (e.g. by rescaling the LUT for each image). This is currently beyond the scope of vtk-dicom. |
facebook/metro | 432136653 | Title: unable to resolve path outside root directory
Question:
username_0: <!-- *Before creating an issue please make sure you are using the latest version of Metro, try re-installing your node_modules folder and run Metro once with `--reset-cache` to see if that fixes the problem you are experiencing.* -->
**Do you want to request a *feature* or report a *bug*?**
Report a bug
**What is the current behavior?**
unable to resolve path above root directory
**If the current behavior is a bug, please provide the steps to reproduce and a minimal repository on GitHub that we can `yarn install` and `yarn test`.**
create a react native project (all folders inside a native directory). Create a new `src` directory sibling to `native`. Create any JS file within the `src` folder.
Now, inside `App.js` file inside your native folder try accessing the JS file from `src` dir --
```
import file from '../src/file.js'
```
error seen:
```
error: bundling failed: Error: Unable to resolve module `../src/help/me` from `/Users/abanik/project/react-native-starter/native/App.js`: The module `../src/help/me` could not be found from `/Users/abanik/project/react-native-starter/native/App.js`. Indeed, none of these files exist:
* `/Users/abanik/project/react-native-starter/src/help/me(.native||.android.js|.native.js|.js|.android.json|.native.json|.json|.android.ts|.native.ts|.ts|.android.tsx|.native.tsx|.tsx)`
* `/Users/abanik/project/react-native-starter/src/help/me/index(.native||.android.js|.native.js|.js|.android.json|.native.json|.json|.android.ts|.native.ts|.ts|.android.tsx|.native.tsx|.tsx)`
at ModuleResolver.resolveDependency (/Users/abanik/project/react-native-starter/native/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js:163:15)
at ResolutionRequest.resolveDependency (/Users/abanik/project/react-native-starter/native/node_modules/metro/src/node-haste/DependencyGraph/ResolutionRequest.js:52:18)
at DependencyGraph.resolveDependency (/Users/abanik/project/react-native-starter/native/node_modules/metro/src/node-haste/DependencyGraph.js:283:16)
at Object.resolve (/Users/abanik/project/react-native-starter/native/node_modules/metro/src/lib/transformHelpers.js:261:42)
at dependencies.map.result (/Users/abanik/project/react-native-starter/native/node_modules/metro/src/DeltaBundler/traverseDependencies.js:399:31)
at Array.map (<anonymous>)
at resolveDependencies (/Users/abanik/project/react-native-starter/native/node_modules/metro/src/DeltaBundler/traverseDependencies.js:396:18)
at /Users/abanik/project/react-native-starter/native/node_modules/metro/src/DeltaBundler/traverseDependencies.js:269:33
at Generator.next (<anonymous>)
at asyncGeneratorStep (/Users/abanik/project/react-native-starter/native/node_modules/metro/src/DeltaBundler/traverseDependencies.js:87:24)
```
**What is the expected behavior?**
able to resolve path.
**Please provide your exact Metro configuration and mention your Metro, node, yarn/npm version and operating system.**
`"metro-react-native-babel-preset": "^0.53.0",`
```
React Native Environment Info:
System:
OS: macOS 10.14.3
CPU: (8) x64 Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz
Memory: 191.12 MB / 16.00 GB
Shell: 5.3 - /bin/zsh
Binaries:
Node: 8.11.4 - ~/.nvm/versions/node/v8.11.4/bin/node
Yarn: 1.10.1 - /usr/local/bin/yarn
npm: 5.6.0 - ~/.nvm/versions/node/v8.11.4/bin/npm
Watchman: 4.9.0 - /usr/local/bin/watchman
SDKs:
iOS SDK:
Platforms: iOS 12.2, macOS 10.14, tvOS 12.2, watchOS 5.2
IDEs:
Android Studio: 3.3 AI-182.5107.16.33.5314842
Xcode: 10.2/10E125 - /usr/bin/xcodebuild
npmPackages:
react: 16.8.3 => 16.8.3
react-native: 0.59.1 => 0.59.1
npmGlobalPackages:
react-native-cli: 2.0.1
```
Answers:
username_1: Same issue for me.
I can't show an image that lives outside the projectRoot, although I can require it and get an id.
Hope this gets fixed; it wasn't an issue in the old version.
username_2: I have the same issue with React Native 0.60. What's really strange is I updated my `metro.config.js` like this to watch my sibling folder...
```js
const path = require('path')
/**
* Metro configuration for React Native
* https://github.com/facebook/react-native
*
* @format
*/
module.exports = {
transformer: {
getTransformOptions: async () => ({
transform: {
experimentalImportSupport: false,
inlineRequires: false,
},
}),
},
resolver: {
extraNodeModules: {
"react": path.resolve(__dirname, "node_modules/react"),
"react-native": path.resolve(__dirname, "node_modules/react-native")
}
},
projectRoot: path.resolve(__dirname),
watchFolders: [
path.resolve(__dirname, "../src")
]
}
```
...and now when I run `react-native start` from the root of my project, it _says_ it's looking in my folder for JS files:
```
Looking for JS files in
/Users/rob/learn/react-native/AwesomeProject
/Users/rob/learn/react-native/src
```
But when I try to import my module in `App.js` like so...
```js
import React, { Component } from 'react';
import { Text, View } from 'react-native';
import Pizza from 'pizza' // This is a pizza.js file sitting in ../src
export default class HelloWorldApp extends Component {
render() {
return (
<View style={{ flex: 1, justifyContent: "center", alignItems: "center" }}>
<Text>Hello, world!!</Text>
</View>
)
}
}
```
[Truncated]
This might be related to https://github.com/facebook/react-native/issues/4968
To resolve try the following:
1. Clear watchman watches: `watchman watch-del-all`.
2. Delete the `node_modules` folder: `rm -rf node_modules && npm install`.
3. Reset Metro Bundler cache: `rm -rf /tmp/metro-bundler-cache-*` or `npm start -- --reset-cache`.
4. Remove haste cache: `rm -rf /tmp/haste-map-react-native-packager-*`.
at ModuleResolver.resolveDependency (/Users/rob/learn/react-native/AwesomeProject/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js:183:15)
at ResolutionRequest.resolveDependency (/Users/rob/learn/react-native/AwesomeProject/node_modules/metro/src/node-haste/DependencyGraph/ResolutionRequest.js:52:18)
at DependencyGraph.resolveDependency (/Users/rob/learn/react-native/AwesomeProject/node_modules/metro/src/node-haste/DependencyGraph.js:283:16)
at Object.resolve (/Users/rob/learn/react-native/AwesomeProject/node_modules/metro/src/lib/transformHelpers.js:264:42)
at /Users/rob/learn/react-native/AwesomeProject/node_modules/metro/src/DeltaBundler/traverseDependencies.js:399:31
at Array.map (<anonymous>)
at resolveDependencies (/Users/rob/learn/react-native/AwesomeProject/node_modules/metro/src/DeltaBundler/traverseDependencies.js:396:18)
at /Users/rob/learn/react-native/AwesomeProject/node_modules/metro/src/DeltaBundler/traverseDependencies.js:269:33
at Generator.next (<anonymous>)
at asyncGeneratorStep (/Users/rob/learn/react-native/AwesomeProject/node_modules/metro/src/DeltaBundler/traverseDependencies.js:87:24)
BUNDLE [android, dev] ./index.js ▓▓▓▓▓▓▓▓▓░░░░░░░ 56.9% (322/427), failed.
```
I've tried all the things mentioned in that output, but none of them worked.
username_3: I'm also having a problem running an example module that I'm building from scratch.
```
Loading dependency graph, done.
error: bundling failed: Error: Unable to resolve module `react-native-my-module` from `/example/App.js`: Module `react-native-my-module` does not exist in the Haste module map
This might be related to https://github.com/facebook/react-native/issues/4968
To resolve try the following:
1. Clear watchman watches: `watchman watch-del-all`.
2. Delete the `node_modules` folder: `rm -rf node_modules && npm install`.
3. Reset Metro Bundler cache: `rm -rf /tmp/metro-bundler-cache-*` or `npm start -- --reset-cache`.
4. Remove haste cache: `rm -rf /tmp/haste-map-react-native-packager-*`.
at ModuleResolver.resolveDependency (/example/node_modules/metro/src/node-haste/DependencyGraph/ModuleResolution.js:183:15)
at ResolutionRequest.resolveDependency (/example/node_modules/metro/src/node-haste/DependencyGraph/ResolutionRequest.js:52:18)
at DependencyGraph.resolveDependency (/example/node_modules/metro/src/node-haste/DependencyGraph.js:283:16)
at Object.resolve (/example/node_modules/metro/src/lib/transformHelpers.js:264:42)
at dependencies.map.result (/example/node_modules/metro/src/DeltaBundler/traverseDependencies.js:399:31)
at Array.map (<anonymous>)
at resolveDependencies (/example/node_modules/metro/src/DeltaBundler/traverseDependencies.js:396:18)
at /example/node_modules/metro/src/DeltaBundler/traverseDependencies.js:269:33
at Generator.next (<anonymous>)
at asyncGeneratorStep (/example/node_modules/metro/src/DeltaBundler/traverseDependencies.js:87:24)
BUNDLE [ios, dev] ./index.js ▓▓░░░░░░░░░░░░░░ 15.4% (117/298), failed.
```
**my app package.json**
```
"react": "16.8.6",
"react-native": "0.60.5",
"react-native-my-module": "file:../"
```
**my module package.json**
```
{
"name": "react-native-my-module",
"version": "1.0.0",
"description": "",
"main": "index.js",
"keywords": [
"react-native"
],
"author": "",
"license": "",
"peerDependencies": {
"react-native": "^0.41.2",
"react-native-windows": "0.41.0-rc.1"
}
}
```
Sometimes in other projects that I'm working on, I get this error and then I do [this](https://github.com/facebook/react-native/issues/23886#issuecomment-509528212) and it works. But now I've tried it many times and it's always the same error.
username_4: Same problem for me when using a locally referenced module that requires react as a peerDependency.
username_5: Honestly, this is a very surprising problem for first-time react-native devs! Especially if you're spinning up a react-native app and trying to share code with a react web app. I'm spoiled by being able to include C++ headers from any path, I guess. |
pombase/curation | 595923517 | Title: review GO:0006348 | chromatin silencing at telomere
Question:
username_0: chp2 | ↑is_a GO:0006348 | chromatin silencing at telomere | IMP | | Thon G et al. (2000) | 6
-- | -- | -- | -- | -- | -- | --
hst4 | | IMP | | Freeman-Cook LL et al. (1999)
pcu4 | | IMP | | Jia S et al. (2005)
raf1 | | IMP | | Thon G et al. (2005)
raf2 | | IMP | | Thon G et al. (2005)
sir2 | | IMP | required | Shankaranarayana GD et al. (2003)
Answers:
username_0: Already done as part of heterochromatin review
Status: Issue closed
|
teloxide/teloxide | 882654341 | Title: Feature Request: `#[command(hidden)]`
Question:
username_0: It would be nice if we had something like `#[command(hidden)]`, which tells `#[derive(BotCommand)]` to omit the command's description from `BotCommand::descriptions`.
Answers:
username_1: Honestly, the usefulness of `BotCommand::descriptions` is somewhat questionable to me.
E.g.: it's useless if your bot is multilingual or you want a different formatting/etc.
Also, couldn't it be a `const DESCRIPTION: &'static str`, instead of `fn descriptions() -> String`?
Maybe we should just have a `const COMMANDS: &'static [CommandInfo]` where `CommandInfo` is
```rust
struct CommandInfo {
command: &'static str,
// Some other info?
}
```
and let the user format the message?
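To make that proposal concrete, a user-side sketch — the `CommandInfo` struct, the `COMMANDS` slice, and the `hidden` field are all hypothetical, mirroring the suggestion above rather than an existing teloxide API:

```rust
// Hypothetical shape of the per-command metadata the derive could emit.
struct CommandInfo {
    command: &'static str,
    description: &'static str,
    hidden: bool,
}

// Hypothetical const the derive could generate; sample commands are made up.
const COMMANDS: &[CommandInfo] = &[
    CommandInfo { command: "/start", description: "begin a session", hidden: false },
    CommandInfo { command: "/debug", description: "internal use", hidden: true },
];

// The user formats the help text themselves, skipping hidden commands.
fn descriptions() -> String {
    COMMANDS
        .iter()
        .filter(|c| !c.hidden)
        .map(|c| format!("{} - {}", c.command, c.description))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    println!("{}", descriptions());
}
```

User-driven formatting like this would also cover the multilingual case mentioned above.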
username_2: @username_0 https://github.com/teloxide/teloxide/blob/dev/tests/command.rs#L185
username_1: Closing since this feature is already implemented.
Status: Issue closed
|
rust-lang-nursery/lazy-static.rs | 244709129 | Title: Thread-local lazy static
Question:
username_0: `lazy_static` and std's `thread_local` are very similar concepts, but `lazy_static`'s interface (with `Deref`) is much nicer. I think it should be possible to implement a thread-local version of lazy_static.
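A rough sketch of the thread-local-lazy idea using only the standard library's `std::cell::OnceCell` — an illustration of the concept (without the `Deref` sugar), not the `lazy_static` implementation:

```rust
use std::cell::OnceCell;

thread_local! {
    // Each thread gets its own lazily-initialized value.
    static GREETING: OnceCell<String> = OnceCell::new();
}

fn greeting() -> String {
    GREETING.with(|cell| {
        cell.get_or_init(|| {
            // Runs at most once per thread, on first access.
            format!("hello from {:?}", std::thread::current().id())
        })
        .clone()
    })
}

fn main() {
    println!("{}", greeting());
    // A second access on the same thread reuses the initialized value.
    assert_eq!(greeting(), greeting());
}
```

A `Deref`-style wrapper over this would need to hand out a guard rather than a plain reference, since `with` scopes the borrow to a closure.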
Answers:
username_1: Indeed, some consistency between the two would be nice. I've always thought of that more as changing lazy_static's API to be closer to thread_local's, but providing a wrapper for it with a nicer API seems like an interesting idea as well.
username_2: I've made a crate that combines `lazy_static`, `thread_local` and `RefCell`.
https://docs.rs/ref_thread_local/0.0.0/ref_thread_local/ |
igvteam/igv | 203132745 | Title: Feature request: Autoscale to Maximum
Question:
username_0: What I would find useful is if I could select multiple tracks and set them to autoscale to the maximum (in the given view) of the track with the highest signal.
<img width="900" alt="screen shot 2017-01-25 at 10 20 54" src="https://cloud.githubusercontent.com/assets/5265707/22296505/57928afa-e2e8-11e6-9822-3090c034f55f.png">
Answers:
username_1: Have you tried the "group autoscale" feature? Select all the tracks, then right-click and select "Group Autoscale" from the popup menu.
username_0: Sorry. I was using v 2.3.68. Didn't realize this was added in the latest version.
Status: Issue closed
|