| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ageitgey/face_recognition | python | 1,190 | face_locations found. After saving to Image, no face_encodings found | Hi expert,
I tried to save each face as an image. After saving, I loaded the small face image and tried to calculate the face_encodings. But many of the face images had no face_encodings. Did I do something wrong?
```python
face_locations = face_recognition.face_locations(image, number_of_times_to_upsample=2, model="cnn")
...
face_image = image[top_new:bottom_new, left_new:right_new]
pil_image = Image.fromarray(face_image)
pil_image.save(savepath + fileName + "_" + str(i) + "." + fileExtension)
```
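A common cause is that the saved crop is too tight: `face_encodings` re-runs detection when no locations are supplied, and detection often fails on a tightly cropped face. Two hedged workarounds: compute encodings on the original image via `face_encodings(image, known_face_locations=face_locations)` so no re-detection happens, or pad the crop before saving. A minimal padding helper (the 0.3 margin is an illustrative guess, not a tuned value):

```python
def pad_box(top, right, bottom, left, img_h, img_w, margin=0.3):
    """Expand a (top, right, bottom, left) face box by a fractional margin,
    clamped to the image bounds, so a second detection pass has context."""
    box_h, box_w = bottom - top, right - left
    pad_h, pad_w = int(box_h * margin), int(box_w * margin)
    return (
        max(0, top - pad_h),
        min(img_w, right + pad_w),
        min(img_h, bottom + pad_h),
        max(0, left - pad_w),
    )
```

For example, `top_new, right_new, bottom_new, left_new = pad_box(top, right, bottom, left, *image.shape[:2])` before slicing the array.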
| open | 2020-07-21T14:48:18Z | 2020-07-21T14:49:52Z | https://github.com/ageitgey/face_recognition/issues/1190 | [] | zhangede | 0 |
pydata/pandas-datareader | pandas | 562 | Support for multiple symbols for MOEX | f = web.DataReader(['SBER','FXUS'], 'moex', start, end)
gives me
ValueError: Support for multiple symbols is not yet implemented
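Until multi-symbol support for MOEX is implemented, one workaround is to fetch each symbol separately and concatenate the results. A sketch — the `read_one` callback stands in for the real single-symbol call, e.g. `lambda s: web.DataReader(s, "moex", start, end)` (hypothetical wiring):

```python
import pandas as pd

def read_many(symbols, read_one):
    # Fetch one DataFrame per symbol, then stack them under a "symbol" index level.
    frames = {sym: read_one(sym) for sym in symbols}
    return pd.concat(frames, names=["symbol"])
```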
This is a feature request. | closed | 2018-08-12T15:51:38Z | 2018-08-12T19:51:18Z | https://github.com/pydata/pandas-datareader/issues/562 | [] | khazamov | 0 |
ml-tooling/opyrator | pydantic | 27 | Can't get hello_world to work |
**Hello World no go:**
**Technical details:**
I have followed the instructions on the Getting Started page, no go
https://github.com/ml-tooling/opyrator#getting-started
Created the file and run as instructed but I get this...
`2021-05-01 10:16:31.675 An update to the [server] config option section was detected. To have these changes be reflected, please restart streamlit.`

I ran `streamlit hello` and that is working fine

- Host Machine OS : Windows 10
- python : 3.9.4
I wonder if it is the very new version of python?
I am open to being stupid, that's OK, but this looks pretty cool and I want it to work.
| closed | 2021-05-01T00:32:26Z | 2021-05-07T23:22:46Z | https://github.com/ml-tooling/opyrator/issues/27 | [
"support"
] | Bandit253 | 5 |
vllm-project/vllm | pytorch | 15,102 | [Bug]: 0.8.0(V1) RayChannelTimeoutError when inferencing DeepSeekV3 on 16 H20 with large batch size | ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.9 (main, Mar 17 2025, 21:01:58) [Clang 20.1.0 ] (64-bit runtime)
Python platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H20
GPU 1: NVIDIA H20
GPU 2: NVIDIA H20
GPU 3: NVIDIA H20
GPU 4: NVIDIA H20
GPU 5: NVIDIA H20
GPU 6: NVIDIA H20
GPU 7: NVIDIA H20
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 180
On-line CPU(s) list: 0-179
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8457C
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 45
Socket(s): 2
Stepping: 8
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.2 MiB (90 instances)
L1i cache: 2.8 MiB (90 instances)
L2 cache: 180 MiB (90 instances)
L3 cache: 195 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-89
NUMA node1 CPU(s): 90-179
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.1.post2+cu124torch2.6
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.3.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.49.0
[pip3] triton==3.2.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 SYS PIX NODE SYS SYS 0-89 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 SYS PIX NODE SYS SYS 0-89 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 SYS NODE PIX SYS SYS 0-89 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 SYS NODE PIX SYS SYS 0-89 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS PIX NODE 90-179 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS PIX NODE 90-179 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS NODE PIX 90-179 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS NODE PIX 90-179 1 N/A
NIC0 SYS SYS SYS SYS SYS SYS SYS SYS X SYS SYS SYS SYS
NIC1 PIX PIX NODE NODE SYS SYS SYS SYS SYS X NODE SYS SYS
NIC2 NODE NODE PIX PIX SYS SYS SYS SYS SYS NODE X SYS SYS
NIC3 SYS SYS SYS SYS PIX PIX NODE NODE SYS SYS SYS X NODE
NIC4 SYS SYS SYS SYS NODE NODE PIX PIX SYS SYS SYS NODE X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536
NCCL_VERSION=2.20.5-1
NCCL_SOCKET_IFNAME=eth0
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NCCL_IB_HCA=mlx5
NVIDIA_PRODUCT_NAME=CUDA
VLLM_USAGE_SOURCE=production-docker-image
NCCL_IB_GID_INDEX=3
CUDA_VERSION=12.4.0
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
LD_LIBRARY_PATH=/opt/venv/lib/python3.12/site-packages/cv2/../../lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NCCL_IB_DISABLE=0
VLLM_HOST_IP=10.99.48.142
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
</details>
### 🐛 Describe the bug
First, I followed the doc https://docs.vllm.ai/en/latest/serving/distributed_serving.html to set up the distributed environment (2 nodes with 8 GPUs per node), and then ran the api_server as below:
```bash
python3 -m vllm.entrypoints.openai.api_server --port 18011 --model /models/DeepSeek-V3 --tensor-parallel-size 16 --gpu-memory-utilization 0.92 --dtype auto --served-model-name deepseekv3 --max-num-seqs 50 --max-model-len 16384 --trust-remote-code --disable-log-requests --enable-chunked-prefill --enable-prefix-caching
```
Then I got a RayChannelTimeoutError from the Ray module inside the `execute_model` call that runs the Ray DAG.
```text
INFO 03-14 00:00:55 [async_llm.py:169] Added request cmpl-49612d570051487899170dc9fc843162-0.
INFO 03-14 00:00:59 [loggers.py:80] Avg prompt throughput: 102.5 tokens/s, Avg generation throughput: 0.1 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.2%, Prefix cache hit rate: 13.4%
ERROR 03-14 00:01:05 [core.py:337] EngineCore hit an exception: Traceback (most recent call last):
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/dag/compiled_dag_node.py", line 2344, in _execute_until
ERROR 03-14 00:01:05 [core.py:337] result = self._dag_output_fetcher.read(timeout)
ERROR 03-14 00:01:05 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/experimental/channel/common.py", line 318, in read
ERROR 03-14 00:01:05 [core.py:337] outputs = self._read_list(timeout)
ERROR 03-14 00:01:05 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/experimental/channel/common.py", line 409, in _read_list
ERROR 03-14 00:01:05 [core.py:337] raise e
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/experimental/channel/common.py", line 391, in _read_list
ERROR 03-14 00:01:05 [core.py:337] result = c.read(min(remaining_timeout, iteration_timeout))
ERROR 03-14 00:01:05 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/experimental/channel/shared_memory_channel.py", line 776, in read
ERROR 03-14 00:01:05 [core.py:337] return self._channel_dict[self._resolve_actor_id()].read(timeout)
ERROR 03-14 00:01:05 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/experimental/channel/shared_memory_channel.py", line 480, in read
ERROR 03-14 00:01:05 [core.py:337] ret = self._worker.get_objects(
ERROR 03-14 00:01:05 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/_private/worker.py", line 893, in get_objects
ERROR 03-14 00:01:05 [core.py:337] ] = self.core_worker.get_objects(
ERROR 03-14 00:01:05 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-14 00:01:05 [core.py:337] File "python/ray/_raylet.pyx", line 3189, in ray._raylet.CoreWorker.get_objects
ERROR 03-14 00:01:05 [core.py:337] File "python/ray/includes/common.pxi", line 106, in ray._raylet.check_status
ERROR 03-14 00:01:05 [core.py:337] ray.exceptions.RayChannelTimeoutError: System error: Timed out waiting for object available to read. ObjectID: 00d95966d8a9e2f5795e7e010e186d6a031a70380100000002e1f505
ERROR 03-14 00:01:05 [core.py:337]
ERROR 03-14 00:01:05 [core.py:337] The above exception was the direct cause of the following exception:
ERROR 03-14 00:01:05 [core.py:337]
ERROR 03-14 00:01:05 [core.py:337] Traceback (most recent call last):
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 330, in run_engine_core
ERROR 03-14 00:01:05 [core.py:337] engine_core.run_busy_loop()
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 364, in run_busy_loop
ERROR 03-14 00:01:05 [core.py:337] outputs = step_fn()
ERROR 03-14 00:01:05 [core.py:337] ^^^^^^^^^
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 192, in step
ERROR 03-14 00:01:05 [core.py:337] output = self.model_executor.execute_model(scheduler_output)
ERROR 03-14 00:01:05 [core.py:337] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/ray_distributed_executor.py", line 57, in execute_model
ERROR 03-14 00:01:05 [core.py:337] return refs[0].get()
ERROR 03-14 00:01:05 [core.py:337] ^^^^^^^^^^^^^
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/experimental/compiled_dag_ref.py", line 124, in get
ERROR 03-14 00:01:05 [core.py:337] self._dag._execute_until(
ERROR 03-14 00:01:05 [core.py:337] File "/usr/local/lib/python3.12/dist-packages/ray/dag/compiled_dag_node.py", line 2350, in _execute_until
ERROR 03-14 00:01:05 [core.py:337] raise RayChannelTimeoutError(
ERROR 03-14 00:01:05 [core.py:337] ray.exceptions.RayChannelTimeoutError: System error: If the execution is expected to take a long time, increase RAY_CGRAPH_get_timeout which is currently 10 seconds. Otherwise, this may indicate that the execution is hanging.
ERROR 03-14 00:01:05 [core.py:337]
INFO 03-14 00:01:05 [ray_distributed_executor.py:127] Shutting down Ray distributed executor. If you see error log from logging.cc regarding SIGTERM received, please ignore because this is the expected termination process in Ray.
CRITICAL 03-14 00:01:05 [core_client.py:260] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
2025-03-14 00:01:05,920 INFO compiled_dag_node.py:2109 -- Tearing down compiled DAG
```
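The traceback itself suggests raising `RAY_CGRAPH_get_timeout` (10 seconds by default here) when execution is legitimately slow, e.g. large-batch prefill across 16 GPUs. A minimal mitigation sketch — the 300-second value is an arbitrary example, not a documented recommendation:

```shell
# Raise Ray Compiled Graph's channel read timeout before launching vLLM.
# 300 is an illustrative value; tune it for your batch size and model.
export RAY_CGRAPH_get_timeout=300
# then relaunch the server with the same flags as above, e.g.:
# python3 -m vllm.entrypoints.openai.api_server --port 18011 --model /models/DeepSeek-V3 ...
```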
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-19T07:13:52Z | 2025-03-24T12:02:33Z | https://github.com/vllm-project/vllm/issues/15102 | [
"bug",
"ray"
] | jeffye-dev | 22 |
apify/crawlee-python | web-scraping | 516 | How to get the content of an iframe? | Thank you! | closed | 2024-09-11T16:48:34Z | 2024-09-12T08:03:46Z | https://github.com/apify/crawlee-python/issues/516 | [
"t-tooling"
] | thalesfsp | 0 |
ResidentMario/missingno | data-visualization | 20 | Warning thrown with matplotlib 2.0 | I'm using matplotlib 2.0, and I thought I'd just quickly report this warning message that shows up when I call `msno.matrix(dataframe)`:
```
/Users/ericmjl/anaconda/lib/python3.5/site-packages/missingno/missingno.py:250: MatplotlibDeprecationWarning: The set_axis_bgcolor function was deprecated in version 2.0. Use set_facecolor instead.
ax1.set_axis_bgcolor((1, 1, 1))
```
It's probably a low-priority, mission-noncritical change, but just putting it here for the record. If I do have the time to get myself familiarized with the codebase, I might just put in a PR for it! :smile: | closed | 2017-02-05T04:06:42Z | 2017-02-14T02:49:03Z | https://github.com/ResidentMario/missingno/issues/20 | [] | ericmjl | 2 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 929 | Getting `sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgres`since SQLAlchemy has released 1.4 | Getting `sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgres`since SQLAlchemy has released [1.4](https://docs.sqlalchemy.org/en/14/index.html)
I'd freeze the **SQLAlchemy** version for now
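Besides pinning, the underlying cause is that SQLAlchemy 1.4 removed the legacy `postgres` dialect alias, so connection strings must use the `postgresql://` scheme. A defensive rewrite sketch (not flask-sqlalchemy code — just one way to normalize the URI before passing it in):

```python
def normalize_pg_uri(uri: str) -> str:
    # SQLAlchemy 1.4 no longer accepts the legacy "postgres://" scheme;
    # rewrite it to the canonical "postgresql://".
    if uri.startswith("postgres://"):
        return "postgresql://" + uri[len("postgres://"):]
    return uri
```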
https://github.com/pallets/flask-sqlalchemy/blob/222059e200e6b2e3b0ac57028b08290a648ae8ea/setup.py#L12 | closed | 2021-03-16T10:26:52Z | 2021-04-01T00:13:41Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/929 | [] | tbarda | 9 |
fastapi/fastapi | pydantic | 13,150 | Simplify tests for variants | ### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
## Summary
Simplify tests for variants, from multiple test files (one test file per variant) to a single test file with parameters to test each variant.
## Background
Currently, we have multiple source example variants for different Python versions:
* Python 3.8
* Python 3.9
* Python 3.10
And we have versions using `Annotated` and without using it.
Combining that, for each source app, we end up with different variants.
For example, for `docs_src/query_params_str_validations/tutorial010.py`, for this same `tutorial010`, we have these variants:
* `docs_src/query_params_str_validations/tutorial010_an_py39.py`
* Using `Annotated`, Python 3.9.
* `docs_src/query_params_str_validations/tutorial010_an_py310.py`
* Using `Annotated`, Python 3.10.
* `docs_src/query_params_str_validations/tutorial010_an.py`
* Using `Annotated`, Python 3.8 (as 3.8 is the oldest, this one doesn't have a part in the name like `py38`).
* `docs_src/query_params_str_validations/tutorial010_py310.py`
* Python 3.10, not using `Annotated` (as not using `Annotated` is the oldest form, it just doesn't have the `an` part in the file name).
* `docs_src/query_params_str_validations/tutorial010.py`
* Not using `Annotated`, Python 3.8.
Each of these files represent the same FastAPI app, but with the improved syntax for Python 3.9, or 3.10, or using `Annotated`, but in the end, the same app.
We want to keep these files like this because they have the different ways to create an app, the different supported syntaxes, including backward-compatible ones. They are shown in the docs and tested on CI.
Then, we have tests for that... currently, we just have a test file per variant file, so, we have:
* `tests/test_tutorial/test_query_params_str_validations/test_tutorial010_an_py39.py`
* `tests/test_tutorial/test_query_params_str_validations/test_tutorial010_an_py310.py`
* `tests/test_tutorial/test_query_params_str_validations/test_tutorial010_an.py`
* `tests/test_tutorial/test_query_params_str_validations/test_tutorial010_py310.py`
* `tests/test_tutorial/test_query_params_str_validations/test_tutorial010.py`
But then, each of the files is almost exactly the same code, only with Pytest "markers" to define that something should only be run on Python 3.10, etc. but apart from that, they have the same code.
## The Task
The task is to replace the multiple **test** files for each variant with a single file that uses Pytest parameters to import each specific app, and that uses Pytest markers for the files that require a specific version of Python.
An example of the result for one of these test variants is here: https://github.com/fastapi/fastapi/pull/13149
Not all tutorial tests have multiple variants, but there are a few that do. This can be done in one PR per tutorial (with the single test for all its variants).
## Instructions
These are not strict but they worked for me to simplify the process.
* Take one of the tests that requires a Python version, say Python 3.10, e.g. `docs_src/query_params_str_validations/tutorial010_an_py310.py`, copy it to a new file with a different name (only temporarily), e.g. with an extra `x` at the end: `docs_src/query_params_str_validations/tutorial010x.py`
* Copy the changes visible from the file in https://github.com/fastapi/fastapi/pull/13149/files, mainly:
* The `params=` part
* The `request: pytest.FixtureRequest` param
* The `mod = importlib.import_module(...)` part
* The `client = TestClient(mod.app)` line using the new `mod.app`
For that tutorial, e.g. tutorial010, there are a few variants, in this case, 5. There should be one param for each of those 5 files.
The ones with a name with a variant part for Python 3.10 (`py310`) should have `marks=needs_py310`, and the ones for Python 3.9 (`py39`) should have `marks=needs_py39`.
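A rough sketch of what the single parametrized test file looks like for `tutorial010` (module paths and marker names mirror the pattern in PR #13149, but treat the details as illustrative):

```python
import importlib
import sys

import pytest

needs_py39 = pytest.mark.skipif(sys.version_info < (3, 9), reason="requires Python 3.9+")
needs_py310 = pytest.mark.skipif(sys.version_info < (3, 10), reason="requires Python 3.10+")

# One pytest.param per variant file; version-specific variants carry a skip marker.
variant_params = [
    pytest.param("tutorial010"),
    pytest.param("tutorial010_an"),
    pytest.param("tutorial010_an_py39", marks=needs_py39),
    pytest.param("tutorial010_py310", marks=needs_py310),
    pytest.param("tutorial010_an_py310", marks=needs_py310),
]

@pytest.fixture(name="client", params=variant_params)
def get_client(request: pytest.FixtureRequest):
    # Import whichever variant this parametrization round selected.
    mod = importlib.import_module(
        f"docs_src.query_params_str_validations.{request.param}"
    )
    from fastapi.testclient import TestClient  # assumed available in the test env
    return TestClient(mod.app)
```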
Once that is done and the tests in that file are passing, remove the other files, and rename that test to remove the extra `x` at the end. | open | 2025-01-03T09:57:09Z | 2025-02-19T19:37:18Z | https://github.com/fastapi/fastapi/issues/13150 | [] | tiangolo | 2 |
charlesq34/pointnet | tensorflow | 264 | ERROR: cannot verify shapenet.cs.stanford.edu's certificate, issued by ‘CN=InCommon RSA Server CA,OU=InCommon,O=Internet2,L=Ann Arbor,ST=MI,C=US’: | Hi thanks a lot for the interesting 3D computer vision research work.
Could you please have a look at the following error and guide me on how to fix it?
```
[35860:2264 0:981] 09:14:27 Mon Dec 28 [mona@goku:pts/5 +1] ~/research/code/DJ-RN/pointnet
$ python train.py
--2020-12-28 21:14:32-- https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip
Resolving shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)... 171.67.77.19
Connecting to shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)|171.67.77.19|:443... connected.
ERROR: cannot verify shapenet.cs.stanford.edu's certificate, issued by ‘CN=InCommon RSA Server CA,OU=InCommon,O=Internet2,L=Ann Arbor,ST=MI,C=US’:
Issued certificate has expired.
To connect to shapenet.cs.stanford.edu insecurely, use `--no-check-certificate'.
unzip: cannot find or open modelnet40_ply_hdf5_2048.zip, modelnet40_ply_hdf5_2048.zip.zip or modelnet40_ply_hdf5_2048.zip.ZIP.
mv: cannot stat 'modelnet40_ply_hdf5_2048': No such file or directory
rm: cannot remove 'modelnet40_ply_hdf5_2048.zip': No such file or directory
Traceback (most recent call last):
File "train.py", line 62, in <module>
TRAIN_FILES = provider.getDataFiles( \
File "/home/mona/research/code/DJ-RN/pointnet/provider.py", line 88, in getDataFiles
return [line.rstrip() for line in open(list_filename)]
FileNotFoundError: [Errno 2] No such file or directory: '/home/mona/research/code/DJ-RN/pointnet/data/modelnet40_ply_hdf5_2048/train_files.txt'
6966/31772MB(base)
``` | closed | 2020-12-29T02:16:06Z | 2020-12-29T02:20:48Z | https://github.com/charlesq34/pointnet/issues/264 | [] | monacv | 1 |
postmanlabs/httpbin | api | 598 | bytes endpoint with seed not stable between python 2 and python 3 | I'm upgrading a build environment from python 2 to python 3 and noticed that endpoints with seeded random numbers are not returning the same values. It seems to be related to usage of randint:
https://github.com/postmanlabs/httpbin/blob/f8ec666b4d1b654e4ff6aedd356f510dcac09f83/httpbin/core.py#L1448
It seems like randint is not seed safe and it looks like only random() is: https://bugs.python.org/issue27742#msg272544
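If that's the cause, one way to keep seeded output reproducible across interpreter versions is to derive bytes from `random.random()` alone (which is documented as seed-stable) instead of `randint`. A sketch, not the actual httpbin implementation:

```python
import random

def seeded_bytes(n, seed):
    # random.random() keeps the same seeded sequence across Python versions,
    # unlike randint/randrange, so bytes derived from it stay reproducible.
    rng = random.Random(seed)
    return bytes(int(rng.random() * 256) for _ in range(n))
```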
Ubuntu 16.04.6 LTS
python 2.7.12 -> python 3.5.6 | open | 2020-02-11T21:09:25Z | 2020-02-11T21:09:25Z | https://github.com/postmanlabs/httpbin/issues/598 | [] | rajsite | 0 |
dmlc/gluon-cv | computer-vision | 841 | WaitToRead function cost too much time | 
Here is my test code:
```cpp
void RunDemo() {
  // context
  Context ctx = Context::cpu();
  if (args::gpu >= 0) {
    ctx = Context::gpu(args::gpu);
    if (!args::quite) {
      LOG(INFO) << "Using GPU(" << args::gpu << ")...";
    }
  }
  // load symbol and parameters
  Symbol net;
  std::map<std::string, NDArray> args, auxs;
  LoadCheckpoint(args::model, args::epoch, &net, &args, &auxs, ctx);
  std::string filepath = args::image;
  readFileList((char *)filepath.c_str());
  for (int i = 0; i < all_count; i++) {
    char one_filename[2000];
    memset(one_filename, '\0', sizeof(one_filename));
    strcpy(one_filename, all_filepath[i]);
    strcat(one_filename, "/");
    strcat(one_filename, all_filename[i]);
    printf("%s\n", one_filename);
    Mat image = imread(one_filename, 1);
    if (!image.data)
      continue;
    image = ResizeShortWithin(image, args::min_size, args::max_size, args::multiplier);
    if (!args::quite) {
      LOG(INFO) << "Image shape: " << image.cols << " x " << image.rows;
    }
    // set input and bind executor
    auto data = AsData(image, ctx);
    args["data"] = data;
    Executor *exec = net.SimpleBind(
        ctx, args, std::map<std::string, NDArray>(),
        std::map<std::string, OpReqType>(), auxs);
    // begin forward
    // NDArray::WaitAll();
    auto start = std::chrono::steady_clock::now();
    exec->Forward(false);
    auto ids = exec->outputs[0].Copy(Context(kCPU, 0));
    auto scores = exec->outputs[1].Copy(Context(kCPU, 0));
    auto bboxes = exec->outputs[2].Copy(Context(kCPU, 0));
    // NDArray::WaitAll();
    auto end = std::chrono::steady_clock::now();
    if (!args::quite) {
      LOG(INFO) << "Elapsed time {Forward->Result}: " << std::chrono::duration<double, std::milli>(end - start).count() << " ms";
    }
    start = std::chrono::steady_clock::now();
    bboxes.WaitToRead();
    // scores.WaitToRead();
    // ids.WaitToRead();
    end = std::chrono::steady_clock::now();
    if (!args::quite) {
      LOG(INFO) << "Elapsed time {WaitToRead}: " << std::chrono::duration<double, std::milli>(end - start).count() << " ms";
    }
    int num = bboxes.GetShape()[1];
    std::vector<std::string> class_names = synset::CLASS_NAMES;
    float thresh = args::viz_thresh;
    for (int j = 0; j < num; ++j) {
      float score = scores.At(0, 0, j);
      float label = ids.At(0, 0, j);
      if (score < thresh) continue;
      if (label < 0) continue;
      int x1 = bboxes.At(0, j, 0);
      int y1 = bboxes.At(0, j, 1);
      int x2 = bboxes.At(0, j, 2);
      int y2 = bboxes.At(0, j, 3);
      int cls_id = static_cast<int>(label);
      LOG(INFO) << x1 << " " << y1 << " " << x2 << " " << y2;
      if (!args::quite) {
        if (cls_id >= class_names.size()) {
          LOG(INFO) << "id: " << cls_id << ", scores: " << score;
        } else {
          LOG(INFO) << "id: " << class_names[cls_id] << ", scores: " << score;
        }
      }
    }
    // draw boxes
    // auto plt = viz::PlotBbox(image, bboxes, scores, ids, args::viz_thresh, synset::CLASS_NAMES, std::map<int, cv::Scalar>(), !args::quite);
    // display drawn image
    // if (!args::no_display) {
    //   cv::imshow("plot", plt);
    //   cv::waitKey();
    // }
    // output image
    // if (!args::output.empty()) {
    //   cv::imwrite(args::output, plt);
    // }
    delete exec;
  }
}
```
| closed | 2019-06-28T06:55:14Z | 2019-12-20T23:34:27Z | https://github.com/dmlc/gluon-cv/issues/841 | [] | HouBiaoLiu | 2 |
explosion/spaCy | data-science | 13,725 | Empty MorphAnalysis Hash differs from Token.morph.key | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
Hello,
I've trained a Morphologizer and noticed that an empty MorphAnalysis (`""`) actually has the hash value of `"_"`. Is this by design? The documentation doesn't mention it.
> key `int` | The hash of the features string.
```python
for i in doc:
nlp.vocab.strings[i.morph.key] == str(i.morph)
False
False
False
True
False
True
True
False
```
As I use the hash values in a lookup for something, this produced a `KeyError`.
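If that `"_"` mapping is indeed intended (it matches the CoNLL-U convention of `_` for an empty FEATS column), a defensive fix on the lookup side is to normalize the key string before hashing or comparing. A sketch, not official spaCy behaviour:

```python
def feats_key_string(morph_str: str) -> str:
    # An empty analysis is stored under "_" (CoNLL-U-style), not "";
    # normalize before using the string as a lookup key.
    return morph_str if morph_str else "_"
```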
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Linux (Debian 12)
* Python Version Used: 3.11
* spaCy Version Used: 3.7.3 (i will train my Morphologizer soon on 3.8.3 to see if that change)
* Environment Information:
| open | 2024-12-26T10:07:16Z | 2024-12-26T10:07:40Z | https://github.com/explosion/spaCy/issues/13725 | [] | thjbdvlt | 0 |
horovod/horovod | deep-learning | 3,297 | Fail to install horovod 0.19.0 | **Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet)
2. Framework version:
3. Horovod version:0.19.0
4. MPI version:4.0.3
5. CUDA version:10.0
6. NCCL version:2.5.6
7. Python version:3.6.8
8. Spark / PySpark version:
9. Ray version: None
10. OS and version: centos7
11. GCC version:7.3.1
12. CMake version:2.8.12.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Hi! I'm unable to install horovod 0.19.0 successfully by running:
```bash
HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITH_TENSORFLOW=1 HOROVOD_NCCL_INCLUDE=/usr/include HOROVOD_NCCL_LIB=/usr/lib64 HOROVOD_GPU_OPERATIONS=NCCL pip install --no-cache-dir horovod==0.19.0
```
The error log shows:
[root@VM-29-31-centos ~]# HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITH_TENSORFLOW=1 HOROVOD_NCCL_INCLUDE=/usr/include HOROVOD_NCCL_LIB=/usr/lib64 HOROVOD_GPU_OPERATIONS=NCCL pip install --no-cache-dir horovod==0.19.0
Collecting horovod==0.19.0
Downloading horovod-0.19.0.tar.gz (2.9 MB)
|████████████████████████████████| 2.9 MB 52.6 MB/s
Preparing metadata (setup.py) ... done
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.6/site-packages (from horovod==0.19.0) (2.0.0)
Requirement already satisfied: psutil in /usr/local/lib64/python3.6/site-packages (from horovod==0.19.0) (5.8.0)
Requirement already satisfied: pyyaml in /usr/lib64/python3.6/site-packages (from horovod==0.19.0) (3.13)
Requirement already satisfied: six in ./.local/lib/python3.6/site-packages (from horovod==0.19.0) (1.16.0)
Requirement already satisfied: cffi>=1.4.0 in /usr/local/lib64/python3.6/site-packages (from horovod==0.19.0) (1.15.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.6/site-packages (from cffi>=1.4.0->horovod==0.19.0) (2.21)
Building wheels for collected packages: horovod
Building wheel for horovod (setup.py) ... /
error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-rw1my8vd/horovod_c985d6fc46794d40a1fbe4f795ac2673/setup.py'"'"'; __file__='"'"'/tmp/pip-install-rw1my8vd/horovod_c985d6fc46794d40a1fbe4f795ac2673/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-6xu84ilg
cwd: /tmp/pip-install-rw1my8vd/horovod_c985d6fc46794d40a1fbe4f795ac2673/
Complete output (209 lines):
/usr/lib64/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'test_requires'
warnings.warn(msg)
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/horovod
copying horovod/__init__.py -> build/lib.linux-x86_64-3.6/horovod
creating build/lib.linux-x86_64-3.6/horovod/common
copying horovod/common/basics.py -> build/lib.linux-x86_64-3.6/horovod/common
copying horovod/common/__init__.py -> build/lib.linux-x86_64-3.6/horovod/common
copying horovod/common/util.py -> build/lib.linux-x86_64-3.6/horovod/common
creating build/lib.linux-x86_64-3.6/horovod/spark
copying horovod/spark/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark
creating build/lib.linux-x86_64-3.6/horovod/tensorflow
copying horovod/tensorflow/mpi_ops.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow
copying horovod/tensorflow/__init__.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow
copying horovod/tensorflow/compression.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow
copying horovod/tensorflow/util.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow
creating build/lib.linux-x86_64-3.6/horovod/mxnet
copying horovod/mxnet/mpi_ops.py -> build/lib.linux-x86_64-3.6/horovod/mxnet
copying horovod/mxnet/__init__.py -> build/lib.linux-x86_64-3.6/horovod/mxnet
creating build/lib.linux-x86_64-3.6/horovod/_keras
copying horovod/_keras/__init__.py -> build/lib.linux-x86_64-3.6/horovod/_keras
copying horovod/_keras/callbacks.py -> build/lib.linux-x86_64-3.6/horovod/_keras
creating build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/run_task.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/task_fn.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/gloo_run.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/run.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/mpi_run.py -> build/lib.linux-x86_64-3.6/horovod/run
creating build/lib.linux-x86_64-3.6/horovod/torch
copying horovod/torch/mpi_ops.py -> build/lib.linux-x86_64-3.6/horovod/torch
copying horovod/torch/__init__.py -> build/lib.linux-x86_64-3.6/horovod/torch
copying horovod/torch/compression.py -> build/lib.linux-x86_64-3.6/horovod/torch
creating build/lib.linux-x86_64-3.6/horovod/keras
copying horovod/keras/__init__.py -> build/lib.linux-x86_64-3.6/horovod/keras
copying horovod/keras/callbacks.py -> build/lib.linux-x86_64-3.6/horovod/keras
creating build/lib.linux-x86_64-3.6/horovod/spark/task
copying horovod/spark/task/task_info.py -> build/lib.linux-x86_64-3.6/horovod/spark/task
copying horovod/spark/task/task_service.py -> build/lib.linux-x86_64-3.6/horovod/spark/task
copying horovod/spark/task/mpirun_exec_fn.py -> build/lib.linux-x86_64-3.6/horovod/spark/task
copying horovod/spark/task/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/task
creating build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/cache.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/params.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/serialization.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/backend.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/store.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/_namedtuple_fix.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/constants.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/estimator.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/util.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
creating build/lib.linux-x86_64-3.6/horovod/spark/driver
copying horovod/spark/driver/driver_service.py -> build/lib.linux-x86_64-3.6/horovod/spark/driver
copying horovod/spark/driver/mpirun_rsh.py -> build/lib.linux-x86_64-3.6/horovod/spark/driver
copying horovod/spark/driver/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/driver
copying horovod/spark/driver/job_id.py -> build/lib.linux-x86_64-3.6/horovod/spark/driver
creating build/lib.linux-x86_64-3.6/horovod/spark/torch
copying horovod/spark/torch/remote.py -> build/lib.linux-x86_64-3.6/horovod/spark/torch
copying horovod/spark/torch/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/torch
copying horovod/spark/torch/estimator.py -> build/lib.linux-x86_64-3.6/horovod/spark/torch
copying horovod/spark/torch/util.py -> build/lib.linux-x86_64-3.6/horovod/spark/torch
creating build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/optimizer.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/remote.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/bare.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/estimator.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/util.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/tensorflow.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
creating build/lib.linux-x86_64-3.6/horovod/tensorflow/keras
copying horovod/tensorflow/keras/__init__.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow/keras
copying horovod/tensorflow/keras/callbacks.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow/keras
creating build/lib.linux-x86_64-3.6/horovod/run/task
copying horovod/run/task/task_service.py -> build/lib.linux-x86_64-3.6/horovod/run/task
copying horovod/run/task/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/task
creating build/lib.linux-x86_64-3.6/horovod/run/common
copying horovod/run/common/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/common
creating build/lib.linux-x86_64-3.6/horovod/run/http
copying horovod/run/http/http_client.py -> build/lib.linux-x86_64-3.6/horovod/run/http
copying horovod/run/http/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/http
copying horovod/run/http/http_server.py -> build/lib.linux-x86_64-3.6/horovod/run/http
creating build/lib.linux-x86_64-3.6/horovod/run/driver
copying horovod/run/driver/driver_service.py -> build/lib.linux-x86_64-3.6/horovod/run/driver
copying horovod/run/driver/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/driver
creating build/lib.linux-x86_64-3.6/horovod/run/util
copying horovod/run/util/cache.py -> build/lib.linux-x86_64-3.6/horovod/run/util
copying horovod/run/util/threads.py -> build/lib.linux-x86_64-3.6/horovod/run/util
copying horovod/run/util/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/util
copying horovod/run/util/network.py -> build/lib.linux-x86_64-3.6/horovod/run/util
creating build/lib.linux-x86_64-3.6/horovod/run/common/service
copying horovod/run/common/service/task_service.py -> build/lib.linux-x86_64-3.6/horovod/run/common/service
copying horovod/run/common/service/driver_service.py -> build/lib.linux-x86_64-3.6/horovod/run/common/service
copying horovod/run/common/service/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/common/service
creating build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/codec.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/secret.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/host_hash.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/settings.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/env.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/network.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/timeout.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/safe_shell_exec.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/config_parser.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
creating build/lib.linux-x86_64-3.6/horovod/torch/mpi_lib
copying horovod/torch/mpi_lib/__init__.py -> build/lib.linux-x86_64-3.6/horovod/torch/mpi_lib
creating build/lib.linux-x86_64-3.6/horovod/torch/mpi_lib_impl
copying horovod/torch/mpi_lib_impl/__init__.py -> build/lib.linux-x86_64-3.6/horovod/torch/mpi_lib_impl
running build_ext
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/include/python3.6m -c build/temp.linux-x86_64-3.6/test_compile/test_cpp_flags.cc -o build/temp.linux-x86_64-3.6/test_compile/test_cpp_flags.o
gcc -pthread -shared -Wl,-z,relro -g build/temp.linux-x86_64-3.6/test_compile/test_cpp_flags.o -L/usr/lib64 -o build/temp.linux-x86_64-3.6/test_compile/test_cpp_flags.so
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python3.6m -c build/temp.linux-x86_64-3.6/test_compile/test_link_flags.cc -o build/temp.linux-x86_64-3.6/test_compile/test_link_flags.o
gcc -pthread -shared -Wl,-z,relro -g -Wl,--version-script=horovod.lds build/temp.linux-x86_64-3.6/test_compile/test_link_flags.o -L/usr/lib64 -o build/temp.linux-x86_64-3.6/test_compile/test_link_flags.so
INFO: HOROVOD_WITHOUT_GLOO detected, skip compiling Horovod with Gloo.
INFO: Compiler /opt/rh/devtoolset-7/root/usr/bin/g++ (version 7.3.1 20180303 (Red Hat 7.3.1-5)) is not usable for this TensorFlow installation. Require g++ (version >=4.8.5, <5).
INFO: Compiler /opt/rh/devtoolset-8/root/usr/bin/g++ (version 8.3.1 20190311 (Red Hat 8.3.1-3)) is not usable for this TensorFlow installation. Require g++ (version >=4.8.5, <5).
INFO: Compilers /usr/bin/gcc and /usr/bin/g++ (version 4.8.5 20150623 (Red Hat 4.8.5-39)) selected for TensorFlow plugin build.
building 'horovod.tensorflow.mpi_lib' extension
creating build/temp.linux-x86_64-3.6/horovod
creating build/temp.linux-x86_64-3.6/horovod/common
creating build/temp.linux-x86_64-3.6/horovod/common/ops
creating build/temp.linux-x86_64-3.6/horovod/common/optim
creating build/temp.linux-x86_64-3.6/horovod/common/utils
creating build/temp.linux-x86_64-3.6/horovod/common/mpi
creating build/temp.linux-x86_64-3.6/horovod/common/ops/adasum
creating build/temp.linux-x86_64-3.6/horovod/tensorflow
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/common.cc -o build/temp.linux-x86_64-3.6/horovod/common/common.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/controller.cc -o build/temp.linux-x86_64-3.6/horovod/common/controller.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/fusion_buffer_manager.cc -o build/temp.linux-x86_64-3.6/horovod/common/fusion_buffer_manager.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/logging.cc -o build/temp.linux-x86_64-3.6/horovod/common/logging.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/message.cc -o build/temp.linux-x86_64-3.6/horovod/common/message.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/operations.cc -o build/temp.linux-x86_64-3.6/horovod/common/operations.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/mpi/mpi_context.h:25:0,
from horovod/common/operations.cc:47:
horovod/common/mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/parameter_manager.cc -o build/temp.linux-x86_64-3.6/horovod/common/parameter_manager.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
horovod/common/parameter_manager.cc: In member function ‘virtual bool horovod::common::ParameterManager::BayesianParameter::IsDoneTuning() const’:
horovod/common/parameter_manager.cc:466:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
return iteration_ > max_samples_;
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/response_cache.cc -o build/temp.linux-x86_64-3.6/horovod/common/response_cache.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/stall_inspector.cc -o build/temp.linux-x86_64-3.6/horovod/common/stall_inspector.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/thread_pool.cc -o build/temp.linux-x86_64-3.6/horovod/common/thread_pool.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/timeline.cc -o build/temp.linux-x86_64-3.6/horovod/common/timeline.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/tensor_queue.cc -o build/temp.linux-x86_64-3.6/horovod/common/tensor_queue.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/collective_operations.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/collective_operations.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/operation_manager.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/operation_manager.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/optim/bayesian_optimization.cc -o build/temp.linux-x86_64-3.6/horovod/common/optim/bayesian_optimization.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/optim/gaussian_process.cc -o build/temp.linux-x86_64-3.6/horovod/common/optim/gaussian_process.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/utils/env_parser.cc -o build/temp.linux-x86_64-3.6/horovod/common/utils/env_parser.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/half.cc -o build/temp.linux-x86_64-3.6/horovod/common/half.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/half.cc:16:0:
horovod/common/half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/mpi/mpi_context.cc -o build/temp.linux-x86_64-3.6/horovod/common/mpi/mpi_context.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/mpi/mpi_context.h:25:0,
from horovod/common/mpi/mpi_context.cc:17:
horovod/common/mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/mpi/mpi_controller.cc -o build/temp.linux-x86_64-3.6/horovod/common/mpi/mpi_controller.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/mpi/mpi_context.h:25:0,
from horovod/common/mpi/mpi_controller.h:19,
from horovod/common/mpi/mpi_controller.cc:16:
horovod/common/mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/mpi_operations.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/mpi_operations.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/ops/../mpi/mpi_context.h:25:0,
from horovod/common/ops/mpi_operations.h:27,
from horovod/common/ops/mpi_operations.cc:17:
horovod/common/ops/../mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/ops/../mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/adasum/adasum_mpi.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/adasum/adasum_mpi.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/ops/adasum/../../mpi/mpi_context.h:25:0,
from horovod/common/ops/adasum/adasum_mpi.h:21,
from horovod/common/ops/adasum/adasum_mpi.cc:16:
horovod/common/ops/adasum/../../mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/ops/adasum/../../mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/adasum_mpi_operations.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/adasum_mpi_operations.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/ops/adasum/../../mpi/mpi_context.h:25:0,
from horovod/common/ops/adasum/adasum_mpi.h:21,
from horovod/common/ops/adasum_mpi_operations.h:22,
from horovod/common/ops/adasum_mpi_operations.cc:16:
horovod/common/ops/adasum/../../mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/ops/adasum/../../mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/tensorflow/mpi_ops.cc -o build/temp.linux-x86_64-3.6/horovod/tensorflow/mpi_ops.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/g++ -pthread -shared -Wl,-z,relro -g -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv build/temp.linux-x86_64-3.6/horovod/common/common.o build/temp.linux-x86_64-3.6/horovod/common/controller.o build/temp.linux-x86_64-3.6/horovod/common/fusion_buffer_manager.o build/temp.linux-x86_64-3.6/horovod/common/logging.o build/temp.linux-x86_64-3.6/horovod/common/message.o build/temp.linux-x86_64-3.6/horovod/common/operations.o build/temp.linux-x86_64-3.6/horovod/common/parameter_manager.o build/temp.linux-x86_64-3.6/horovod/common/response_cache.o build/temp.linux-x86_64-3.6/horovod/common/stall_inspector.o build/temp.linux-x86_64-3.6/horovod/common/thread_pool.o build/temp.linux-x86_64-3.6/horovod/common/timeline.o build/temp.linux-x86_64-3.6/horovod/common/tensor_queue.o build/temp.linux-x86_64-3.6/horovod/common/ops/collective_operations.o build/temp.linux-x86_64-3.6/horovod/common/ops/operation_manager.o build/temp.linux-x86_64-3.6/horovod/common/optim/bayesian_optimization.o build/temp.linux-x86_64-3.6/horovod/common/optim/gaussian_process.o build/temp.linux-x86_64-3.6/horovod/common/utils/env_parser.o build/temp.linux-x86_64-3.6/horovod/common/half.o build/temp.linux-x86_64-3.6/horovod/common/mpi/mpi_context.o build/temp.linux-x86_64-3.6/horovod/common/mpi/mpi_controller.o build/temp.linux-x86_64-3.6/horovod/common/ops/mpi_operations.o build/temp.linux-x86_64-3.6/horovod/common/ops/adasum/adasum_mpi.o build/temp.linux-x86_64-3.6/horovod/common/ops/adasum_mpi_operations.o build/temp.linux-x86_64-3.6/horovod/tensorflow/mpi_ops.o -L/usr/lib64 -lpython3.6m -o build/lib.linux-x86_64-3.6/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so -Wl,--version-script=horovod.lds -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib 
-Wl,--enable-new-dtags -L/usr/local/lib -lmpi -L/root/.local/lib/python3.6/site-packages/tensorflow -l:libtensorflow_framework.so.1
/opt/rh/devtoolset-7/root/usr/bin/ld: cannot find -lpython3.6m
collect2: error: ld returned 1 exit status
error: command '/usr/bin/g++' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for horovod
Running setup.py clean for horovod
Failed to build horovod
Installing collected packages: horovod
Running setup.py install for horovod ...
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-rw1my8vd/horovod_c985d6fc46794d40a1fbe4f795ac2673/setup.py'"'"'; __file__='"'"'/tmp/pip-install-rw1my8vd/horovod_c985d6fc46794d40a1fbe4f795ac2673/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-vlu2jf8f/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.6m/horovod
cwd: /tmp/pip-install-rw1my8vd/horovod_c985d6fc46794d40a1fbe4f795ac2673/
Complete output (211 lines):
/usr/lib64/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'test_requires'
warnings.warn(msg)
running install
/root/.local/lib/python3.6/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/horovod
copying horovod/__init__.py -> build/lib.linux-x86_64-3.6/horovod
creating build/lib.linux-x86_64-3.6/horovod/common
copying horovod/common/basics.py -> build/lib.linux-x86_64-3.6/horovod/common
copying horovod/common/__init__.py -> build/lib.linux-x86_64-3.6/horovod/common
copying horovod/common/util.py -> build/lib.linux-x86_64-3.6/horovod/common
creating build/lib.linux-x86_64-3.6/horovod/spark
copying horovod/spark/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark
creating build/lib.linux-x86_64-3.6/horovod/tensorflow
copying horovod/tensorflow/mpi_ops.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow
copying horovod/tensorflow/__init__.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow
copying horovod/tensorflow/compression.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow
copying horovod/tensorflow/util.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow
creating build/lib.linux-x86_64-3.6/horovod/mxnet
copying horovod/mxnet/mpi_ops.py -> build/lib.linux-x86_64-3.6/horovod/mxnet
copying horovod/mxnet/__init__.py -> build/lib.linux-x86_64-3.6/horovod/mxnet
creating build/lib.linux-x86_64-3.6/horovod/_keras
copying horovod/_keras/__init__.py -> build/lib.linux-x86_64-3.6/horovod/_keras
copying horovod/_keras/callbacks.py -> build/lib.linux-x86_64-3.6/horovod/_keras
creating build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/run_task.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/task_fn.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/gloo_run.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/run.py -> build/lib.linux-x86_64-3.6/horovod/run
copying horovod/run/mpi_run.py -> build/lib.linux-x86_64-3.6/horovod/run
creating build/lib.linux-x86_64-3.6/horovod/torch
copying horovod/torch/mpi_ops.py -> build/lib.linux-x86_64-3.6/horovod/torch
copying horovod/torch/__init__.py -> build/lib.linux-x86_64-3.6/horovod/torch
copying horovod/torch/compression.py -> build/lib.linux-x86_64-3.6/horovod/torch
creating build/lib.linux-x86_64-3.6/horovod/keras
copying horovod/keras/__init__.py -> build/lib.linux-x86_64-3.6/horovod/keras
copying horovod/keras/callbacks.py -> build/lib.linux-x86_64-3.6/horovod/keras
creating build/lib.linux-x86_64-3.6/horovod/spark/task
copying horovod/spark/task/task_info.py -> build/lib.linux-x86_64-3.6/horovod/spark/task
copying horovod/spark/task/task_service.py -> build/lib.linux-x86_64-3.6/horovod/spark/task
copying horovod/spark/task/mpirun_exec_fn.py -> build/lib.linux-x86_64-3.6/horovod/spark/task
copying horovod/spark/task/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/task
creating build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/cache.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/params.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/serialization.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/backend.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/store.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/_namedtuple_fix.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/constants.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/estimator.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
copying horovod/spark/common/util.py -> build/lib.linux-x86_64-3.6/horovod/spark/common
creating build/lib.linux-x86_64-3.6/horovod/spark/driver
copying horovod/spark/driver/driver_service.py -> build/lib.linux-x86_64-3.6/horovod/spark/driver
copying horovod/spark/driver/mpirun_rsh.py -> build/lib.linux-x86_64-3.6/horovod/spark/driver
copying horovod/spark/driver/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/driver
copying horovod/spark/driver/job_id.py -> build/lib.linux-x86_64-3.6/horovod/spark/driver
creating build/lib.linux-x86_64-3.6/horovod/spark/torch
copying horovod/spark/torch/remote.py -> build/lib.linux-x86_64-3.6/horovod/spark/torch
copying horovod/spark/torch/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/torch
copying horovod/spark/torch/estimator.py -> build/lib.linux-x86_64-3.6/horovod/spark/torch
copying horovod/spark/torch/util.py -> build/lib.linux-x86_64-3.6/horovod/spark/torch
creating build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/optimizer.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/remote.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/bare.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/__init__.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/estimator.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/util.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
copying horovod/spark/keras/tensorflow.py -> build/lib.linux-x86_64-3.6/horovod/spark/keras
creating build/lib.linux-x86_64-3.6/horovod/tensorflow/keras
copying horovod/tensorflow/keras/__init__.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow/keras
copying horovod/tensorflow/keras/callbacks.py -> build/lib.linux-x86_64-3.6/horovod/tensorflow/keras
creating build/lib.linux-x86_64-3.6/horovod/run/task
copying horovod/run/task/task_service.py -> build/lib.linux-x86_64-3.6/horovod/run/task
copying horovod/run/task/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/task
creating build/lib.linux-x86_64-3.6/horovod/run/common
copying horovod/run/common/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/common
creating build/lib.linux-x86_64-3.6/horovod/run/http
copying horovod/run/http/http_client.py -> build/lib.linux-x86_64-3.6/horovod/run/http
copying horovod/run/http/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/http
copying horovod/run/http/http_server.py -> build/lib.linux-x86_64-3.6/horovod/run/http
creating build/lib.linux-x86_64-3.6/horovod/run/driver
copying horovod/run/driver/driver_service.py -> build/lib.linux-x86_64-3.6/horovod/run/driver
copying horovod/run/driver/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/driver
creating build/lib.linux-x86_64-3.6/horovod/run/util
copying horovod/run/util/cache.py -> build/lib.linux-x86_64-3.6/horovod/run/util
copying horovod/run/util/threads.py -> build/lib.linux-x86_64-3.6/horovod/run/util
copying horovod/run/util/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/util
copying horovod/run/util/network.py -> build/lib.linux-x86_64-3.6/horovod/run/util
creating build/lib.linux-x86_64-3.6/horovod/run/common/service
copying horovod/run/common/service/task_service.py -> build/lib.linux-x86_64-3.6/horovod/run/common/service
copying horovod/run/common/service/driver_service.py -> build/lib.linux-x86_64-3.6/horovod/run/common/service
copying horovod/run/common/service/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/common/service
creating build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/codec.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/secret.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/host_hash.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/settings.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/env.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/__init__.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/network.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/timeout.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/safe_shell_exec.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
copying horovod/run/common/util/config_parser.py -> build/lib.linux-x86_64-3.6/horovod/run/common/util
creating build/lib.linux-x86_64-3.6/horovod/torch/mpi_lib
copying horovod/torch/mpi_lib/__init__.py -> build/lib.linux-x86_64-3.6/horovod/torch/mpi_lib
creating build/lib.linux-x86_64-3.6/horovod/torch/mpi_lib_impl
copying horovod/torch/mpi_lib_impl/__init__.py -> build/lib.linux-x86_64-3.6/horovod/torch/mpi_lib_impl
running build_ext
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/include/python3.6m -c build/temp.linux-x86_64-3.6/test_compile/test_cpp_flags.cc -o build/temp.linux-x86_64-3.6/test_compile/test_cpp_flags.o
gcc -pthread -shared -Wl,-z,relro -g build/temp.linux-x86_64-3.6/test_compile/test_cpp_flags.o -L/usr/lib64 -o build/temp.linux-x86_64-3.6/test_compile/test_cpp_flags.so
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python3.6m -c build/temp.linux-x86_64-3.6/test_compile/test_link_flags.cc -o build/temp.linux-x86_64-3.6/test_compile/test_link_flags.o
gcc -pthread -shared -Wl,-z,relro -g -Wl,--version-script=horovod.lds build/temp.linux-x86_64-3.6/test_compile/test_link_flags.o -L/usr/lib64 -o build/temp.linux-x86_64-3.6/test_compile/test_link_flags.so
INFO: HOROVOD_WITHOUT_GLOO detected, skip compiling Horovod with Gloo.
INFO: Compiler /opt/rh/devtoolset-7/root/usr/bin/g++ (version 7.3.1 20180303 (Red Hat 7.3.1-5)) is not usable for this TensorFlow installation. Require g++ (version >=4.8.5, <5).
INFO: Compiler /opt/rh/devtoolset-8/root/usr/bin/g++ (version 8.3.1 20190311 (Red Hat 8.3.1-3)) is not usable for this TensorFlow installation. Require g++ (version >=4.8.5, <5).
INFO: Compilers /usr/bin/gcc and /usr/bin/g++ (version 4.8.5 20150623 (Red Hat 4.8.5-39)) selected for TensorFlow plugin build.
building 'horovod.tensorflow.mpi_lib' extension
creating build/temp.linux-x86_64-3.6/horovod
creating build/temp.linux-x86_64-3.6/horovod/common
creating build/temp.linux-x86_64-3.6/horovod/common/ops
creating build/temp.linux-x86_64-3.6/horovod/common/optim
creating build/temp.linux-x86_64-3.6/horovod/common/utils
creating build/temp.linux-x86_64-3.6/horovod/common/mpi
creating build/temp.linux-x86_64-3.6/horovod/common/ops/adasum
creating build/temp.linux-x86_64-3.6/horovod/tensorflow
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/common.cc -o build/temp.linux-x86_64-3.6/horovod/common/common.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/controller.cc -o build/temp.linux-x86_64-3.6/horovod/common/controller.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/fusion_buffer_manager.cc -o build/temp.linux-x86_64-3.6/horovod/common/fusion_buffer_manager.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/logging.cc -o build/temp.linux-x86_64-3.6/horovod/common/logging.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/message.cc -o build/temp.linux-x86_64-3.6/horovod/common/message.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/operations.cc -o build/temp.linux-x86_64-3.6/horovod/common/operations.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/mpi/mpi_context.h:25:0,
from horovod/common/operations.cc:47:
horovod/common/mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/parameter_manager.cc -o build/temp.linux-x86_64-3.6/horovod/common/parameter_manager.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
horovod/common/parameter_manager.cc: In member function ‘virtual bool horovod::common::ParameterManager::BayesianParameter::IsDoneTuning() const’:
horovod/common/parameter_manager.cc:466:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
return iteration_ > max_samples_;
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/response_cache.cc -o build/temp.linux-x86_64-3.6/horovod/common/response_cache.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/stall_inspector.cc -o build/temp.linux-x86_64-3.6/horovod/common/stall_inspector.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/thread_pool.cc -o build/temp.linux-x86_64-3.6/horovod/common/thread_pool.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/timeline.cc -o build/temp.linux-x86_64-3.6/horovod/common/timeline.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/tensor_queue.cc -o build/temp.linux-x86_64-3.6/horovod/common/tensor_queue.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/collective_operations.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/collective_operations.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/operation_manager.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/operation_manager.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/optim/bayesian_optimization.cc -o build/temp.linux-x86_64-3.6/horovod/common/optim/bayesian_optimization.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/optim/gaussian_process.cc -o build/temp.linux-x86_64-3.6/horovod/common/optim/gaussian_process.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/utils/env_parser.cc -o build/temp.linux-x86_64-3.6/horovod/common/utils/env_parser.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/half.cc -o build/temp.linux-x86_64-3.6/horovod/common/half.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/half.cc:16:0:
horovod/common/half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/mpi/mpi_context.cc -o build/temp.linux-x86_64-3.6/horovod/common/mpi/mpi_context.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/mpi/mpi_context.h:25:0,
from horovod/common/mpi/mpi_context.cc:17:
horovod/common/mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/mpi/mpi_controller.cc -o build/temp.linux-x86_64-3.6/horovod/common/mpi/mpi_controller.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/mpi/mpi_context.h:25:0,
from horovod/common/mpi/mpi_controller.h:19,
from horovod/common/mpi/mpi_controller.cc:16:
horovod/common/mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/mpi_operations.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/mpi_operations.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/ops/../mpi/mpi_context.h:25:0,
from horovod/common/ops/mpi_operations.h:27,
from horovod/common/ops/mpi_operations.cc:17:
horovod/common/ops/../mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/ops/../mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/adasum/adasum_mpi.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/adasum/adasum_mpi.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/ops/adasum/../../mpi/mpi_context.h:25:0,
from horovod/common/ops/adasum/adasum_mpi.h:21,
from horovod/common/ops/adasum/adasum_mpi.cc:16:
horovod/common/ops/adasum/../../mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/ops/adasum/../../mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/common/ops/adasum_mpi_operations.cc -o build/temp.linux-x86_64-3.6/horovod/common/ops/adasum_mpi_operations.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
In file included from horovod/common/ops/adasum/../../mpi/mpi_context.h:25:0,
from horovod/common/ops/adasum/adasum_mpi.h:21,
from horovod/common/ops/adasum_mpi_operations.h:22,
from horovod/common/ops/adasum_mpi_operations.cc:16:
horovod/common/ops/adasum/../../mpi/../half.h: In function ‘void horovod::common::HalfBits2Float(short unsigned int*, float*)’:
horovod/common/ops/adasum/../../mpi/../half.h:70:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
/usr/bin/gcc -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DEIGEN_MPL2_ONLY=1 -DHAVE_MPI=1 -Ithird_party/HTTPRequest/include -Ithird_party/boost/assert/include -Ithird_party/boost/config/include -Ithird_party/boost/core/include -Ithird_party/boost/detail/include -Ithird_party/boost/iterator/include -Ithird_party/boost/lockfree/include -Ithird_party/boost/mpl/include -Ithird_party/boost/parameter/include -Ithird_party/boost/predef/include -Ithird_party/boost/preprocessor/include -Ithird_party/boost/static_assert/include -Ithird_party/boost/type_traits/include -Ithird_party/boost/utility/include -Ithird_party/eigen -Ithird_party/flatbuffers/include -Ithird_party/lbfgs/include -I/usr/include/python3.6m -c horovod/tensorflow/mpi_ops.cc -o build/temp.linux-x86_64-3.6/horovod/tensorflow/mpi_ops.o -std=c++11 -fPIC -O2 -Wall -fassociative-math -ffast-math -ftree-vectorize -funsafe-math-optimizations -mf16c -mavx -mfma -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/root/.local/lib/python3.6/site-packages/tensorflow/include -D_GLIBCXX_USE_CXX11_ABI=0
/usr/bin/g++ -pthread -shared -Wl,-z,relro -g -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv build/temp.linux-x86_64-3.6/horovod/common/common.o build/temp.linux-x86_64-3.6/horovod/common/controller.o build/temp.linux-x86_64-3.6/horovod/common/fusion_buffer_manager.o build/temp.linux-x86_64-3.6/horovod/common/logging.o build/temp.linux-x86_64-3.6/horovod/common/message.o build/temp.linux-x86_64-3.6/horovod/common/operations.o build/temp.linux-x86_64-3.6/horovod/common/parameter_manager.o build/temp.linux-x86_64-3.6/horovod/common/response_cache.o build/temp.linux-x86_64-3.6/horovod/common/stall_inspector.o build/temp.linux-x86_64-3.6/horovod/common/thread_pool.o build/temp.linux-x86_64-3.6/horovod/common/timeline.o build/temp.linux-x86_64-3.6/horovod/common/tensor_queue.o build/temp.linux-x86_64-3.6/horovod/common/ops/collective_operations.o build/temp.linux-x86_64-3.6/horovod/common/ops/operation_manager.o build/temp.linux-x86_64-3.6/horovod/common/optim/bayesian_optimization.o build/temp.linux-x86_64-3.6/horovod/common/optim/gaussian_process.o build/temp.linux-x86_64-3.6/horovod/common/utils/env_parser.o build/temp.linux-x86_64-3.6/horovod/common/half.o build/temp.linux-x86_64-3.6/horovod/common/mpi/mpi_context.o build/temp.linux-x86_64-3.6/horovod/common/mpi/mpi_controller.o build/temp.linux-x86_64-3.6/horovod/common/ops/mpi_operations.o build/temp.linux-x86_64-3.6/horovod/common/ops/adasum/adasum_mpi.o build/temp.linux-x86_64-3.6/horovod/common/ops/adasum_mpi_operations.o build/temp.linux-x86_64-3.6/horovod/tensorflow/mpi_ops.o -L/usr/lib64 -lpython3.6m -o build/lib.linux-x86_64-3.6/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so -Wl,--version-script=horovod.lds -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib -Wl,--enable-new-dtags -L/usr/local/lib -lmpi -I/usr/local/include -pthread -Wl,-rpath -Wl,/usr/local/lib 
-Wl,--enable-new-dtags -L/usr/local/lib -lmpi -L/root/.local/lib/python3.6/site-packages/tensorflow -l:libtensorflow_framework.so.1
/opt/rh/devtoolset-7/root/usr/bin/ld: cannot find -lpython3.6m
collect2: error: ld returned 1 exit status
error: command '/usr/bin/g++' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-rw1my8vd/horovod_c985d6fc46794d40a1fbe4f795ac2673/setup.py'"'"'; __file__='"'"'/tmp/pip-install-rw1my8vd/horovod_c985d6fc46794d40a1fbe4f795ac2673/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-vlu2jf8f/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.6m/horovod Check the logs for full command output.
**Could anyone help point out which step I did wrong? Thanks a lot!**
| closed | 2021-12-01T09:15:54Z | 2022-07-01T20:35:01Z | https://github.com/horovod/horovod/issues/3297 | [
"bug"
] | coolnut12138 | 1 |
apachecn/ailearning | nlp | 542 | Join us | Hi, how can I join the group chat to study? Why can't I get in? Are there certain requirements? | closed | 2019-09-03T02:40:48Z | 2021-09-07T17:45:14Z | https://github.com/apachecn/ailearning/issues/542 | [] | achievejia | 1 |
gradio-app/gradio | machine-learning | 10,557 | Add an option to remove line numbers in gr.Code | - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
`gr.Code()` always displays line numbers.
**Describe the solution you'd like**
I propose to add an option `show_line_numbers = True | False` to display or hide the line numbers. The default should be `True` for compatibility with the current behaviour.
| closed | 2025-02-10T11:38:07Z | 2025-02-21T22:11:43Z | https://github.com/gradio-app/gradio/issues/10557 | [
"enhancement",
"good first issue"
] | altomani | 1 |
MaartenGr/BERTopic | nlp | 2,014 | Zero shot topic model with pre embedded zero shot topics | _Preface: I have tried to read through the current issues, and I don't think any of them raises what I am after. Issues like this one (https://github.com/MaartenGr/BERTopic/issues/2011) sound promising, but it is talking about something different. I apologise if this has already been discussed!_
I would like to try out BERTopic's zero-shot modelling while using a proprietary embedding model (voyageai). Therefore I need to give BERTopic the embeddings for both the documents and the zero-shot topics.
An example would be something like this:
```python
import numpy as np
from bertopic import BERTopic
from datasets import load_dataset

dataset = load_dataset("CShorten/ML-ArXiv-Papers")["train"]
docs = dataset["abstract"][:5_000]

zeroshot_topic_list = ["Clustering", "Topic Modeling", "Large Language Models"]
zeroshot_topic_list_embeddings = np.random.rand(len(zeroshot_topic_list), 1024).astype(np.float32)
document_embeddings = np.random.rand(len(docs), 1024).astype(np.float32)

topic_model = BERTopic(
    embedding_model=None,
    min_topic_size=5,
    zeroshot_topic_list=zeroshot_topic_list,
    embedded_zeroshot_topic_list=zeroshot_topic_list_embeddings,  # proposed new argument
    zeroshot_min_similarity=0.85,
)
topics, _ = topic_model.fit_transform(docs, document_embeddings)
topic_model.get_topic_info()
```
Am I missing something about how BERTopic and zero-shot models should work? If not, I am happy to make a PR with what seem to be the small changes needed.
**Potential solution**
I have had a look through `_bertopic.py` and it seems to be a relatively straightforward process.
It seems that [here](https://github.com/MaartenGr/BERTopic/blob/be9376c99dba157707286b4d828277b5f3627572/bertopic/_bertopic.py#L3554) it could just be passed the given zero-shot topic embeddings. These embeddings would come from another `__init__` argument.
Then, besides that, a few other changes would be needed, such as to the `_is_zeroshot()` method.
| open | 2024-05-28T05:27:12Z | 2024-05-31T13:36:07Z | https://github.com/MaartenGr/BERTopic/issues/2014 | [] | 1jamesthompson1 | 1 |
ned2/slapdash | dash | 21 | Documentation on Heroku Deployment | This may not be an issue or belong here, but I was wondering if you could help document how one can deploy this to Heroku. | closed | 2019-07-15T13:29:23Z | 2019-08-01T16:02:29Z | https://github.com/ned2/slapdash/issues/21 | [] | btoro | 2 |
gevent/gevent | asyncio | 1,959 | greenlets accidentally stuck on sleep(0) after GC removes objects with sleep(0) in __del__ on python3.9+ | * gevent version: 22.10.2
* Python version: 3.9.X
* Operating System: CentOS based or docker python3.9 + pip install gevent
### Description:
This code fails with `LoopExit: This operation would block forever` on Python 3.9.
```python
import gevent.monkey
gevent.monkey.patch_all()


class X:
    def __init__(self):
        # need this for GC
        self.link = self

    def __del__(self):
        gevent.hub.sleep()


def loop():
    i = 0
    while True:
        print(f'iteration {i}')
        i += 1
        X()
        gevent.hub.sleep()


a = gevent.spawn(loop)
# uncomment this if you want the loop() greenlet to get stuck
# b = gevent.spawn(gevent.hub.sleep, 100000000000000000)
a.join()
```
Reproducible with python:39 docker container.
```
# docker run -v ${PWD}:/test python:3.9 sh -c "pip install gevent==22.10.2 2>&1; python /test/test.py 2>&1" | tail -n 20
iteration 130
iteration 131
iteration 132
iteration 133
iteration 134
iteration 135
Traceback (most recent call last):
File "/test/test.py", line 23, in <module>
a.join()
File "src/gevent/greenlet.py", line 833, in gevent._gevent_cgreenlet.Greenlet.join
File "src/gevent/greenlet.py", line 859, in gevent._gevent_cgreenlet.Greenlet.join
File "src/gevent/greenlet.py", line 848, in gevent._gevent_cgreenlet.Greenlet.join
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_greenlet_primitives.py", line 65, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src/gevent/_gevent_c_greenlet_primitives.pxd", line 35, in gevent._gevent_c_greenlet_primitives._greenlet_switch
gevent.exceptions.LoopExit: This operation would block forever
Hub: <Hub '' at 0x7fc21b5c32c0 epoll default pending=0 ref=0 fileno=3 thread_ident=0x7fc21c897740>
Handles:
[]
```
`loop()` hangs after GC kicks in while `hub.sleep(0)` is executing, between the `hub.run_callback(waiter.switch, None)` call and the `hub.switch()` inside `waiter.get()`. GC runs in the same greenlet, so the `waiter.switch` callback that `loop()` had already scheduled ends up waking the waiter created in the `__del__` method; after that, the `waiter.switch` callback scheduled by the `__del__` method does not switch back to the greenlet, because its waiter no longer has a `self.greenlet`, so execution never switches back to `loop()`.
While the provided example is artificial, we found this in a real application after we switched to Python 3.9 and the latest gevent.
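The same-greenlet GC behaviour is easy to show without gevent at all. Below is a plain-CPython illustration (names made up, no gevent involved): cyclic garbage with a `__del__` is finalized synchronously in whichever thread, and therefore greenlet, happens to trigger a collection, so the finalizer body can run between two arbitrary steps of another function such as `sleep()`:

```python
import gc

events = []

class Cycle:
    def __init__(self):
        self.link = self            # reference cycle: only the cyclic GC frees this
    def __del__(self):
        events.append("finalizer")  # stands in for the gevent.hub.sleep() call

def busy_loop(n):
    for i in range(n):
        Cycle()                     # creates unreachable cyclic garbage
        events.append(i)

gc.set_threshold(5)                 # force frequent generation-0 collections
busy_loop(50)
gc.collect()                        # collect whatever is left over
print(events.count("finalizer"))    # 50: every __del__ ran, synchronously
```

With such a low threshold, some of the `"finalizer"` entries typically land in the middle of `busy_loop()`'s own appends, which is exactly the kind of window in which a re-entrant `sleep(0)` from `__del__` can clobber the waiter state described above.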
Here is our traceback, with `__del__` executed inside `sleep()`/`waiter.get()`:
```python-traceback
File "/usr/lib/python3.9/site-packages/requests/adapters.py", line 456, in send
conn = self.get_connection(request.url, proxies)
File "/usr/lib/python3.9/site-packages/requests/adapters.py", line 358, in get_connection
conn = self.poolmanager.connection_from_url(url)
File "/usr/lib/python3.9/site-packages/urllib3/poolmanager.py", line 298, in connection_from_url
return self.connection_from_host(
File "/usr/lib/python3.9/site-packages/urllib3/poolmanager.py", line 245, in connection_from_host
return self.connection_from_context(request_context)
File "/usr/lib/python3.9/site-packages/urllib3/poolmanager.py", line 260, in connection_from_context
return self.connection_from_pool_key(pool_key, request_context=request_context)
File "/usr/lib/python3.9/site-packages/urllib3/poolmanager.py", line 281, in connection_from_pool_key
pool = self._new_pool(scheme, host, port, request_context=request_context)
File "/usr/lib/python3.9/site-packages/urllib3/poolmanager.py", line 213, in _new_pool
return pool_cls(host, port, **request_context)
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 906, in __init__
HTTPConnectionPool.__init__(
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 206, in __init__
self.pool.put(None)
File "/usr/lib64/python3.9/queue.py", line 152, in put
self.not_empty.notify()
File "/usr/lib64/python3.9/threading.py", line 361, in notify
if not self._is_owned():
File "/usr/lib64/python3.9/threading.py", line 274, in _is_owned
if self._lock.acquire(False):
File "/usr/lib64/python3.9/site-packages/gevent/thread.py", line 141, in acquire
sleep()
File "/usr/lib64/python3.9/site-packages/gevent/hub.py", line 160, in sleep
waiter.get()
File "/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 397, in __del__ # <<<<< GC IS HERE
self._session.close()
File "/usr/lib/python3.9/site-packages/requests/sessions.py", line 797, in close
v.close()
File "/usr/lib/python3.9/site-packages/requests/adapters.py", line 368, in close
self.poolmanager.clear()
File "/usr/lib/python3.9/site-packages/urllib3/poolmanager.py", line 222, in clear
self.pools.clear()
File "/usr/lib/python3.9/site-packages/urllib3/_collections.py", line 100, in clear
self.dispose_func(value)
File "/usr/lib/python3.9/site-packages/urllib3/poolmanager.py", line 173, in <lambda>
self.pools = RecentlyUsedContainer(num_pools, dispose_func=lambda p: p.close())
File "/usr/lib/python3.9/site-packages/urllib3/connectionpool.py", line 490, in close
conn = old_pool.get(block=False)
File "/usr/lib64/python3.9/queue.py", line 182, in get
self.not_full.notify()
File "/usr/lib64/python3.9/threading.py", line 361, in notify
if not self._is_owned():
File "/usr/lib64/python3.9/threading.py", line 274, in _is_owned
if self._lock.acquire(False):
File "/usr/lib64/python3.9/site-packages/gevent/thread.py", line 141, in acquire
sleep()
File "/usr/lib64/python3.9/site-packages/gevent/hub.py", line 160, in sleep
waiter.get()
```
I was not able to reproduce this with Python 3.8; probably the GC logic changed in Python 3.9.
| open | 2023-06-08T09:49:06Z | 2023-10-06T17:52:36Z | https://github.com/gevent/gevent/issues/1959 | [] | unipolar | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,136 | Hello, | Hello,
using a new pyenv environment with the following versions and libraries installed (after running `pip3 install torch torchvision torchaudio`):
```
% python --version
Python 3.10.4
% pyenv --version
pyenv 2.3.0
% pip list
Package Version
------------------ ---------
certifi 2022.9.14
charset-normalizer 2.1.1
idna 3.4
numpy 1.23.3
Pillow 9.2.0
pip 22.2.2
requests 2.28.1
setuptools 58.1.0
torch 1.12.1
torchaudio 0.12.1
torchvision 0.13.1
typing_extensions 4.3.0
urllib3            1.26.12
```

I am having an error when trying to install the requirements `pip3 install -r requirements.txt`

```
 Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [29 lines of output]
Traceback (most recent call last):
File "/Users/adpablos/.pyenv/versions/3.10.4/envs/real-time-voice-cloning/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 156, in prepare_metadata_for_build_wheel
hook = backend.prepare_metadata_for_build_wheel
AttributeError: module 'sipbuild.api' has no attribute 'prepare_metadata_for_build_wheel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/adpablos/.pyenv/versions/3.10.4/envs/real-time-voice-cloning/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
main()
File "/Users/adpablos/.pyenv/versions/3.10.4/envs/real-time-voice-cloning/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Users/adpablos/.pyenv/versions/3.10.4/envs/real-time-voice-cloning/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 160, in prepare_metadata_for_build_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/sipbuild/api.py", line 46, in build_wheel
project = AbstractProject.bootstrap('wheel',
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/sipbuild/abstract_project.py", line 87, in bootstrap
project.setup(pyproject, tool, tool_description)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/sipbuild/project.py", line 584, in setup
self.apply_user_defaults(tool)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-install-eenv0a8p/pyqt5_77eef741f3924b23ad38cc2613c5171c/project.py", line 63, in apply_user_defaults
super().apply_user_defaults(tool)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/pyqtbuild/project.py", line 70, in apply_user_defaults
super().apply_user_defaults(tool)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/sipbuild/project.py", line 236, in apply_user_defaults
self.builder.apply_user_defaults(tool)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/pyqtbuild/builder.py", line 67, in apply_user_defaults
raise PyProjectOptionException('qmake',
sipbuild.pyproject.PyProjectOptionException
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
Any idea what I am missing?
Thanks in advance!
__Originally posted by @adpablos in https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1113__
__Originally posted by @ImanuillKant1 in https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1135__ | closed | 2022-11-19T17:26:18Z | 2022-12-02T08:51:51Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1136 | [] | ImanuillKant1 | 0 |
great-expectations/great_expectations | data-science | 10,977 | File Context can't be created with `context_root_dir` | ```
context = gx.get_context(
    context_root_dir=context_root_dir, project_root_dir=None, mode="file"
)
```
Results in
```
TypeError: 'project_root_dir' and 'context_root_dir' are conflicting args; please only provide one
```
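One guess at the mechanism, purely as an illustration and not Great Expectations' actual code: if the factory distinguishes "argument not passed" from "argument passed as `None`" using a sentinel default, then the explicit `project_root_dir=None` in the call above counts as "provided" and trips the conflict check:

```python
_UNSET = object()  # hypothetical sentinel: "not passed" vs. "passed as None"

def get_context(context_root_dir=_UNSET, project_root_dir=_UNSET, mode=None):
    # If any explicitly passed value -- even None -- counts as "provided",
    # then project_root_dir=None still conflicts with context_root_dir:
    if context_root_dir is not _UNSET and project_root_dir is not _UNSET:
        raise TypeError(
            "'project_root_dir' and 'context_root_dir' are conflicting args; "
            "please only provide one"
        )
    return context_root_dir if context_root_dir is not _UNSET else project_root_dir

print(get_context(context_root_dir="/tmp/gx", mode="file"))  # /tmp/gx

try:
    get_context(context_root_dir="/tmp/gx", project_root_dir=None, mode="file")
except TypeError as exc:
    print(exc)  # conflicting args, even though one value is None
```

If that is what is happening, omitting `project_root_dir` entirely, rather than passing `None`, should avoid the error.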
GX version: 1.3.7
I think this is due to https://github.com/great-expectations/great_expectations/blob/f9dba6f7c5409b0f25374dac028000ebabedca48/great_expectations/data_context/data_context/context_factory.py#L184 | open | 2025-02-27T04:13:24Z | 2025-03-19T16:44:18Z | https://github.com/great-expectations/great_expectations/issues/10977 | [
"request-for-help"
] | CrossNox | 7 |
google-research/bert | tensorflow | 939 | nan error: tensorflow.python.framework.errors_impl.InvalidArgumentError: From /job:worker/replica:0/task:0: Gradient for bert/embeddings/LayerNorm/gamma:0 is NaN : Tensor had NaN values [[node CheckNumerics_4 (defined at usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]] | Original stack trace for 'CheckNumerics_4':
File "usr/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "home/mengqingyang0102/albert/run_squad_sp.py", line 1381, in <module>
tf.app.run()
File "usr/local/lib/python3.5/dist-packages/tensorflow_core/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "usr/local/lib/python3.5/dist-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "usr/local/lib/python3.5/dist-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "home/mengqingyang0102/albert/run_squad_sp.py", line 1304, in main
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
File "usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 3030, in train
saving_listeners=saving_listeners)
File "usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 370, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1161, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1191, in _train_model_default
features, labels, ModeKeys.TRAIN, self.config)
File "usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 2857, in _call_model_fn
config)
File "usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1149, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 3278, in _model_fn
update_ops = _sync_variables_ops(ctx)
File "usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 240, in _sync_variables_ops
for v in variables.trainable_variables()
File "usr/local/lib/python3.5/dist-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 240, in <listcomp>
for v in variables.trainable_variables()
File "usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/gen_array_ops.py", line 1011, in check_numerics
"CheckNumerics", tensor=tensor, message=message, name=name)
File "usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "usr/local/lib/python3.5/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
I set a lower batch size but it still does not work. | open | 2019-11-26T08:16:28Z | 2020-05-05T17:55:55Z | https://github.com/google-research/bert/issues/939 | [] | SUMMER1234 | 2 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 1,180 | Add a way to create a paginated query without executing it | Currently, when calling `Query.paginate`, the SQL executes immediately. This makes it impossible to support the use case where you want to store the query as a Redis key, so that the result set can be cached and returned from memory instead of executing the query at all.
Basically, what I'm asking for is some kind of functionality like this:
```python
"""
omitting the serialization/deserialization but the gist is something like.....
"""

paginated_query = Query.paginate(page=1, per_page=25)

if Redis.get(paginated_query.query):
    return Redis.get(paginated_query.query)

result_set = paginated_query.items
Redis.set(paginated_query.query, result_set)
return result_set
``` | closed | 2023-03-15T19:27:40Z | 2023-03-30T01:08:03Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1180 | [] | martinmckenna | 2 |
pywinauto/pywinauto | automation | 902 | Windows access violation |

This is my script.

The test case is passed but there is access violation. I wonder what the root cause is and how i can fix it.
- Pywinauto version: 0.6.8
- Python version and bitness: 3.8.2 64 bit
- Platform and OS: win 10 pro
| open | 2020-03-14T04:28:48Z | 2020-03-15T16:38:15Z | https://github.com/pywinauto/pywinauto/issues/902 | [] | czhhua28 | 1 |
comfyanonymous/ComfyUI | pytorch | 7,268 | Request how to create a new repository | ### Your question
I downloaded Git and also Git LFS. Please, how do I create a new repository and use it? It keeps popping up this tooltip.
### Logs
```powershell
```
### Other
 | closed | 2025-03-16T10:03:14Z | 2025-03-16T15:03:46Z | https://github.com/comfyanonymous/ComfyUI/issues/7268 | [
"User Support"
] | AC-pj | 1 |
ultralytics/ultralytics | pytorch | 18,881 | Where run summary and run history is created/can be changed? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hey,
I successfully added the metric `mAP70` to my training and wonder about the ordering in the final history and summary.
Where (in which script) are the history and summary created?
Given that my key list is correct:
```python
@property
def keys(self):
    """Returns a list of keys for accessing specific metrics."""
    # return ["metrics/precision(B)", "metrics/recall(B)", "metrics/mAP50(B)", "metrics/mAP50-95(B)"]  # default
    default_keys = ["metrics/precision(B)", "metrics/recall(B)", "metrics/mAP50(B)", "metrics/mAP70(B)", "metrics/mAP50-95(B)"]  # default metrics for training job
    return default_keys
```
I expected `mAP70` to appear right after `mAP50`:
<div class="wandb-row">
<div class="wandb-col">
<h3>Run history:</h3>
<table class="wandb">
<tr><td>lr/pg0</td><td>█▂▁▁▁</td></tr>
<tr><td>lr/pg1</td><td>▄█▆▄▁</td></tr>
<tr><td>lr/pg2</td><td>▄█▆▄▁</td></tr>
<tr><td>metrics/mAP50(B)</td><td>▁▄▆▇█</td></tr>
<tr><td>metrics/mAP50-95(B)</td><td>▁▄▆▇█</td></tr>
<tr><td>metrics/mAP70(B)</td><td>▁▄▇▇█</td></tr>
<tr><td>metrics/precision(B)</td><td>▁█▂▃▃</td></tr>
<tr><td>metrics/recall(B)</td><td>▁▃▅▇█</td></tr>
<tr><td>model/GFLOPs</td><td>▁</td></tr>
<tr><td>model/parameters</td><td>▁</td></tr>
<tr><td>model/speed_PyTorch(ms)</td><td>▁</td></tr>
<tr><td>train/box_loss</td><td>█▇▄▂▁</td></tr>
<tr><td>train/cls_loss</td><td>█▅▃▂▁</td></tr>
<tr><td>train/dfl_loss</td><td>█▅▃▂▁</td></tr>
<tr><td>val/box_loss</td><td>█▆▃▂▁</td></tr>
<tr><td>val/cls_loss</td><td>█▄▂▁▁</td></tr>
<tr><td>val/dfl_loss</td><td>█▅▃▁▁</td></tr>
</table>
</div>
<div class="wandb-col">
<h3>Run summary:</h3>
<table class="wandb">
<tr><td>lr/pg0</td><td>1e-05</td></tr>
<tr><td>lr/pg1</td><td>1e-05</td></tr>
<tr><td>lr/pg2</td><td>1e-05</td></tr>
<tr><td>metrics/mAP50(B)</td><td>0.23647</td></tr>
<tr><td>metrics/mAP50-95(B)</td><td>0.13146</td></tr>
<tr><td>metrics/mAP70(B)</td><td>0.16201</td></tr>
<tr><td>metrics/precision(B)</td><td>0.29527</td></tr>
<tr><td>metrics/recall(B)</td><td>0.27963</td></tr>
<tr><td>model/GFLOPs</td><td>29.639</td></tr>
<tr><td>model/parameters</td><td>11423327</td></tr>
<tr><td>model/speed_PyTorch(ms)</td><td>3.011</td></tr>
<tr><td>train/box_loss</td><td>1.89892</td></tr>
<tr><td>train/cls_loss</td><td>4.6896</td></tr>
<tr><td>train/dfl_loss</td><td>2.43468</td></tr>
<tr><td>val/box_loss</td><td>1.65862</td></tr>
<tr><td>val/cls_loss</td><td>4.63246</td></tr>
<tr><td>val/dfl_loss</td><td>2.57379</td></tr>
</table>
</div>
</div>
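The panels above appear to list metric keys alphabetically rather than in the order of the `keys` property. If that assumption about the dashboard holds (it is a guess, not something taken from the Ultralytics source), plain lexicographic sorting already reproduces the order shown, with `mAP70` after `mAP50-95`:

```python
# Assumption: the run history/summary panels sort metric names lexicographically.
keys = [
    "metrics/precision(B)",
    "metrics/recall(B)",
    "metrics/mAP50(B)",
    "metrics/mAP70(B)",
    "metrics/mAP50-95(B)",
]
sorted_keys = sorted(keys)
# "(" (0x28) sorts before "-" (0x2D), and "5" before "7", so:
# mAP50(B) < mAP50-95(B) < mAP70(B), matching the tables above.
print(sorted_keys)
```

If that is the case, changing the order inside `keys` alone would not affect an alphabetically sorted view; the display order would have to be controlled on the dashboard side.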
### Additional
_No response_ | closed | 2025-01-25T14:19:50Z | 2025-01-27T10:59:23Z | https://github.com/ultralytics/ultralytics/issues/18881 | [
"question"
] | Petros626 | 6 |
microsoft/JARVIS | deep-learning | 214 | Is this project no longer being updated? | There have been no changes for over two months; it looks like a Windows-supported version is not coming. | open | 2023-06-25T13:28:05Z | 2023-10-10T15:20:26Z | https://github.com/microsoft/JARVIS/issues/214 | [] | Combustible-material | 2 |
jina-ai/serve | deep-learning | 6,010 | Documentation: Adapt Documentation to single document serving and parameters schema | **Describe the feature**
Adapt Documentation to latest features added | closed | 2023-08-03T04:32:45Z | 2023-08-04T08:39:27Z | https://github.com/jina-ai/serve/issues/6010 | [] | JoanFM | 0 |
scikit-learn-contrib/metric-learn | scikit-learn | 256 | [DOC] Docstring of num_constraints should explain default behavior | In the docstring for supervised versions of weakly supervised algorithms, one has to look at the source code to find out how many constraints are constructed by default (when `num_constraints=None`). This should be explained in the docstring for `num_constraints` | closed | 2019-10-30T07:38:09Z | 2019-11-21T15:12:19Z | https://github.com/scikit-learn-contrib/metric-learn/issues/256 | [] | bellet | 1 |
biolab/orange3 | data-visualization | 6,332 | Concatenate: data source ID is not used if compute_value is ignore in comparison | ### Discussed in https://github.com/biolab/orange3/discussions/6326
<div type='discussions-op-text'>
<sup>Originally posted by **Bigfoot-solutions** February 3, 2023</sup>
I am trying to concatenate a set of 20 datasets and use the "Append data source ID" option to retain visibility into which data element came from which input source.
The issue I am running into is that even though the input data tables have unique names in the UI (Unit A logs, Unit X logs, etc) the data info widget still displays the name of every data table as "untitled", and the Concatenate widget appends source ID's of "untitled (0)" and "untitled (1)" apparently based on the order in which the tables were connected to the Concatenate widget. I have a work-around using the Create Class widget to rename the Source ID, but this is very brittle.
Am I doing something wrong or should this be a bug report? Is there a mechanism for naming data tables that I can't find in the documentation?
Thanks for any help. </div> | closed | 2023-02-07T14:14:22Z | 2023-02-10T08:28:21Z | https://github.com/biolab/orange3/issues/6332 | [] | markotoplak | 3 |
ultralytics/yolov5 | pytorch | 13,028 | Inconsistency issue with single_cls functionality and dataset class count | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I found a small issue related to `single_cls` that I'm not quite clear on the purpose of.
In `train.py`, there is the following statement:
```python
names = {0: "item"} if single_cls and len(data_dict["names"]) != 1 else data_dict["names"] # class names
```
This statement can be broken down into:
```python
if single_cls and len(data_dict["names"]) != 1: # The user has enabled --single_cls, but the dataset configuration file has more than one class
names = {0: "item"}
else: # The user has not enabled --single_cls or len(data_dict["names"]) == 1
names = data_dict["names"]
```
Here, `single_cls` indicates that the task has only one class; `data_dict["names"]` are the names of different classes defined in the dataset configuration file; `len(dict)` is used to determine the number of keys in a dictionary.
I don't understand why `len(data_dict["names"]) != 1` is used. In the current code, `names = {0: "item"}` happens in only one case: when `--single_cls` is enabled and the dataset configuration file has more than one class. Is this case really that rare? Suppose the dataset used is MS COCO, which has 80 classes; after enabling `--single_cls`, only one class remains. Will the model still train and run inference normally in this case?
Also, I suggest adding a warning to avoid misuse by users:
```python
if single_cls and len(data_dict["names"]) != 1:
LOGGER.warning("WARNING ⚠️ Please check the dataset to ensure that when --single_cls is enabled, the number of classes in the dataset is 1.")
```
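To make the discussion concrete, the quoted one-liner can be restated as a standalone function and exercised directly (a self-contained sketch, not an import from the YOLOv5 codebase; `coco_like` is a made-up stand-in for an 80-class dataset):

```python
def resolve_names(single_cls: bool, names: dict) -> dict:
    # Same logic as the quoted line from train.py.
    return {0: "item"} if single_cls and len(names) != 1 else names

coco_like = {i: f"class{i}" for i in range(80)}  # hypothetical 80-class mapping
print(resolve_names(True, coco_like))      # collapsed to {0: "item"}
print(resolve_names(True, {0: "person"}))  # already single-class: kept as-is
```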
### Additional
_No response_ | closed | 2024-05-20T08:49:40Z | 2024-05-21T05:33:10Z | https://github.com/ultralytics/yolov5/issues/13028 | [
"question"
] | Le0v1n | 3 |
indico/indico | flask | 5,962 | Lightweight meeting/lecture themes | We currently have custom plugins to maintain those themes for CERN and LCAgenda, but the vast majority of them are really simply CSS tweaks: They have a custom stylesheet (just overriding a few things of the default) and logo, and that's it (see some examples below).
We already have support for an event logo but it's only exposed in conferences. If we exposed this for lectures and meetings as well, event organizers could easily upload a custom logo for those events as well. For the CSS tweaks, the best option would be using CSS variables since we can set those somewhat easily while still using the normal webpack logic to build the CSS for the theme.
A second step (more fancy but also more work) would be to add the ability to create custom themes on the category level, which would then be available like the hardcoded themes - except that only events within that category (or its subtree) would see them.
---
Examples from [indico_themes_cern](https://github.com/indico/indico-plugins-cern/tree/master/themes_cern/indico_themes_cern):
```scss
@use 'base/palette' as *;
@import 'themes/indico';
$header-bg-color: #013d7c;
@include header-logo('themes_cern:lhcb_logo.png', 25px 25px, 200px);
```
```scss
@use 'base/palette' as *;
$header-bg-color: #fff;
$header-icon-color: $gray;
$header-text-color: $black;
@import 'themes/indico';
@include header-logo('themes_cern:intelum_logo.png', 25px 50px, 250px, 20%);
```
```scss
@use 'base/palette' as *;
$header-bg-color: #dedede;
$header-icon-color: $light-black;
$header-text-color: $black;
@import 'themes/indico';
@include header-logo('themes_cern:fcc_logo.png', 15px 30px, 230px, 20%);
``` | open | 2023-09-29T12:47:00Z | 2023-09-29T12:47:00Z | https://github.com/indico/indico/issues/5962 | [
"enhancement"
] | ThiefMaster | 0 |
dynaconf/dynaconf | flask | 1,000 | Django 4.2.5 and Dynaconf 3.2.2 (AttributeError) | **Describe the bug**
When I try to access the Django admin, the Django log shows many error messages, such as:
```bash
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/czar/.pyenv/versions/3.11.5/lib/python3.11/wsgiref/handlers.py", line 137, in run
self.result = application(self.environ, self.start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/contrib/staticfiles/handlers.py", line 80, in __call__
return self.application(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/core/handlers/wsgi.py", line 124, in __call__
response = self.get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/core/handlers/base.py", line 140, in get_response
response = self._middleware_chain(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/core/handlers/exception.py", line 57, in inner
response = response_for_exception(request, exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/core/handlers/exception.py", line 140, in response_for_exception
response = handle_uncaught_exception(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/core/handlers/exception.py", line 181, in handle_uncaught_exception
return debug.technical_500_response(request, *exc_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/views/debug.py", line 67, in technical_500_response
html = reporter.get_traceback_html()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/views/debug.py", line 410, in get_traceback_html
c = Context(self.get_traceback_data(), use_l10n=False)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/views/debug.py", line 379, in get_traceback_data
"settings": self.filter.get_safe_settings(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/django/views/debug.py", line 154, in get_safe_settings
settings_dict[k] = self.cleanse_setting(k, getattr(settings, k))
^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/dynaconf/base.py", line 145, in __getattr__
value = getattr(self._wrapped, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/czar/dev/projects/FuturesLab/flab-issue/.venv/lib/python3.11/site-packages/dynaconf/base.py", line 309, in __getattribute__
return super().__getattribute__(name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'HookableSettings' object has no attribute '_REGISTERED_HOOKS'
```
**To Reproduce**
- Pop_OS 22.04
- python 3.11.5
- django 4.2.5
- dynaconf 3.2.2
I'm using **poetry** in development, but when I use **pip** the problem also happens
1. Having the following folder structure
```
.
├── LICENSE
├── README.md
├── poetry.lock
├── poetry.toml
├── pyproject.toml
├── pytest.ini
├── requirements.txt
├── settings.yaml
├── src
│   ├── apps
│   │   ├── accounts
│   │   ├── area_skill
│   │   ├── base
│   │   ├── certified
│   │   ├── highlight
│   │   ├── post
│   │   └── profile
│   ├── conftest.py
│   ├── flab
│   │   ├── __init__.py
│   │   ├── asgi.py
│   │   ├── common.py
│   │   ├── settings
│   │   ├── urls.py
│   │   └── wsgi.py
│   ├── manage.py
│   └── tests
│       └── post
│           ├── test_post__status_code.py
│           ├── test_post__urls.py
│           └── test_post__views.py
└── www
    ├── assets
    ├── media
    └── static
```
<details>
<summary> Project structure </summary>
```python
# settings.py
""" here are the other django settings """
import os
import dynaconf # noqa
settings = dynaconf.DjangoDynaconf(
__name__,
ENVVAR_PREFIX="FLAB",
SETTINGS_FILE_FOR_DYNACONF="../settings.yaml",
SECRETS_FOR_DYNACONF="../.secrets.yaml",
) # noqa
```
</details>
2. Having the following config files:
<!-- Please adjust if you are using different files and formats! -->
<details>
<summary> Config files </summary>
**/path/.env**
```ini
ENV_FOR_DYNACONF="DEVELOPMENT"
# ENV_FOR_DYNACONF="PRODUCTION"
```
and
**/path/settings.yaml**
```yaml
---
development:
DEBUG: true
ALLOWED_HOSTS:
- localhost
- 127.0.0.1
- testserver
DATABASES:
default:
ENGINE: django.db.backends.postgresql_psycopg2
NAME: ########
USER: ########
PASSWORD: ########
HOST: ########
PORT: ########
EMAIL_BACKEND: django.core.mail.backends.console.EmailBackend
production:
DEBUG: false
ALLOWED_HOSTS:
- localhost
- 127.0.0.1
DATABASES:
default:
ENGINE: django.db.backends.postgresql_psycopg2
NAME: ########
USER: ########
PASSWORD: ########
HOST: ########
PORT: ########
```
</details>
3. Having the following app code:
<details>
<summary> Code </summary>
**/path/src/app.py**
```python
from dynaconf import settings
...
```
</details>
4. Executing under the following environment
<details>
<summary> Execution </summary>
```bash
$ poetry shell
$ src/manage.py runserver
```
</details>
**Expected behavior**
I hope the error messages stop appearing in the Django log.
**Environment (please complete the following information):**
- OS: Linux/Pop_OS 22.04
- Dynaconf Version 3.2.2
- Frameworks in use Django 4.2.5
**Additional context**
Add any other context about the problem here.
| closed | 2023-09-09T16:16:50Z | 2023-09-13T14:14:30Z | https://github.com/dynaconf/dynaconf/issues/1000 | [
"bug",
"Pending Release",
"django"
] | cesargodoi | 5 |
tflearn/tflearn | data-science | 538 | [Tutorial] I can't import 'titinic' | I ran the exact same code as [Quickstart](http://tflearn.org/tutorials/quickstart.html#source-code), but I got a problem:
```
Traceback (most recent call last):
  File "F:\Programming\MachineLearning\tflearn-master\tutorials\intro\quickstart.py", line 7, in <module>
    from tflearn.datasets import titanic
ImportError: cannot import name 'titanic'
```
I'm very new to python, so, if you can, tell me how to solve this problem in detail please. Thank you. :)
: if you need more information on this error, you can ask me here. | closed | 2016-12-27T16:55:04Z | 2020-07-01T20:07:02Z | https://github.com/tflearn/tflearn/issues/538 | [] | bongjunj | 3 |
coqui-ai/TTS | pytorch | 3,481 | [Bug] xtts ft demo: empty csv files with the format_audio_list | ### Describe the bug
I used the formatter method to process my audio files (Chinese), but I got CSV files with no data, because the condition `if word.word[-1] in ["!", ".", "?"]:` is never met.
### To Reproduce
Below is my code:
```python
datapath = "/mnt/workspace/tdy.tdy/mp3_lww"
out_path = "/mnt/workspace/tdy.tdy/mp3_lww_train"
os.makedirs(out_path, exist_ok=True)
whisper_path = "/mnt/workspace/.cache/modelscope/keepitsimple/faster-whisper-large-v3"
target_language = 'zh'
buffer=0.2
eval_percentage=0.15
speaker_name="lww"
import os
from os import path as osp
import torchaudio
from matplotlib import pyplot as plt
import torch
from faster_whisper import WhisperModel
import pandas
import gc
# Loading Whisper
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Loading Whisper Model!")
asr_model = WhisperModel(whisper_path, device=device, compute_type="float16", local_files_only=True)
def plot_waveform(waveform, sample_rate):
waveform = waveform.numpy()
num_channels, num_frames = waveform.shape
time_axis = torch.arange(0, num_frames) / sample_rate
figure, axes = plt.subplots(num_channels, 1)
if num_channels == 1:
axes = [axes]
for c in range(num_channels):
axes[c].plot(time_axis, waveform[c], linewidth=1)
axes[c].grid(True)
if num_channels > 1:
axes[c].set_ylabel(f"Channel {c+1}")
figure.suptitle("waveform")
print("Reading audio files!")
audio_files = os.listdir(datapath)
audio_total_size = 0
metadata = {"audio_file": [], "text": [], "speaker_name": []}
for f in audio_files:
if f.endswith('mp3'):
audio_path = osp.join(datapath, f)
wav, sr = torchaudio.load(audio_path)
if wav.size(0) != 1:
wav = torch.mean(wav, dim=0, keepdim=True)
wav = wav.squeeze()
audio_total_size += (wav.size(-1) / sr)
# plot_waveform(wav, sr)
segments, _ = asr_model.transcribe(audio_path, word_timestamps=True, language=target_language)
segments = list(segments)
i = 0
sentence = ""
sentence_start = None
first_word = True
# added all segments words in a unique list
words_list = []
for _, segment in enumerate(segments):
words = list(segment.words)
words_list.extend(words)
# process each word
for word_idx, word in enumerate(words_list):
if first_word:
sentence_start = word.start
# If it is the first sentence, add buffer or get the begining of the file
if word_idx == 0:
sentence_start = max(sentence_start - buffer, 0) # Add buffer to the sentence start
else:
# get previous sentence end
previous_word_end = words_list[word_idx - 1].end
# add buffer or get the silence midle between the previous sentence and the current one
sentence_start = max(sentence_start - buffer, (previous_word_end + sentence_start)/2)
sentence = word.word
first_word = False
else:
sentence += word.word
if word.word[-1] in ["!", ".", "?"]:
sentence = sentence[1:]
# Expand number and abbreviations plus normalization
sentence = multilingual_cleaners(sentence, target_language)
audio_file_name, _ = os.path.splitext(os.path.basename(audio_path))
audio_file = f"wavs/{audio_file_name}_{str(i).zfill(8)}.wav"
# Check for the next word's existence
if word_idx + 1 < len(words_list):
next_word_start = words_list[word_idx + 1].start
else:
# If don't have more words it means that it is the last sentence then use the audio len as next word start
next_word_start = (wav.shape[0] - 1) / sr
# Average the current word end and next word start
word_end = min((word.end + next_word_start) / 2, word.end + buffer)
absoulte_path = os.path.join(out_path, audio_file)
os.makedirs(os.path.dirname(absoulte_path), exist_ok=True)
i += 1
first_word = True
audio = wav[int(sr*sentence_start):int(sr*word_end)].unsqueeze(0)
# if the audio is too short ignore it (i.e < 0.33 seconds)
if audio.size(-1) >= sr/3:
torchaudio.save(absoulte_path,
audio,
sr
)
else:
continue
metadata["audio_file"].append(audio_file)
metadata["text"].append(sentence)
metadata["speaker_name"].append(speaker_name)
df = pandas.DataFrame(metadata)
df = df.sample(frac=1)
num_val_samples = int(len(df)*eval_percentage)
df_eval = df[:num_val_samples]
df_train = df[num_val_samples:]
df_train = df_train.sort_values('audio_file')
train_metadata_path = os.path.join(out_path, "metadata_train.csv")
df_train.to_csv(train_metadata_path, sep="|", index=False)
eval_metadata_path = os.path.join(out_path, "metadata_eval.csv")
df_eval = df_eval.sort_values('audio_file')
df_eval.to_csv(eval_metadata_path, sep="|", index=False)
# deallocate VRAM and RAM
del asr_model, df_train, df_eval, df, metadata
gc.collect()
print('audio total size: ', audio_total_size)
```
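A plausible explanation for the empty CSVs (my reading of the loop above, not a confirmed fix from the maintainers) is that Whisper emits full-width Chinese punctuation such as 。!?, none of which appear in the ASCII set `["!", ".", "?"]`, so the sentence-flush branch never runs. A minimal sketch of a more permissive end-of-sentence check:

```python
# Hypothetical extended set; adjust for the target language as needed.
SENTENCE_END = {"!", ".", "?", "。", "!", "?", ";", ";"}

def is_sentence_end(word: str) -> bool:
    return bool(word) and word[-1] in SENTENCE_END

print(is_sentence_end("好。"))  # True with the extended set
print(is_sentence_end("然后"))  # False: not a sentence boundary
```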
### Expected behavior
there are data lines in metadata_train.csv and metadata_eval.csv
### Logs
```shell
root@dsw-297768-d54489667-bcrfv:/mnt/workspace/clone_voice_sft_xtts# python process_audio_files.py
2023-12-31 21:37:21,419 - modelscope - INFO - PyTorch version 2.1.0+cu118 Found.
2023-12-31 21:37:21,421 - modelscope - INFO - TensorFlow version 2.14.0 Found.
2023-12-31 21:37:21,421 - modelscope - INFO - Loading ast index from /mnt/workspace/.cache/modelscope/ast_indexer
2023-12-31 21:37:21,462 - modelscope - INFO - Loading done! Current index file version is 1.10.0, with md5 44f0b88effe82ceea94a98cf99709694 and a total number of 946 components indexed
Loading Whisper Model!
/mnt/workspace/.cache/modelscope/keepitsimple/faster-whisper-large-v3
Reading audio files!
> /mnt/workspace/clone_voice_sft_xtts/process_audio_files.py(82)<module>()
-> if word.word[-1] in ["!", ".", "?"]:
(Pdb) words_list
[Word(start=0.0, end=0.42, word='但', probability=0.82470703125), Word(start=0.42, end=0.68, word='小', probability=0.9951171875), Word(start=0.68, end=1.06, word='狗', probability=0.99951171875), Word(start=1.06, end=1.18, word='呢', probability=0.8623046875), Word(start=1.18, end=1.34, word='它', probability=0.4169921875), Word(start=1.34, end=1.6, word='不是', probability=0.9970703125), Word(start=1.6, end=1.9, word='关', probability=0.904296875), Word(start=1.9, end=2.2, word='节', probability=0.99853515625), Word(start=2.2, end=2.38, word='它', probability=0.91015625), Word(start=2.38, end=2.64, word='是', probability=0.99951171875), Word(start=2.64, end=3.0, word='近', probability=0.362548828125), Word(start=3.0, end=3.72, word='病', probability=0.80419921875), Word(start=3.72, end=4.08, word='骨', probability=0.99072265625), Word(start=4.08, end=4.72, word='就', probability=0.9921875), Word(start=4.72, end=4.86, word='它', probability=0.9794921875), Word(start=4.86, end=5.16, word='病', probability=0.9990234375), Word(start=5.16, end=5.44, word='骨', probability=1.0), Word(start=5.44, end=5.6, word='和', probability=0.9990234375), Word(start=5.6, end=5.72, word='它', probability=0.99755859375), Word(start=5.72, end=6.0, word='那个', probability=0.99658203125), Word(start=6.0, end=6.24, word='什么', probability=0.994140625), Word(start=6.979999999999997, end=7.5, word='骨', probability=0.99853515625), Word(start=7.5, end=7.76, word='头', probability=1.0), Word(start=7.76, end=7.92, word='的', probability=1.0), Word(start=7.92, end=8.06, word='那个', probability=0.998046875), Word(start=8.06, end=8.26, word='位', probability=1.0), Word(start=8.26, end=8.54, word='置', probability=1.0), Word(start=8.54, end=8.84, word='它', probability=0.99560546875), Word(start=8.84, end=9.1, word='是', probability=1.0), Word(start=9.1, end=9.3, word='那个', probability=1.0), Word(start=9.3, end=9.74, word='地方', probability=1.0), Word(start=9.74, end=10.12, word='没', probability=0.9990234375), Word(start=10.12, 
end=10.32, word='长', probability=0.998046875), Word(start=10.32, end=10.66, word='好', probability=0.99951171875), Word(start=10.66, end=11.64, word='然后', probability=0.99853515625), Word(start=11.64, end=12.28, word='长', probability=0.99951171875), Word(start=12.28, end=12.7, word='期', probability=1.0), Word(start=12.7, end=12.86, word='那么', probability=0.9892578125), Word(start=12.86, end=13.16, word='走', probability=1.0), Word(start=13.16, end=13.4, word='路', probability=1.0), Word(start=13.4, end=13.52, word='呢', probability=0.990234375), Word(start=13.52, end=13.84, word='磨', probability=0.998291015625), Word(start=13.84, end=14.16, word='损', probability=0.999755859375), Word(start=14.16, end=14.5, word='导', probability=0.99951171875), Word(start=14.5, end=14.78, word='致', probability=1.0), Word(start=14.78, end=14.94, word='的', probability=0.98876953125), Word(start=14.94, end=15.92, word='就', probability=0.98681640625), Word(start=15.92, end=16.08, word='反', probability=1.0), Word(start=16.08, end=16.26, word='正', probability=1.0), Word(start=16.26, end=16.48, word='原', probability=0.9990234375), Word(start=16.48, end=16.62, word='理', probability=0.99755859375), Word(start=16.62, end=16.74, word='应', probability=0.99951171875), Word(start=16.74, end=16.84, word='该', probability=1.0), Word(start=16.84, end=16.96, word='都是', probability=1.0), Word(start=16.96, end=17.42, word='差不多', probability=0.99951171875), Word(start=17.42, end=17.7, word='反', probability=1.0), Word(start=17.7, end=17.84, word='正', probability=1.0), Word(start=17.84, end=18.08, word='就是', probability=1.0), Word(start=18.9, end=19.42, word='用', probability=0.99951171875), Word(start=19.42, end=19.7, word='力', probability=1.0), Word(start=19.7, end=19.86, word='用', probability=0.9990234375), Word(start=19.86, end=20.2, word='不对', probability=0.9990234375), Word(start=20.2, end=20.7, word='然后', probability=0.998046875), Word(start=20.7, end=21.68, word='导', probability=0.99951171875), 
Word(start=21.68, end=21.92, word='致', probability=1.0), Word(start=21.92, end=22.12, word='那个', probability=0.99658203125), Word(start=22.12, end=22.46, word='膝', probability=0.983154296875), Word(start=22.46, end=22.7, word='关', probability=0.99853515625), Word(start=22.7, end=22.96, word='节', probability=0.99951171875), Word(start=22.96, end=23.86, word='的', probability=0.99560546875), Word(start=23.86, end=24.04, word='那个', probability=0.9990234375), Word(start=24.04, end=24.36, word='白', probability=1.0), Word(start=24.36, end=24.66, word='色', probability=1.0), Word(start=24.66, end=24.74, word='的', probability=0.966796875), Word(start=24.74, end=24.9, word='那个', probability=0.97119140625), Word(start=24.9, end=25.22, word='软', probability=0.999267578125), Word(start=25.22, end=25.48, word='骨', probability=0.999755859375), Word(start=25.48, end=25.68, word='啊', probability=0.962890625), Word(start=25.68, end=26.58, word='就', probability=0.99853515625), Word(start=26.58, end=27.16, word='磨', probability=0.999755859375), Word(start=27.16, end=27.42, word='损', probability=1.0), Word(start=27.42, end=27.58, word='的', probability=0.9775390625), Word(start=27.58, end=27.72, word='太', probability=0.9990234375), Word(start=27.72, end=27.92, word='严', probability=0.999755859375), Word(start=27.92, end=28.16, word='重', probability=1.0), Word(start=28.16, end=28.26, word='了', probability=0.97509765625), Word(start=28.26, end=29.26, word='然后', probability=0.99560546875), Word(start=29.26, end=29.54, word='呢', probability=1.0), Word(start=29.54, end=29.82, word='现在', probability=0.38525390625), Word(start=29.82, end=30.08, word='呢', probability=0.283203125), Word(start=30.08, end=30.92, word='它', probability=0.1630859375), Word(start=30.92, end=31.16, word='走', probability=0.9970703125), Word(start=31.16, end=31.44, word='路', probability=0.99951171875), Word(start=31.44, end=31.52, word='呢', probability=0.89697265625), Word(start=31.52, end=31.74, word='它', 
probability=0.9326171875), Word(start=31.74, end=31.94, word='是', probability=0.98681640625), Word(start=31.94, end=32.18, word='骨', probability=0.991943359375), Word(start=32.18, end=32.38, word='头', probability=0.9970703125), Word(start=32.38, end=32.64, word='磨', probability=0.907470703125), Word(start=32.64, end=32.74, word='着', probability=0.76904296875), Word(start=32.74, end=32.96, word='骨', probability=0.994873046875), Word(start=32.96, end=33.2, word='头', probability=0.99951171875), Word(start=33.2, end=33.48, word='所以', probability=0.96240234375), Word(start=33.48, end=33.58, word='就', probability=0.990234375), Word(start=33.58, end=33.72, word='会', probability=0.99853515625), Word(start=33.72, end=33.94, word='很', probability=0.9990234375), Word(start=33.94, end=34.26, word='疼', probability=0.994384765625), Word(start=34.26, end=34.96, word='或者', probability=0.98193359375), Word(start=34.96, end=35.2, word='是', probability=0.9990234375), Word(start=35.2, end=35.76, word='那个', probability=0.79638671875), Word(start=35.76, end=37.32, word='软', probability=0.997314453125), Word(start=37.32, end=37.58, word='骨', probability=0.9990234375), Word(start=37.58, end=37.68, word='比', probability=0.98974609375), Word(start=37.68, end=38.1, word='较', probability=1.0), Word(start=38.92, end=38.94, word='比', probability=0.4990234375), Word(start=38.94, end=39.34, word='较', probability=1.0), Word(start=39.34, end=39.64, word='薄', probability=1.0), Word(start=39.64, end=39.78, word='了', probability=0.9990234375), Word(start=39.78, end=40.22, word='所以', probability=0.99658203125), Word(start=40.22, end=40.38, word='它', probability=0.96826171875), Word(start=40.38, end=40.56, word='就', probability=0.99755859375), Word(start=40.56, end=41.24, word='不能', probability=0.998046875), Word(start=41.24, end=41.66, word='缓', probability=0.99951171875), Word(start=41.66, end=41.98, word='冲', probability=0.99853515625), Word(start=41.98, end=42.62, word='所以', 
probability=0.99072265625), Word(start=42.62, end=42.78, word='就', probability=0.99951171875), Word(start=42.78, end=42.9, word='比', probability=0.99951171875), Word(start=42.9, end=43.08, word='较', probability=1.0), Word(start=43.08, end=43.4, word='疼', probability=0.999755859375)]
(Pdb) len(words_list)
129
(Pdb) words_list[0]
Word(start=0.0, end=0.42, word='但', probability=0.82470703125)
(Pdb) q
Traceback (most recent call last):
File "/mnt/workspace/clone_voice_sft_xtts/process_audio_files.py", line 82, in <module>
sentence = sentence[1:]
File "/mnt/workspace/clone_voice_sft_xtts/process_audio_files.py", line 82, in <module>
sentence = sentence[1:]
File "/opt/conda/lib/python3.10/bdb.py", line 90, in trace_dispatch
return self.dispatch_line(frame)
File "/opt/conda/lib/python3.10/bdb.py", line 115, in dispatch_line
if self.quitting: raise BdbQuit
bdb.BdbQuit
^[[A
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"Tesla V100-SXM2-16GB"
],
"available": true,
"version": "11.8"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.0+cu118",
"numpy": "1.26.2"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.13",
"version": "#1 SMP Tue Jun 20 06:15:49 UTC 2023"
}
}
```
### Additional context
I installed TTS by this:
```bash
rm -rf TTS/ # delete repo to be able to reinstall if needed
git clone --branch xtts_demo https://github.com/coqui-ai/TTS.git
pip install --use-deprecated=legacy-resolver -e TTS
pip install --use-deprecated=legacy-resolver -r TTS/TTS/demos/xtts_ft_demo/requirements.txt
pip install typing_extensions==4.8.0 numpy==1.26.2
``` | closed | 2023-12-31T14:17:18Z | 2024-02-10T18:38:12Z | https://github.com/coqui-ai/TTS/issues/3481 | [
"bug",
"wontfix"
] | dorbodwolf | 1 |
zappa/Zappa | django | 1,289 | Deployed API Gateway points to $LATEST | As the title says. Rather than pointing to the most recently deployed version of the code, API Gateway is pointing to the unqualified ARN of the lambda function, which points to $LATEST.
The reason this is an issue is that you cannot use provisioned concurrency on $LATEST, even when aliased, which is causing some serious pain for me (massive cold start times, which is a separate issue in itself, but provisioning concurrency would fix).
## Expected Behavior
API Gateway should point to f'{FunctionArn}:{Version}', so that provisioned concurrency can occur.
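For illustration, the qualified ARN is just the unqualified function ARN with the published version appended (sketch with a made-up ARN; in practice the version would come from boto3 calls such as `publish_version`, with `put_provisioned_concurrency_config` then targeting that qualifier):

```python
def qualified_arn(function_arn: str, version: str) -> str:
    # API Gateway integrations can target this instead of the bare ARN,
    # which always resolves to $LATEST.
    return f"{function_arn}:{version}"

arn = "arn:aws:lambda:us-east-1:123456789012:function:my-zappa-app"  # made-up example
print(qualified_arn(arn, "3"))
```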
## Actual Behavior
API Gateway points to FunctionARN.
## Possible Fix
Kind of covered? I see there is already an open issue regarding provisioned concurrency, and this would be a step toward that working.
## Steps to Reproduce
1. Deploy a project with Zappa, then view the ARN of the Lambda instance it points to.
## Your Environment
* Zappa version used: 0.58.0
* Operating System and Python version: Lambda - Python 3.9
| closed | 2023-12-21T23:30:57Z | 2024-04-13T20:37:04Z | https://github.com/zappa/Zappa/issues/1289 | [
"no-activity",
"auto-closed"
] | texonidas | 2 |
shibing624/text2vec | nlp | 47 | When using the word-vector model, is word segmentation needed first? | w2v_model = Word2Vec("w2v-light-tencent-chinese")
compute_emb(w2v_model)
Looking at the code, during encoding a sentence is split into individual characters, and character vectors are computed separately to obtain the sentence vector. Is a word-segmentation step missing?
Also, for measuring vector distance with the word2vec model, would Euclidean distance be a better choice? | closed | 2022-08-22T09:47:05Z | 2022-11-17T09:39:55Z | https://github.com/shibing624/text2vec/issues/47 | [
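To illustrate why the segmentation step matters, here is a toy comparison of character-level vs. word-level averaging (the vectors are made up; a real pipeline would segment with something like jieba and look words up in the Tencent embeddings):

```python
# Hypothetical toy lookup tables, not the real Tencent vectors.
char_vecs = {"北": [1.0, 0.0], "京": [0.0, 1.0]}
word_vecs = {"北京": [0.3, 0.9]}

def avg(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

sentence = "北京"
char_level = avg([char_vecs[c] for c in sentence])  # averages per-character vectors
word_level = avg([word_vecs[sentence]])             # uses the word vector directly
print(char_level, word_level)  # the two sentence vectors differ
```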
"question"
] | lushizijizoude | 1 |
coqui-ai/TTS | python | 3,269 | [Bug] server.py crashes on systems with IPv6 disabled | ### Describe the bug
`server.py` is statically configured to use IPv6. On systems with IPv6 disabled, this causes a crash when starting the server (bare metal or Docker)
### To Reproduce
1. On a Linux host as an example, disable ipv6 (sysctl):
```
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
```
2. Try to start the server
3. ???
### Expected behavior
`server.py` should not rely on `::` by default
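One way to meet that expectation (a sketch under the assumption that falling back to IPv4 is acceptable; not the project's actual patch):

```python
import socket

def pick_bind_host(prefer_ipv6=True):
    """Return "::" when IPv6 sockets work, else fall back to "0.0.0.0"."""
    if prefer_ipv6 and socket.has_ipv6:
        try:
            socket.socket(socket.AF_INET6, socket.SOCK_STREAM).close()
            return "::"
        except OSError:  # e.g. Errno 97 when IPv6 is disabled system-wide
            pass
    return "0.0.0.0"

# app.run(debug=args.debug, host=pick_bind_host(), port=args.port)
```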
### Logs
```shell
> initialization of speaker-embedding layers.
> initialization of language-embedding layers.
* Serving Flask app 'server'
* Debug mode: off
Traceback (most recent call last):
File "/root/TTS/server/server.py", line 258, in <module>
main()
File "/root/TTS/server/server.py", line 254, in main
app.run(debug=args.debug, host="::", port=args.port)
File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 612, in run
run_simple(t.cast(str, host), port, self, **options)
File "/usr/local/lib/python3.10/dist-packages/werkzeug/serving.py", line 1077, in run_simple
srv = make_server(
File "/usr/local/lib/python3.10/dist-packages/werkzeug/serving.py", line 917, in make_server
return ThreadedWSGIServer(
File "/usr/local/lib/python3.10/dist-packages/werkzeug/serving.py", line 737, in __init__
super().__init__(
File "/usr/lib/python3.10/socketserver.py", line 448, in __init__
self.socket = socket.socket(self.address_family,
File "/usr/lib/python3.10/socket.py", line 232, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 97] Address family not supported by protocol
```
### Environment
```shell
irrelevant
```
### Additional context
I was able to easily fix this on my local environment by just modifying https://github.com/coqui-ai/TTS/blob/29dede20d336c8250810575fcdcdbbcad8c40a44/TTS/server/server.py#L254 to look like:
```python
app.run(debug=args.debug, host="0.0.0.0", port=args.port)
``` | closed | 2023-11-20T01:18:45Z | 2023-11-28T10:49:24Z | https://github.com/coqui-ai/TTS/issues/3269 | [
"bug"
] | Phr33d0m | 3 |
comfyanonymous/ComfyUI | pytorch | 7,169 | Problem with Comfy-Manager and rgthree nodes after last update | ### Your question
After the last update, generation was crashing without giving an error.
I updated all nodes, started getting errors when starting ComfyUI, and updated the requirements. Now ComfyUI starts, but there is no "Manager" button and some rgthree nodes are not displayed. I've tried deleting them and cloning them again, but to no avail.
With the standard nodes, generation now works.
### Logs
```powershell
C:\SD\ComfyUI
** ComfyUI Base Folder Path: C:\SD\ComfyUI
** User directory: C:\SD\ComfyUI\user
** ComfyUI-Manager config path: C:\SD\ComfyUI\user\default\ComfyUI-Manager\config.ini
** Log path: C:\SD\ComfyUI\user\comfyui.log
Prestartup times for custom nodes:
0.0 seconds: C:\SD\ComfyUI\custom_nodes\rgthree-comfy
8.1 seconds: C:\SD\ComfyUI\custom_nodes\ComfyUI-Manager
Checkpoint files will always be loaded safely.
Total VRAM 2048 MB, total RAM 16252 MB
pytorch version: 2.6.0+cu124
Set vram state to: NO_VRAM
Disabling smart memory management
Device: cuda:0 Quadro K620 : native
Using split optimization for attention
ComfyUI version: 0.3.26
ComfyUI frontend version: 1.11.8
[Prompt Server] web root: C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\comfyui_frontend_package\static
use sdp attention as default
keep default attention mode
### Loading: ComfyUI-Manager (V3.30.3)
[ComfyUI-Manager] network_mode: public
### ComfyUI Version: v0.3.26-4-g35e2dcf5 | Released on '2025-03-10'
[rgthree-comfy] Loaded 42 exciting nodes. 🎉
Import times for custom nodes:
0.0 seconds: C:\SD\ComfyUI\custom_nodes\rgthree-comfy
0.7 seconds: C:\SD\ComfyUI\custom_nodes\ComfyUI-DiffBIR
0.9 seconds: C:\SD\ComfyUI\custom_nodes\ComfyUI-Manager
Starting server
To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] Failed to perform initial fetching 'alter-list.json': Cannot connect to host raw.githubusercontent.com:443 ssl:default [Превышен таймаут семафора]
Traceback (most recent call last):
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py", line 1116, in _wrap_create_connection
sock = await aiohappyeyeballs.start_connection(
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohappyeyeballs\impl.py", line 104, in start_connection
raise first_exception
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohappyeyeballs\impl.py", line 82, in start_connection
sock = await _connect_sock(
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohappyeyeballs\impl.py", line 174, in _connect_sock
await loop.sock_connect(sock, address)
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 709, in sock_connect
return await self._proactor.connect(sock, address)
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\asyncio\windows_events.py", line 826, in _poll
value = callback(transferred, key, ov)
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\asyncio\windows_events.py", line 613, in finish_connect
ov.getresult()
OSError: [WinError 121] Превышен таймаут семафора
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\SD\ComfyUI\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1709, in get_cache
json_obj = await manager_util.get_data(uri, True)
File "C:\SD\ComfyUI\custom_nodes\ComfyUI-Manager\glob\manager_util.py", line 139, in get_data
async with session.get(uri, headers=headers) as resp:
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\client.py", line 1425, in __aenter__
self._resp: _RetType = await self._coro
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\client.py", line 703, in _request
conn = await self._connector.connect(
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py", line 548, in connect
proto = await self._create_connection(req, traces, timeout)
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py", line 1056, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py", line 1411, in _create_direct_connection
raise last_exc
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py", line 1380, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "C:\Users\***\AppData\Local\Programs\Python\Python310\lib\site-packages\aiohttp\connector.py", line 1135, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host raw.githubusercontent.com:443 ssl:default [Превышен таймаут семафора]
[ComfyUI-Manager] Failed to perform initial fetching 'github-stats.json': Cannot connect to host raw.githubusercontent.com:443 ssl:default [Превышен таймаут семафора]
[ComfyUI-Manager] Failed to perform initial fetching 'custom-node-list.json': Cannot connect to host raw.githubusercontent.com:443 ssl:default [Превышен таймаут семафора]
[ComfyUI-Manager] Failed to perform initial fetching 'extension-node-map.json': Cannot connect to host raw.githubusercontent.com:443 ssl:default [Превышен таймаут семафора]
[ComfyUI-Manager] Failed to perform initial fetching 'model-list.json': Cannot connect to host raw.githubusercontent.com:443 ssl:default [Превышен таймаут семафора]
(the identical ClientConnectorError traceback shown above repeats after each of these lines; "Превышен таймаут семафора" is WinError 121, "The semaphore timeout period has expired")
FETCH ComfyRegistry Data: 5/57
FETCH ComfyRegistry Data: 10/57
FETCH ComfyRegistry Data: 15/57
FETCH ComfyRegistry Data: 20/57
FETCH ComfyRegistry Data: 25/57
FETCH ComfyRegistry Data: 30/57
FETCH ComfyRegistry Data: 35/57
FETCH ComfyRegistry Data: 40/57
FETCH ComfyRegistry Data: 45/57
FETCH ComfyRegistry Data: 50/57
FETCH ComfyRegistry Data: 55/57
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json[ComfyUI-Manager] Due to a network error, switching to local mode.
=> custom-node-list.json
=> Cannot connect to host raw.githubusercontent.com:443 ssl:default [Превышен таймаут семафора]
FETCH DATA from: C:\SD\ComfyUI\custom_nodes\ComfyUI-Manager\custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
```
### Other
_No response_ | open | 2025-03-10T11:10:13Z | 2025-03-11T09:56:10Z | https://github.com/comfyanonymous/ComfyUI/issues/7169 | [
"User Support"
] | VladimirNCh | 3 |
airtai/faststream | asyncio | 1,275 | Feature: allow to use XXXDsn from Pydantic while specyfing the URL | To suggest an idea or inquire about a new Message Broker supporting feature or any other enhancement, please follow this template:
**Is your feature request related to a problem? Please describe.**
I have a RabbitMQ broker, and I use Pydantic's AmqpDsn model for config validation. I would be really pleased if I could use AmqpDsn directly in FastStream.
**Describe the solution you'd like**
Just allow the `url` parameter to accept the broker's corresponding `*Dsn` model in addition to a string or URL.
**Feature code example**
To help others understand the proposed feature, illustrate it with a **FastStream** code example:
```python
from faststream import FastStream
from pydantic import AmqpDsn
...
dsn = AmqpDsn.build(
scheme="amqp",
user="guest",
password="guest",
host="localhost",
port=5672,
)
router = RabbitRouter(dsn)
```
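Internally, support could be as small as a coercion helper like this sketch (not FastStream's actual code; `FakeDsn` stands in for `pydantic.AmqpDsn` so the example is self-contained):

```python
def coerce_url(url):
    """Accept str as-is; fall back to str() for Pydantic DSN/URL objects."""
    return url if isinstance(url, str) else str(url)

class FakeDsn:  # stand-in for pydantic.AmqpDsn in this sketch
    def __str__(self):
        return "amqp://guest:guest@localhost:5672/"

assert coerce_url(FakeDsn()) == coerce_url("amqp://guest:guest@localhost:5672/")
```

As a user-side workaround today, `RabbitRouter(str(dsn))` should work, since `str()` renders the DSN.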
| closed | 2024-02-28T18:26:36Z | 2024-02-29T12:27:15Z | https://github.com/airtai/faststream/issues/1275 | [
"enhancement"
] | ntoskrn | 1 |
brightmart/text_classification | nlp | 140 | tflearn.data_utils | Which file is the tflearn.data_utils module located in? | closed | 2020-05-18T09:50:13Z | 2020-05-18T10:10:25Z | https://github.com/brightmart/text_classification/issues/140 | [] | Catherine-HFUT | 0 |
pyppeteer/pyppeteer | automation | 189 | setRequestInterception blocks page.close() | When setRequestInterception is enabled, page.close() never completes, but everything works fine without it:
```python
import asyncio
from pyppeteer import launch, launcher


async def main():
    browser = await launch(headless=False)
    page = await browser.newPage()
    await page.setRequestInterception(True)

    async def request_check(req):
        if req.resourceType == "image":
            await req.abort()
        else:
            await req.continue_()

    page.on("request", lambda req: asyncio.ensure_future(request_check(req)))
    await page.goto("https://pazzo.com.tw")
    await page.close()
    await browser.close()


loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
 | open | 2020-11-10T12:59:38Z | 2021-08-05T10:16:24Z | https://github.com/pyppeteer/pyppeteer/issues/189 | [
"bug"
] | ViktorRubenko | 5 |
ageitgey/face_recognition | python | 725 | face_recognition crashes python on Windows 7 64-bit | * face_recognition version: 1.2.3
* dlib version: 19.8.1
* Python version: 3.6.8
* Operating System: Windows 7 Ultimate 64-bit (amd phenom ii x4 965 processor)
Installed with these commands:
pip install dlib-19.8.1-cp36-cp36m-win_amd64.whl
pip install face_recognition
### Description
I was trying to run the face_recognition command line application for the first time.
It crashed python.
The face_detection command line application crashed too when it was pointed at a directory with a jpg file in it.
### What I Did
The known_people directory has 3 jpg files in it and the unknown_people directory has multiple jpg files in it. If I take the files out of both directories, then it doesn't crash. If either folder has any files in it, then it crashes. I've tried different files and it does the same thing.
```
(dlib_virtualenv) u:\python_virtualenvs\dlib_virtualenv>face_recognition U:\images\face_recognition_testing\known_people U:\images\face_recognition_testing\unknown_people
Problem signature:
Problem Event Name: APPCRASH
Application Name: python.exe
Application Version: 3.6.8150.1013
Application Timestamp: 5c20260d
Fault Module Name: dlib.pyd
Fault Module Version: 0.0.0.0
Fault Module Timestamp: 5a39dadb
Exception Code: c000001d
Exception Offset: 00000000001a0ef2
OS Version: 6.1.7601.2.1.0.256.1
Locale ID: 1033
Additional Information 1: 3a15
Additional Information 2: 3a15cae584eda8ce9dd95459de3633f6
Additional Information 3: a7a5
Additional Information 4: a7a53d45843ae312cc586157dd3e12fd
```
It also crashes when using the API at the python prompt:
```
(dlib_virtualenv) u:\python_virtualenvs\dlib_virtualenv>python
Python 3.6.8 (tags/v3.6.8:3c6b436a57, Dec 24 2018, 00:16:47) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import face_recognition
>>> image = face_recognition.load_image_file("U:\\images\\face_recognition_testing\\known_people_x\\ivy.jpg")
>>> face_locations = face_recognition.face_locations(image)
```
| open | 2019-01-27T22:20:58Z | 2019-01-27T23:12:24Z | https://github.com/ageitgey/face_recognition/issues/725 | [] | xdaviddx | 0 |
flairNLP/flair | pytorch | 3,256 | [Question]: Something is wrong with the lemmatization | ### Question
```
>>> from flair.models import Lemmatizer
>>> from flair.data import Sentence
>>> sentence = Sentence("I can't wait to get out of here, I hate this place!")
>>> lemmatizer = Lemmatizer()
>>> lemmatizer.predict(sentence)
>>> sentence
Sentence[15]: "I can't wait to get out of here, I hate this place!" → ["I"/ê;êêê™, "ca"/©¤‘”C;, "n't"/êêê;rr, "wait"/ý;‘ª‘â, "to"/êê;êêê, "get"/""ઑ, "out"/""òª‘‘, "of"/ê;‘рêê, "here"/¤ý‘C;−, ","/ê;êêêÇ, "I"/ê;êêê™, "hate"/""ઑ, "this"/""àÂ<unk>‘, "place"/ýε‘C;‘, "!"/ê;êê™р]
```
I think something is wrong with the lemmatization here; am I wrong? I expected to get the lemma of each token. | closed | 2023-06-01T15:46:34Z | 2023-08-21T09:25:48Z | https://github.com/flairNLP/flair/issues/3256 | [
"question"
] | riccardobucco | 1 |
noirbizarre/flask-restplus | api | 29 | Splitting up API library into multiple files | I've tried several different ways to split up the API files into separate Python source files but have come up empty. I love the additions to flask-restplus but it appears that only the classes within the main Python file are seen. Is there a good example of how to do this? In Flask-Restful it was a bit simpler as you could just add the resource and point to a different Python file that got imported.
| closed | 2015-03-13T22:41:18Z | 2018-01-05T18:11:28Z | https://github.com/noirbizarre/flask-restplus/issues/29 | [
"help wanted"
] | kinabalu | 16 |
hbldh/bleak | asyncio | 1,645 | macOS client.start_notify fails after reconnect | * bleak version: 0.22.2
* Python version: 3.9
* Operating System: macOS sonoma 14.4.1
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### Description
When running `client.start_notify` on a device that has previously been connected to and that disconnected and connected again, it throws the exception `ValueError: Characteristic notifications already started`.
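For illustration, a duck-typed workaround sketch (not a verified fix; `client` is assumed to expose bleak-style async `start_notify`/`stop_notify`):

```python
async def restart_notify(client, char, callback):
    # On reconnect, stale backend state can make start_notify raise
    # "Characteristic notifications already started"; clearing any old
    # subscription first avoids that.
    try:
        await client.stop_notify(char)
    except Exception:
        pass  # nothing was subscribed; safe to ignore
    await client.start_notify(char, callback)
```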
Not calling the function a second time causes no notifications to arrive. | open | 2024-09-28T19:47:12Z | 2024-10-07T05:58:18Z | https://github.com/hbldh/bleak/issues/1645 | [
"bug",
"Backend: Core Bluetooth"
] | dakhnod | 5 |
saulpw/visidata | pandas | 2,720 | Can't copy cell contents in VSCode ssh connection | **Small description**
Can't seem to copy cell contents in VSCode while on an ssh connection.
**Data to reproduce**
**Steps to reproduce**
Run VSCode on local machine, connect to remote server via ssh
Open file in visidata (installed on remote server)
Select cell from row/column

Use zY (z+Shift+y) to copy cell contents to system clipboard
Get this message

No cell contents are copied when trying to paste them back to the local machine
**Expected result**
Should be able to paste cell contents back to local machine
**Actual result with screenshot**
(see above)
**Additional context**
- What platform and version are you using (Linux, MacOS, Windows)?
Ubuntu 24.04.2 (local machine) Rocky Linux 8.10 (Green Obsidian) (remote machine)
- Which version of Python?
Python 3.12.3 (remote machine)
- Which terminal are you using (for display and input issues)?
VSCode (local machine)
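If the remote-side copy command is the culprit, one avenue (a sketch; `clipboard_copy_cmd` is a real VisiData option, but the OSC 52 helper named here is hypothetical and untested in this setup) is to route copies through the terminal so the local machine receives them:

```python
# ~/.visidatarc on the remote host
# zY shells out to this command; a plain xclip call would stay on the
# remote side, so 'osc52-copy' is assumed to be a helper on PATH that
# writes an OSC 52 escape sequence to the terminal (VSCode's terminal
# supports OSC 52).
options.clipboard_copy_cmd = 'osc52-copy'
```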
| closed | 2025-03-13T12:59:10Z | 2025-03-13T15:59:41Z | https://github.com/saulpw/visidata/issues/2720 | [
"Limitation",
"terminal/curses"
] | mvelinder | 2 |
huggingface/datasets | computer-vision | 6,860 | CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download" | CI fails after latest huggingface_hub-0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_tokenizer_with_cache - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
FAILED tests/test_arrow_dataset.py::MiscellaneousDatasetTest::test_set_format_encode - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
``` | closed | 2024-05-02T13:24:17Z | 2024-05-02T16:53:45Z | https://github.com/huggingface/datasets/issues/6860 | [
"bug"
] | albertvillanova | 3 |
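The failures above come from the test suite treating the new `FutureWarning` as an error; until the `resume_download` arguments are dropped, a stopgap (a sketch of the general mechanism, not the fix actually shipped) is to filter just that message:

```python
import warnings

def resume_download_warning_is_silenced():
    """Check that only the resume_download deprecation is filtered out."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("error")  # mimic a CI config that fails on warnings
        warnings.filterwarnings(
            "ignore",
            message=r"`resume_download` is deprecated",
            category=FutureWarning,
        )
        warnings.warn(
            "`resume_download` is deprecated and will be removed in version 1.0.0.",
            FutureWarning,
        )
    return len(caught) == 0
```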
databricks/koalas | pandas | 2,153 | How to run a Koalas script as a normal Python program | My code has Koalas dataframes and a few operations on them. I am able to run the script with spark-submit, but not able to run it as normal Python code.
Run command:
python3 test.py
It is asking for Spark.
"Unable to import pyspark - consider doing a pip install with [spark] "
ImportError: Unable to import pyspark - consider doing a pip install with [spark] extra to install pyspark with pip
| closed | 2021-04-26T05:58:05Z | 2021-05-13T03:14:54Z | https://github.com/databricks/koalas/issues/2153 | [
"question",
"not a koalas issue"
] | priyankadas87 | 3 |
ultralytics/ultralytics | deep-learning | 18,706 | Can I take a video file as an API input (using FastAPI's UploadFile) and stream it directly into a YOLOv11n model for object detection without saving the file or processing it into frames or fixed-size chunks? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hi, I am using YOLOv11n as apart of fastapi and can I take a video file as an API input (using FastAPI's UploadFile) and stream it directly into a YOLOv11n model for object detection without saving the file or processing it into frames or fixed-size chunks?
### Additional
#### Desired Usage
```
from fastapi import FastAPI, UploadFile
from fastapi.responses import JSONResponse
from ultralytics import YOLO
app = FastAPI()
# Load the YOLO model
model = YOLO("/path/to/yolo11n.pt")
@app.post("/upload-and-detect")
async def upload_and_detect(file: UploadFile):
"""
Accept a video file and pass it directly to the YOLO model for detection.
"""
try:
# YOLO accepts a video path, so we pass the file object directly
results = model(file.file, stream=True) # Use the stream=True for generator processing
detections = []
for result in results:
frame_detections = []
for box, conf, label in zip(result.boxes.xyxy, result.boxes.conf, result.boxes.cls):
frame_detections.append({
"box": box.tolist(),
"confidence": float(conf),
"label": model.names[int(label)]
})
detections.append(frame_detections)
return JSONResponse(content={"detections": detections})
except Exception as e:
return JSONResponse(content={"error": str(e)}, status_code=500)
``` | open | 2025-01-16T04:21:09Z | 2025-01-16T11:37:14Z | https://github.com/ultralytics/ultralytics/issues/18706 | [
"question",
"detect"
] | hariv0 | 2 |
brightmart/text_classification | nlp | 34 | TextRNN model details | Hello.
Is there any chance to get some references to papers (or any other documents) about the TextRNN model?
Thanks in advance. | closed | 2018-02-14T01:32:18Z | 2018-02-22T03:28:45Z | https://github.com/brightmart/text_classification/issues/34 | [] | adilek | 1 |
sngyai/Sequoia | pandas | 23 | 停机坪策略 ("tarmac" strategy) | Could you explain the principle behind the 停机坪 ("tarmac") strategy? | closed | 2021-06-26T12:53:58Z | 2021-12-06T09:04:01Z | https://github.com/sngyai/Sequoia/issues/23 | [] | jianhoo727 | 1 |
biolab/orange3 | data-visualization | 6,996 | Scoring Sheet Viewer: Refactor | **What's wrong?**
_class_combo_changed (https://github.com/biolab/orange3/blob/master/Orange/widgets/visualize/owscoringsheetviewer.py#L446C9-L446C29) checks whether the class indeed changed and, if so, (indirectly) calls https://github.com/biolab/orange3/blob/master/Orange/widgets/visualize/owscoringsheetviewer.py#L459, which just negates some coefficients and subtracts risks from 100. I don't think this is very safe.
Switching back and forth can easily go wrong. The widget should remember the values for one target and use them to compute that values for the shown target. I suspect this may be the reason for failing test (see e.g. #6995).
**How can we reproduce the problem?**
Test fails randomly, but apparently only on github CI, not locally.
| open | 2025-01-19T09:09:59Z | 2025-01-24T09:17:27Z | https://github.com/biolab/orange3/issues/6996 | [
"bug"
] | janezd | 0 |
d2l-ai/d2l-en | deep-learning | 2,424 | Policy Optimization and PPO | Dear all,
While the book currently has a small section on Reinforcement Learning covering MDPs, value iteration, and the Q-Learning algorithm, the book still does not cover an important family of algorithms: **Policy optimization algorithms**.
It'd be great to include an overview of the taxonomy of algorithms as the one provided by _OpenAI's spinning UP_
<img src=https://spinningup.openai.com/en/latest/_images/rl_algorithms_9_15.svg width=400px />
For that, I propose that we cover [Proximal Policy Optimization (PPO)](https://openai.com/blog/openai-baselines-ppo/) since:
- It is very popular in the ML community
- It is a state-of-the-art algorithm
- It is relatively easy to implement and grasp.
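For reference, the heart of PPO is the clipped surrogate objective (the standard formulation from the original PPO paper, reproduced here to give a flavor of what the section would cover):

$$L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}$$

where $\hat{A}_t$ is an estimate of the advantage at timestep $t$ and $\epsilon$ is the clipping range (typically around 0.2).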
I have already written a [medium post](https://medium.com/mlearning-ai/ppo-intuitive-guide-to-state-of-the-art-reinforcement-learning-410a41cb675b) about it. My idea would be to use the environment used for the Q-learning algorithm to train the PPO model. | open | 2023-01-07T23:49:38Z | 2023-01-08T17:09:36Z | https://github.com/d2l-ai/d2l-en/issues/2424 | [] | BrianPulfer | 3 |
dfki-ric/pytransform3d | matplotlib | 134 | Add project on conda-forge? | We at [xdem](https://github.com/GlacioHack/xdem) are slowly preparing to put our package on conda-forge, and with `pytransform3d` as a dependency, I wonder if there's any plan to do this for `pytransform3d` as well?
Thanks in advance!
Erik | closed | 2021-05-21T10:16:06Z | 2021-05-26T20:39:07Z | https://github.com/dfki-ric/pytransform3d/issues/134 | [] | erikmannerfelt | 11 |
ray-project/ray | deep-learning | 51,086 | [core] Guard ray C++ code quality via unit test | ### Description
Ray core C++ components are not properly unit tested:
- As people leave the project, it becomes harder to confidently guard against improper code changes when context is missing;
- Sanitizers on CI are only triggered by unit tests;
- Unit test coverage is a good indicator of code quality (i.e. 85% branch coverage).
### Use case
_No response_ | open | 2025-03-05T02:20:07Z | 2025-03-05T02:20:34Z | https://github.com/ray-project/ray/issues/51086 | [
"enhancement",
"P2",
"core",
"help-wanted"
] | dentiny | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,006 | Mio | ###
- _**
1.
- [x] @ecoopnet
**_ | closed | 2020-04-25T20:22:25Z | 2020-04-25T20:22:49Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1006 | [] | angeloko23 | 0 |
jacobgil/pytorch-grad-cam | computer-vision | 454 | Support grad cam for cross attention on encoder-decoder models | Currently, encoder-decoder models lack support for Grad-CAM (Gradient-weighted Class Activation Mapping) visualization with cross-attention mechanisms. Grad-CAM is a valuable tool for interpreting model decisions and understanding which parts of the input contribute most to the output. Extending Grad-CAM support to cross-attention models would greatly enhance their interpretability and utility.
Proposal
We propose adding Grad-CAM support specifically tailored for cross-attention mechanisms in our encoder-decoder models. This would allow users to visualize the attention weights between encoder and decoder, shedding light on how information flows between these components during inference.
Implementation Ideas
Here are some high-level steps to implement Grad-CAM support for cross-attention:
Identify the cross-attention layers in the encoder-decoder architecture.
Compute the gradients of the output with respect to the activations of these cross-attention layers.
Aggregate these gradients to create class-specific importance scores.
Generate the Grad-CAM heatmap for visualization.
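As a toy, framework-agnostic illustration of steps 2–4 in plain Python (the `activations`/`gradients` inputs here are hypothetical stand-ins for the tensors that forward/backward hooks on a cross-attention layer would capture):

```python
# Toy Grad-CAM aggregation for one cross-attention layer.
# activations[t][c] = activation of channel c at token/position t
# gradients[t][c]   = gradient of the target score w.r.t. that activation

def grad_cam(activations, gradients):
    n_pos, n_ch = len(activations), len(activations[0])
    # Steps 2-3: channel importance = gradient averaged over positions
    weights = [sum(gradients[t][c] for t in range(n_pos)) / n_pos
               for c in range(n_ch)]
    # Step 4: weighted sum of activations per position, ReLU-clamped
    cam = [max(0.0, sum(weights[c] * activations[t][c] for c in range(n_ch)))
           for t in range(n_pos)]
    peak = max(cam) or 1.0          # avoid division by zero
    return [v / peak for v in cam]  # normalized heatmap over positions

acts = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
grads = [[0.2, 0.0], [0.2, 0.0], [0.2, 0.0]]
print(grad_cam(acts, grads))  # peaks at the first position
```

In a real implementation the same aggregation would be done on tensors captured by hooks, but the shape of the computation is identical.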
Benefits
Improved model interpretability: Users can gain insights into how the model attends to different parts of the input during decoding.
Debugging and model refinement: Grad-CAM can help diagnose model behavior and identify areas for model improvements
Example:
This would help for example in the Donut encoder decoder model to generate heat maps using gradcam from cross attention outputs to identify what part of the image are predicted by which text token. Refer to the following discussion:
https://github.com/clovaai/donut/issues/45
| open | 2023-09-07T20:57:12Z | 2024-07-18T15:36:55Z | https://github.com/jacobgil/pytorch-grad-cam/issues/454 | [] | ahmedplateiq | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 469 | Multi-turn dialogue responses end with many periods | ### Detailed problem description
When using chinese-alpaca-plus for multi-turn dialogue, after a certain number of turns the model's answers end with many trailing periods.
### Screenshots or logs
```commandline
question: 中国的首都是哪里
answer: 中国的首都是北京。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。。
```
### Required checklist (for the first three items, keep only the ones relevant to your question)
- [x] **Base model**: LLaMA / Alpaca / LLaMA-Plus / Alpaca-Plus
- [ ] **Operating system**: Windows / MacOS / Linux
- [ ] **Issue category**: download issue / model conversion and merging / model training and fine-tuning / model inference issue (🤗 transformers) / model quantization and deployment issue (llama.cpp, text-generation-webui, LlamaChat) / output quality issue / other
- [x] (Required) Since the related dependencies are updated frequently, please make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [x] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched existing issues, and found no similar issue or solution
- [x] (Required) For third-party plugin issues, e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), it is also recommended to look for solutions in the corresponding project
| closed | 2023-05-31T06:36:50Z | 2023-06-16T22:02:17Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/469 | [
"stale"
] | Mewral | 7 |
taverntesting/tavern | pytest | 224 | Update Changelog | The changelog seems to have gotten out of date, we should update it | closed | 2019-01-05T13:09:51Z | 2019-01-13T13:34:41Z | https://github.com/taverntesting/tavern/issues/224 | [] | benhowes | 1 |
quantmind/pulsar | asyncio | 291 | windows tests failures | * **pulsar version**: 2.0
* **platform**: windows
## Description
Some tests, mainly with socket connections and repeated requests, fail in windows from time to time. These tests are currently switched off in windows.
To see them, search for
```python
@unittest.skipIf(platform.is_windows, 'windows test #291')
```
| open | 2017-11-21T09:23:10Z | 2017-11-21T09:23:35Z | https://github.com/quantmind/pulsar/issues/291 | [
"bug",
"test",
"stores"
] | lsbardel | 0 |
mwaskom/seaborn | data-visualization | 3,274 | Using color breaks so.Line when there is only one row per class | Using this data:
```
data = pd.DataFrame(
{
"category": ["A", "B", "C", "D", "E"],
"x": [450, 610, 4160, 9662, 127000],
"y": [500, 152.26, 54.76, 40.42, 0.8]
}
)
```
I can plot the individual points using `so.Dot` and/or `so.Line`:
```
so.Plot(data=data, x="x", y="y").add(so.Dot()).add(so.Line())
```

However, coloring the points by category breaks `so.Line` (`so.Dot` still works):
```
so.Plot(data=data, x="x", y="y", color="category").add(so.Dot()).add(so.Line())
```

Forcing `so.Line` to have a fixed color does not rescue the chart:
```
so.Plot(data=data, x="x", y="y", color="category").add(so.Dot()).add(so.Line(color="k"))
```

Presumably this is because seaborn cannot plot a line with a single point, and there is only one row for each category. Here is an example with one of the standard datasets, which has multiple elements per class and works as I would expect:
```
healthexp = sns.load_dataset("healthexp")
so.Plot(healthexp, x="Year", y="Life_Expectancy", color="Country").add(so.Dot()).add(so.Line())
```

I don't quite see a way to layer the charts using the objects interface. My workaround is to make the line chart using matplotlib directly and then use `p.on(ax)` to draw the colored points on top, but it would be nice to do everything from within seaborn.
Python 3.10.10
Seaborn 0.12.0 | closed | 2023-02-22T22:27:36Z | 2023-02-23T01:47:43Z | https://github.com/mwaskom/seaborn/issues/3274 | [] | joeyo | 3 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,939 | Type of "self_group" is partially unknown warning | ### Ensure stubs packages are not installed
- [X] No sqlalchemy stub packages is installed (both `sqlalchemy-stubs` and `sqlalchemy2-stubs` are not compatible with v2)
### Verify if the api is typed
- [X] The api is not in a module listed in [#6810](https://github.com/sqlalchemy/sqlalchemy/issues/6810) so it should pass type checking
### Describe the typing issue
See example code.
### To Reproduce
```python
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
class Base(DeclarativeBase):
pass
class User(Base):
__tablename__ = "a"
id: Mapped[int] = mapped_column(primary_key=True)
account_id: Mapped[int] = mapped_column()
type: Mapped[str] = mapped_column()
# no warnings
test = User.id.op('->>')('field')
# shows warning:
# Type of "self_group" is partially unknown
# Type of "self_group" is "(against: Unknown | None = None) -> (Grouping[Any] | BinaryExpression[Any])"
test = User.id.op('->>')('field').self_group()
```
### Error
_No response_
### Versions
- OS:
- Python: 3.12.1
- SQLAlchemy: 2.0.25
- Type checker (eg: mypy 0.991, pyright 1.1.290, etc): pyright 1.1.336
### Additional context
_No response_ | closed | 2024-01-29T15:25:54Z | 2024-05-05T15:43:26Z | https://github.com/sqlalchemy/sqlalchemy/issues/10939 | [
"bug",
"PRs (with tests!) welcome",
"typing"
] | AlexanderPodorov | 7 |
lorien/grab | web-scraping | 209 | Select a specific field to submit | There are these fields:
```html
<input name="op" value="Save" type="submit"/>
<input name="op" value="Preview" type="submit"/>
<input name="op" value="Delete" type="submit"/>
```
By default, if I call g.doc.submit(), none of these fields gets submitted.
Right now I am using this workaround:
```python
g.doc.set_input('op', 'Delete')
g.doc.submit(submit_name='op')
```
It works, but not reliably. In some places I see this error:
```
(<class 'pycurl.error'>, error(0, ''), <traceback object at 0x000000EFA76A8288>)
```
Question: how can I "press" the specific button I need, given that their names are identical? | closed | 2016-12-23T06:07:53Z | 2016-12-27T05:11:10Z | https://github.com/lorien/grab/issues/209 | [] | InputError | 1 |
youfou/wxpy | api | 274 | How can I send a dice roll automatically | What kind of message is a dice roll? Which method should I call to send one? | open | 2018-03-13T09:24:10Z | 2018-03-13T09:24:10Z | https://github.com/youfou/wxpy/issues/274 | [] | lzou | 0 |
mwaskom/seaborn | data-visualization | 3,541 | FacetGrid with `size=` argument? | I don't know about the details of the implementation of `FacetGrid.map()`.
But in essence it must be splitting the data frame.
Splitting the data frame and gathering the resulting individual plots (naively)
seems to have problems since it does not incorporate the global features of the data.
Using `sns.FacetGrid()`, I had a hard time finding out how to pass the size to
`sns.scatterplot()`. I learned from the tutorial that `hue=` should be specified in
`sns.FacetGrid()`, which looks reasonable because otherwise the legend for the hue
would appear in every individual plot. And I guess it should be the same for the size!
Unfortunately, `sns.FacetGrid()` does not support a `size` parameter.
Here is my work-around
```
import seaborn as sns
import statsmodels.api as sm
mtcars = sm.datasets.get_rdataset('mtcars').data
mtcars.head()
import math
dat = mtcars
ax_nx = 3;
ax_ny = 2
# ax_ny = math.ceil(dat[x_cat].nunique()/ax_nx)
x_cat = 'carb'; y_con = 'hp'; z_con = 'qsec'
w_cat = 'am'
a_con = 'hp'
fig_h = 3; fig_w = 7
g = sns.FacetGrid(dat, col=x_cat,
hue = w_cat,
#size = a_con,
col_wrap = ax_nx,
height = fig_h/ax_ny, aspect = fig_w/(fig_h/ax_ny)/ax_nx)
g.map(sns.scatterplot, y_con, z_con, w_cat, a_con)
g.add_legend()
```
Somehow `g.map(sns.scatterplot, a, b, c, d)` ends up calling something like
`sns.scatterplot(dat_splitted, x=a, y=b, hue=c, size=d)`,
and I don't know how to make it work like
`g.map(sns.scatterplot, x=a, y=b, size=d)`; I had to specify the hue twice, in both `FacetGrid` and `g.map()`.
The final plot looks okay, but I am worried that the size is normalized within each split of the data, and there is no global legend for the size.
Is there any way to specify the size for the individual plots and, at the same time, add a common legend for all plots?
| closed | 2023-10-26T07:07:04Z | 2023-10-27T19:57:45Z | https://github.com/mwaskom/seaborn/issues/3541 | [] | kwhkim | 1 |
ludwig-ai/ludwig | computer-vision | 3,126 | GBM backend schema validation `dict` has no attribute `type` | When trying to run a GBM with auxiliary validation checks, the following error occurs:
```
File "<LUDWIG_ROOT>/ludwig/ludwig/config_validation/checks.py", line 200, in check_gbm_horovod_incompatibility
if config.model_type == MODEL_GBM and config.backend.type == "horovod":
AttributeError: 'dict' object has no attribute 'type'
```
**To Reproduce**
On Ludwig master, run the following script:
```
from ludwig.api import LudwigModel
import pandas as pd
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
df = pd.read_csv(url, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
ludwig_config = {
"model_type": "gbm",
"input_features": [
{
"name": "Cylinders",
"type": "number",
},
{
"name": "Displacement",
"type": "number",
},
{
"name": "Horsepower",
"type": "number",
},
{
"name": "Weight",
"type": "number",
},
{
"name": "Acceleration",
"type": "number",
},
{
"name": "Model Year",
"type": "number",
},
{
"name": "Origin",
"type": "category",
},
],
"output_features": [
{
"name": "MPG",
"type": "number",
"optimizer": {"type": "mean_squared_error"}
}
],
"backend": {"type": "local"}
}
model = LudwigModel(config=ludwig_config)
results = model.experiment(dataset=df)
```
**Environment:**
- OS: MacOS
- Version 13.2
- Python 3.8.16
- Ludwig master
**Additional context**
This is probably caused by the backend schema being a dict rather than a dataclass. We should be able to address this by
1. Temporarily catching the AttributeError and bypassing the check to allow GBM training
2. Creating a backend schema object
| closed | 2023-02-21T15:58:22Z | 2024-10-18T13:21:46Z | https://github.com/ludwig-ai/ludwig/issues/3126 | [] | jeffkinnison | 0 |
home-assistant/core | asyncio | 140,732 | Bose SoundBar Ultra cast issues | ### The problem
Failed to determine cast type for host <unknown>
My Bose Soundbar Ultra is the only problematic cast device; the 5 other devices work (albeit those 5 are all Google devices)
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Cast
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/cast
### Diagnostics information
[debug.log](https://github.com/user-attachments/files/19273166/debug.log)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
Log details (WARNING)
Logger: pychromecast.dial
Source: components/cast/helpers.py:68
First occurred: 15:46:32 (1 occurrences)
Last logged: 15:46:32
Failed to determine cast type for host <unknown> (<urlopen error timed out>) (services:{MDNSServiceInfo(name='Bose-Smart-Ultra-Sou-fbdbaa488a199cef42e44b44aa6160ca._googlecast._tcp.local.')})
Logger: pychromecast.socket_client
Source: /usr/local/lib/python3.13/site-packages/pychromecast/socket_client.py:416
First occurred: 16:00:00 (1 occurrences)
Last logged: 16:00:00
[Bose Smart Ultra Soundbar(192.168.250.74):8009] Failed to connect to service MDNSServiceInfo(name='Bose-Smart-Ultra-Sou-fbdbaa488a199cef42e44b44aa6160ca._googlecast._tcp.local.'), retrying in 5.0s
```
### Additional information
_No response_ | open | 2025-03-16T16:04:22Z | 2025-03-18T07:25:29Z | https://github.com/home-assistant/core/issues/140732 | [
"integration: cast"
] | thewookiewon | 13 |
ultralytics/yolov5 | deep-learning | 12,818 | YOLOv5 GUI Implementation | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I want to implement a GUI using YOLOv5 in which there are multiple live camera feeds. I want each feed to be processed by its own model, with its results shown in a cascaded window in the GUI. I have tried implementing this with a script, but from there I cannot access the results without saving them. I have also tried calling the `run` function of detect.py inside the GUI, but it still shows lag and too much delay, since I access the video streams with cv2 and feed the model one frame at a time. Please suggest a solution.
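The per-feed setup described above can be sketched with standard-library threads and a queue (a minimal illustration only; `fake_detect` is a placeholder standing in for a per-feed YOLO model, and the lists of frames stand in for cv2 capture loops):

```python
import threading
import queue

def fake_detect(feed_id, frame):
    # Placeholder for `model(frame)` -- returns a dummy detection record.
    return {"feed": feed_id, "frame": frame, "boxes": []}

def worker(feed_id, frames, results):
    # One thread per camera feed, each with its own model instance,
    # so a slow feed never blocks the others.
    for frame in frames:           # in a real app: while cap.isOpened(): ...
        results.put(fake_detect(feed_id, frame))

results = queue.Queue()            # the GUI thread drains this queue
feeds = {0: ["f0a", "f0b"], 1: ["f1a"]}
threads = [threading.Thread(target=worker, args=(fid, frames, results))
           for fid, frames in feeds.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

detections = [results.get() for _ in range(results.qsize())]
print(len(detections))  # 3
```

In a real GUI the workers would run continuously and the GUI event loop would poll the queue with a timer, updating each cascaded window from the detection records instead of joining the threads.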
### Additional
_No response_ | closed | 2024-03-13T09:26:49Z | 2024-10-20T19:41:31Z | https://github.com/ultralytics/yolov5/issues/12818 | [
"question",
"Stale"
] | haseebakbar94 | 3 |
widgetti/solara | flask | 870 | Complex Layout with multiple "set of children" | Hi,
I need to create a more complex layout that requires passing different children (components) to different parts of the layout. I have made a simple pycafe example illustrating my intention: I want to pass one component to the "Left" part of the layout, and a different component to the "Right" part of the layout.
Here is the [pycafe example](https://py.cafe/snippet/solara/v1?pycafe-app-view=false&pycafe-edit-enable=true#c=H4sIAEnZQWcEA51XW2_bOBb-K1oHC7iApbVkx3GyELCzBQbz0JmHom91EdASZbOhSJWknHgG89_nO4eSYzdp56IWQXgu37kfMr9NKlvLyd1EtZ11IfFWCyc25uKYabHdmBPRSVEFazLVHQ-9DKo5JsInB5L436BQWYgaacLG1LJJ3skmTN_cbUyCbxD5WbiH2j6a6WZydXWVfNgrn-C_SIL0IdHQSDqxk5vJmwu1__cBtqH0VqvqIWllYk0S9pI1WPibXrxXu_3fdMORyt_zg1UGR77pylsctXx6J462D9MIXO2Vrp005cdPs0gZXXUQkvdV78ANs3j090LrpBzd6b28Z_p0cFPbSgQFny5EKmuCfArTgUQayuyy-1F6FBhAFJx9AsKzxYxJUyaMHr1JVHPpYyK1l8kvCDhmgbAobC_DfSfCfsooY3j0EfXC0EcW-ZQR41ls9JPJRiDrZVRNrpL3srUHmSCCWpGM0DM0RUBeZfVwL3pI7YWpNUoka-VkRUKjd48K7EP2Q9edGoQ-Jg_Jem8fpzvRlZvJHOU9E6Jv0H9L6TNhWmnh_T1Eo27KaTUhbYUym8ks8eGoJfEfVR32d8n1_N8vMek7w4WqdNNG96oufxRI8BnMVlQPOyTP1DClrbtLrpqmquvidVT6hqg-UDtsJjShkH1d9KX9D67_i-Znz309_gIrl3b-eZDVWq6q2_8C4ftpvAyWF8Gr0bIng7AW2-yD2PrpQeheltyPM0z5fTyPvTxLhFY7A_8qVFg6ChlePkb3X3OGvsa62OyYsbOu_4Y0feOMcesnxgbS_AirUKPMkN3NRNsdhDaTT99Bom-YnTPES_x_lQD7z2YSJ3kz-cm2WIHfx7zM25RMvCg1fS8r_eftFAtNMY5ddNlO9I_d94jqY7Q5-POeyFOKCtAICRDjMi6p7bElxFZqMN_JJ6WxD-1dEuMlFm3o8mJfvxnW82v4GhsGCJdWuN9eNfPDFsroRCB-mswmTn7psZlaKHncydHAxoATjh3d0pGCc3cMe2tA6Y62VrVMD_OsuM5ysKLLkzvTaz2bNEpLYOFOQW9UD3Sk7MMClK-wET3mpp6v1nNZ3ObbZZE3Yrva1vlinc-Xy3Wxvr5pNkaYo7JlucyW2Rwnt7OmSKumUWVZLLL8K2K6xbQgQg9unhXZHOu5hXUSotEo82zBKh6X6IM0JAfkHJQQHJ0GTOoEavCyRHhMkFjkqum1t323JIfyIitA13iU7Mtylc1JrMJBBms1sK5hC8iVdFAkf-dAv8kKaMUA8ixH5nDaC4epTo11LWb6V-nKcgFlEqRLvizX8OoGJ9u20SOwarntd92RYMA-BVrLyjoRLDAATvC4AHEF108tR3NDJPkkK76BKX4OuEFKPntrPPxvBZELDojqRvUjj-DtEpQvtSGj7Po-zwk05-rs-1YYuM_ZYW1VG2AtSA2PtgfpDBoRuULHIAKQqJlKCm9FaAMh3UkD7yiHQ_bBwIuPM8YhPb8AmRZtdUds450MUKN8XYPkbd0jGbhvKf2kDMHPuIXZ6Vs-KfNZFBwe-UQ56KxCo3IRGPk8L4jsJluA1sNX6VIUiCYNFhHVGdk6uHtN5X4mygPNF5lenUN46Q4S1lAJmDsV8pKd4tdW4W3B-mitkwCmO-2OO55d4nGDn_FOSaFocui1w8uTKGiq8ZiqAJxT0ETuOy8axEE9woqh0zZotU2VwSIBB4iMgTRzd3EOW-VDT1yCwtls-VmiKi7WWYjEiOmDJlsFxRqECxIadUWNY7a4s2Ca-3nOFLyQU-GPpqLVkCObpGmD3FqLRl0hPWc2Bnrq92oYHoLo2zg6BdvA-805bDNk6QY1A1yHfWordD-yCOrQ7p1wHibnmDjko5NPH
V5z1BRopZPFDiOL3Yeh5gTdUCd2SmveP4iS4bUIFBbeg0BfwinkqXNY_njK9_7UVnA3yoPVhZQ2y4MCnTK7ROI7T4NCqbnN1s8eYGs7W0kPbHKAAHon0YCCq1Sw98eKwpHceQUFd2oi1JvnsTu23Bl4vWBXYorAQwS3xInbfx9CFxFZ3HmUfizobbTCE12LIKOnBXJHsUY6jVaKt8MuugFniXUUtKtopTLqr-0XMLFizpI8_C1IleFa0A2GtmDnFzzmrqkWi8VtipgVzFsYAB5Vmzi369XXHNhyqsIuzxfAJAScqBK0_UYzXe15SqAAmZM7Xpq6CA6X2kkUG73vhqtgRQsJUXv1RHxkF_DeKFwD3MF8UcQ7lo-8meL5tB6-Ivd8gYwkTKrCeqHosaFACbjAKOvAG9YNSA5_lgRuygWvVfQ27ZSaWxrxQBGXwrHyHgsR4CxjXZTgQUGIKgAFMWEgloRLLwTKyYs6wzV0b7Gcr6m7IIb75qKVlmhNcHqnsN3wzoHuySz2CfbMgnC4mv1BYaei3vAdAW3MowjVvra4wS4bg-l0a8EA7EP5mVVhF-K1FPsVrj9KrBu88yCK9uKagyRxzw6PCNq08BBEDwRc0eNcUo2jdGRAFl2DJIIW9y1eDGOo0UUy6L3Gjy9aBYnT5Pc_AFM4lOwGEQAA)
Currently, the component passed to the layout is used on both the "Left" and the "Right" side of the layout. Essentially, I would like to be able to pass (in whatever way) separate `Left` and `Right` components so that both appear on the same page.
What I tried:
- I naively thought I could pass a list of components to the `Router` object and then select the relevant one inside the layout component. That does not work, as a `Div` is received by the time the data arrives in the `Layout` class/component.
My current understanding (looking at the Layout documentation and the few layout implementations of solara) is that there is a single Div (child?) that is passed to the layout. Is there a way we can interface to that, or otherwise intercept it to allow for different type of input (say multiple components?).
I hope the question is clear; if not, I would be happy to elaborate. Thank you!! | open | 2024-11-23T13:41:35Z | 2024-12-25T12:49:15Z | https://github.com/widgetti/solara/issues/870 | [] | JovanVeljanoski | 3 |
xuebinqin/U-2-Net | computer-vision | 198 | Segmentation Badly | Hi, I trained U-2-Net on a refined Supervisely dataset (including personal goods) and some matting-dataset images. The whole dataset has 60k images. After 20 epochs, I ran prediction on a few of my images where background and foreground should be easy to distinguish, but the results look very bad. Personally, I suspect it is a receptive-field problem, so I will now try dilated convolutions to increase it. Can anyone give me more advice on dealing with this? Thanks


| open | 2021-04-30T04:13:43Z | 2021-04-30T10:08:23Z | https://github.com/xuebinqin/U-2-Net/issues/198 | [] | Sparknzz | 3 |
geex-arts/django-jet | django | 26 | Dashboard doesn't update unless i press reset button | When I make changes in my custom dashboard file, they are not reflected in the browser unless I press the reset button. When I looked into jet's dashboard file, I found that
def load_modules(self):
module_models = UserDashboardModule.objects.filter(
app_label=self.app_label,
user=self.context['request'].user.pk
).all()
if len(module_models) == 0:
module_models = self.create_initial_module_models(self.context['request'].user)
loaded_modules = []
for module_model in module_models:
module_cls = module_model.load_module()
if module_cls is not None:
module = module_cls(model=module_model, context=self.context)
loaded_modules.append(module)
self.modules = loaded_modules
```
This function loads modules from the UserDashboardModule model, not from my custom dashboard file. I think the model should only store the position and some settings for the respective modules, instead of storing all the modules in the DB. This never happened to me when I used admin-tools.
| open | 2015-11-27T05:01:04Z | 2017-08-25T17:58:21Z | https://github.com/geex-arts/django-jet/issues/26 | [] | Ajeetlakhani | 11 |
keras-team/keras | tensorflow | 20,455 | Inconsistent warning using MultiHeadAttention with Masking | Hello, when I try to use `MultiHeadAttention` (which supports masking), I get the following warning:
```
/opt/conda/envs/trdm/lib/python3.11/site-packages/keras/src/layers/layer.py:915: UserWarning: Layer 'query' (of type EinsumDense) was passed an input with a mask attached to it. However, this layer does not support masking and will therefore destroy the mask information. Downstream layers will not see the mask.
warnings.warn(
/opt/conda/envs/trdm/lib/python3.11/site-packages/keras/src/layers/layer.py:915: UserWarning: Layer 'key' (of type EinsumDense) was passed an input with a mask attached to it. However, this layer does not support masking and will therefore destroy the mask information. Downstream layers will not see the mask.
warnings.warn(
/opt/conda/envs/trdm/lib/python3.11/site-packages/keras/src/layers/layer.py:915: UserWarning: Layer 'value' (of type EinsumDense) was passed an input with a mask attached to it. However, this layer does not support masking and will therefore destroy the mask information. Downstream layers will not see the mask.
warnings.warn(
```
However it seems that the mask is not really destroyed:
```python
from keras.layers import MultiHeadAttention, Masking
from keras import random
mha = MultiHeadAttention(num_heads=2, key_dim=3)
a = random.normal([1,3,4])
a[:, -1] = 0
a = Masking()(a)
print(a._keras_mask) # tensor([[ True, True, False]], device='cuda:0')
b = mha(a, a)
print(b._keras_mask) # tensor([[ True, True, False]], device='cuda:0')
```
The [Einsum Layer](https://github.com/keras-team/keras/blob/master/keras/src/layers/core/einsum_dense.py) indeed does not specify `self.supports_masking = True`, contrary to MultiHeadAttention. So I am not sure about the behaviour in general. Is the mask preserved, so the warning can be ignored? Or are there cases in which the mask will be destroyed? The [code](https://github.com/keras-team/keras/blob/master/keras/src/layers/layer.py#L931-L938) in `Layer` does not seem to preserve the mask when the warning is raised (so I am not sure how I am able to retain the mask in the example). Thank you!
biolab/orange3 | scikit-learn | 6,143 | Move Multifile widget from Spectroscopy add-on to Data category in "regular" Orange | <!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
The Multifile widget can be very useful in applications other than spectroscopy. It should therefore not be hidden in the add-on.
For instance, I can download data from my smart energy meter to monthly files. Multifile allows me to put them all in one file and investigate my energy use over longer time.
**What's your proposed solution?**
Include Multifile in the standard Orange installation and put it in the Data category of widgets.
**Are there any alternative solutions?**
Yes: install the Spectroscopy add-on, which brings several other widgets not needed by people not working on spectroscopy.
Besides, people may be unaware that the widget exists at all. I found out about it with a Google search for _orange merge files from folder_
| open | 2022-09-18T14:02:37Z | 2023-01-10T10:55:04Z | https://github.com/biolab/orange3/issues/6143 | [
"wish",
"feast"
] | wvdvegte | 6 |
gevent/gevent | asyncio | 1,250 | No module named 'gevent.__hub_local | * gevent version: 1.3.4
* Python version: 3.6.5
* Operating System: linux-4.15.2.0 on imx6q
### Description:
When I import gevent, I get the traceback below.
However, this code works well on Ubuntu 18.04.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/python/lib/python3.6/site-packages/gevent/__init__.py", line 87, in <module>
from gevent._hub_local import get_hub
File "/python/lib/python3.6/site-packages/gevent/_hub_local.py", line 101, in <module>
import_c_accel(globals(), 'gevent.__hub_local')
File "/python/lib/python3.6/site-packages/gevent/_util.py", line 105, in import_c_accel
mod = importlib.import_module(cname)
File "/python/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'gevent.__hub_local'
```
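As a hedged guess at the cause: `gevent.__hub_local` is one of gevent's compiled C-accelerator modules, so this error pattern can occur when the build for the target board (imx6/ARM here) was produced without its C extensions. A small stdlib-only helper to check whether a compiled accelerator submodule can even be located (the names below are only examples):

```python
import importlib.util

def has_submodule(package, name):
    """Return True if `package.name` can be located by the import system."""
    try:
        return importlib.util.find_spec(f"{package}.{name}") is not None
    except ModuleNotFoundError:
        # The parent package itself is not importable.
        return False

# Example with a stdlib package; on the affected board one would check
# has_submodule("gevent", "__hub_local") instead.
print(has_submodule("json", "decoder"))  # True
```

If the check returns False on the device, reinstalling or rebuilding gevent with a working compiler toolchain (so the C extensions actually get built) would be the first thing to try.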
### What I've run:
```python
# -*- coding: utf-8 -*-
import gevent
def fa():
while 1:
print('-------fa-------')
gevent.sleep(1)
def fb():
while 1:
print('-------fb-------')
gevent.sleep(0.5)
if __name__ == '__main__':
g1 = gevent.spawn(fa)
g2 = gevent.spawn(fb)
g1.join()
g2.join()
```
| closed | 2018-07-09T08:02:20Z | 2018-08-17T17:22:58Z | https://github.com/gevent/gevent/issues/1250 | [] | feimeng115 | 4 |
Anjok07/ultimatevocalremovergui | pytorch | 796 | ValueError: Input signal length=0 is too small to resample from 48000->44100 | Last Error Received:
Process: Demucs
If this error persists, please contact the developers with the error details.
Raw Error Details:
ValueError: "Input signal length=0 is too small to resample from 48000->44100"
Traceback Error: "
File "UVR.py", line 4719, in process_start
File "separate.py", line 470, in seperate
File "separate.py", line 870, in prepare_mix
File "librosa/util/decorators.py", line 88, in inner_f
File "librosa/core/audio.py", line 179, in load
File "librosa/util/decorators.py", line 88, in inner_f
File "librosa/core/audio.py", line 647, in resample
File "resampy/core.py", line 97, in resample
"
Error Time Stamp [2023-09-15 01:13:40]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v4 | htdemucs
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: MP3
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | open | 2023-09-15T08:16:53Z | 2023-09-15T08:17:47Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/796 | [] | chrispviews | 1 |
voila-dashboards/voila | jupyter | 824 | Markdown latex equation inside Output widget | A markdown latex equation represented by `$ ... $` or `$$ ... $$` is not well represented by voilà when it's inside an `Output` widget.
Here is an example. I have a class with a `_repr_html_` method
```
class my_obj:
def _repr_html_(self):
return "<h1>Hello $D_1Q_2$ !!</h1>"
```
Then I make the display of this class inside an Output widget using
```
out = widgets.Output()
with out:
display(my_obj())
```
Unfortunately, the markdown is not interpreted and the output is `Hello $D_1Q_2$ !! `.
If I make the same in a notebook, it works. | open | 2021-02-08T09:23:33Z | 2021-02-08T09:23:33Z | https://github.com/voila-dashboards/voila/issues/824 | [] | gouarin | 0 |
polakowo/vectorbt | data-visualization | 746 | Documentation on vectorbt.dev seems to be broken | All pages are 404 since 2 days | closed | 2024-09-13T20:30:02Z | 2024-09-13T21:37:39Z | https://github.com/polakowo/vectorbt/issues/746 | [] | maniolias | 2 |
modelscope/data-juicer | data-visualization | 212 | Why only keep the most frequently occurring suffix when constructing formatter? | ### Before Asking 在提问之前
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully. 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引。
- [X] I have pulled the latest code of main branch to run again and the problem still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
### Search before asking 先搜索,再提问
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的问题。
### Question
When I read the following code from [data_juicer/format/formatter.py](https://github.com/alibaba/data-juicer/blob/main/data_juicer/format/formatter.py), I was curious why there is a **max** operation.
Doesn't this cause some data loss?
Could someone help me understand this?
```
# local dataset
if ext_num:
formatter_num = {}
for name, formatter in FORMATTERS.modules.items():
formatter_num[name] = 0
for ext in ext_num:
if ext in formatter.SUFFIXES:
formatter_num[name] += ext_num[ext]
formatter = max(formatter_num, key=lambda x: formatter_num[x]) # why there is a max operation?
target_suffixes = set(ext_num.keys()).intersection(
set(FORMATTERS.modules[formatter].SUFFIXES))
return FORMATTERS.modules[formatter](dataset_path,
text_keys=text_keys,
suffixes=target_suffixes,
add_suffix=add_suffix,
**kwargs)
```
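For illustration, here is a minimal, self-contained sketch (plain Python, with a hypothetical suffix registry standing in for the real `FORMATTERS.modules`) of how the `max` pick can silently drop files whose suffixes belong to a less frequent formatter:

```python
# Hypothetical suffix registry standing in for FORMATTERS.modules[*].SUFFIXES.
SUFFIXES = {
    "json_formatter": {".json", ".jsonl"},
    "text_formatter": {".txt", ".md"},
}

def pick_formatter(ext_num):
    """Mimic formatter.py: count files per formatter, keep only the winner."""
    formatter_num = {
        name: sum(n for ext, n in ext_num.items() if ext in sfx)
        for name, sfx in SUFFIXES.items()
    }
    winner = max(formatter_num, key=lambda name: formatter_num[name])
    kept = set(ext_num) & SUFFIXES[winner]
    dropped = set(ext_num) - kept
    return winner, kept, dropped

# 10 jsonl files win over 3 txt files, so the .txt files are never loaded.
winner, kept, dropped = pick_formatter({".jsonl": 10, ".txt": 3})
print(winner, kept, dropped)
```

Under this reading, the `.txt` files here would not be included in `target_suffixes`, which is the data loss the question is about.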
### Additional
_No response_ | closed | 2024-02-20T09:49:52Z | 2024-03-08T08:31:40Z | https://github.com/modelscope/data-juicer/issues/212 | [
"question"
] | BlockLiu | 3 |
ymcui/Chinese-BERT-wwm | tensorflow | 106 | Asking about the details of the two-stage pre-training schedule | The paper says:
> We train 100K steps on the samples with a maximum length of 128, batch size of 2,560, an initial learning rate of 1e-4 (with warm-up ratio 10%). Then, we train another 100K steps on a maximum length of 512 with a batch size of 384 to learn the long-range dependencies and position embeddings.
For two-stage training like this, which of the following is the lr schedule?
1. Warm up over 10k steps to 1e-4, then decay linearly to 0 over the remaining 190k steps, switching the maximum length at step 100k;
2. Warm up over 10k steps to 1e-4, decay linearly to 0 over 90k steps, then after switching the maximum length warm up again to 1e-4 over 10k steps, and finally decay linearly to 0 over the last 90k steps.
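For what it's worth, interpretation 2 (each stage restarting its own warmup) can be sketched as a piecewise function — the step counts and peak lr here are just the numbers from the paper, not anything from the released code:

```python
def two_stage_lr(step, peak=1e-4, warmup=10_000, total_per_stage=100_000):
    """Interpretation 2: each 100k-step stage has its own warmup + linear decay."""
    stage_step = step % total_per_stage  # the schedule restarts when the stage changes
    if stage_step < warmup:
        return peak * stage_step / warmup                        # linear warmup to peak
    decay_steps = total_per_stage - warmup
    return peak * (1 - (stage_step - warmup) / decay_steps)      # linear decay to 0

assert two_stage_lr(10_000) == 1e-4   # end of first warmup
assert two_stage_lr(100_000) == 0.0   # second stage restarts from zero
assert two_stage_lr(110_000) == 1e-4  # back at peak after the second warmup
```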
If it is the second one, would the second-stage pre-training loss first rise and then fall? | closed | 2020-04-16T02:59:54Z | 2020-05-04T11:31:28Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/106 | [] | hitvoice | 4 |
microsoft/JARVIS | pytorch | 210 | Running python run_gradio_demo.py --config configs/config.gradio.yaml reports an error: | 2023-06-10 16:17:58,244 - awesome_chat - INFO - [{"task": "conversational", "id": 0, "dep": [-1], "args": {"text": "please show me a joke of cat" }}, {"task": "text-to-image", "id": 1, "dep": [-1], "args": {"text": "a photo of cat" }}]
2023-06-10 16:17:58,244 - awesome_chat - DEBUG - [{'task': 'conversational', 'id': 0, 'dep': [-1], 'args': {'text': 'please show me a joke of cat'}}, {'task': 'text-to-image', 'id': 1, 'dep': [-1], 'args': {'text': 'a photo of cat'}}]
2023-06-10 16:17:58,244 - awesome_chat - DEBUG - Run task: 0 - conversational
2023-06-10 16:17:58,245 - awesome_chat - DEBUG - Run task: 1 - text-to-image
2023-06-10 16:17:58,245 - awesome_chat - DEBUG - Deps: []
2023-06-10 16:17:58,245 - awesome_chat - DEBUG - Deps: []
2023-06-10 16:17:58,245 - awesome_chat - DEBUG - parsed task: {'task': 'conversational', 'id': 0, 'dep': [-1], 'args': {'text': 'please show me a joke of cat'}}
2023-06-10 16:17:58,245 - awesome_chat - DEBUG - parsed task: {'task': 'text-to-image', 'id': 1, 'dep': [-1], 'args': {'text': 'a photo of cat'}}
Exception in thread Thread-8 (get_model_status):
Traceback (most recent call last):
Exception in thread Thread-12 (get_model_status):
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/connectionpool.py", line 699, in urlopen
Traceback (most recent call last):
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/connectionpool.py", line 382, in _make_request
httplib_response = self._make_request(
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/connectionpool.py", line 382, in _make_request
self._validate_conn(conn)
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1010, in _validate_conn
self._validate_conn(conn)
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1010, in _validate_conn
conn.connect()
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/connection.py", line 411, in connect
conn.connect()
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/connection.py", line 411, in connect
self.sock = ssl_wrap_socket(
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
self.sock = ssl_wrap_socket(
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
ssl_sock = _ssl_wrap_socket_impl(
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/ssl.py", line 513, in wrap_socket
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/ssl.py", line 1071, in _create
return self.sslsocket_class._create(
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/ssl.py", line 1071, in _create
self.do_handshake()
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/ssl.py", line 1342, in do_handshake
self.do_handshake()
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/ssl.py", line 1342, in do_handshake
Exception in thread Thread-10 (get_model_status):
Traceback (most recent call last):
self._sslobj.do_handshake()
File "/raid/anaconda3/envs/lifei_llm/lib/python3.10/site-packages/urllib3/connectionpool.py", line 699, in urlopen
self._sslobj.do_handshake()
ConnectionResetError: [Errno 104] Connection reset by peer | open | 2023-06-10T08:33:33Z | 2023-06-10T08:33:33Z | https://github.com/microsoft/JARVIS/issues/210 | [] | lovelucymuch | 0 |
3b1b/manim | python | 2,302 | [Bug] Code class's Animation applies to the entire code instead of the changed lines | ### Describe the bug
When using Manim's `Code` object to animate code changes, animations are applied to the entire code block, even if only a single line is added.
### Video
https://github.com/user-attachments/assets/d161622f-e72a-483a-852d-c978c11f9ce9
### Expected behavior
The animation should be applied only to the newly added or modified line of code.
### Actual behavior
The animation is applied to the entire code block, including unchanged lines, making it unclear which part of the code was modified.
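For what it's worth, the changed lines could be located with stdlib `difflib` before animating (a hypothetical approach to targeting just the new line, not something manim's `Transform` does today):

```python
import difflib

# Line lists standing in for the two Code objects' line mobjects.
old_lines = ["square = Square(side_length=2.0, color=RED)",
             "self.play(Create(square))",
             "self.wait()"]
new_lines = ["square = Square(side_length=2.0, color=RED)",
             "square.shift(LEFT * 2)",
             "self.play(Create(square))",
             "self.wait()"]

sm = difflib.SequenceMatcher(a=old_lines, b=new_lines)
added = [j for tag, i1, i2, j1, j2 in sm.get_opcodes()
         if tag == "insert" for j in range(j1, j2)]
print(added)  # indices in new_lines that were inserted
```

An animation could then write only the mobjects at those indices instead of transforming the whole block.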
**Code**:
```py
from manim import *
class CodeAnimation(Scene):
def construct(self):
code1 = '''from manim import *
class Animation(Scene):
def construct(self):
square = Square(side_length=2.0, color=RED)
self.play(Create(square))
self.wait()
'''
code2 = '''from manim import *
class Animation(Scene):
def construct(self):
square = Square(side_length=2.0, color=RED)
square.shift(LEFT * 2)
self.play(Create(square))
self.wait()
'''
rendered_code1 = Code(
code=code1,
tab_width=4,
background="window",
language="Python",
font="Monospace",
style="one-dark",
line_spacing=1
)
rendered_code2 = Code(
code=code2,
tab_width=4,
background="window",
language="Python",
font="Monospace",
style="one-dark",
line_spacing=1
)
self.play(Write(rendered_code1))
self.wait()
self.play(Transform(rendered_code1, rendered_code2))
self.wait()
```
### Environment
- **Manim version**: [v0.18.1]
- **Python version**: [3.10.3]
- **Operating system**: [Window 11]
| closed | 2025-01-13T10:23:01Z | 2025-01-13T14:48:31Z | https://github.com/3b1b/manim/issues/2302 | [
"bug"
] | Mindev27 | 2 |
deepset-ai/haystack | machine-learning | 9,062 | Refactor `LLMEvaluator` and child components to use Chat Generators and adopt the protocol | - Refactor the internal behavior of the component(s) to use Chat Generators instead of Generators
- Add a `chat_generator: ChatGenerator` init parameter and deprecate similar init parameters (in version 2.Y.Z).
- Remove deprecated parameters in version 2.Y.Z+1. | open | 2025-03-18T18:11:34Z | 2025-03-24T09:12:19Z | https://github.com/deepset-ai/haystack/issues/9062 | [
"P1"
] | anakin87 | 0 |
huggingface/text-generation-inference | nlp | 2,265 | gemma-7b warmup encountered an error | ### System Info
Hi, I encountered a warmup error when using the newest main branch to compile and start up the gemma-7b model; the error looks like this:
Traceback (most recent call last):
File "/usr/local//bin/text-generation-server", line 8, in <module>
sys.exit(app())
File "/usr/local/lib/python3.10/dist-packages/typer/main.py", line 311, in __call__
return get_command(self)(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/typer/core.py", line 778, in main
return _main(
File "/usr/local/lib/python3.10/dist-packages/typer/core.py", line 216, in _main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/typer/main.py", line 683, in wrapper
return callback(**use_params) # type: ignore
File "/usr/src/text-generation-inference-main/server/text_generation_server/cli.py", line 118, in serve
server.serve(
File "/usr/src/text-generation-inference-main/server/text_generation_server/server.py", line 297, in serve
asyncio.run(
File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
self.run_forever()
File "/usr/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
self._run_once()
File "/usr/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
handle._run()
File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.10/dist-packages/grpc_interceptor/server.py", line 165, in invoke_intercept_method
return await self.intercept(
> File "/usr/src/text-generation-inference-main/server/text_generation_server/interceptor.py", line 21, in intercept
return await response
File "/usr/local/lib/python3.10/dist-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 120, in _unary_interceptor
raise error
File "/usr/local/lib/python3.10/dist-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 111, in _unary_interceptor
return await behavior(request_or_iterator, context)
File "/usr/src/text-generation-inference-main/server/text_generation_server/server.py", line 125, in Warmup
max_supported_total_tokens = self.model.warmup(batch)
File "/usr/src/text-generation-inference-main/server/text_generation_server/models/flash_causal_lm.py", line 1096, in warmup
_, batch, _ = self.generate_token(batch)
File "/usr/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/src/text-generation-inference-main/server/text_generation_server/models/flash_causal_lm.py", line 1371, in generate_token
out, speculative_logits = self.forward(batch, adapter_data)
File "/usr/src/text-generation-inference-main/server/text_generation_server/models/flash_causal_lm.py", line 1296, in forward
logits, speculative_logits = self.model.forward(
File "/usr/src/text-generation-inference-main/server/text_generation_server/models/custom_modeling/flash_gemma_modeling.py", line 474, in forward
logits, speculative_logits = self.lm_head(hidden_states)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/src/text-generation-inference-main/server/text_generation_server/layers/speculative.py", line 51, in forward
logits = self.head(input)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/src/text-generation-inference-main/server/text_generation_server/layers/tensor_parallel.py", line 87, in forward
return super().forward(input)
File "/usr/src/text-generation-inference-main/server/text_generation_server/layers/tensor_parallel.py", line 37, in forward
return self.linear.forward(x)
File "/usr/src/text-generation-inference-main/server/text_generation_server/layers/linear.py", line 37, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasGemmEx( handle, opa, opb, m, n, k, &falpha, a, CUDA_R_16BF, lda, b, CUDA_R_16BF, ldb, &fbeta, c, CUDA_R_16BF, ldc, compute_type, CUBLAS_GEMM_DEFAULT_TENSOR_OP)`
2024-07-21T12:44:09.788954Z ERROR warmup{max_input_length=4096 max_prefill_tokens=20000 max_total_tokens=8192 max_batch_size=None}:warmup: text_generation_client: router/client/src/lib.rs:46: Server error: CANCELLED
Error: WebServer(Warmup(Generation("CANCELLED")))
2024-07-21T12:44:14.909514Z ERROR text_generation_launcher: Webserver Crashed
2024-07-21T12:44:14.909530Z INFO text_generation_launcher: Shutting down shards
2024-07-21T12:44:14.993505Z INFO shard-manager: text_generation_launcher: Terminating shard rank=0
2024-07-21T12:44:14.993672Z INFO shard-manager: text_generation_launcher: Waiting for shard to gracefully shutdown rank=0
2024-07-21T12:44:15.494334Z INFO shard-manager: text_generation_launcher: shard terminated rank=0
Error: WebserverFailed
text_generation_launcher exit 1
How to solve it? Thanks.
### Information
- [ ] Docker
- [X] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
text_generation_launcher_pid=591
2024-07-21T12:43:52.574813Z INFO text_generation_launcher: Args {
model_id: "/dataset/model/gemma-7b-it/",
revision: None,
validation_workers: 2,
sharded: None,
num_shard: Some(
1,
),
quantize: None,
speculate: None,
dtype: None,
trust_remote_code: false,
max_concurrent_requests: 5000,
max_best_of: 1,
max_stop_sequences: 4,
max_top_n_tokens: 5,
max_input_tokens: None,
max_input_length: Some(
4096,
),
max_total_tokens: Some(
8192,
),
waiting_served_ratio: 1.2,
max_batch_prefill_tokens: Some(
20000,
),
max_batch_total_tokens: None,
max_waiting_tokens: 20,
max_batch_size: None,
cuda_graphs: None,
hostname: "chat-tianrui-medusa2-master-0",
port: 31471,
shard_uds_path: "/tmp/text-generation-server",
master_addr: "chat-tianrui-medusa2-master-0",
master_port: 23456,
huggingface_hub_cache: None,
weights_cache_override: None,
disable_custom_kernels: false,
cuda_memory_fraction: 0.95,
rope_scaling: None,
rope_factor: None,
json_output: false,
otlp_endpoint: None,
otlp_service_name: "text-generation-inference.router",
cors_allow_origin: [],
watermark_gamma: None,
watermark_delta: None,
ngrok: false,
ngrok_authtoken: None,
ngrok_edge: None,
tokenizer_config_path: None,
disable_grammar_support: false,
env: false,
max_client_batch_size: 4,
lora_adapters: None,
disable_usage_stats: false,
disable_crash_reports: false,
}
### Expected behavior
expected the inference service to start normally. | closed | 2024-07-21T12:55:15Z | 2024-08-27T01:54:55Z | https://github.com/huggingface/text-generation-inference/issues/2265 | [
"Stale"
] | Amanda-Barbara | 3 |
scikit-image/scikit-image | computer-vision | 7,728 | Enable rc-coordinate conventions in `skimage.transform` | ## Description
Add a coordinates or similar flag to each function in `skimage.transform`, to change it from working with xy to rc. For skimage 2.0, we'll change the default from xy to rc.
**Can be closed, when** users have a means of using "rc" coordinates with every callable in `skimage.transform`.
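The flag could be implemented as a thin coordinate-swap wrapper around each point transform; a hypothetical pure-Python sketch (illustrative only, not skimage's actual API):

```python
def with_coords(transform_xy, coordinates="xy"):
    """Wrap an (x, y) point-transform so it can also accept (row, col) points."""
    def apply(points):
        if coordinates == "rc":
            points = [(x, y) for y, x in points]   # rc -> xy
            out = transform_xy(points)
            return [(y, x) for x, y in out]        # xy -> rc
        return transform_xy(points)
    return apply

# A toy transform that shifts points along x (i.e. along columns).
shift_x = lambda pts: [(x + 1, y) for x, y in pts]
rc_version = with_coords(shift_x, coordinates="rc")
print(rc_version([(2, 5)]))  # row stays, column shifts
```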
### See also
Related: [#2275](https://github.com/scikit-image/scikit-image/issues/2275)
Previous discussions:
- [#5439 (Juan's request)](https://github.com/scikit-image/scikit-image/issues/5439#issuecomment-866642190)
- [#5439 (Greg's list)](https://github.com/scikit-image/scikit-image/issues/5439#issuecomment-1046269796)
- [#3148 (past tentative work)](https://github.com/scikit-image/scikit-image/pull/3148)
Other: [our coordinate conventions](https://scikit-image.org/docs/stable/user_guide/numpy_images.html#coordinate-conventions), [OpenCV?](https://stackoverflow.com/questions/25642532/opencv-pointx-y-represent-column-row-or-row-column)
| open | 2025-03-03T22:40:31Z | 2025-03-07T16:42:56Z | https://github.com/scikit-image/scikit-image/issues/7728 | [
":hiking_boot: Path to skimage2",
":globe_with_meridians: Coordinate convention"
] | lagru | 0 |
widgetti/solara | fastapi | 688 | FileBrowser bug when navigating to path root | Steps to reproduce:
- create an app with `solara.FileBrowser()`
- the Filebrowser starts in the cwd (C:\Users\...), in the app click '..' until you are at 'C'
The FileBrowser now shows files as if it were still in the cwd instead of the disk root
Similar issue when using eg `solara.FileBrowser(directory="D:\\")`
FileBrowser starts at D:\ correctly
click to enter 'my_folder'

click '..' to go back

note the lacking trailing backslash
click 'my_folder' again

now backslash is missing
click '..' again:

This is on windows and solara 1.33.0
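For what it's worth, the missing trailing backslash matches stdlib Windows-path semantics, which can be reproduced on any OS with `ntpath` (the Windows flavor of `os.path`) — this is a diagnosis sketch, not a claim about solara's actual implementation:

```python
import ntpath

# Joining onto a bare drive letter does NOT insert a separator: "D:" is a
# drive-relative path on Windows, so the result lacks the backslash.
assert ntpath.join("D:", "my_folder") == "D:my_folder"

# With the trailing backslash kept, the join is correct.
assert ntpath.join("D:\\", "my_folder") == "D:\\my_folder"

# dirname of a top-level folder correctly keeps the root's backslash, so the
# stripped "D:" likely comes from manual string-splitting of the path instead.
assert ntpath.dirname("D:\\my_folder") == "D:\\"
```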
| closed | 2024-06-19T16:12:19Z | 2024-07-10T14:48:23Z | https://github.com/widgetti/solara/issues/688 | [] | Jhsmit | 0 |
hankcs/HanLP | nlp | 773 | I am also building dictionary-based named entities myself; I'd like to know where to find documentation on named-entity annotation rules, or whether these are defined according to one's own needs | <!--
The notes and version number are required, otherwise no reply will be given. If you hope for a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Notes
Please confirm the following notes:
* I have carefully read the following documents and found no answer in any of them:
 - [Home page documentation](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that an open-source community is a free community gathered out of interest and hobby, and bears no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I type an x inside these brackets to confirm the items above.
## Version
<!-- For release versions, state the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is:
The version I am using is:
<!-- The items above are required; feel free to elaborate below -->
## My question
<!-- Please describe the question in detail; the more detail, the more likely it gets solved -->
## Reproducing the problem
<!-- What did you do that caused the problem? E.g. modified the code? Modified a dictionary or model? -->
### Steps
1. First...
2. Then...
3. Next...
### Triggering code
```
public void testIssue1234() throws Exception
{
    CustomDictionary.add("用户词语");
    System.out.println(StandardTokenizer.segment("触发问题的句子"));
}
```
### Expected output
<!-- What correct result do you expect? -->
```
expected output
```
### Actual output
<!-- What did HanLP actually output? What effect did it have? What is wrong? -->
```
actual output
```
## Other information
<!-- Any potentially useful information, including screenshots, logs, config files, related issues, etc. -->
| closed | 2018-03-25T12:57:12Z | 2020-01-01T10:50:42Z | https://github.com/hankcs/HanLP/issues/773 | [
"ignored"
] | brucegai | 2 |
noirbizarre/flask-restplus | flask | 171 | Swagger documentation error when used with other blueprints | Hello,
I'm experiencing an error where Swagger documentation rendering conflicts with other blueprints.
Code for restplus blueprint declaration
```
v1 = Blueprint('v1', __name__)
api = Api(v1, title='Project API (v1)', version='1.0', doc='/documentation/',
default_label='project')
```
Code for blueprint registration
```
self.register_blueprint(image, url_prefix='/render')
self.register_blueprint(account, url_prefix='/compte')
self.register_blueprint(front)
self.register_blueprint(filters)
self.register_blueprint(v1, url_prefix='/v1')
```
Flask return this error
```
AttributeError
AttributeError: 'dict' object has no attribute 'jinja_loader'
```
Full Debug
https://gist.github.com/anonymous/68cd9e7b4ba3b74dd74672f7fb5bdf4d
When I comment the front blueprint swagger doc is generated as usual.
Can anyone help me with this.
Thank you,
Restplus is a very very useful flask module. Thanks to the developers.
| open | 2016-05-11T10:11:29Z | 2016-09-05T11:36:33Z | https://github.com/noirbizarre/flask-restplus/issues/171 | [
"bug"
] | k3z | 0 |
ultralytics/ultralytics | machine-learning | 19,317 | 4channel implementation of YOLO | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I searched other issues, but none of them led to an answer. Can you say explicitly how I can use pretrained weights such as yolov9-c.pt with my 4-channel data? And can you tell me in detail which parts of the code should be changed for that?
### Additional
_No response_ | open | 2025-02-19T18:18:54Z | 2025-02-25T22:11:56Z | https://github.com/ultralytics/ultralytics/issues/19317 | [
"enhancement",
"question"
] | MehrsaMashhadi | 5 |
aio-libs/aiopg | sqlalchemy | 58 | PostgreSQL notification support | Hi,
do you plan to add support for http://initd.org/psycopg/docs/advanced.html#asynchronous-notifications by any chance?
| closed | 2015-05-09T18:28:58Z | 2015-07-02T13:37:01Z | https://github.com/aio-libs/aiopg/issues/58 | [] | spinus | 5 |
plotly/dash-table | dash | 467 | Update documentation for css property | The current documentation for css is as such:
<img width="816" alt="Screen Shot 2019-06-14 at 2 16 20 PM" src="https://user-images.githubusercontent.com/30607586/59529422-007b1c80-8eaf-11e9-8ddc-fe6bb6c85617.png">
The example has the value part of "rule" as a string within a string. However it should be just one string.
i.e.
```Example: {"selector": ".dash-spreadsheet", "rule": "font-family: monospace"}``` | open | 2019-06-14T18:23:05Z | 2019-09-09T14:07:18Z | https://github.com/plotly/dash-table/issues/467 | [
"dash-type-maintenance"
] | OwenMatsuda | 0 |
pandas-dev/pandas | python | 60,815 | DOC: Missing documentation for `Styler.columns` and `Styler.index` | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/reference/api/pandas.io.formats.style.Styler.html#pandas.io.formats.style.Styler
### Documentation problem
The attributes `columns` and `index` are not documented for the `Styler` class.
### Suggested fix for documentation
Document those attributes.
Initially reported here in `pandas-stubs` : https://github.com/pandas-dev/pandas-stubs/issues/1102 | closed | 2025-01-29T15:25:29Z | 2025-02-21T18:06:56Z | https://github.com/pandas-dev/pandas/issues/60815 | [
"Docs",
"Styler"
] | Dr-Irv | 5 |
Miserlou/Zappa | django | 2,092 | Certify with tags | ## Context
Tag is missing in API gateway
## Expected Behavior
The tag shall be added to API gateway
## Actual Behavior
The tag is not added
## Steps to Reproduce
1. zappa certify xxx
| open | 2020-04-30T02:35:43Z | 2020-04-30T02:35:43Z | https://github.com/Miserlou/Zappa/issues/2092 | [] | weasteam | 0 |
vimalloc/flask-jwt-extended | flask | 311 | get_jwt_identity return None for protected endpoint | This library is awesome but i had a question, why the get_jet_identity function returning None ?
```
@jwt.required
def post(self):
try:
return response.ok(jwt.getIdentity(), "")
except Exception as e:
return response.badRequest('', '{}'.format(e))
def required(fn):
@wraps(fn)
def wrapper(*args, **kwargs):
try:
decode()
except Exception as e:
return response.unAuthorized('', 'Unauthorized!')
return fn(*args, **kwargs)
return wrapper
def decode():
authorization = request.headers.get('Authorization')
string = authorization.split(' ')
decoded = decode_token(string[1])
return decoded
``` | closed | 2020-01-25T08:27:52Z | 2020-01-25T08:49:14Z | https://github.com/vimalloc/flask-jwt-extended/issues/311 | [] | sunthree74 | 0 |
Gozargah/Marzban | api | 1,594 | [Question] How to set IP-Limit per subscription | I want to set a limit on how many different IPs a subscription can be used from. Is this already possible? If not, please take it as a feature request. | closed | 2025-01-11T01:51:09Z | 2025-01-11T06:51:42Z | https://github.com/Gozargah/Marzban/issues/1594 | [
"Question"
] | socksprox | 1 |
joeyespo/grip | flask | 381 | GitHub API Rate Limit | With basic auth still hit an hourly rate limit. Does grip hit their API on every refresh just to make sure styles are up-to-date? What if I refresh 20 seconds later, I don't think the API changed much.
Is there a way to just use the last version of the styles it fetched? Maybe an `--offline` flag? Could run grip offline and use whatever styles it grabbed last - no more rate limit.
Is that possible?
Note, I did read this and that's why I wonder if it hits their API on every refresh.
> Grip strives to be as close to GitHub as possible. To accomplish this, grip uses [GitHub's Markdown API](http://developer.github.com/v3/markdown) so that changes to their rendering engine are reflected immediately without requiring you to upgrade grip. However, because of this you may hit the API's hourly rate limit. If this happens, grip offers a way to access the API using your credentials to unlock a much higher rate limit.
<img width="699" alt="Screen Shot 2024-02-24 at 9 40 37 AM" src="https://github.com/joeyespo/grip/assets/15990810/11243436-ef1f-45ab-bda5-f59a2a6f17fa"> | open | 2024-02-24T14:51:44Z | 2024-10-30T16:32:27Z | https://github.com/joeyespo/grip/issues/381 | [] | jstnbr | 2 |
huggingface/transformers | nlp | 36,926 | `Mllama` not supported by `AutoModelForCausalLM` after updating `transformers` to `4.50.0` | ### System Info
- `transformers` version: 4.50.0
- Platform: Linux-5.15.0-100-generic-x86_64-with-glibc2.35
- Python version: 3.12.2
- Huggingface_hub version: 0.29.3
- Safetensors version: 0.5.3
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A40
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Install latest version of `transformers` (4.50.0)
2. Run the following:
```
from transformers import AutoModelForCausalLM
model_name = "meta-llama/Llama-3.2-11B-Vision"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.float16)
```
**Got the error:**
```
ValueError: Unrecognized configuration class <class 'transformers.models.mllama.configuration_mllama.MllamaTextConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of AriaTextConfig, BambaConfig, BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, Cohere2Config, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, DiffLlamaConfig, ElectraConfig, Emu3Config, ErnieConfig, FalconConfig, FalconMambaConfig, FuyuConfig, GemmaConfig, Gemma2Config, Gemma3Config, Gemma3TextConfig, GitConfig, GlmConfig, GotOcr2Config, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, GraniteConfig, GraniteMoeConfig, GraniteMoeSharedConfig, HeliumConfig, JambaConfig, JetMoeConfig, LlamaConfig, MambaConfig, Mamba2Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MllamaConfig, MoshiConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NemotronConfig, OlmoConfig, Olmo2Config, OlmoeConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, Phi3Config, PhimoeConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, ZambaConfig, Zamba2Config.
```
However, the latest documentation states that the `mllama` model is supported:
https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM.from_pretrained
In an environment with `transformers==4.49.0`, I tested this and the model loads without issue.
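For context on why this check fires at all: each Auto class dispatches on the checkpoint's config class through a per-task registry and raises `ValueError` on a miss — here the checkpoint resolves to `MllamaTextConfig`, which is absent from the causal-LM registry even though `MllamaConfig` itself appears in the error's list. A toy sketch of that dispatch pattern (all names below are simplified stand-ins, not the real transformers internals):

```python
# Toy illustration of the Auto-class dispatch pattern that produces the
# ValueError above. All names are simplified stand-ins, NOT the real
# transformers registry: each task-specific Auto class maps config classes
# to model classes and rejects anything it does not know.
class LlamaConfig:
    pass

class MllamaTextConfig:  # what the failing checkpoint resolves to here
    pass

_CAUSAL_LM_REGISTRY = {LlamaConfig: "LlamaForCausalLM"}

def auto_model_for_causal_lm(config):
    model_cls = _CAUSAL_LM_REGISTRY.get(type(config))
    if model_cls is None:
        raise ValueError(
            f"Unrecognized configuration class {type(config).__name__} "
            "for this kind of AutoModel: AutoModelForCausalLM."
        )
    return model_cls

print(auto_model_for_causal_lm(LlamaConfig()))  # LlamaForCausalLM
```

As a possible workaround (not verified against this exact version), multimodal Mllama checkpoints are usually loaded through their dedicated class, e.g. `MllamaForConditionalGeneration.from_pretrained(...)`, rather than through `AutoModelForCausalLM`.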
### Expected behavior
The multimodal mllama model (Llama-3.2-11B-Vision) is loaded successfully | open | 2025-03-24T12:07:09Z | 2025-03-24T12:28:00Z | https://github.com/huggingface/transformers/issues/36926 | ["bug"] | WuHaohui1231 | 2 |
recommenders-team/recommenders | deep-learning | 2,091 | [ASK] Perfect MAP@k is less than 1 | ### Description
I have a recommender that, for some users in some folds, has less than $k$ items in the ground truth. Therefore, the $precision@k$ is less than 1, even with a recommender that recommends the ground truth. For that reason, I calculate the results of a perfect recommender for multiple metrics.
By definition, the _perfect_ $ndcg@k$ is 1. I thought this was the case for $MAP@k$ too, but it is not: the average $MAP@5$ across my folds is 0.99, and one fold even has a $MAP@5$ of 0.7! I've also noticed that perfect $MAP@k$ is exactly equal to $recall@k$, but I haven't found any resources that explain this coincidence.
Keep in mind that I'm talking about implicit feedback, and the ideal recommender just assigns 1 in the prediction field.
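The recall coincidence is easy to reproduce numerically. Assuming the implementation normalizes AP@k by the total number of relevant items rather than by min(k, #relevant) — an assumption about the library, not something verified here — a perfect ranking scores min(k, m)/m for a user with m relevant items, which is exactly recall@k:

```python
def ap_at_k(ranked, relevant, k):
    """Average precision at k, normalized by the TOTAL number of relevant items."""
    relevant = set(relevant)
    hits, score = 0, 0.0
    for i, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i  # precision@i, accumulated at each hit
    return score / len(relevant)

def recall_at_k(ranked, relevant, k):
    relevant = set(relevant)
    return len(relevant & set(ranked[:k])) / len(relevant)

# A perfect recommender for a user with m = 8 relevant items, evaluated at k = 5:
relevant = list(range(8))
perfect_ranking = relevant[:]  # ground truth occupies the top positions
print(ap_at_k(perfect_ranking, relevant, 5))      # 0.625, i.e. 5/8
print(recall_at_k(perfect_ranking, relevant, 5))  # 0.625 as well
```

Under this definition, a fold-level perfect MAP@5 of 0.7 just means users in that fold average roughly 5/0.7 ≈ 7 relevant items; definitions that divide by min(k, #relevant) instead would return 1 for a perfect recommender.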
### Other Comments
I'll try and provide an example that causes this "issue".
| closed | 2024-04-26T13:28:54Z | 2024-04-29T21:36:52Z | https://github.com/recommenders-team/recommenders/issues/2091 | ["documentation"] | daviddavo | 1 |
graphistry/pygraphistry | pandas | 7 | Make an anaconda package | closed | 2015-06-25T21:30:28Z | 2016-05-08T02:14:10Z | https://github.com/graphistry/pygraphistry/issues/7 | ["enhancement"] | thibaudh | 2 | |
nl8590687/ASRT_SpeechRecognition | tensorflow | 277 | Model too small; speech recognition inaccurate | The model is only 6 MB, yet when I worked on object detection the models were routinely several hundred MB — why is that?
Recognition accuracy is poor: I took some utterances straight from thchs30 and they were misrecognized, and on audio I clipped from short videos (with background noise) the results were downright terrible.
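For reference on the size question: an uncompressed checkpoint weighs roughly parameter count × bytes per parameter, so a ~6 MB float32 file implies only about 1.5 million parameters — the gap to hundred-MB detection models is mostly parameter count. The numbers below are illustrative only:

```python
def checkpoint_size_mb(num_params, bytes_per_param=4):
    """Approximate size of an uncompressed checkpoint (float32 by default)."""
    return num_params * bytes_per_param / 1024 ** 2

print(round(checkpoint_size_mb(1_500_000), 2))   # 5.72  -> roughly a 6 MB model
print(round(checkpoint_size_mb(60_000_000), 1))  # 228.9 -> a "hundreds of MB" detector
```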
Are there any plans for further improvements? | open | 2022-04-02T23:43:32Z | 2024-12-25T01:26:09Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/277 | [] | wangzhanwei666 | 2 |
httpie/cli | python | 1,538 | Add support for OAuth2 authentication | ## Checklist
- [x] I've searched for similar feature requests.
---
## Enhancement request
I would like httpie to support OAuth 2.0 authentication, ideally in a way similar to the [infamous P**man](https://learning.postman.com/docs/sending-requests/authorization/oauth-20/#using-client-credentials). For a client-credentials request, for example, users of httpie would supply the information needed to obtain a token (eventually via a config file).
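For the client-credentials grant in particular, the token request is only a form-encoded POST to the token endpoint (RFC 6749 §4.4), so the surface httpie would need is small. A minimal sketch of the request body — the client id, secret, and scope below are placeholders:

```python
from urllib.parse import urlencode

def client_credentials_body(client_id, client_secret, scope=None):
    """Form-encoded body for an RFC 6749 section 4.4 token request."""
    params = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope is not None:
        params["scope"] = scope
    return urlencode(params)

body = client_credentials_body("my-client", "my-secret", scope="read")
print(body)  # grant_type=client_credentials&client_id=my-client&client_secret=my-secret&scope=read
```

Until native support exists, the flow can be driven by hand: POST such a body to the token endpoint, pull `access_token` out of the JSON response, and pass it as an `Authorization: Bearer <token>` header on subsequent httpie calls.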
I am aware of the [httpie-oauth1 plugin](https://github.com/qcif/httpie-oauth1), but it hasn't been updated in a while and supports only OAuth 1.0.
I am not sure whether this feature request belongs in the [httpie/desktop](https://github.com/httpie/desktop) repo instead.
| open | 2023-10-31T05:01:37Z | 2025-01-22T13:23:39Z | https://github.com/httpie/cli/issues/1538 | ["enhancement", "new"] | rnd-debug | 1 |