repo_name (string) | topic (30 classes) | issue_number (int64) | title (string) | body (string) | state (2 classes) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64)
|---|---|---|---|---|---|---|---|---|---|---|---|
Significant-Gravitas/AutoGPT | python | 9,635 | Broken beads | <img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/a1b130c4-9908-4b8d-8d3c-cde53b62e160/40b10681-c5fb-42b1-9802-fd9b85f08960?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi9hMWIxMzBjNC05OTA4LTRiOGQtOGQzYy1jZGU1M2I2MmUxNjAvNDBiMTA2ODEtYzVmYi00MmIxLTk4MDItZmQ5Yjg1ZjA4OTYwIiwiaWF0IjoxNzQxOTQ2NDY4LCJleHAiOjMzMzEyNTA2NDY4fQ.me7EpGKYSNc8wCDmAR2xeJUayBPkT9akqIfWOht5n30 " alt="image.png" width="1646" data-linear-height="1552" /> | open | 2025-03-14T10:01:09Z | 2025-03-14T10:01:09Z | https://github.com/Significant-Gravitas/AutoGPT/issues/9635 | [
"bug",
"platform/frontend"
] | linear[bot] | 0 |
marimo-team/marimo | data-visualization | 3,823 | Support disabling mo.ui.chat input | ### Description
I was hoping to keep my chatbot disabled until the user entered an API key in another input, but I didn't see any parameter to disable the input.
### Suggested solution
Support a `disabled` parameter on the chat input.
When set to true, it can create a non-interactive overlay over the input.
HTML now supports the `inert` attribute to disable interactions inside an element. Inert elements can be styled to achieve the disabled look.
See: https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/inert
### Alternative
Currently I'm using `mo.stop` to not show the cell at all.
### Additional context
Here's the notebook where I want to use this:
https://aszenz.github.io/kashmiri-chatbot/
By not showing the input, I have a feeling people won't understand what the chatbot looks like. | open | 2025-02-17T21:08:58Z | 2025-02-18T01:45:46Z | https://github.com/marimo-team/marimo/issues/3823 | [
"enhancement",
"help wanted",
"good first issue (typescript)"
] | aszenz | 0 |
ivy-llc/ivy | tensorflow | 28,540 | Fix Frontend Failing Test: numpy - math.tensorflow.math.reduce_prod | To-do List: https://github.com/unifyai/ivy/issues/27497 | closed | 2024-03-11T10:57:14Z | 2024-04-02T09:31:09Z | https://github.com/ivy-llc/ivy/issues/28540 | [
"Sub Task"
] | ZJay07 | 0 |
ymcui/Chinese-BERT-wwm | nlp | 66 | About further pre-training on top of your released pre-trained models | Hello, sorry to bother you.
Fine-tuning with your RoBERTa model works very well.
However, when I load your roberta-large model and continue pre-training on my own corpus, the MLM accuracy starts out at 0. I also tried cloze-style tests, and the model indeed cannot make accurate predictions. So I suspect there may be a problem with the top (word-prediction) layer of your pre-trained roberta-large model?
I tried the models released by Google, including the Chinese and English base and large models, and they do not have this problem.
Looking forward to your reply~ | closed | 2019-10-28T09:43:13Z | 2020-06-05T10:07:46Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/66 | [] | zhezhaoa | 6 |
allenai/allennlp | nlp | 4,806 | Writing distributed metrics is confusing | Big approach: We could change the metrics API so that there are two methods to implement, one that produces an intermediate representation on each worker, and another that combines them. That way it's clear to the implementer what to do.
Small approach: Every metric gets a flag called `safe_for_distributed`, which is `False` by default, and needs to be set to `True` if the implementer has taken care of the distributed side of things. | closed | 2020-11-20T17:24:54Z | 2021-02-12T00:47:13Z | https://github.com/allenai/allennlp/issues/4806 | [] | dirkgr | 2 |
robinhood/faust | asyncio | 507 | GlobalTable hangs application boot | ## Checklist
- [x] I have included information about relevant versions
- [x] I have verified that the issue persists when using the `master` branch of Faust.
## Steps to reproduce
Create simple app with GlobalTable, like so:
```python
import faust

app = faust.App('test')
table = app.GlobalTable('test')


def main():
    app.main()
```
and run it using Faust from master branch.
## Expected behavior
App should start.
## Actual behavior
App bootstrap hangs on:
```
[2020-01-10 14:56:32,824] [3385] [INFO] Elected group leader -- performing partition assignments using faust
```
During our investigation we found that the problem is in the `_global_table_standby_assignments` method in the `assignor/partition_assignor.py` file: the `num_partitions` variable is `None` during app bootstrap.
Single GlobalTable works fine on Faust 1.9.0.
## Full traceback
```pytb
┌ƒaµS† v1.9.0─┬──────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ id │ test │
│ transport │ [URL('kafka://192.168.1.1:9092'), URL('kafka://192.168.1.2:9093'), URL('kafka://192.168.1.3:9094')] │
│ store │ memory: │
│ web │ http://localhost:6066/ │
│ log │ -stderr- (info) │
│ pid │ 3385 │
│ hostname │ makz0rd │
│ platform │ CPython 3.6.5 (Darwin x86_64) │
│ drivers │ │
│ transport │ aiokafka=1.1.3 │
│ web │ aiohttp=3.6.2 │
│ datadir │ (cut)/test-data │
│ appdir │ (cut)/test-data/v1 │
└─────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────┘
[2020-01-10 14:56:29,747] [3385] [INFO] [^Worker]: Starting...
[2020-01-10 14:56:29,759] [3385] [INFO] [^-App]: Starting...
[2020-01-10 14:56:29,759] [3385] [INFO] [^--Monitor]: Starting...
[2020-01-10 14:56:29,759] [3385] [INFO] [^--Producer]: Starting...
[2020-01-10 14:56:29,759] [3385] [INFO] [^---ProducerBuffer]: Starting...
[2020-01-10 14:56:29,787] [3385] [INFO] [^--CacheBackend]: Starting...
[2020-01-10 14:56:29,787] [3385] [INFO] [^--Web]: Starting...
[2020-01-10 14:56:29,788] [3385] [INFO] [^---Server]: Starting...
[2020-01-10 14:56:29,789] [3385] [INFO] [^--Consumer]: Starting...
[2020-01-10 14:56:29,790] [3385] [INFO] [^---AIOKafkaConsumerThread]: Starting...
[2020-01-10 14:56:29,811] [3385] [INFO] [^--LeaderAssignor]: Starting...
[2020-01-10 14:56:29,812] [3385] [INFO] [^--Producer]: Creating topic 'test-__assignor-__leader'
[2020-01-10 14:56:29,825] [3385] [INFO] [^--ReplyConsumer]: Starting...
[2020-01-10 14:56:29,826] [3385] [INFO] [^--AgentManager]: Starting...
[2020-01-10 14:56:29,826] [3385] [INFO] [^--Conductor]: Starting...
[2020-01-10 14:56:29,826] [3385] [INFO] [^--TableManager]: Starting...
[2020-01-10 14:56:29,826] [3385] [INFO] [^--Conductor]: Waiting for agents to start...
[2020-01-10 14:56:29,827] [3385] [INFO] [^--Conductor]: Waiting for tables to be registered...
[2020-01-10 14:56:30,831] [3385] [INFO] [^--GlobalTable: test]: Starting...
[2020-01-10 14:56:30,835] [3385] [INFO] [^---Store: test]: Starting...
[2020-01-10 14:56:30,836] [3385] [INFO] [^--Producer]: Creating topic 'test-test-changelog'
[2020-01-10 14:56:30,843] [3385] [INFO] [^---Recovery]: Starting...
[2020-01-10 14:56:30,844] [3385] [INFO] [^--Producer]: Creating topic 'test-test-changelog'
[2020-01-10 14:56:30,860] [3385] [INFO] [^--Producer]: Creating topic 'test-__assignor-__leader'
[2020-01-10 14:56:31,815] [3385] [INFO] Updating subscribed topics to: frozenset({'test-test-changelog', 'test-__assignor-__leader'})
[2020-01-10 14:56:31,818] [3385] [INFO] Subscribed to topic(s): {'test-test-changelog', 'test-__assignor-__leader'}
[2020-01-10 14:56:31,849] [3385] [INFO] Discovered coordinator 3 for group test
[2020-01-10 14:56:31,851] [3385] [INFO] Revoking previously assigned partitions set() for group test
[2020-01-10 14:56:32,815] [3385] [INFO] (Re-)joining group test
[2020-01-10 14:56:32,823] [3385] [INFO] Joined group 'test' (generation 3) with member_id faust-1.9.0-8f6c71c3-5caf-4428-b22c-504fe1915783
[2020-01-10 14:56:32,824] [3385] [INFO] Elected group leader -- performing partition assignments using faust
```
# Versions
* Python version: 3.6.5
* Faust version: master (rev 37fb187120c64bc94b0ed88f5ba38e6b9cd32e8b)
* Operating system: macOS
* Kafka version: 2.4.0
* RocksDB version (if applicable)
| closed | 2020-01-10T14:25:13Z | 2020-02-05T15:54:36Z | https://github.com/robinhood/faust/issues/507 | [] | luinnar | 3 |
Lightning-AI/pytorch-lightning | pytorch | 20,560 | Allow nested `batch_arg_name` in `BatchSizeFinder`/`Tuner.scale_batch_size()` | ### Description & Motivation
Hi,
Would it be possible to allow dot-notation in, for example, `tuner.scale_batch_size(model, batch_arg_name="dataloaders.train.batch_size")`?
Perhaps some other features would benefit from this. It should be simple to achieve that through `lightning_hasattr` and `lightning_getattr`.
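For illustration, the traversal itself could look like this (a simplified stand-in; the real `lightning_getattr`/`lightning_setattr` helpers may also search `hparams` and the datamodule):

```python
from functools import reduce


def nested_getattr(obj, path: str):
    # Walk "a.b.c" one attribute at a time.
    return reduce(getattr, path.split("."), obj)


def nested_setattr(obj, path: str, value):
    # Resolve everything up to the last dot, then set the final attribute.
    parent_path, _, name = path.rpartition(".")
    parent = nested_getattr(obj, parent_path) if parent_path else obj
    setattr(parent, name, value)
```

With that, `tuner.scale_batch_size(model, batch_arg_name="dataloaders.train.batch_size")` could read and write the nested attribute exactly like a flat one.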
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
cc @lantiga @borda | open | 2025-01-23T16:26:50Z | 2025-01-24T15:25:02Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20560 | [
"feature",
"needs triage"
] | ibro45 | 1 |
marimo-team/marimo | data-science | 3,312 | `Signature.return_annotation` somehow turns into a string when the function is run in a marimo cell | ### Describe the bug
Running this code snippet
```python
from inspect import signature

def wololo(i: int, ii: int) -> tuple[str, int]:
    a = i + ii
    b = ii - i
    c = f"{a} {b}"
    return c, b

print(signature(wololo).return_annotation, type(signature(wololo).return_annotation))
```
in the Python console it prints `tuple[str, int] <class 'types.GenericAlias'>` as expected, but in a marimo cell it prints `tuple[str, int] <class 'str'>`, so somehow the return annotation is a string instead of an actual type object.
This also occurs for functions with primitive return type annotations like `def wololo(i: int, ii: int) -> int`.
It could be an `inspect.signature` bug too, but I haven't seen any clue pointing that way yet.
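A possible workaround, assuming the annotation has merely been stringified (the same thing that happens under `from __future__ import annotations`): `typing.get_type_hints` evaluates string annotations back into real type objects:

```python
import typing
from inspect import signature


def wololo(i: int, ii: int) -> "tuple[str, int]":
    return f"{i + ii} {ii - i}", ii - i


# signature() reports the raw (string) annotation as written...
raw = signature(wololo).return_annotation
# ...while get_type_hints() evaluates it into the actual type object.
resolved = typing.get_type_hints(wololo)["return"]
```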
### Environment
<details>
```
{
"marimo": "0.9.34",
"OS": "Linux",
"OS Version": "6.8.12-4-pve",
"Processor": "",
"Python Version": "3.11.2",
"Binaries": {
"Browser": "--",
"Node": "--"
},
"Dependencies": {
"click": "8.1.7",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.17.0",
"packaging": "24.2",
"psutil": "6.1.0",
"pygments": "2.18.0",
"pymdown-extensions": "10.12",
"pyyaml": "6.0.2",
"ruff": "0.8.2",
"starlette": "0.41.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.32.1",
"websockets": "14.1"
},
"Optional Dependencies": {
"pandas": "2.2.3"
}
}
```
</details>
### Code to reproduce
_No response_ | closed | 2024-12-30T11:34:38Z | 2025-03-05T06:32:44Z | https://github.com/marimo-team/marimo/issues/3312 | [
"bug"
] | Retorikal | 3 |
unionai-oss/pandera | pandas | 1,946 | drop_invalid_rows is broken | ```python
import pandas as pd
from pandera import DataFrameModel


class TestModel(DataFrameModel):
    a: int

    class Config:
        drop_invalid_rows = True


df_test = pd.DataFrame({'a': [1, 1.1]})
TestModel(df_test, lazy=True)
```
```
File [~/miniforge3/envs/ds-3.12/lib/python3.12/site-packages/pandera/backends/pandas/base.py:194], in PandasSchemaBackend.drop_invalid_rows(self, check_obj, error_handler)
192 errors = error_handler.schema_errors
193 for err in errors:
--> 194 index_values = err.failure_cases["index"]
195 if isinstance(check_obj.index, pd.MultiIndex):
196 # MultiIndex values are saved on the error as strings so need to be cast back
197 # to their original types
198 index_tuples = err.failure_cases["index"].apply(eval)
TypeError: string indices must be integers, not 'str'
```
Same with `Column.validate` and `Schema.validate`.
pandera==0.23.1
pandas==2.2.3 | open | 2025-03-20T18:17:09Z | 2025-03-21T09:38:08Z | https://github.com/unionai-oss/pandera/issues/1946 | [
"bug"
] | n-splv | 1 |
ipython/ipython | jupyter | 14,131 | Shift+Tab not working in Jupyter (ValueError); seems to be an inspection error within oinspect.py | *Shift+Tab* (the code tooltip in Jupyter) generally works fine, but not with `pandas.DataFrame` methods.
```
File "[...]/miniconda3/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 412, in dispatch_shell
await result
File "[...]/miniconda3/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 809, in inspect_request
reply_content = self.do_inspect(
^^^^^^^^^^^^^^^^
File "[...]/miniconda3/lib/python3.11/site-packages/ipykernel/ipkernel.py", line 555, in do_inspect
bundle = self.shell.object_inspect_mime(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/miniconda3/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 1838, in object_inspect_mime
if self.sphinxify_docstring
^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/miniconda3/lib/python3.11/site-packages/IPython/core/oinspect.py", line 738, in _get_info
"""Retrieve an info dict and format it.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/miniconda3/lib/python3.11/site-packages/IPython/core/oinspect.py", line 838, in info
An object info dict with known fields from `info_fields`. Keys are
File "[...]/miniconda3/lib/python3.11/site-packages/pandas-2.0.3-py3.11-linux-x86_64.egg/pandas/core/generic.py", line 1466, in __nonzero__
raise ValueError(
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
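The underlying pandas behavior is easy to reproduce in isolation: pandas deliberately refuses to coerce a DataFrame to a bool, so any bare `if obj:` truthiness check along the inspection path raises once `obj` happens to be a DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
try:
    if df:  # the same ambiguous truth-value check the inspector trips over
        pass
except ValueError as exc:
    print(exc)  # "The truth value of a DataFrame is ambiguous. ..."
```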
| closed | 2023-08-09T16:33:42Z | 2023-08-09T17:14:24Z | https://github.com/ipython/ipython/issues/14131 | [] | trevisanj | 5 |
huggingface/transformers | nlp | 36,357 | Allow setting a seed for DataCollatorForLanguageModeling | ### Feature request
The `DataCollatorForLanguageModeling` class allows training for an MLM (masked language model) task, which randomly masks or replaces certain tokens. Models such as BERT and RoBERTa are trained in such a manner. It would be great if the user can set a seed, ensuring repeatability in generating masked batches.
### Motivation
This would ensure generation of repeatable batches of data, which is critical for model reproducibility. Right now, there is a form of repeatability with `transformers.set_seed()`, but one can make use of generators([PyTorch](https://pytorch.org/docs/stable/generated/torch.Generator.html), [Tensorflow](https://www.tensorflow.org/api_docs/python/tf/random/Generator), [NumPy](https://numpy.org/doc/2.1/reference/random/generator.html)) to set the data collator seed without globally setting the seed for each framework. The major reason this would help is that the MLM masking probabilities would not be influenced by code outside of it, which is good practice. This would mean that, given the same dataset and seed, the masking would happen consistently irrespective of the rest of your training script. See [this blog post for more details](https://albertcthomas.github.io/good-practices-random-number-generators/#random-number-generation-with-numpy).
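To illustrate the idea, here is a simplified masking sketch using a NumPy `Generator` (not the actual `DataCollatorForLanguageModeling` implementation; the `seed` argument shown here is the proposed addition):

```python
import numpy as np


def mask_tokens(input_ids, mask_token_id=103, mlm_probability=0.15, seed=None):
    # A local Generator makes masking reproducible without touching
    # the global NumPy/PyTorch/TensorFlow random state.
    rng = np.random.default_rng(seed)
    masked = rng.random(input_ids.shape) < mlm_probability
    labels = np.where(masked, input_ids, -100)            # loss only on masked positions
    inputs = np.where(masked, mask_token_id, input_ids)   # replace chosen tokens with [MASK] id
    return inputs, labels
```

Two collators built with the same seed then produce identical masked batches regardless of any other random calls in the training script.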
### Your contribution
I can submit a PR for this. I have experience with TF, PyTorch and NumPy, would love to contribute. I have taken a look at the code, and can add a `seed` argument which enables usage of generators for repeatability. If not specified, however, the code would fall back to its previous behavior, including using `transformers.set_seed()`. | closed | 2025-02-23T16:07:50Z | 2025-03-20T18:27:45Z | https://github.com/huggingface/transformers/issues/36357 | [
"Feature request"
] | capemox | 2 |
statsmodels/statsmodels | data-science | 9,344 | dataframe not printing after UnobservedComponents | #### Describe the bug
Hey. I was trying out the tutorial you have on [https://www.statsmodels.org/dev/examples/notebooks/generated/statespace_tvpvar_mcmc_cfa.html]
However, when I try to plot inflation together with simulated trends, inflation does not appear in the plots. I tried printing the inflation variable but it gives an error "cannot use a string pattern on a bytes-like object". However, when I print it just after the data import it works. Is this supposed to happen? Is dta.infl being modified by sm.tsa.UnobservedComponents? See code below
#### Code Sample
```python
from importlib import reload
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from scipy.stats import invwishart, invgamma
# Get the macro dataset
dta = sm.datasets.macrodata.load_pandas().data
dta.index = pd.date_range('1959Q1', '2009Q3', freq='QS')
## when I print the data here, it works
print(dta.infl)
# Construct a local level model for inflation
mod = sm.tsa.UnobservedComponents(dta.infl, 'llevel')
## when I print it here, it says TypeError: cannot use a string pattern on a bytes-like object
print(dta.infl)
```
#### Expected Output
I expected dta.infl to be printed in the terminal
#### Output of ``import statsmodels.api as sm; sm.show_versions()``
INSTALLED VERSIONS
Python: 3.12.5.final.0
statsmodels
Installed: 0.14.2
Required Dependencies
cython: Not installed
numpy: 2.1.0
scipy: 1.14.1
pandas: 2.2.2
dateutil: 2.9.0.post0
patsy: 0.5.6
Optional Dependencies
matplotlib: 3.9.2
backend: tkagg
cvxopt: Not installed
joblib: Not installed
| closed | 2024-09-03T12:42:18Z | 2024-09-05T11:01:40Z | https://github.com/statsmodels/statsmodels/issues/9344 | [] | joaogoliveira1 | 3 |
tensorpack/tensorpack | tensorflow | 1,188 | Loss diverges when training ImageNet with ResNet-18 | Hi, I am training ImageNet with ResNet-18 but the loss diverges. Could you give some advice?
### 1. What you did:
First I set mode='cpu' in **SyncMultiGPUTrainerReplicated** in **imagenet-resnet.py**, and run the following command:
```
python examples/ResNet/imagenet-resnet.py \
--data '/common-data/jlong.yuan/imagenet/raw/imagenet-data' \
-d 18 \
--mode resnet \
--batch 256
```
### 2. What you observed:
The loss diverges. Here is the log:
[0514 09:35:21 @logger.py:90] Argv: examples/ResNet/imagenet-resnet.py --data /common-data/jlong.yuan/imagenet/raw/imagenet-data -d 18 --mode resnet --batch 256
[0514 09:35:21 @imagenet-resnet.py:55] Running on 4 towers. Batch size per tower: 64
[0514 09:35:22 @parallel.py:311] [PrefetchDataZMQ] Will fork a dataflow more than one times. This assumes the datapoints are i.i.d.
[0514 09:35:22 @ilsvrc.py:128] [ILSVRC12] Assuming directory /common-data/jlong.yuan/imagenet/raw/imagenet-data/val has 'train' structure.
[0514 09:35:23 @prof.py:47] WRN [GPUUtilizationTracker] Both devices and CUDA_VISIBLE_DEVICES are None! Will monitor all 4 visible GPUs!
[0514 09:35:23 @training.py:50] [DataParallel] Training a model of 4 towers.
[0514 09:35:23 @interface.py:43] Automatically applying StagingInput on the DataFlow.
[0514 09:35:23 @input_source.py:223] Setting up the queue 'QueueInput/input_queue' for CPU prefetching ...
[0514 09:35:23 @training.py:110] Building graph for training tower 0 on device /gpu:0 ...
[0514 09:35:23 @registry.py:135] conv0 input: [None, 3, 224, 224]
[0514 09:35:23 @registry.py:143] conv0 output: [None, 64, 112, 112]
[0514 09:35:23 @registry.py:135] pool0 input: [None, 64, 112, 112]
[0514 09:35:23 @registry.py:143] pool0 output: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:135] group0/block0/conv1 input: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:143] group0/block0/conv1 output: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:135] group0/block0/conv2 input: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:143] group0/block0/conv2 output: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:135] group0/block1/conv1 input: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:143] group0/block1/conv1 output: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:135] group0/block1/conv2 input: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:143] group0/block1/conv2 output: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:135] group1/block0/conv1 input: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:143] group1/block0/conv1 output: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:135] group1/block0/conv2 input: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:143] group1/block0/conv2 output: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:135] group1/block0/convshortcut input: [None, 64, 56, 56]
[0514 09:35:23 @registry.py:143] group1/block0/convshortcut output: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:135] group1/block1/conv1 input: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:143] group1/block1/conv1 output: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:135] group1/block1/conv2 input: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:143] group1/block1/conv2 output: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:135] group2/block0/conv1 input: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:143] group2/block0/conv1 output: [None, 256, 14, 14]
[0514 09:35:23 @registry.py:135] group2/block0/conv2 input: [None, 256, 14, 14]
[0514 09:35:23 @registry.py:143] group2/block0/conv2 output: [None, 256, 14, 14]
[0514 09:35:23 @registry.py:135] group2/block0/convshortcut input: [None, 128, 28, 28]
[0514 09:35:23 @registry.py:143] group2/block0/convshortcut output: [None, 256, 14, 14]
[0514 09:35:23 @registry.py:135] group2/block1/conv1 input: [None, 256, 14, 14]
[0514 09:35:23 @registry.py:143] group2/block1/conv1 output: [None, 256, 14, 14]
[0514 09:35:23 @registry.py:135] group2/block1/conv2 input: [None, 256, 14, 14]
[0514 09:35:23 @registry.py:143] group2/block1/conv2 output: [None, 256, 14, 14]
[0514 09:35:23 @registry.py:135] group3/block0/conv1 input: [None, 256, 14, 14]
[0514 09:35:23 @registry.py:143] group3/block0/conv1 output: [None, 512, 7, 7]
[0514 09:35:23 @registry.py:135] group3/block0/conv2 input: [None, 512, 7, 7]
[0514 09:35:23 @registry.py:143] group3/block0/conv2 output: [None, 512, 7, 7]
[0514 09:35:23 @registry.py:135] group3/block0/convshortcut input: [None, 256, 14, 14]
[0514 09:35:24 @registry.py:143] group3/block0/convshortcut output: [None, 512, 7, 7]
[0514 09:35:24 @registry.py:135] group3/block1/conv1 input: [None, 512, 7, 7]
[0514 09:35:24 @registry.py:143] group3/block1/conv1 output: [None, 512, 7, 7]
[0514 09:35:24 @registry.py:135] group3/block1/conv2 input: [None, 512, 7, 7]
[0514 09:35:24 @registry.py:143] group3/block1/conv2 output: [None, 512, 7, 7]
[0514 09:35:24 @registry.py:135] gap input: [None, 512, 7, 7]
[0514 09:35:24 @registry.py:143] gap output: [None, 512]
[0514 09:35:24 @registry.py:135] linear input: [None, 512]
[0514 09:35:24 @registry.py:143] linear output: [None, 1000]
[0514 09:35:24 @regularize.py:97] regularize_cost() found 21 variables to regularize.
[0514 09:35:24 @regularize.py:21] The following tensors will be regularized: conv0/W:0, group0/block0/conv1/W:0, group0/block0/conv2/W:0, group0/block1/conv1/W:0, group0/block1/conv2/W:0, group1/block0/conv1/W:0, group1/block0/conv2/W:0, group1/block0/convshortcut/W:0, group1/block1/conv1/W:0, group1/block1/conv2/W:0, group2/block0/conv1/W:0, group2/block0/conv2/W:0, group2/block0/convshortcut/W:0, group2/block1/conv1/W:0, group2/block1/conv2/W:0, group3/block0/conv1/W:0, group3/block0/conv2/W:0, group3/block0/convshortcut/W:0, group3/block1/conv1/W:0, group3/block1/conv2/W:0, linear/W:0
[0514 09:35:24 @training.py:110] Building graph for training tower 1 on device /gpu:1 ...
[0514 09:35:25 @regularize.py:97] regularize_cost() found 21 variables to regularize.
[0514 09:35:25 @training.py:110] Building graph for training tower 2 on device /gpu:2 ...
[0514 09:35:26 @regularize.py:97] regularize_cost() found 21 variables to regularize.
[0514 09:35:26 @training.py:110] Building graph for training tower 3 on device /gpu:3 ...
[0514 09:35:27 @regularize.py:97] regularize_cost() found 21 variables to regularize.
[0514 09:35:29 @training.py:348] 'sync_variables_from_main_tower' includes 492 operations.
[0514 09:35:29 @model_utils.py:67] List of Trainable Variables:
name shape #elements
------------------------------------- ---------------- -----------
conv0/W:0 [7, 7, 3, 64] 9408
conv0/bn/gamma:0 [64] 64
conv0/bn/beta:0 [64] 64
group0/block0/conv1/W:0 [3, 3, 64, 64] 36864
group0/block0/conv1/bn/gamma:0 [64] 64
group0/block0/conv1/bn/beta:0 [64] 64
group0/block0/conv2/W:0 [3, 3, 64, 64] 36864
group0/block0/conv2/bn/gamma:0 [64] 64
group0/block0/conv2/bn/beta:0 [64] 64
group0/block1/conv1/W:0 [3, 3, 64, 64] 36864
group0/block1/conv1/bn/gamma:0 [64] 64
group0/block1/conv1/bn/beta:0 [64] 64
group0/block1/conv2/W:0 [3, 3, 64, 64] 36864
group0/block1/conv2/bn/gamma:0 [64] 64
group0/block1/conv2/bn/beta:0 [64] 64
group1/block0/conv1/W:0 [3, 3, 64, 128] 73728
group1/block0/conv1/bn/gamma:0 [128] 128
group1/block0/conv1/bn/beta:0 [128] 128
group1/block0/conv2/W:0 [3, 3, 128, 128] 147456
group1/block0/conv2/bn/gamma:0 [128] 128
group1/block0/conv2/bn/beta:0 [128] 128
group1/block0/convshortcut/W:0 [1, 1, 64, 128] 8192
group1/block0/convshortcut/bn/gamma:0 [128] 128
group1/block0/convshortcut/bn/beta:0 [128] 128
group1/block1/conv1/W:0 [3, 3, 128, 128] 147456
group1/block1/conv1/bn/gamma:0 [128] 128
group1/block1/conv1/bn/beta:0 [128] 128
group1/block1/conv2/W:0 [3, 3, 128, 128] 147456
group1/block1/conv2/bn/gamma:0 [128] 128
group1/block1/conv2/bn/beta:0 [128] 128
group2/block0/conv1/W:0 [3, 3, 128, 256] 294912
group2/block0/conv1/bn/gamma:0 [256] 256
group2/block0/conv1/bn/beta:0 [256] 256
group2/block0/conv2/W:0 [3, 3, 256, 256] 589824
group2/block0/conv2/bn/gamma:0 [256] 256
group2/block0/conv2/bn/beta:0 [256] 256
group2/block0/convshortcut/W:0 [1, 1, 128, 256] 32768
group2/block0/convshortcut/bn/gamma:0 [256] 256
group2/block0/convshortcut/bn/beta:0 [256] 256
group2/block1/conv1/W:0 [3, 3, 256, 256] 589824
group2/block1/conv1/bn/gamma:0 [256] 256
group2/block1/conv1/bn/beta:0 [256] 256
group2/block1/conv2/W:0 [3, 3, 256, 256] 589824
group2/block1/conv2/bn/gamma:0 [256] 256
group2/block1/conv2/bn/beta:0 [256] 256
group3/block0/conv1/W:0 [3, 3, 256, 512] 1179648
group3/block0/conv1/bn/gamma:0 [512] 512
group3/block0/conv1/bn/beta:0 [512] 512
group3/block0/conv2/W:0 [3, 3, 512, 512] 2359296
group3/block0/conv2/bn/gamma:0 [512] 512
group3/block0/conv2/bn/beta:0 [512] 512
group3/block0/convshortcut/W:0 [1, 1, 256, 512] 131072
group3/block0/convshortcut/bn/gamma:0 [512] 512
group3/block0/convshortcut/bn/beta:0 [512] 512
group3/block1/conv1/W:0 [3, 3, 512, 512] 2359296
group3/block1/conv1/bn/gamma:0 [512] 512
group3/block1/conv1/bn/beta:0 [512] 512
group3/block1/conv2/W:0 [3, 3, 512, 512] 2359296
group3/block1/conv2/bn/gamma:0 [512] 512
group3/block1/conv2/bn/beta:0 [512] 512
linear/W:0 [512, 1000] 512000
linear/b:0 [1000] 1000
Number of trainable variables: 62
Number of parameters (elements): 11689512
Storage space needed for all trainable variables: 44.59MB
[0514 09:35:29 @base.py:209] Setup callbacks graph ...
[0514 09:35:29 @argtools.py:146] WRN Starting a process with 'fork' method is not safe and may consume unnecessary extra memory. Use 'forkserver' method (available after Py3.4) instead if you run into any issues. See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
[0514 09:35:30 @input_source.py:223] Setting up the queue 'DataParallelInferenceRunner/QueueInput/input_queue' for CPU prefetching ...
[0514 09:35:30 @inference_runner.py:234] [InferenceRunner] Building tower 'InferenceTower0' on device /gpu:0 ...
[0514 09:35:30 @inference_runner.py:234] [InferenceRunner] Building tower 'InferenceTower1' on device /gpu:1 with variable scope 'tower1'...
[0514 09:35:30 @inference_runner.py:234] [InferenceRunner] Building tower 'InferenceTower2' on device /gpu:2 with variable scope 'tower2'...
[0514 09:35:30 @inference_runner.py:234] [InferenceRunner] Building tower 'InferenceTower3' on device /gpu:3 with variable scope 'tower3'...
[0514 09:35:31 @summary.py:46] [MovingAverageSummary] 4 operations in collection 'MOVING_SUMMARY_OPS' will be run with session hooks.
[0514 09:35:31 @summary.py:93] Summarizing collection 'summaries' of size 7.
[0514 09:35:31 @graph.py:98] Applying collection UPDATE_OPS of 40 ops.
[0514 09:35:33 @base.py:230] Creating the session ...
[0514 09:35:37 @base.py:236] Initializing the session ...
[0514 09:35:37 @base.py:243] Graph Finalized.
[0514 09:35:38 @concurrency.py:37] Starting EnqueueThread QueueInput/input_queue ...
[0514 09:35:38 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[0514 09:35:55 @param.py:158] [HyperParamSetter] At global_step=0, learning_rate is set to 0.100000
[0514 09:35:55 @concurrency.py:37] Starting EnqueueThread DataParallelInferenceRunner/QueueInput/input_queue ...
[0514 09:35:55 @inference_runner.py:96] [InferenceRunner] Will eval 782 iterations
[0514 09:35:56 @base.py:275] Start Epoch 1 ...
[0514 09:35:56 @input_source.py:556] Pre-filling StagingArea ...
[0514 09:35:56 @input_source.py:560] 1 element was put into StagingArea on each tower.
[0514 10:12:28 @base.py:285] Epoch 1 (global_step 5004) finished, time:36 minutes 32 seconds.
[0514 10:12:28 @graph.py:73] Running Op sync_variables/sync_variables_from_main_tower ...
[0514 10:12:45 @saver.py:79] Model saved to train_log/imagenet-resnet-d18-batch256/model-5004.
[0514 10:12:45 @misc.py:109] Estimated Time Left: 2 days 15 hours 50 minutes 15 seconds
[0514 10:13:51 @monitor.py:467] DataParallelInferenceRunner/QueueInput/queue_size: 50
[0514 10:13:51 @monitor.py:467] GPUUtil/0: 94.979
[0514 10:13:51 @monitor.py:467] GPUUtil/1: 73.501
[0514 10:13:51 @monitor.py:467] GPUUtil/2: 75.286
[0514 10:13:51 @monitor.py:467] GPUUtil/3: 74.376
[0514 10:13:51 @monitor.py:467] QueueInput/queue_size: 48.985
[0514 10:13:51 @monitor.py:467] l2_regularize_loss: nan
[0514 10:13:51 @monitor.py:467] learning_rate: 0.1
[0514 10:13:51 @monitor.py:467] train-error-top1: 1
[0514 10:13:51 @monitor.py:467] train-error-top5: 1
[0514 10:13:51 @monitor.py:467] val-error-top1: 1
[0514 10:13:51 @monitor.py:467] val-error-top5: 1
[0514 10:13:51 @monitor.py:467] xentropy-loss: nan
[0514 10:13:51 @group.py:48] Callbacks took 82.481 sec in total. DataParallelInferenceRunner: 1 minute 5 seconds
[0514 10:13:51 @base.py:275] Start Epoch 2 ...
[0514 10:16:36 @base.py:293] Detected Ctrl-C and exiting main loop.
[0514 10:16:36 @input_source.py:179] [EnqueueThread] Thread EnqueueThread QueueInput/input_queue Exited.
[0514 10:16:36 @input_source.py:179] [EnqueueThread] Thread EnqueueThread DataParallelInferenceRunner/QueueInput/input_queue Exited.
### 3. What you expected, if not obvious.
### 4. Your environment:
-------------------- -------------------------------------------------------------------
sys.platform linux
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) [GCC 7.2.0]
Tensorpack 0.9.4
Numpy 1.14.3
TensorFlow 1.7.0/v1.7.0-3-g024aecf414
TF Compiler Version 4.8.4
TF CUDA support True
TF MKL support False
Nvidia Driver /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.418.40.04
CUDA /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudart.so.9.0.176
CUDNN /usr/lib/x86_64-linux-gnu/libcudnn.so.7.0.5
NCCL /usr/lib/x86_64-linux-gnu/libnccl.so.2.1.2
CUDA_VISIBLE_DEVICES None
GPU 0,1,2,3 Tesla K80
Free RAM 206.93/251.81 GB
CPU Count 40
cv2 4.0.0
msgpack 0.6.1
python-prctl True
-------------------- -------------------------------------------------------------------
| closed | 2019-05-14T10:34:59Z | 2019-05-15T03:37:10Z | https://github.com/tensorpack/tensorpack/issues/1188 | [
"upstream issue"
] | dxbdxx | 3 |
amidaware/tacticalrmm | django | 1,439 | Client sorting | **Server Info (please complete the following information):**
- OS: Ubuntu 20.04.5 LTS
- Browser: [chrome, safari]
- RMM Version (v.0.15.7):
**Installation Method:**
- [ x] Standard
- [ ] Docker
**Describe the bug**
When client sorting in user preferences is set to "Sort alphabetically only",
the client list doesn't update and re-sort until you switch it to "Sort alphabetically, move failing clients to the top"
and then back to "Sort alphabetically only" again.
Even then, after you close the browser tab and open it again, the client list falls back to showing failing clients on top, so you have to repeat the steps described above.
| closed | 2023-02-21T07:22:11Z | 2023-12-12T22:50:45Z | https://github.com/amidaware/tacticalrmm/issues/1439 | [
"bug"
] | rmmpositivens | 7 |
Sanster/IOPaint | pytorch | 328 | [BUG] Could not find TensorRT what meaning and how to fix?? | Hi, I get this error message. What does it mean and how do I fix it?
W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
/usr/local/lib/python3.10/dist-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn( | closed | 2023-06-16T08:44:18Z | 2023-06-17T04:05:15Z | https://github.com/Sanster/IOPaint/issues/328 | [] | wolfkingal2000 | 1 |
plotly/dash | data-science | 2,254 | Use React version 18.2 | Hi,
I posted this initially on the plotly forums but I imagine this might be a better place for it.
Is there any way to force Dash to use a newer version of React? Or is there any time-line as to update Dash to run with 18.2?
The main reason I’m asking is that e.g. Aggrid (and some other libraries) seem to have worse render performance for large tables under 16.4 versus 18.2.
As a test I created a component (through cookiecutter) and updated the react version.
> When running it through the react testing server (using 18.2) I can scroll very smoothly.
> When running it through the react testing server (using 16.4) then scrolling is clearly lagging. I can see the cells being drawn.
> When running it through Dash as a component (Dash forces 16.4) then scrolling is again lagging.
I understand that this is a non-Dash library specific problem, however I can imagine that updating to a more recent React version (or indicating how it could be hacked) is something that has come up? | closed | 2022-09-30T18:20:05Z | 2025-02-06T14:21:24Z | https://github.com/plotly/dash/issues/2254 | [
"feature",
"P3",
"dash-3.0"
] | klar-C | 8 |
tensorflow/tensor2tensor | machine-learning | 953 | Bug - one mistaken line in next_frame.py | ### Description
There is a bug introduced by one newly added line:
from tensorflow_models.slim.nets.cyclegan import cyclegan_upsample
in
tensor2tensor/models/research/next_frame.py
...
### Environment information
```
OS: <your answer here>
$ pip freeze | grep tensor
# your output here
$ python -V
# your output here
```
### For bugs: reproduction and error logs
from tensorflow_models.slim.nets.cyclegan import cyclegan_upsample
ModuleNotFoundError: No module named 'tensorflow_models'
It causes any experiment that depends on next_frame.py to fail.
```
# Steps to reproduce:
Run any experiment that depends on the file
tensor2tensor/models/research/next_frame.py
...
```
```
# Error logs:
ModuleNotFoundError: No module named 'tensorflow_models'
...
```
| closed | 2018-07-24T01:51:40Z | 2018-07-25T18:58:08Z | https://github.com/tensorflow/tensor2tensor/issues/953 | [] | johnyjyu | 3 |
allenai/allennlp | nlp | 5,456 | Not able to load coref-spanbert-large-2021.03.10.tar.gz on a non internet system | ```
from allennlp.predictors import Predictor
import os
path1 = '/opt/triniti4/pretrained_models/context_expansion/coref-spanbert-large-2021.03.10.tar.gz'
coref_model = Predictor.from_path(path1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/allennlp/predictors/predictor.py", line 362, in from_path
load_archive(archive_path, cuda_device=cuda_device, overrides=overrides),
File "/usr/local/lib/python3.8/dist-packages/allennlp/models/archival.py", line 205, in load_archive
dataset_reader, validation_dataset_reader = _load_dataset_readers(
File "/usr/local/lib/python3.8/dist-packages/allennlp/models/archival.py", line 231, in _load_dataset_readers
dataset_reader = DatasetReader.from_params(
File "/usr/local/lib/python3.8/dist-packages/allennlp/common/from_params.py", line 589, in from_params
return retyped_subclass.from_params(
File "/usr/local/lib/python3.8/dist-packages/allennlp/common/from_params.py", line 621, in from_params
kwargs = create_kwargs(constructor_to_inspect, cls, params, **extras)
File "/usr/local/lib/python3.8/dist-packages/allennlp/common/from_params.py", line 199, in create_kwargs
constructed_arg = pop_and_construct_arg(
File "/usr/local/lib/python3.8/dist-packages/allennlp/common/from_params.py", line 307, in pop_and_construct_arg
return construct_arg(class_name, name, popped_params, annotation, default, **extras)
File "/usr/local/lib/python3.8/dist-packages/allennlp/common/from_params.py", line 385, in construct_arg
value_dict[key] = construct_arg(
File "/usr/local/lib/python3.8/dist-packages/allennlp/common/from_params.py", line 341, in construct_arg
return annotation.from_params(params=popped_params, **subextras)
File "/usr/local/lib/python3.8/dist-packages/allennlp/common/from_params.py", line 589, in from_params
return retyped_subclass.from_params(
File "/usr/local/lib/python3.8/dist-packages/allennlp/common/from_params.py", line 623, in from_params
return constructor_to_call(**kwargs) # type: ignore
File "/usr/local/lib/python3.8/dist-packages/allennlp/data/token_indexers/pretrained_transformer_mismatched_indexer.py", line 58, in __init__
self._matched_indexer = PretrainedTransformerIndexer(
File "/usr/local/lib/python3.8/dist-packages/allennlp/data/token_indexers/pretrained_transformer_indexer.py", line 57, in __init__
self._allennlp_tokenizer = PretrainedTransformerTokenizer(
File "/usr/local/lib/python3.8/dist-packages/allennlp/data/tokenizers/pretrained_transformer_tokenizer.py", line 70, in __init__
self.tokenizer = cached_transformers.get_tokenizer(
File "/usr/local/lib/python3.8/dist-packages/allennlp/common/cached_transformers.py", line 108, in get_tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/tokenization_auto.py", line 378, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/transformers/tokenization_utils_base.py", line 1774, in from_pretrained
resolved_vocab_files[file_id] = cached_path(
File "/usr/local/lib/python3.8/dist-packages/transformers/file_utils.py", line 1077, in cached_path
output_path = get_from_cache(
File "/usr/local/lib/python3.8/dist-packages/transformers/file_utils.py", line 1263, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
>>>
``` | closed | 2021-11-03T07:28:17Z | 2021-12-16T16:09:54Z | https://github.com/allenai/allennlp/issues/5456 | [
"stale"
] | avinashpaul | 5 |
iperov/DeepFaceLab | machine-learning | 520 | Win Prebuild doesn't working | Win10 x64 1903 Nvidia GTX 1080 / Stage 3
Exception: Traceback (most recent call last):
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The specified procedure could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\joblib\SubprocessorBase.py", line 59, in _subprocess_run
self.on_initialize(client_dict)
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\mainscripts\Extractor.py", line 87, in on_initialize
nnlib.import_all (device_config)
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\nnlib\nnlib.py", line 1160, in import_all
nnlib.import_keras(device_config)
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\nnlib\nnlib.py", line 181, in import_keras
nnlib._import_tf(device_config)
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\DeepFaceLab\nnlib\nnlib.py", line 153, in _import_tf
import tensorflow as tf
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "J:\DeepFaceLab\DeepFaceLab_CUDA_9.2_SSE\_internal\python-3.6.8\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "imp.py", line 243, in load_module
File "imp.py", line 343, in load_dynamic
ImportError: DLL load failed: The specified procedure could not be found.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
| closed | 2019-12-15T12:52:33Z | 2020-03-23T01:41:12Z | https://github.com/iperov/DeepFaceLab/issues/520 | [] | Dev1lroot | 2 |
AntonOsika/gpt-engineer | python | 196 | SET OPENAI_API_KEY | Im using windows and export command is not valid to set the apikey, i tried set, but it looks like it doesnt work either, any suggestions ? | closed | 2023-06-19T10:50:58Z | 2023-06-20T22:09:06Z | https://github.com/AntonOsika/gpt-engineer/issues/196 | [] | hierro | 10 |
aleju/imgaug | machine-learning | 766 | AddToHue vs MultiplySaturation running time | I measured the running time of AddToHue and MultiplySaturation and I have noticed that MultiplySaturation is much slower than AddToHue. It is strange since they both convert the image to the same colorspace HSV and apply similar operations to one channel of this colorspace.
These are the running times in seconds:
- AddToHue: 0.005942453940709432
- MultiplySaturation: 0.03731683890024821 | open | 2021-05-01T11:41:37Z | 2021-05-01T11:41:37Z | https://github.com/aleju/imgaug/issues/766 | [] | crocis | 0 |
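A measurement like this can be reproduced with a small stdlib-only harness along the following lines. The two lambdas are placeholders standing in for the actual `AddToHue` / `MultiplySaturation` augmenter calls, which would require imgaug and a real image:

```python
import time

def mean_runtime(fn, repeats=10):
    """Average wall-clock runtime of fn() over several repeats."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# Placeholders for aug(image=image) calls on the two augmenters;
# swap in e.g. iaa.AddToHue() / iaa.MultiplySaturation() with a real image.
add_to_hue = lambda: sum(range(1_000))
multiply_saturation = lambda: sum(range(10_000))

t_hue = mean_runtime(add_to_hue)
t_sat = mean_runtime(multiply_saturation)
print(f"AddToHue: {t_hue:.6f}s  MultiplySaturation: {t_sat:.6f}s")
```

Averaging over repeats (and using `time.perf_counter` rather than `time.time`) reduces noise when comparing two operations this close in cost.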
plotly/dash-table | plotly | 567 | style_** props do not support `border-radius`, `border_radius`, `borderRadius` | Issue found while looking at this community issue: https://community.plot.ly/t/datatable-rounded-corners/28207
The validity of the style props is determined with this file: https://github.com/plotly/dash-table/blob/master/src/dash-table/derived/style/py2jsCssProperties.ts | closed | 2019-09-04T14:01:02Z | 2019-09-05T16:02:29Z | https://github.com/plotly/dash-table/issues/567 | [
"dash-type-bug",
"size: 1"
] | Marc-Andre-Rivet | 0 |
grillazz/fastapi-sqlalchemy-asyncpg | sqlalchemy | 3 | boost logging with rich | closed | 2021-08-30T07:33:47Z | 2021-09-01T07:52:04Z | https://github.com/grillazz/fastapi-sqlalchemy-asyncpg/issues/3 | [
"enhancement"
] | grillazz | 0 | |
unionai-oss/pandera | pandas | 1,577 | Make `pydantic` and `typeguard` extras for pandas generic type support | Currently, `pydantic` and `typeguard` are being used in `pandas_engine.py` to implement support for generic types via the `PythonGenericType` datatype: https://github.com/unionai-oss/pandera/blob/main/pandera/engines/pandas_engine.py#L1347.
Consider making this either a soft dependency, such that the types are only available if these packages are available, or an additional extra for pandera, i.e. `pip install 'pandera[generics]'` or something to that effect. | open | 2024-04-14T20:38:48Z | 2024-04-14T20:38:48Z | https://github.com/unionai-oss/pandera/issues/1577 | [
"enhancement"
] | cosmicBboy | 0 |
strawberry-graphql/strawberry | fastapi | 3,290 | DataLoader custom cache cannot be used with redis | <!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
DataLoader custom cache cannot be used with redis even though the [docs](https://github.com/strawberry-graphql/strawberry/blob/5b5b717d3ce86b9d8018a5c0c1acb7a66c4d97ce/docs/guides/dataloaders.md?plain=1#L240) mention it's possible.
## Additional Context
The issue seems to be that dataloader calls get/set methods of custom cache class but expects/works only with Future type.
- `get` from cache expecting Future ([here](https://github.com/strawberry-graphql/strawberry/blob/5b5b717d3ce86b9d8018a5c0c1acb7a66c4d97ce/strawberry/dataloader.py#L154))
- should be able to return only real value stored with `set`
- `set` to cache value arg is Future without result value itself ([here](https://github.com/strawberry-graphql/strawberry/blob/5b5b717d3ce86b9d8018a5c0c1acb7a66c4d97ce/strawberry/dataloader.py#L162)), value to the Future is set later ([here](https://github.com/strawberry-graphql/strawberry/blob/5b5b717d3ce86b9d8018a5c0c1acb7a66c4d97ce/strawberry/dataloader.py#L268))
- `set` should be called with real values to cache, not Future without result
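A minimal stdlib sketch of a cache that bridges this mismatch — storing plain values externally (a dict stands in for redis here) while still speaking Futures to the dataloader — might look like the following. The method names mirror the custom-cache interface discussed above, but this is an illustration, not strawberry's actual code:

```python
import asyncio

class ResolvedValueCache:
    """Cache sketch: stores plain values in an external store (a dict
    stands in for redis) but presents the Future-based interface the
    dataloader expects."""

    def __init__(self):
        self._store = {}  # assumption: swap in a redis client here

    def get(self, key):
        """Return a *completed* Future wrapping the cached plain value."""
        if key not in self._store:
            return None
        future = asyncio.get_running_loop().create_future()
        future.set_result(self._store[key])
        return future

    def set(self, key, future):
        """The dataloader hands over a pending Future; persist the real
        value only once the Future resolves."""
        future.add_done_callback(
            lambda f: self._store.__setitem__(key, f.result())
        )

    def delete(self, key):
        self._store.pop(key, None)

    def clear(self):
        self._store.clear()


async def demo():
    cache = ResolvedValueCache()
    pending = asyncio.get_running_loop().create_future()
    cache.set("k", pending)      # dataloader-style set() with a pending Future
    pending.set_result(42)       # simulates the batch load completing
    await asyncio.sleep(0)       # let the done-callback run
    return await cache.get("k")

result = asyncio.run(demo())
print(result)  # 42
```

The key points are exactly the two listed above: `set()` must defer persisting until the Future has a result, and `get()` must re-wrap the stored plain value in a completed Future before handing it back.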
I can help fixing this but I'm not sure what would be the proper way to handle this. | open | 2023-12-13T10:15:03Z | 2025-03-20T15:56:31Z | https://github.com/strawberry-graphql/strawberry/issues/3290 | [
"bug"
] | davidnemec | 5 |
slackapi/python-slack-sdk | asyncio | 1,350 | Multi Static Select's initial_options don't appear in the Block Actions selected_options payload | I am using a button and timepicker to set the initial_options of a Multi Static Select element equal to the timepicker's value when the button is clicked.
Everything works fine up until I get my block actions event callback. It looks like Slack doesn't include initial_options as part of the MultiStaticSelect's selected_options payload.
Is this an intended feature? It doesn't make much sense to me, because aren't the initial_options considered selected items?
### Reproducible in:
Try changing a MultiStaticSelect's initial_options and see if you get them back in the block_actions, for me it was empty.
Unless I manually selected an option it would not appear in the payload.
| closed | 2023-04-09T23:03:28Z | 2023-05-28T20:19:01Z | https://github.com/slackapi/python-slack-sdk/issues/1350 | [
"question",
"auto-triage-stale"
] | ghost | 2 |
huggingface/datasets | nlp | 6,559 | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default'] | ### Describe the bug
python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
The script succeeds when the datasets version is 2.14.7.
When using 2.16.1, this error occurs:
`
ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']`
### Steps to reproduce the bug
1. pip install datasets==2.16.1
2. run python script:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
### Expected behavior
The dataset should load successfully in the latest version.
### Environment info
datasets 2.16.1 | closed | 2024-01-04T07:04:48Z | 2024-04-03T10:40:53Z | https://github.com/huggingface/datasets/issues/6559 | [] | zhulinJulia24 | 8 |
BeanieODM/beanie | pydantic | 578 | [BUG] Migration does not run locally after upgrading to 1.19.1 | **Describe the bug**
After upgrading to 1.19.1, we are having issues with running migration locally (on a single instance MongoDB, running on docker).
<img width="1280" alt="Screenshot 2023-05-31 at 18 10 32" src="https://github.com/roman-right/beanie/assets/4325034/aaf2e0c6-7815-48fa-9dec-e7ed490d9734">
**To Reproduce**
**Expected behavior**
When running locally or without a replica set, this shouldn't run transactions and run into this error.
**Additional context**
Add any other context about the problem here.
| open | 2023-06-01T01:14:43Z | 2024-12-08T21:53:48Z | https://github.com/BeanieODM/beanie/issues/578 | [
"feature request"
] | prabhumarappan | 4 |
Asabeneh/30-Days-Of-Python | python | 124 | Day 7 wrong pop method description | fruits.pop() # removes the last element from the set
from the [doc](https://docs.python.org/2/library/sets.html#set-objects)
-->Remove and return an arbitrary element from the set. Raises KeyError if the set is empty. | closed | 2021-01-30T14:33:14Z | 2021-07-08T01:46:51Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/124 | [] | Mar1usSo | 2 |
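A quick demonstration of the corrected behavior — `pop()` on a set removes and returns an arbitrary element (whatever CPython happens to pick), not the "last" one:

```python
fruits = {"banana", "orange", "mango", "lemon"}

removed = fruits.pop()  # removes and returns an *arbitrary* element

print(removed in {"banana", "orange", "mango", "lemon"})  # True
print(len(fruits))  # 3

try:
    set().pop()
except KeyError:
    print("pop() on an empty set raises KeyError")
```

Since sets are unordered, there is no positional "last element" for `pop()` to remove in the first place.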
dynaconf/dynaconf | fastapi | 382 | [bug] Django extension incompatible with DJDT | **Describe the bug**
Django Debug Toolbar calls `settings.is_overridden()` which results in `'Settings' object has no attribute 'IS_OVERRIDDEN'`
**To Reproduce**
Steps to reproduce the behavior:
1. Start new django project and add [django debug toolbar](https://django-debug-toolbar.readthedocs.io/en/latest/installation.html) version 2.2
2. Add dynaconf to settings.py as per instructions:
```python
# HERE STARTS DYNACONF EXTENSION LOAD (Keep at the very bottom of settings.py)
# Read more at https://dynaconf.readthedocs.io/en/latest/guides/django.html
import dynaconf # noqa
settings = dynaconf.DjangoDynaconf(__name__, environments=True, load_dotenv=True) # noqa
# HERE ENDS DYNACONF EXTENSION LOAD (No more code below this line)
```
3. Start the development server `python manage.py runserver`
**Expected behavior**
The Django development server to start-up with no errors
**Environment (please complete the following information):**
- OS: Linux
- Dynaconf 3.0.0
| closed | 2020-08-03T15:05:58Z | 2020-08-14T17:05:28Z | https://github.com/dynaconf/dynaconf/issues/382 | [
"bug",
"Pending Release"
] | wengole | 2 |
aeon-toolkit/aeon | scikit-learn | 1,837 | [BUG] PyODAdapter only returns decision_scores_ of train-set | ### Describe the bug
The PyODAdapter currently does not support predict() on test data; only decision_scores_ for the data the classifier was fitted on are available.
https://pyod.readthedocs.io/en/latest/pyod.models.html#module-pyod.models.lof
(...)
[decision_scores_](https://pyod.readthedocs.io/en/latest/pyod.models.html#id1159)numpy array of shape (n_samples,)
The outlier scores of the training data. The higher, the more abnormal. Outliers tend to have higher scores. This value is available once the detector is fitted.
(...)
I would expect to fit on train-data and predict on test-data.
Furthermore, fit_predict of PyOD is deprecated and therefore should not be used by an adapter, so as not to cause unexpected behaviour for users familiar with the underlying PyOD.
### Steps/Code to reproduce the bug
```
import numpy as np
import warnings
from pyod.models.lof import LOF
from aeon.anomaly_detection import PyODAdapter
from aeon.utils.windowing import reverse_windowing
warnings.simplefilter('ignore')
def sliding_window(a, window):
shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
strides = a.strides + (a.strides[-1],)
return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
X = np.asarray([0, 0, 0, 0, 0, 0, 1, 0, 0, 0])
Y = np.asarray([0, 0, 0, 0, 0, 1, 0, 0, 0, 0])
X_win = sliding_window(X, 2)
Y_win = sliding_window(Y, 2)
print("train:", X)
print("test :", Y)
detector = PyODAdapter(LOF(), window_size=2)
print("LOF via PyODAdapter")
print(detector.fit_predict(X, axis=0))
detector.fit(X)
print("predicting on test via PyODAdapter")
print(detector.predict(Y))
print("LOF via PyOD")
clf = LOF()
clf.fit(X_win)
print(reverse_windowing(clf.decision_scores_, 2, np.nanmean, 1, 2))
print("decision_function on test via PyOD")
print(reverse_windowing(clf.decision_function(Y_win), 2, np.nanmean, 1, 2))
```
### Expected results
Ability to use test-sets and getting scores for the test-set returned, see below for comparison.
### Actual results
```
$ python pyod_test.py
<frozen importlib._bootstrap>:228: RuntimeWarning: scipy._lib.messagestream.MessageStream size changed, may indicate binary incompatibility. Expected 56 from C header, got 64 from PyObject
/home/roadrunner/miniconda3/envs/py3k/lib/python3.9/site-packages/aeon/base/__init__.py:24: FutureWarning: The aeon package will soon be releasing v1.0.0 with the removal of legacy modules and interfaces such as BaseTransformer and BaseForecaster. This will contain breaking changes. See aeon-toolkit.org for more information. Set aeon.AEON_DEPRECATION_WARNING or the AEON_DEPRECATION_WARNING environmental variable to 'False' to disable this warning.
warnings.warn(
train: [0 0 0 0 0 0 1 0 0 0]
test : [0 0 0 0 0 1 0 0 0 0]
LOF via PyODAdapter
[1.01230696 1.01230696 1.01230696 1.01230696 1.01230696 0.98562678
0.95894661 0.98562678 1.01230696 1.01230696]
predicting on test via PyODAdapter
[1.01230696 1.01230696 1.01230696 1.01230696 0.98562678 0.95894661
0.98562678 1.01230696 1.01230696 1.01230696]
LOF via PyOD
[1.01230696 1.01230696 1.01230696 1.01230696 1.01230696 0.98562678
0.95894661 0.98562678 1.01230696 1.01230696]
decision_function on test via PyOD
[0.95894661 0.95894661 0.95894661 0.95894661 0.95894661 0.95894661
0.95894661 0.95894661 0.95894661 0.95894661]
```
### Versions
_No response_ | closed | 2024-07-22T08:45:30Z | 2024-08-13T07:40:23Z | https://github.com/aeon-toolkit/aeon/issues/1837 | [
"bug",
"interfacing algorithms",
"anomaly detection"
] | roadrunner-gs | 2 |
tensorflow/tensor2tensor | deep-learning | 1,270 | Question: Can I add my own Metrics? | ### Description
I've added my own problem using `--t2t_usr_dir` flag. And I also want to add my own Metrics for my own problem. But I notice that
> You can do so for models, hyperparameter sets, modalities, and problems.
I want to know whether I could add my own Metrics or how I could add my own Metrics ?
Thanks.
### Environment information
```
OS: Ubuntu
$ pip freeze | grep tensor
mesh-tensorflow==0.0.4
tensor2tensor==1.11.0
tensorflow==1.12.0
tensorflow-gpu=1.12.0
tensorflow-hub==0.1.1
tensorflow-metadata==0.9.0
tensorflow-probability=0.5.0
tensorflow-tensorboard=0.4.0
$ python -V
Python 3.5.4 :: Continuum Analytics, Inc.
```
| closed | 2018-12-04T08:12:25Z | 2018-12-06T11:07:55Z | https://github.com/tensorflow/tensor2tensor/issues/1270 | [] | luffy06 | 8 |
AirtestProject/Airtest | automation | 960 | Incorrect touch positions on landscape Android devices caused by 1.2.2's "force the shorter side to be width, the longer side to be height" change | **Describe the bug**
In the Airtest 1.2.2 update, when fetching the device's width/height, the shorter side is forced to be width and the longer side to be height.
But for devices that are landscape by default (landscape when orientation=0), such as tablets and emulators, it should be "the longer side is width and the shorter side is height".
As a result, touch_proxy.transform_xy applies the wrong values and the coordinates get stretched incorrectly.
**Relevant screenshots**
None
**Steps to reproduce**
```python3
from airtest.core.android.android import Android
d=Android(...) # connect to an emulator with a 1280*720 resolution
print(d.get_current_resolution()) # returns (720,1280); in earlier versions the correct value was (1280,720)
d.touch((100,100)) # the actual tap position becomes (100/720*1280,100/1280*720)=(177,56)
d.touch((56,177)) # only this actually taps (100,100)
```
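The stretch can be reproduced with plain arithmetic. This is an illustrative stdlib-only sketch of what a transform_xy-style scaling does when the reported width and height are swapped; the function name and signature are made up for the example, not Airtest's actual code:

```python
def scale_touch(pos, reported, actual):
    """Map a touch point through the reported resolution onto the actual
    one, the way a transform_xy-style proxy would."""
    x, y = pos
    rw, rh = reported
    aw, ah = actual
    return (int(x / rw * aw), int(y / rh * ah))

reported = (720, 1280)  # what 1.2.2 returns: the short side forced to be width
actual = (1280, 720)    # the emulator's real landscape resolution

print(scale_touch((100, 100), reported, actual))  # (177, 56)
print(scale_touch((56, 177), reported, actual))   # (99, 99) -- roughly (100, 100)
```

This matches the comments in the reproduction above: a tap at (100, 100) lands at (177, 56), and you have to request (56, 177) to hit (100, 100).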
**Expected behavior**
See above
**Python version:** `python3.8.7` and `3.9.6`
**Airtest version:** 1.2.2
**Device:**
- Model: BlueStacks 4 hyperv x64
Picking any device with width > height in Android Studio reproduces the same error.
For example, I chose the Pixel XL (2048*1536).
![image](https://user-images.githubusercontent.com/30853502/130641679-0ae10226-4cca-4374-8a44-df1cf27d1487.png)
![image](https://user-images.githubusercontent.com/30853502/130641742-36d48b64-a3b8-4cce-ae0c-43785d0eaf12.png)
**Other relevant environment info**
None
***Please revert this change immediately and delete line 893 of `airtest/core/android/adb.py`*** | closed | 2021-08-24T13:42:57Z | 2021-10-03T14:59:50Z | https://github.com/AirtestProject/Airtest/issues/960 | [] | hgjazhgj | 6 |
Farama-Foundation/Gymnasium | api | 544 | [Bug Report] [MuJoCo] [Documentation] Various environment do not terminate on non-finite states. | ### Describe the bug
# HumanoidStandup
the documentation says:
```markdown
1. Termination: Any of the state space values is no longer finite
```
But the environment never terminates
# InvertedDoublePendulum
the documentation says:
```markdown
2. Termination: Any of the state space values is no longer finite.
```
but
https://github.com/Farama-Foundation/Gymnasium/blob/deb50802facfd827abd4d1f0cf1069afb12a726b/gymnasium/envs/mujoco/inverted_double_pendulum.py#L39
# Reacher & Pusher
the documentation says:
```md
2. Termination: Any of the state space values is no longer finite.
```
But the environments never terminate
# Walker2d
the documentation says:
```md
1. Any of the state space values is no longer finite
```
but the environment does not terminate on that condition
https://github.com/Farama-Foundation/Gymnasium/blob/deb50802facfd827abd4d1f0cf1069afb12a726b/gymnasium/envs/mujoco/walker2d_v4.py#L228-L238
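(For reference, the termination check those doc lines describe would amount to something like this stdlib-only sketch — illustrative only, not Gymnasium code:)

```python
import math

def nonfinite_termination(state):
    """Terminate when any state value is no longer finite, as the
    (to-be-removed) doc lines describe."""
    return any(not math.isfinite(v) for v in state)

print(nonfinite_termination([0.1, -2.3, 5.0]))          # False
print(nonfinite_termination([0.1, float("nan"), 5.0]))  # True
print(nonfinite_termination([0.1, float("inf"), 5.0]))  # True
```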
# Proposed Solution
Simply remove those lines from the documentation, as in practice the state cannot reach non-finite values
### Code example
_No response_
### System info
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-06-08T08:13:53Z | 2023-06-10T11:14:11Z | https://github.com/Farama-Foundation/Gymnasium/issues/544 | [
"bug"
] | Kallinteris-Andreas | 1 |
tensorflow/tensor2tensor | machine-learning | 1,831 | Will BERT+transformer-decoder better than tensor2tensor for text-generation? | Thank you very much. | open | 2020-07-10T08:43:07Z | 2020-07-10T08:47:00Z | https://github.com/tensorflow/tensor2tensor/issues/1831 | [] | guotong1988 | 0 |
scikit-optimize/scikit-optimize | scikit-learn | 618 | Provide a way to disallow the Optimizer from querying already-queried points | I have noticed in some of my runs on smaller search spaces there are collisions between the points returned by subsequent `ask()` calls. Right now skopt prints a warning, but I would really prefer it just never return these duplicate points. Can we provide a setting that enforces this? | open | 2018-01-25T17:08:15Z | 2022-07-19T10:37:44Z | https://github.com/scikit-optimize/scikit-optimize/issues/618 | [] | pavelkomarov | 2 |
desec-io/desec-stack | rest-api | 186 | Revise Permissions and Validations | Currently, much validation/permission checking happens in the `perform_create` view of the `DomainList` view. Some of it could be considered a user's permission (such as creating public suffix domains), some of it could be considered validation (such as the domain name validation according to the dyn flag).
This issue is to move all permission checks to where they belong, supposedly rest framework permission classes, if possible. One major problem is that object permissions are hard to run when creating new objects, so we need to figure out a way to do that.
Some other views have similar issues, e.g. the type check on the RRSet endpoint. | closed | 2019-05-09T08:32:22Z | 2020-02-25T17:19:20Z | https://github.com/desec-io/desec-stack/issues/186 | [] | nils-wisiol | 1 |
aminalaee/sqladmin | asyncio | 675 | Changing color theme | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
This feature request is not related to a problem.
### Describe the solution you would like.
I would like to see some documentation on how the color theme can be changed.
### Describe alternatives you considered
I've tried searching for how other repos have done this and so far it appears like this is a custom implementation.
### Additional context
_No response_ | open | 2023-11-20T20:30:37Z | 2024-05-06T11:30:12Z | https://github.com/aminalaee/sqladmin/issues/675 | [
"good first issue"
] | aleksarias | 1 |
RobertCraigie/prisma-client-py | pydantic | 635 | Explore using DMMF types directly | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently, we define our own types based off of the model definitions themselves, however, Prisma also includes the actual type definitions themselves in the DMMF, this is how they generate the types in the JS client.
The fact that we generate our own types independently can result in mismatches between our types and the types that the query engine expects. An example of this is [this issue](https://github.com/prisma/prisma/issues/13892) which is requesting support for `push` for arrays in CockroachDB. Given the current structure, we generate the `push` type for CockroachDB, even though using it will result in an error from the query engine.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Design a prototype implementing this, see how feasible it is.
Known downsides to changing to this approach:
- less flexibility
- unknown how we will support pseudo-recursive types | open | 2022-12-04T01:17:39Z | 2022-12-04T01:20:08Z | https://github.com/RobertCraigie/prisma-client-py/issues/635 | [
"kind/improvement",
"topic: types",
"topic: internal",
"level/advanced",
"priority/low"
] | RobertCraigie | 0 |
aminalaee/sqladmin | fastapi | 332 | DateTime field is editable even after setting it as readonly | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
The DateTime field is editable even after setting it as a readonly field.

### Steps to reproduce the bug
Add this code to Admin class that inherits ModelView.
```
form_widget_args = {
...
'created_at': { 'readonly': True },
}
```
### Expected behavior
Create At (DateTime) field is not editable.
### Actual behavior
Create At (DateTime) field is editable.
### Debugging material
_No response_
### Environment
OS - Ubuntu
Python - 3.9
SQLAdmin - latest
FastAPI
### Additional context
_No response_ | closed | 2022-09-24T11:40:59Z | 2022-09-25T12:09:15Z | https://github.com/aminalaee/sqladmin/issues/332 | [] | 94929 | 4 |
miguelgrinberg/microblog | flask | 256 | (MySQL 8, WSL 2) su: warning: cannot change directory to /nonexistent: No such file or directory | Hi Miguel, I just recently switched and moved my project files from **WSL 1** to the **WSL 2** environment.
Specs:
- Ubuntu 20.04 LTS
- Python 3.8.2
But I'm having problems starting **MySQL** in the **(venv)** environment; please see the warning message below. Is this an issue that you have seen in the past? When starting **MySQL** outside of **(venv)**, it runs fine.
When I check if **MySQL** is installed in both inside and outside of **(venv)** environment the terminal says that they are installed in both.
```
(venv)$ sudo service mysql start
* Starting MySQL database server mysqld
su: warning: cannot change directory to /nonexistent: No such file or directory
sleep: cannot read realtime clock: Invalid argument
$ sudo service mysql start
* Starting MySQL database server mysqld
$ mysql --version
mysql Ver 8.0.21-0ubuntu0.20.04.4 for Linux on x86_64 ((Ubuntu))
(venv)$ mysql --version
mysql Ver 8.0.21-0ubuntu0.20.04.4 for Linux on x86_64 ((Ubuntu))
``` | closed | 2020-08-21T09:27:45Z | 2021-12-11T10:01:31Z | https://github.com/miguelgrinberg/microblog/issues/256 | [
"question"
] | mrbiggleswirth | 5 |
plotly/dash | jupyter | 2,937 | [BUG] App fails to load plotly graphs on chromium-based browsers | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
```
dash 2.17.1
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: hosted on Ubuntu 22.04, viewed on Windows11
- Browser: Chrome, Edge
**Describe the bug**
A [dash app](armchair-strategist.dev) hosted on AWS and managed through gunicorn (launch command: `gunicorn app:server -b :8000`) takes substantially longer to load on chromium-based browsers. Sometimes the included plotly graphs fail to load and do not respond to callbacks.
When running with debugging enabled on the server, sometimes the `plotly.js fails to load` warning is triggered.
Unable to reproduce this issue when testing with debugging enabled in WSL (Ubuntu 20.04).
**Expected behavior**
The graphs are correctly initialized when viewed in Firefox

**Screenshots**
The graphs are not correctly loaded.

| closed | 2024-08-02T05:07:01Z | 2024-08-09T11:57:42Z | https://github.com/plotly/dash/issues/2937 | [
"bug"
] | Casper-Guo | 6 |
tflearn/tflearn | tensorflow | 311 | new examples | I'm a first-time contributor and wanted to get everybody's suggestions regarding the sort of examples they'd like to see. I'm thinking something in the NLP domain, but I'm open to all sorts of ideas
| open | 2016-08-28T23:14:51Z | 2016-09-02T14:29:39Z | https://github.com/tflearn/tflearn/issues/311 | [] | Anmol6 | 6 |
developmentseed/lonboard | jupyter | 514 | Google Colab crashes for large datasets | Hi,
I would like to use lonboard in Google Colab to visualize a GeoDataFrame (`gdf`). The code runs without any errors, but no output is displayed, even though it works on my local PC. What could be the issue?
alirezamika/autoscraper | automation | 32 | Asynchronous methods for fetching URLs, parsing HTML, and exporting data | ### Introduction
I was looking over the code for this project and am impressed with its simplicity in design and brilliant approach to this problem. However, one thing that jumped out at me was the lack of asynchronous methods to allow for a huge speed increase, especially as the number of pages to scrape increases. I am quite familiar with the standard libraries used to meet this goal and propose the following changes:
Let me know your thoughts and if you're interested in the idea. The performance gains would be immense! Thanks!
---
### Technical changes and additions proposal
- [ ] 1. Subclass `AutoScraper` with `AsyncAutoScraper`, which would require the packages `aiohttp`, `aiofiles`, and `aiosql`, along with a few purely optional extras to increase speed - `uvloop`, `brotlipy`, `cchardet`, and `aiodns`
- [ ] 2. Refactor the `_get_soup` method by extracting an `async` method to **download HTML asynchronously** using `aiohttp`
- [ ] 3. Refactor the `get_results*` and `_build*` functions to also be `async` (simply adding the keyword) and then making sure to **call them by using a multiprocessing/threading pool**
- [ ] a. The `get_*` functions should handle the calling of these in an executor set to aforementioned pool
- [ ] b. Pools are created using `concurrent.futures.*`
- [ ] c. Inner-method logic should remain untouched since parsing is a CPU-bound task
- [ ] 4. Use `aiofiles` for the `save` method to be able to **export many individual JSON files quickly** if desired, same for the `load` method if **multiple sources are being used**
- [ ] 5. Add functionality for **exporting to an SQL database asynchronously** using `aiosql`
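To make the intent of items 2–3 concrete, here is a stdlib-only sketch of the fetch-concurrency pattern (no `aiohttp`; `fetch_html` fakes the network wait with `asyncio.sleep`, so the names and timings are illustrative only — a real implementation would await `aiohttp` responses instead):

```python
import asyncio

async def fetch_html(url: str) -> str:
    # Stand-in for an aiohttp GET; the sleep simulates network latency.
    await asyncio.sleep(0.01)
    return f"<html>{url}</html>"

async def fetch_all(urls):
    # All downloads run concurrently, so total wall time is roughly the
    # slowest single request rather than the sum of all requests.
    return await asyncio.gather(*(fetch_html(u) for u in urls))

pages = asyncio.run(fetch_all(["https://a.example", "https://b.example"]))
```

The CPU-bound parsing (item 3c) would still go through a `concurrent.futures` pool via `loop.run_in_executor`, since `async` alone does not parallelize CPU work.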
---
### References
- [`aiohttp`](https://docs.aiohttp.org/en/stable/)
- [`aiofiles`](https://github.com/Tinche/aiofiles)
- [`aiosql`](https://nackjicholson.github.io/aiosql/)
@alirezamika | closed | 2020-10-12T05:05:58Z | 2020-12-27T14:32:13Z | https://github.com/alirezamika/autoscraper/issues/32 | [] | tarasivashchuk | 10 |
roboflow/supervision | deep-learning | 791 | [PolygonZone] - allow per class counting | ### Description
Currently, [sv.PolygonZone](https://github.com/roboflow/supervision/blob/3024ddca83ad837651e59d040e2a5ac5b2b4f00f/supervision/detection/tools/polygon_zone.py#L15) provides only aggregated counts - all classes are thrown into one bucket. In the past, many users have asked us to provide a more granular, per-class count. This can be achieved by adding `class_in_count` and `class_out_count` dictionaries that will store per-class counts.
```python
class PolygonZone:
def __init__(
self,
polygon: np.ndarray,
frame_resolution_wh: Tuple[int, int],
triggering_position: Position = Position.BOTTOM_CENTER,
):
# Existing initialization code...
self.class_in_count: Dict[int, int] = {}
self.class_out_count: Dict[int, int] = {}
def trigger(self, detections: Detections) -> np.ndarray:
crossed_in = np.full(len(detections), False)
crossed_out = np.full(len(detections), False)
# Required logic changes...
```
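A minimal, framework-free sketch of the tallying the `trigger` change would need (hypothetical helper — real code would read the class ids from `detections.class_id` and reuse the existing crossed-in/out masks):

```python
def update_class_counts(class_ids, crossed_in, crossed_out,
                        class_in_count, class_out_count):
    # For every detection that crossed the zone boundary this frame,
    # bump the per-class counter for its class id.
    for cid, went_in, went_out in zip(class_ids, crossed_in, crossed_out):
        if went_in:
            class_in_count[cid] = class_in_count.get(cid, 0) + 1
        if went_out:
            class_out_count[cid] = class_out_count.get(cid, 0) + 1

in_counts, out_counts = {}, {}
update_class_counts([0, 0, 2], [True, False, True], [False, True, False],
                    in_counts, out_counts)
```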
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will definitely speed up the review process. Each change must be tested by the reviewer. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | open | 2024-01-26T13:29:16Z | 2024-04-08T12:12:39Z | https://github.com/roboflow/supervision/issues/791 | [
"enhancement",
"api:polygonzone",
"Q2.2024"
] | SkalskiP | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,687 | Reflection caching is not handling quoted_name instances correctly | ### Describe the bug
I have already prepared a pull request for it, but the pull request template says I should create an issue first...and here we are...
I noticed this issue when playing around with some toy examples in the context of the SAP HANA dialect (https://github.com/SAP/sqlalchemy-hana), but technically it affects all dialects which require name normalization (e.g. also the built-in Oracle dialect).
When names (schemas, tables, ...) are returned by the reflection methods of a dialect which requires name normalization, the names are typically normalized so that, for example, uppercase names in the database are represented in lowercase in SQLAlchemy. When using these names again in a query, they have to be denormalized again. To distinguish between normalized lowercase names (i.e. names which appear in uppercase in the database) and non-normalized lowercase names (i.e. names which appear in lowercase in the database) during denormalization, the quoted_name class (resp. an instance of it) is used for the non-normalized names. This class inherits from str and acts the same as a str, except that it has a quote property and that the upper and lower methods work differently based on the value of this property.
https://github.com/sqlalchemy/sqlalchemy/blob/bae75fe92f9636bafb75461ff0bc556432831e30/lib/sqlalchemy/sql/elements.py#L5190-L5284
This quoted_name class is problematic in the context of reflection caching: the reflection caching mechanism uses a dictionary for storing the return values of the different reflection methods - for the key of this dictionary, among other things, the input parameters are used (e.g., the get_table_names method uses the schema name, which could either be a str or a quoted_name instance, e.g. one returned by the get_schema_names method).
https://github.com/sqlalchemy/sqlalchemy/blob/bae75fe92f9636bafb75461ff0bc556432831e30/lib/sqlalchemy/engine/reflection.py#L78-L99
As the quoted_name class does not override the \_\_hash\_\_ and \_\_eq\_\_ methods of the str class, the following instances are treated the same when caching, although they might not represent the same name:
```python
"name" == quoted_name("name", quoted=True) == quoted_name("name", quoted=False)
```
As these names are currently treated the same, you could get a cached result for the wrong object from the reflection methods.
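This collision is easy to demonstrate without SQLAlchemy, using a minimal stand-in for `quoted_name` (a `str` subclass that, like the real class, does not override `__hash__`/`__eq__`):

```python
class QuotedName(str):
    # Stand-in for sqlalchemy.sql.elements.quoted_name: a str subclass that
    # carries a quote flag but inherits str hashing and equality unchanged.
    def __new__(cls, value, quote):
        self = super().__new__(cls, value)
        self.quote = quote
        return self

info_cache = {("get_table_names", "name"): ["result_for_plain_name"]}

# A quoted name hashes and compares equal to the plain string, so the
# lookup returns the cached result for the wrong object:
hit = info_cache.get(("get_table_names", QuotedName("name", quote=True)))
```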
To solve this issue you would typically implement the \_\_hash\_\_ and \_\_eq\_\_ methods...however, this would break the existing denormalization logic: the denormalize_name methods of the dialects typically compare the lowercase and/or uppercase transformation of a name to the provided name (str or quoted_name) to decide whether it has to be denormalized or not:
https://github.com/sqlalchemy/sqlalchemy/blob/bae75fe92f9636bafb75461ff0bc556432831e30/lib/sqlalchemy/engine/default.py#L1038-L1053
Or in the SAP HANA dialect: https://github.com/SAP/sqlalchemy-hana/blob/38fdc7667870db8052083dd457575ea4e796cdfb/sqlalchemy_hana/dialect.py#L684-L692
My proposed fix is to handle quoted_name instances within the reflection cache decorator differently than str instances - this would not solve the issue in cases where the default cache decorator of Python (https://docs.python.org/3/library/functools.html#functools.cache) is used together with quoted_name instances, but it would solve the cases where quoted_name instances typically are used: within the reflection methods of SQLAlchemy, where reflection caching is used...
For the details of the proposed fix, please see the PR which I will create after creating this issue.
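The general idea of the fix can be sketched as a small key-expansion step (illustrative only — see the PR for the actual implementation): before an argument enters the cache key, any quote-aware `str` subclass is expanded into a `(value, quote)` tuple, so it can no longer collide with a plain string or a differently-quoted name.

```python
def expand_cache_key_part(arg):
    # Quote-aware names (str subclasses carrying a `quote` attribute, as
    # quoted_name does) become (value, quote) tuples in the cache key;
    # everything else is passed through unchanged.
    if isinstance(arg, str) and hasattr(arg, "quote"):
        return (str(arg), arg.quote)
    return arg
```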
The impact of this issue is rather limited: real problems can only occur when a database contains two objects whose names differ only in casing and both are requested one after the other (e.g., using get_table_names for two schemas with the same name). I found this issue only because I tried to understand how the normalization and denormalization work and created some toy examples...usually you would not have the same object twice with the same name but different casing...
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
n/a
### SQLAlchemy Version in Use
2.0.31
### DBAPI (i.e. the database driver)
hdbcli
### Database Vendor and Major Version
SAP HANA Cloud
### Python Version
3.10
### Operating system
Windows
### To Reproduce
```python
def test_reflection_cache(self, connection):
# dummy method which mimics a reflection method of the
# dialect (e.g., get_schema_names)
@cache
def get_cached_name(self, connection, name, **kw):
return name
dialect = connection.dialect
info_cache = {}
# first test a string
n1 = get_cached_name(
dialect, connection, "name", info_cache=info_cache
)
assert isinstance(n1, str) and not isinstance(n1, quoted_name)
assert n1 == "name"
assert not hasattr(n1, "quote")
# a single value should be cached
assert len(info_cache) == 1
assert any(True for n in info_cache.values() if n == "name" and not hasattr(n, "quote"))
assert not any(True for n in info_cache.values() if n == "name" and hasattr(n, "quote") and getattr(n, "quote"))
# test a quoted_name instance, which is inherited from string
n2 = get_cached_name(
dialect,
connection,
quoted_name("name", quote=True),
info_cache=info_cache,
)
assert isinstance(n2, str) and isinstance(n2, quoted_name)
assert n2 == "name"
assert hasattr(n2, "quote")
assert getattr(n2, "quote") == True
# two values should be cached
assert len(info_cache) == 2
assert any(True for n in info_cache.values() if n == "name" and not hasattr(n, "quote"))
assert any(True for n in info_cache.values() if n == "name" and hasattr(n, "quote") and getattr(n, "quote"))
# test the string again
n3 = get_cached_name(
dialect, connection, "name", info_cache=info_cache
)
assert isinstance(n3, str) and not isinstance(n3, quoted_name)
assert n3 == "name"
assert not hasattr(n3, "quote")
# still only two values should be cached
assert len(info_cache) == 2
assert any(True for n in info_cache.values() if n == "name" and not hasattr(n, "quote"))
assert any(True for n in info_cache.values() if n == "name" and hasattr(n, "quote") and getattr(n, "quote"))
# clear the cache
info_cache = {}
# test the quoted_name instance first
n4 = get_cached_name(
dialect,
connection,
quoted_name("name", quote=True),
info_cache=info_cache,
)
assert isinstance(n4, str) and isinstance(n4, quoted_name)
assert n4 == "name"
assert hasattr(n4, "quote")
assert getattr(n4, "quote") == True
# a single value should be cached
assert len(info_cache) == 1
assert not any(True for n in info_cache.values() if n == "name" and not hasattr(n, "quote"))
assert any(True for n in info_cache.values() if n == "name" and hasattr(n, "quote") and getattr(n, "quote"))
# test the string
n5 = get_cached_name(
dialect, connection, "name", info_cache=info_cache
)
assert isinstance(n5, str) and not isinstance(n5, quoted_name)
assert n5 == "name"
assert not hasattr(n5, "quote")
# two values should be cached
assert len(info_cache) == 2
assert any(True for n in info_cache.values() if n == "name" and not hasattr(n, "quote"))
assert any(True for n in info_cache.values() if n == "name" and hasattr(n, "quote") and getattr(n, "quote"))
# test the quoted_name instance again
n6 = get_cached_name(
dialect,
connection,
quoted_name("name", quote=True),
info_cache=info_cache,
)
assert isinstance(n6, str) and isinstance(n6, quoted_name)
assert n6 == "name"
assert hasattr(n6, "quote")
assert getattr(n6, "quote") == True
# still only two values should be cached
assert len(info_cache) == 2
assert any(True for n in info_cache.values() if n == "name" and not hasattr(n, "quote"))
assert any(True for n in info_cache.values() if n == "name" and hasattr(n, "quote") and getattr(n, "quote"))
```
### Error
n/a
### Additional context
_No response_ | closed | 2024-08-04T15:22:33Z | 2024-08-13T20:46:14Z | https://github.com/sqlalchemy/sqlalchemy/issues/11687 | [
"bug",
"reflection"
] | Masterchen09 | 2 |
blockchain-etl/bitcoin-etl | dash | 56 | ETL process needs upgrade to work for Bitcoin Core 22.0 | From https://github.com/bitcoin/bitcoin, Bitcoin Core made a breaking change in 22.0:
rpc: remove deprecated `addresses` and `reqSigs` from rpc outputs, via this commit https://github.com/bitcoin/bitcoin/commit/8721638daa8502c7f8de5ae24a9393d7290a2ce5
This change in Core 22.0 requires bitcoin-etl to make the corresponding changes.
| open | 2021-11-18T16:57:52Z | 2021-11-21T13:54:46Z | https://github.com/blockchain-etl/bitcoin-etl/issues/56 | [] | veronicavero | 2 |
activeloopai/deeplake | computer-vision | 2,622 | [BUG] ds.visualize not working in jupyter notebook for local dataset | ### Severity
P1 - Urgent, but non-breaking
### Current Behavior
Hello everyone, I tried ds.visualize with a dataset like 'hub://activeloop/animal10n-train' and it worked in a Jupyter notebook. But with a local dataset like './animal10n-train', ds.visualize showed nothing.
### Steps to Reproduce
```python
import deeplake

# it worked with the remote dataset
dataset_path = 'hub://activeloop/animal10n-train'
ds = deeplake.load(dataset_path)  # Returns a Deep Lake Dataset but does not download data locally
ds.summary()
ds.visualize()
```

```python
# copy to local
deeplake.copy('hub://activeloop/animal10n-train', './animal10n-train', num_workers=10)

# it did not work
dataset_path = './animal10n-train'
ds = deeplake.load(dataset_path)  # Returns a Deep Lake Dataset but does not download data locally
ds.summary()
ds.visualize()
```

### Expected/Desired Behavior
ds.visualize works with a local dataset
### Python Version
python3.10
### OS
Ubuntu 18.04
### IDE
Jupyter
### Packages
deeplake 3.7.1
### Additional Context
_No response_
### Possible Solution
_No response_
### Are you willing to submit a PR?
- [ ] I'm willing to submit a PR (Thank you!) | open | 2023-09-27T02:56:48Z | 2023-11-02T05:52:18Z | https://github.com/activeloopai/deeplake/issues/2622 | [
"bug"
] | journey-wang | 9 |
flasgger/flasgger | api | 370 | add auth to docs | Copied from #103
I've resolved the issue of authentication using the following code:
```python
swagger_template = {
    ...,
    'securityDefinitions': {
        'basicAuth': {
            'type': 'basic'
        }
    },
    ...
}

app = Flask(__name__)
Swagger(app, config=config[config_name].swagger_config, template=swagger_template)
```

| open | 2020-03-11T20:35:12Z | 2021-01-01T14:31:40Z | https://github.com/flasgger/flasgger/issues/370 | [] | rochacbruno | 1 |
keras-team/keras | tensorflow | 20,355 | Keras argmin function returns incorrect index when handling subnormal float values. | When using the Keras backend `argmin` function on an input array containing subnormal float values, Keras consistently returns the index of `0.0` as the minimum value, even though a smaller subnormal value (`-1.401298464324817e-45`) exists in the array. Other deep learning frameworks such as PyTorch and Chainer correctly return the index of the subnormal value, but Keras (and TensorFlow) return the index of `0`.
### Expected Behavior:
The expected behavior is for Keras's `argmin` function to return the index of the smallest value, which should be the subnormal float value (`-1.401298464324817e-45`) at index 2. Instead, Keras is returning the index of `0.0` (index 0).
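A stdlib check confirms the value really is a representable, nonzero `float32` subnormal, so an exact `argmin` over the converted array should pick index 2. One plausible (hedged) explanation for the index-0 results is that some backends flush subnormals to zero during conversion or comparison, producing a three-way tie at `0.0`:

```python
import struct

v = -1.401298464324817e-45  # smallest-magnitude negative float32 subnormal, -2**-149

# Round-trip through IEEE-754 binary32 and inspect the bit pattern.
bits = struct.unpack("<I", struct.pack("<f", v))[0]
roundtrip = struct.unpack("<f", struct.pack("<f", v))[0]

# bits == 0x80000001: sign bit set, exponent field 0, mantissa 1 -- a genuine
# negative subnormal, still strictly less than 0.0 after float32 conversion.
```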
### Reproduction Code:
```
import torch
import tensorflow as tf
import numpy as np
from chainer import functions as F
import jax.numpy as jnp
import tensorflow.keras.backend as K
# Input data
input_data = [
0.0,
1.1754943508222875e-38,
-1.401298464324817e-45,
0.0,
459367.0
]
# Test PyTorch
def test_pytorch_argmin(input_data):
tensor = torch.tensor(input_data, dtype=torch.float32)
result = torch.argmin(tensor).item()
print(f"PyTorch argmin result: {result}")
return result
# Test TensorFlow
def test_tensorflow_argmin(input_data):
tensor = tf.constant(input_data, dtype=tf.float32)
result = tf.argmin(tensor).numpy()
print(f"TensorFlow argmin result: {result}")
return result
# Test Keras using backend
def test_keras_argmin(input_data):
tensor = K.constant(input_data, dtype=tf.float32)
result = K.argmin(tensor, axis=-1).numpy()
print(f"Keras argmin result: {result}")
return result
# Test Chainer
def test_chainer_argmin(input_data):
tensor = np.array(input_data, dtype=np.float32)
result = F.argmin(tensor).data
print(f"Chainer argmin result: {result}")
return result
# Test JAX
def test_jax_argmin(input_data):
tensor = jnp.array(input_data, dtype=jnp.float32)
result = jnp.argmin(tensor).item()
print(f"JAX argmin result: {result}")
return result
if __name__ == "__main__":
pytorch_result = test_pytorch_argmin(input_data)
tensorflow_result = test_tensorflow_argmin(input_data)
keras_result = test_keras_argmin(input_data)
chainer_result = test_chainer_argmin(input_data)
jax_result = test_jax_argmin(input_data)
print("\nSummary of results:")
print(f"PyTorch argmin: {pytorch_result}")
print(f"TensorFlow argmin: {tensorflow_result}")
print(f"Keras argmin: {keras_result}")
print(f"Chainer argmin: {chainer_result}")
print(f"JAX argmin: {jax_result}")
```
```
Summary of results:
PyTorch argmin: 2
TensorFlow argmin: 0
Keras argmin: 0
Chainer argmin: 2
JAX argmin: 0
``` | closed | 2024-10-15T08:25:10Z | 2025-03-07T02:05:09Z | https://github.com/keras-team/keras/issues/20355 | [
"stat:awaiting response from contributor",
"stale"
] | LilyDong0127 | 8 |
gradio-app/gradio | machine-learning | 10,251 | stop btn click event with cancels do NOT execute then | ### Describe the bug
I have a long RAG app; sometimes the LLM's output loops. I have a button to cancel and then save the looped output, something like:
```python
stop_btn.click(fn=None,inputs=None,cancels=[submit_event]).then(fn=store_chat,inputs=[chatbot])
```
but `store_chat` never runs
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
...
stop_btn.click(fn=None,inputs=None,cancels=[submit_event, query_event]).then(fn=store_chat,inputs=[chatbot])
...
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
macOS gradio 5.6.0
```
### Severity
I can work around it | closed | 2024-12-25T00:40:51Z | 2024-12-29T17:47:10Z | https://github.com/gradio-app/gradio/issues/10251 | [
"bug",
"needs repro"
] | liaoweiguo | 2 |
horovod/horovod | tensorflow | 2,998 | Imported target "MPI::MPI_CXX" includes non-existent path | Hello,
I am having some issues installing horovod with MPI, would anybody be able to give any suggestions? I have attached my setup below.
Thanks,
Yiltan
**Environment:**
1. Framework: TensorFlow
2. Framework version: 1.15.2
3. Horovod version: v0.20.3
4. MPI version: MVAPICH2-GDR
5. CUDA version: 10.1.243
6. NCCL version: n/a
7. Python version: 3.7
8. Spark / PySpark version: n/a
9. Ray version: n/a
10. OS and version: Red Hat
11. GCC version: 8.4.0
12. CMake version: 3.16.3
**Install Script**
```
IBM_REPO="https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/"
# Create the enviroment
conda create -y python=3.7 --prefix=$(pwd)/.conda/envs/horovod_tf/
eval "$(conda shell.bash hook)"
conda activate .conda/envs/horovod_tf/
conda install -y -c $IBM_REPO tensorflow-gpu==1.15.2 keras Pillow
# Horovod variables
export HOROVOD_WITH_TENSORFLOW=1
export HOROVOD_WITHOUT_PYTORCH=1
export HOROVOD_WITHOUT_MXNET=1
export HOROVOD_WITHOUT_GLOO=1
export HOROVOD_CUDA_HOME=$CUDA_HOME
export HOROVOD_GPU_OPERATIONS=MPI
export HOROVOD_WITH_MPI=1
cd packages/horovod
python setup.py install
```
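The CMake error in the output below says the imported `MPI::MPI_CXX` target references `/usr/tce/packages/cuda/cuda-10.1.243/include`, a path baked into the MVAPICH2-GDR build that does not exist on this machine. A hedged diagnostic sketch — MPICH-family wrappers print their compile line via `mpicxx -show` (Open MPI uses `--showme`), and `missing_include_dirs` is an illustrative helper, not part of Horovod:

```python
import os
import shlex

def missing_include_dirs(compile_line: str) -> list:
    """List -I include directories from a compile line that do not exist locally."""
    dirs = [tok[2:] for tok in shlex.split(compile_line) if tok.startswith("-I")]
    return [d for d in dirs if not os.path.isdir(d)]

# e.g. (hedged):
#   missing_include_dirs(subprocess.check_output(["mpicxx", "-show"], text=True))
```

Any path reported missing has to be fixed (by rebuilding/reinstalling MPI against the local CUDA, or providing the expected path) before Horovod's CMake run can succeed.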
**Output**
```
running clean
removing 'build/temp.linux-ppc64le-3.7' (and everything under it)
running install
running bdist_egg
running egg_info
writing horovod.egg-info/PKG-INFO
writing dependency_links to horovod.egg-info/dependency_links.txt
writing entry points to horovod.egg-info/entry_points.txt
writing requirements to horovod.egg-info/requires.txt
writing top-level names to horovod.egg-info/top_level.txt
reading manifest file 'horovod.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
no previously-included directories found matching '.eggs'
warning: no previously-included files found matching 'third_party/eigen/Eigen/src/IterativeSolvers/*'
warning: no previously-included files found matching 'third_party/eigen/unsupported/Eigen/FFT'
warning: no previously-included files found matching 'third_party/eigen/unsupported/Eigen/MPRealSupport'
warning: no previously-included files found matching 'third_party/eigen/doc/PreprocessorDirectives.dox'
warning: no previously-included files found matching 'third_party/eigen/doc/UsingIntelMKL.dox'
warning: no previously-included files found matching 'third_party/eigen/doc/SparseLinearSystems.dox'
warning: no previously-included files found matching 'third_party/eigen/COPYING.GPL'
warning: no previously-included files found matching 'third_party/eigen/COPYING.LGPL'
warning: no previously-included files found matching 'third_party/eigen/COPYING.README'
writing manifest file 'horovod.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-ppc64le/egg
running install_lib
running build_py
running build_ext
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is GNU 9.3.0
-- Check for working CXX compiler: /opt/base/gcc/9.3.0/bin/g++
-- Check for working CXX compiler: /opt/base/gcc/9.3.0/bin/g++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags:
-- Using command /project/.conda/envs/horovod_tf/bin/python
CMake Error in /project/packages/horovod/build/temp.linux-ppc64le-3.7/CMakeFiles/CMakeTmp/CMakeLists.txt:
Imported target "MPI::MPI_CXX" includes non-existent path
"/usr/tce/packages/cuda/cuda-10.1.243/include"
in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
* The path was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and references files it does not
provide.
CMake Error in /project/packages/horovod/build/temp.linux-ppc64le-3.7/CMakeFiles/CMakeTmp/CMakeLists.txt:
Imported target "MPI::MPI_CXX" includes non-existent path
"/usr/tce/packages/cuda/cuda-10.1.243/include"
in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
* The path was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and references files it does not
provide.
CMake Error at /opt/base/cmake/3.16.3/share/cmake-3.16/Modules/FindMPI.cmake:1194 (try_compile):
Failed to generate test project build system.
Call Stack (most recent call first):
/opt/base/cmake/3.16.3/share/cmake-3.16/Modules/FindMPI.cmake:1245 (_MPI_try_staged_settings)
/opt/base/cmake/3.16.3/share/cmake-3.16/Modules/FindMPI.cmake:1505 (_MPI_check_lang_works)
CMakeLists.txt:131 (find_package)
-- Configuring incomplete, errors occurred!
See also "/project/packages/horovod/build/temp.linux-ppc64le-3.7/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
File "setup.py", line 193, in <module>
'horovodrun = horovod.runner.launch:run_commandline'
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 164, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command
self.run_command(cmdname)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/command/install_lib.py", line 107, in build
self.run_command('build_ext')
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "setup.py", line 89, in build_extensions
cwd=self.build_temp)
File "/project/.conda/envs/horovod_tf/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/project/packages/horovod', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/project/packages/horovod/build/lib.linux-ppc64le-3.7', '-DPYTHON_EXECUTABLE:FILEPATH=/project/mvapich-gdr/.conda/envs/horovod_tf/bin/python']' returned non-zero exit status 1.
```
| closed | 2021-06-25T21:50:13Z | 2021-07-02T22:39:50Z | https://github.com/horovod/horovod/issues/2998 | [
"bug"
] | Yiltan | 2 |
MorvanZhou/tutorials | numpy | 88 | tensorflow 2.1.0 error | macOS Catalina 10.15.4
PyCharm 2019.3.2 (Community Edition)
I used the latest TensorFlow 2.1.0 to run the code, and it fails with the error below:
module 'tensorflow_core._api.v2.config' has no attribute 'experimental_list_devices'
Then I downgraded to TensorFlow 2.0.0 and everything works fine.
plotly/dash-table | plotly | 714 | Include space before SI-prefix | In the SI system, prefix and units should be written after a space (see e.g.: [https://en.wikipedia.org/wiki/International_System_of_Units#General_rules](https://en.wikipedia.org/wiki/International_System_of_Units#General_rules))
I haven't found an option to do this. The issue can be reproduced by the example below where I get a space between the prefix (e.g. k) and the unit W. Alternatively I could of course skip the space before the W, but I cannot find a way to add a space between the last digit and the SI-prefix.
```
import dash
import dash_table
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/solar.csv')
app = dash.Dash(__name__)
app.layout = dash_table.DataTable(
id='table',
columns=[{'name': i, 'id': i, 'type': 'numeric', 'format': {'locale': {'symbol': ['', ' W']}, 'specifier': '$.3s'}} for i in df.columns],
data=df.to_dict('records'),
)
if __name__ == '__main__':
app.run_server(debug=True)
```
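d3-format (which drives the table's `format` specifier) attaches the SI prefix directly to the digits, so there appears to be no specifier option for this today. One workaround sketch is to pre-format the numbers in Python and hand the table plain strings (`si_format` is an illustrative helper, not a dash-table API):

```python
import math

SI_PREFIXES = {-9: "n", -6: "µ", -3: "m", 0: "", 3: "k", 6: "M", 9: "G"}

def si_format(value, unit="W", sig=3):
    """Format a number as '<digits> <prefix><unit>', e.g. 1234567 -> '1.23 MW'."""
    if value == 0:
        return f"0 {unit}"
    exp = min(9, max(-9, 3 * math.floor(math.log10(abs(value)) / 3)))
    return f"{value / 10 ** exp:.{sig}g} {SI_PREFIXES[exp]}{unit}"
```

Columns formatted this way would use `'type': 'text'`, since the values are no longer numeric.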

| open | 2020-03-05T12:57:29Z | 2021-11-15T14:25:52Z | https://github.com/plotly/dash-table/issues/714 | [] | asnyv | 8 |
matplotlib/mplfinance | matplotlib | 375 | Help with understanding error: NotImplementedError | OS: Raspberry pi3
System:
```
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
```
I'm running mplfinance in headless mode on my Raspberry Pi and trying to wrap my head around the error (below) that I'm getting. The bulk of the error states `NotImplementedError: kwarg "style" is NOT YET implemented.`, which I'm not sure how to break down. I read the Python docs [here](https://docs.python.org/3/library/exceptions.html#NotImplementedError) but am still a bit lost as to where the error comes from.
> In user-defined base classes, abstract methods should raise this exception when they require derived classes to override the method, or while the class is being developed to indicate that the real implementation still needs to be added.
I could modify the module but that is a can of worms I'm trying to avoid. I'm semi-new to python and trying to work my way through this codebase + docs and posting this for help/awareness. Let me know if you need more info to help with this.
> FYI: Running this on my Mac works without error
FULL error below:
```bash
File "/home/pi/.local/lib/python3.7/site-packages/mplfinance/plotting.py", line 181, in plot
config = _process_kwargs(kwargs)
File "/home/pi/.local/lib/python3.7/site-packages/mplfinance/plotting.py", line 160, in _process_kwargs
raise NotImplementedError('kwarg "'+key+'" is NOT YET implemented.')
NotImplementedError: kwarg "style" is NOT YET implemented.
``` | closed | 2021-04-11T20:14:19Z | 2021-04-25T18:11:13Z | https://github.com/matplotlib/mplfinance/issues/375 | [
"question"
] | engineertree5 | 4 |
thp/urlwatch | automation | 192 | Custom "headers" support | Allowing custom headers to be added (in the same way as we add "data" in a POST?) would be useful for some monitoring... | closed | 2018-01-03T13:54:11Z | 2018-06-02T10:40:51Z | https://github.com/thp/urlwatch/issues/192 | [] | MoonshineSG | 2
plotly/dash-table | dash | 735 | Unable to set ellipses on header cells | From the docs:
```python
dash_table.DataTable(
data=df_election.to_dict('records'),
columns=[{'id': c, 'name': c} for c in df_election.columns],
style_cell={
'overflow': 'hidden',
'textOverflow': 'ellipsis',
'maxWidth': 0,
},
)
```
Ellipses should be displayed on the "Election Polling" header.

FYI @Marc-Andre-Rivet in case you are in the neighborhood
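As a possible interim workaround (untested sketch): the same ellipsis styling can be scoped to the header row via the table's `style_header` prop instead of `style_cell`:

```python
# Same ellipsis settings as style_cell, but targeted at the header row:
style_header = {
    "overflow": "hidden",
    "textOverflow": "ellipsis",
    "maxWidth": 0,
}
# passed as dash_table.DataTable(..., style_header=style_header)
```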
| open | 2020-04-13T17:42:17Z | 2020-08-04T21:08:38Z | https://github.com/plotly/dash-table/issues/735 | [
"bug"
] | chriddyp | 1 |
recommenders-team/recommenders | data-science | 1,966 | [ASK] SLi Rec model doesn't improve on training. | ### Description
I tried running the quick start Notebook provided [here](https://github.com/microsoft/recommenders/blob/main/examples/00_quick_start/sequential_recsys_amazondataset.ipynb) on my local machine. As far as I could tell, I had all the necessary training files required for the model. However, at the end of 10 epochs, the model only achieved an AUC score of ~0.5 after starting from ~0.49. I don't really understand why this is happening. The only factor I can think of that might cause issues is that I'm using Tensorflow for M2 Macs, and I understand that there are some issues with it. But other than this, I'm pretty stuck. Any help would be appreciated.
### Other Comments
Here's the output from the model training:
2023-08-14 10:47:23.473809: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:114] Plugin optimizer for device_type GPU is enabled.
step 20 , total_loss: 1.5846, data_loss: 1.5846
step 40 , total_loss: 1.3459, data_loss: 1.3459
eval valid at epoch 1: auc:0.493,logloss:0.6977,mean_mrr:0.44,ndcg@2:0.3008,ndcg@4:0.4984,ndcg@6:0.577,group_auc:0.482
step 20 , total_loss: 0.5859, data_loss: 0.5859
step 40 , total_loss: 0.3387, data_loss: 0.3387
eval valid at epoch 2: auc:0.4896,logloss:0.5833,mean_mrr:0.446,ndcg@2:0.3092,ndcg@4:0.4981,ndcg@6:0.5814,group_auc:0.4836
step 20 , total_loss: 0.1450, data_loss: 0.1450
step 40 , total_loss: 0.0671, data_loss: 0.0671
eval valid at epoch 3: auc:0.4984,logloss:0.5172,mean_mrr:0.4526,ndcg@2:0.3202,ndcg@4:0.5088,ndcg@6:0.5865,group_auc:0.494
step 20 , total_loss: 0.0591, data_loss: 0.0591
step 40 , total_loss: 0.0373, data_loss: 0.0373
eval valid at epoch 4: auc:0.4955,logloss:0.512,mean_mrr:0.451,ndcg@2:0.3171,ndcg@4:0.5068,ndcg@6:0.5853,group_auc:0.4933
step 20 , total_loss: 0.0432, data_loss: 0.0432
step 40 , total_loss: 0.0255, data_loss: 0.0255
eval valid at epoch 5: auc:0.5046,logloss:0.7092,mean_mrr:0.4623,ndcg@2:0.3324,ndcg@4:0.5199,ndcg@6:0.594,group_auc:0.5067
step 20 , total_loss: 0.0106, data_loss: 0.0106
step 40 , total_loss: 0.0288, data_loss: 0.0288
eval valid at epoch 6: auc:0.4986,logloss:3.6601,mean_mrr:0.4512,ndcg@2:0.3187,ndcg@4:0.5119,ndcg@6:0.5857,group_auc:0.4982
step 20 , total_loss: 0.0159, data_loss: 0.0159
step 40 , total_loss: 0.0096, data_loss: 0.0096
eval valid at epoch 7: auc:0.4913,logloss:0.5893,mean_mrr:0.4509,ndcg@2:0.3208,ndcg@4:0.5024,ndcg@6:0.5852,group_auc:0.49
step 20 , total_loss: 0.0285, data_loss: 0.0285
step 40 , total_loss: 0.0048, data_loss: 0.0048
eval valid at epoch 8: auc:0.5065,logloss:0.5384,mean_mrr:0.461,ndcg@2:0.3337,ndcg@4:0.5198,ndcg@6:0.5931,group_auc:0.5088
step 20 , total_loss: 0.0056, data_loss: 0.0056
step 40 , total_loss: 0.0060, data_loss: 0.0060
eval valid at epoch 9: auc:0.5133,logloss:0.8202,mean_mrr:0.4725,ndcg@2:0.3499,ndcg@4:0.529,ndcg@6:0.6018,group_auc:0.5189
step 20 , total_loss: 0.0055, data_loss: 0.0055
step 40 , total_loss: 0.0129, data_loss: 0.0129
eval valid at epoch 10: auc:0.5015,logloss:0.5818,mean_mrr:0.4608,ndcg@2:0.3305,ndcg@4:0.5154,ndcg@6:0.5928,group_auc:0.5038
[(1, {'auc': 0.493, 'logloss': 0.6977, 'mean_mrr': 0.44, 'ndcg@2': 0.3008, 'ndcg@4': 0.4984, 'ndcg@6': 0.577, 'group_auc': 0.482}), (2, {'auc': 0.4896, 'logloss': 0.5833, 'mean_mrr': 0.446, 'ndcg@2': 0.3092, 'ndcg@4': 0.4981, 'ndcg@6': 0.5814, 'group_auc': 0.4836}), (3, {'auc': 0.4984, 'logloss': 0.5172, 'mean_mrr': 0.4526, 'ndcg@2': 0.3202, 'ndcg@4': 0.5088, 'ndcg@6': 0.5865, 'group_auc': 0.494}), (4, {'auc': 0.4955, 'logloss': 0.512, 'mean_mrr': 0.451, 'ndcg@2': 0.3171, 'ndcg@4': 0.5068, 'ndcg@6': 0.5853, 'group_auc': 0.4933}), (5, {'auc': 0.5046, 'logloss': 0.7092, 'mean_mrr': 0.4623, 'ndcg@2': 0.3324, 'ndcg@4': 0.5199, 'ndcg@6': 0.594, 'group_auc': 0.5067}), (6, {'auc': 0.4986, 'logloss': 3.6601, 'mean_mrr': 0.4512, 'ndcg@2': 0.3187, 'ndcg@4': 0.5119, 'ndcg@6': 0.5857, 'group_auc': 0.4982}), (7, {'auc': 0.4913, 'logloss': 0.5893, 'mean_mrr': 0.4509, 'ndcg@2': 0.3208, 'ndcg@4': 0.5024, 'ndcg@6': 0.5852, 'group_auc': 0.49}), (8, {'auc': 0.5065, 'logloss': 0.5384, 'mean_mrr': 0.461, 'ndcg@2': 0.3337, 'ndcg@4': 0.5198, 'ndcg@6': 0.5931, 'group_auc': 0.5088}), (9, {'auc': 0.5133, 'logloss': 0.8202, 'mean_mrr': 0.4725, 'ndcg@2': 0.3499, 'ndcg@4': 0.529, 'ndcg@6': 0.6018, 'group_auc': 0.5189}), (10, {'auc': 0.5015, 'logloss': 0.5818, 'mean_mrr': 0.4608, 'ndcg@2': 0.3305, 'ndcg@4': 0.5154, 'ndcg@6': 0.5928, 'group_auc': 0.5038})]
best epoch: 9
Time cost for training is 52.33 mins | closed | 2023-08-14T10:24:30Z | 2023-08-17T18:13:18Z | https://github.com/recommenders-team/recommenders/issues/1966 | [
"help wanted"
] | manishvee | 1 |
databricks/koalas | pandas | 1,226 | ks.read_csv() - compression parameter ( *.csv.gz ) | I'm missing the CSV compression behavior from pandas:
```python
pd.read_csv('dataset.csv.gz', compression='gzip')
# is same as
# (pandas is recognizing the compression automatically by the file extension)
pd.read_csv('dataset.csv.gz')
```
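For context, the pandas behavior being requested is plain extension-based inference — a stdlib sketch of the idea (not koalas API):

```python
import gzip
import bz2
import lzma

# Map file extensions to the opener that can decompress them:
_OPENERS = {".gz": gzip.open, ".bz2": bz2.open, ".xz": lzma.open}

def opener_for(path):
    """Pick an opener from the extension, falling back to plain open()."""
    for ext, opener in _OPENERS.items():
        if path.endswith(ext):
            return opener
    return open
```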
We can use one of the Spark `codec` implementations explained here:
https://stackoverflow.com/questions/40163996/how-to-save-a-dataframe-as-compressed-gzipped-csv | closed | 2020-01-24T23:14:41Z | 2020-01-28T02:30:26Z | https://github.com/databricks/koalas/issues/1226 | [] | patryk-oleniuk | 3 |
mitmproxy/mitmproxy | python | 7,339 | quic in chrome using android | #### Problem Description
When I try to use mitmproxy for QUIC in Android Chrome or any other app, I get a TLS handshake failure, and I can't bypass it.
I tried Frida scripts and adding the certificate to the system root store with Magisk, but none of these bypassed it.
Only Firefox works, if I make it trust a third-party certificate.
#### Steps to reproduce the behavior:
1. https://prnt.sc/cZh9A2HYkVRh
| open | 2024-11-21T04:15:05Z | 2024-12-09T19:31:57Z | https://github.com/mitmproxy/mitmproxy/issues/7339 | [
"kind/triage"
] | r00tback | 8 |
mljar/mercury | jupyter | 291 | Mercury is replacing all Plotly graphs with the last one on the page | When I have multiple Plotly graphs on a page, Mercury is replacing all of them with the last one.
I have a repro for this issue and have attached the .ipynb. You will notice that if you run the .ipynb, all of the graphs are generated fine. But when you load the Mercury dashboard, only the last graph works.
I was able to repro this on a Docker container running Ubuntu 22.04. Let me know if you're unable to repro this on your end - I can create a repo with the Dockerfile, etc, but I imagine you'll be able to repro it without.
I uploaded the .ipynb (you will need to rename it to remove the .txt extension and use .ipynb). I also uploaded a screenshot so you can see what I mean.
[repro_ipynb.txt](https://github.com/mljar/mercury/files/11549273/repro_ipynb.txt)

| open | 2023-05-24T00:16:31Z | 2023-06-15T09:47:10Z | https://github.com/mljar/mercury/issues/291 | [
"bug"
] | kapily | 1 |
modelscope/data-juicer | streamlit | 233 | Video content compliance and privacy protection operators (image, text, audio) | closed | 2024-03-08T02:35:21Z | 2024-03-14T02:47:51Z | https://github.com/modelscope/data-juicer/issues/233 | [] | yxdyc | 0 | |
ivy-llc/ivy | pytorch | 27,942 | Fix Ivy Failing Test: jax - elementwise.asinh | closed | 2024-01-17T17:02:53Z | 2024-01-17T18:50:39Z | https://github.com/ivy-llc/ivy/issues/27942 | [
"Sub Task"
] | samthakur587 | 0 | |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 686 | Is there any possible way to control intonation? | I have trained several models with the program so far, and I am very satisfied with the results.
However, when I synthesize and vocode text, intonation can be somewhat random and uncontrollable. For example, question intonation is hard to reliably produce, and I rely on trial and error to generate phrases with intonation that sounds like a question.
I've found other utilities that allow controlling the alteration of pitch or intonation in some way:
- [SKVASynth/FastPitch has a very interactive GUI that allows the altering of pitch for individual or several letters.](https://www.nexusmods.com/skyrimspecialedition/mods/44184)
- [Some TTS programs like this one use meta-tags or SSML tags to convey a certain mood, like uncertainty.](https://stackoverflow.com/questions/46953918/using-different-intonations-with-watson-text-to-speech)
Is this sort of thing possible in this repo, or beyond its scope?
Thank you! | closed | 2021-03-01T00:26:24Z | 2021-03-04T00:36:17Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/686 | [] | StElysse | 2 |
coqui-ai/TTS | deep-learning | 2,785 | [Bug] KeyError: 'use_phonemes' | ### Describe the bug
I tried to train a vocoder model and I tried to utilize it with tts-server but this throws KeyError: 'use_phonemes'.
This is the full error that I get:
```
(venv) PS D:\GitHub\Coqui> tts-server --config_path D:\GitHub\Coqui\models\LJSpeech-1.1\config.json --model_path D:\GitHub\Coqui\models\LJSpeech-1.1\best_model.pth
D:\GitHub\Coqui\venv\lib\site-packages\torchaudio\compliance\kaldi.py:22: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf . Check the section C-API incompatibility at the Troubleshooting ImportError section at https://numpy.org/devdocs/user/troubleshooting-importerror.html#c-api-incompatibility for indications on how to solve this problem . (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
EPSILON = torch.tensor(torch.finfo(torch.float).eps)
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "D:\GitHub\Coqui\venv\Scripts\tts-server.exe\__main__.py", line 4, in <module>
File "D:\GitHub\Coqui\venv\lib\site-packages\TTS\server\server.py", line 104, in <module>
synthesizer = Synthesizer(
File "D:\GitHub\Coqui\venv\lib\site-packages\TTS\utils\synthesizer.py", line 91, in __init__
self._load_tts(tts_checkpoint, tts_config_path, use_cuda)
File "D:\GitHub\Coqui\venv\lib\site-packages\TTS\utils\synthesizer.py", line 182, in _load_tts
if self.tts_config["use_phonemes"] and self.tts_config["phonemizer"] is None:
File "D:\GitHub\Coqui\venv\lib\site-packages\coqpit\coqpit.py", line 614, in __getitem__
return self.__dict__[arg]
KeyError: 'use_phonemes'
```
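For what it's worth, the exception itself is just a dict-style lookup failing: `tts-server` loads the checkpoint through the *TTS* code path, which expects TTS-only keys such as `use_phonemes`, and a vocoder config has none of them. A stdlib illustration (keys hypothetical):

```python
# A vocoder-style config has no TTS keys like "use_phonemes":
vocoder_config = {"audio": {"sample_rate": 22050}, "generator_model": "hifigan"}

error = None
try:
    vocoder_config["use_phonemes"]  # effectively what the TTS loader does
except KeyError as err:
    error = str(err)
print(error)  # 'use_phonemes'
```

If the trained model really is a vocoder, it presumably needs to be passed through the vocoder arguments (e.g. `--vocoder_path`/`--vocoder_config_path`, exact flag names depending on the TTS version) alongside a TTS model, rather than as the TTS model itself.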
### To Reproduce
1. Run the command "tts-server --config_path config path --model_path model path"
2. error
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cpu",
"TTS": "0.15.6",
"numpy": "1.22.0"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD",
"python": "3.10.11",
"version": "10.0.22621"
}
}
```
### Additional context
_No response_ | closed | 2023-07-19T20:04:17Z | 2023-10-26T19:24:24Z | https://github.com/coqui-ai/TTS/issues/2785 | [
"bug",
"wontfix"
] | MVR3S | 4 |
LibrePhotos/librephotos | django | 1,365 | Defining places | **Describe the enhancement you'd like**
I would like the possibility to set a name / title for a particular location so that pictures taken within a certain area will be associated with that "place". Let's say for example that I want to define a place called "Home" with a radius of "100m", then I want those images to be associated with that place. Also I would like the auto albums generation to take this into account.
**Describe why this will benefit the LibrePhotos**
- GPS is not always reliable, for "special places" like "home" and such, this could help.
| open | 2024-08-08T18:08:49Z | 2024-08-08T18:08:49Z | https://github.com/LibrePhotos/librephotos/issues/1365 | [
"enhancement"
] | tomplast | 0 |
falconry/falcon | api | 1,564 | Can Falcon first response to request then execute other code? | I'm sorry that I can't find a method to response immediately and then execute other time consuming cods. | closed | 2019-08-12T11:42:14Z | 2019-08-13T01:23:36Z | https://github.com/falconry/falcon/issues/1564 | [] | Cylkal | 3 |
apify/crawlee-python | web-scraping | 334 | IndexError: list index out of range | ```python
import asyncio
from crawlee.playwright_crawler import (
PlaywrightCrawler,
PlaywrightCrawlingContext,
)
async def main() -> None:
crawler = PlaywrightCrawler(
# Limit the crawl to max requests. Remove or increase it for crawling all links.
max_requests_per_crawl=10,
)
# Define the default request handler, which will be called for every request.
@crawler.router.default_handler
async def request_handler(context: PlaywrightCrawlingContext) -> None:
context.log.info(f"Processing {context.request.url} ...")
# Extract data from the page.
data = {
"url": context.request.url,
"title": context.page.title.string if context.page.title else None,
}
# Enqueue all links found on the page.
await context.enqueue_links()
# Push the extracted data to the default dataset.
await context.push_data(data)
# Run the crawler with the initial list of URLs.
await crawler.run(["https://crawlee.dev"])
# Export the entire dataset to a CSV file.
await crawler.export_data("results.csv")
if __name__ == "__main__":
asyncio.run(main())
```
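For anyone triaging: two things appear to be going on. `context.page.title` is Playwright's async *method* (so `context.page.title.string` raises inside the handler — the line should presumably be `"title": await context.page.title(),`), no items ever get pushed, and `export_data` then indexes an empty list, which is exactly the crash in `write_to`. A stdlib reproduction of that last step:

```python
import csv
import io

items = []  # the dataset stays empty because every request handler raised
writer = csv.writer(io.StringIO())

error = None
try:
    # effectively what Dataset.write_to does with no rows:
    writer.writerows([items[0].keys(), *[item.values() for item in items]])
except IndexError as err:
    error = str(err)
print(error)  # list index out of range
```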
TERMINAL:
```text
Traceback (most recent call last):
File "g:\python\python_experiment\my-crawler\my-crawler\routes.py", line 40, in <module>
asyncio.run(main())
File "D:\python3.9\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "D:\python3.9\lib\asyncio\base_events.py", line 642, in run_until_complete
return future.result()
File "g:\python\python_experiment\my-crawler\my-crawler\routes.py", line 36, in main
await crawler.export_data("results.csv")
File "g:\python\python_experiment\venv\lib\site-packages\crawlee\basic_crawler\basic_crawler.py", line 474, in export_data
return await dataset.write_to(content_type, path.open('w', newline=''))
File "g:\python\python_experiment\venv\lib\site-packages\crawlee\storages\dataset.py", line 213, in write_to
writer.writerows([items[0].keys(), *[item.values() for item in items]])
IndexError: list index out of range
``` | closed | 2024-07-21T03:32:31Z | 2024-07-24T11:58:11Z | https://github.com/apify/crawlee-python/issues/334 | [
"bug",
"t-tooling"
] | bingbingyu | 1 |
tflearn/tflearn | data-science | 511 | conv_2d activation == None raises an 'Invalid Activation Error' | Error stack:
```
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
WARNING:tensorflow:tf.variable_op_scope(values, name, default_name) is deprecated, use tf.variable_scope(name, default_name, values)
Traceback (most recent call last):
File "code_TL.py", line 98, in <module>
conv1a_3_3 = relu(batch_normalization(conv_2d(network, 32, 3, strides=2, bias=False, padding='VALID',activation=None,name='Conv2d_1a_3x3')))
File "/usr/local/lib/python2.7/dist-packages/tflearn/layers/conv.py", line 105, in conv_2d
raise ValueError("Invalid Activation.")
ValueError: Invalid Activation.
```
In the official documentation, a None activation is allowed for conv_2d, as seen in:
http://tflearn.org/layers/conv/
However, in the inception-resnet-v2 model, activation is None, which automatically raises this error. This code can be found in:
https://github.com/tflearn/tflearn/blob/master/examples/images/inception_resnet_v2.py
lines 107-113
```
if activation:
if isinstance(activation, str):
inference = activations.get(activation)(inference)
elif hasattr(activation, '__call__'):
inference = activation(inference)
else:
raise ValueError("Invalid Activation.")
```
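(For reference, a None-tolerant version of that dispatch would just treat `activation=None` as the identity/linear case — stdlib sketch, with a hypothetical `ACTIVATIONS` registry standing in for `tflearn.activations`:)

```python
ACTIVATIONS = {"relu": lambda x: max(x, 0.0)}  # hypothetical registry

def apply_activation(x, activation=None):
    if activation is None:              # linear: pass the value through untouched
        return x
    if isinstance(activation, str):
        return ACTIVATIONS[activation](x)
    if callable(activation):
        return activation(x)
    raise ValueError("Invalid Activation.")
```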
How can I fix this issue? Will it be alright to let no activation occur? | open | 2016-12-10T05:48:40Z | 2016-12-14T23:55:51Z | https://github.com/tflearn/tflearn/issues/511 | [] | kwotsin | 1 |
ultralytics/ultralytics | python | 19,642 | xywhr in OBB result have changed | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
Recently I updated my environment to the latest version of ultralytics and found that the xywhr result for the OBB task is somehow broken. I was using it to get the centre position of each detection, the width and height and the rotation. Before the update I always had the width as the largest dimension and the height as the smaller, and the rotation relative to the width.
After the update, I found that sometimes the width is the smaller dimension and the height is the larger one, so the rotation is also skewed by 90 degrees.
Going back to older version fix the problem.
### Environment
Ultralytics 8.3.87 🚀 Python-3.12.7 torch-2.6.0+cpu CPU (Intel Core(TM) i7-10850H 2.70GHz)
Setup complete ✅ (12 CPUs, 31.7 GB RAM, 609.9/952.5 GB disk)
OS Windows-11-10.0.22631-SP0
Environment Windows
Python 3.12.7
Install git
Path C:\Users\my_user\Desktop\yolo_test\.venv\Lib\site-packages\ultralytics
RAM 31.73 GB
Disk 609.9/952.5 GB
CPU Intel Core(TM) i7-10850H 2.70GHz
CPU count 12
GPU None
GPU count None
CUDA None
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.1>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0>=1.8.0
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
Input image

# Ultralytics 8.3.87
With ultralytics package version 8.3.87, run this code:
```
from pathlib import Path
import torch
from ultralytics import YOLO
model_file = Path("yolo11x-obb.pt")
model = YOLO(model_file, task="OBB")
img = Path("./images/P0006.png")
results = model.predict(
source=img,
conf=0.8,
imgsz=640,
device=torch.cuda.current_device() if torch.cuda.device_count() > 0 else "CPU",
)
for res in results:
for i, pts in enumerate(res.obb):
if int(pts.xywhr[0][2]) < int(pts.xywhr[0][3]):
print(
f"{i})\tcenter x {int(pts.xywhr[0][0])}\tcenter y {int(pts.xywhr[0][1])}\twidth {int(pts.xywhr[0][2])}\theight {int(pts.xywhr[0][3])}\trotation {pts.xywhr[0][4]}"
)
```
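Until this is resolved, a version-independent workaround is to canonicalize each box yourself so that `w` is always the long side — stdlib sketch:

```python
import math

def canonical_xywhr(x, y, w, h, r):
    """Ensure w >= h; when swapped, rotate by 90 degrees and wrap r into [0, pi)."""
    if w < h:
        w, h = h, w
        r += math.pi / 2
    return x, y, w, h, r % math.pi
```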
## Output:
```
# 3) center x 322 center y 796 width 24 height 104 rotation 0.7707133889198303
# 9) center x 329 center y 428 width 25 height 96 rotation 0.8094407916069031
# 11) center x 325 center y 470 width 24 height 95 rotation 0.8357727527618408
# 12) center x 324 center y 835 width 25 height 96 rotation 0.7769593000411987
# 16) center x 329 center y 387 width 26 height 96 rotation 0.7910935282707214
# 17) center x 586 center y 361 width 24 height 102 rotation 0.8455972075462341
# 19) center x 582 center y 1246 width 24 height 93 rotation 0.8130463361740112
# 20) center x 332 center y 345 width 25 height 95 rotation 0.8040628433227539
# 21) center x 593 center y 394 width 25 height 104 rotation 0.8151286244392395
# 22) center x 575 center y 1212 width 24 height 101 rotation 0.808992326259613
# 23) center x 325 center y 674 width 24 height 96 rotation 0.8091808557510376
# 24) center x 592 center y 673 width 25 height 102 rotation 0.8232788443565369
# 25) center x 591 center y 798 width 25 height 104 rotation 0.8026906847953796
# 35) center x 321 center y 878 width 25 height 93 rotation 0.787294864654541
# 40) center x 585 center y 763 width 24 height 103 rotation 0.7789315581321716
# 41) center x 590 center y 839 width 25 height 102 rotation 0.8089277148246765
# 44) center x 318 center y 920 width 26 height 103 rotation 0.8125594854354858
# 47) center x 592 center y 435 width 24 height 101 rotation 0.8140760660171509
# 50) center x 325 center y 512 width 25 height 100 rotation 0.8281809091567993
# 51) center x 312 center y 964 width 25 height 97 rotation 0.7953415513038635
# 54) center x 333 center y 1306 width 24 height 99 rotation 0.854654848575592
# 55) center x 317 center y 758 width 25 height 101 rotation 0.7970958948135376
# 60) center x 318 center y 1119 width 25 height 104 rotation 0.8262746930122375
# 66) center x 329 center y 708 width 24 height 94 rotation 0.8186021447181702
# 68) center x 316 center y 1160 width 24 height 102 rotation 0.8195444345474243
# 69) center x 583 center y 1286 width 24 height 100 rotation 0.8252146244049072
# 77) center x 585 center y 722 width 24 height 103 rotation 0.7846349477767944
# 79) center x 315 center y 1200 width 24 height 101 rotation 0.8189680576324463
# 81) center x 147 center y 1238 width 21 height 58 rotation 1.5592427253723145
# 82) center x 857 center y 1059 width 22 height 58 rotation 1.5689480304718018
# 84) center x 312 center y 1282 width 24 height 98 rotation 0.8073341846466064
# 88) center x 316 center y 1240 width 24 height 99 rotation 0.802503764629364
# 89) center x 313 center y 1004 width 26 height 105 rotation 0.8169794082641602
# 90) center x 582 center y 1163 width 23 height 98 rotation 0.8214647173881531
# 91) center x 148 center y 1295 width 22 height 60 rotation 1.564757227897644
# 92) center x 850 center y 1001 width 21 height 52 rotation 1.564372181892395
# 95) center x 588 center y 881 width 26 height 103 rotation 0.7998103499412537
# 96) center x 586 center y 1041 width 24 height 103 rotation 0.8159875273704529
# 98) center x 853 center y 1029 width 23 height 60 rotation 1.5595061779022217
# 100) center x 587 center y 959 width 25 height 103 rotation 0.8381613492965698
# 103) center x 583 center y 1084 width 23 height 101 rotation 0.8027064204216003
# 107) center x 603 center y 1310 width 24 height 101 rotation 0.8306283354759216
# 108) center x 589 center y 920 width 26 height 104 rotation 0.8245478272438049
# 109) center x 586 center y 1001 width 24 height 101 rotation 0.8359718322753906
# 110) center x 319 center y 1038 width 24 height 102 rotation 0.8157163858413696
# 113) center x 161 center y 364 width 22 height 66 rotation 1.5607739686965942
# 114) center x 851 center y 1088 width 22 height 61 rotation 1.5662956237792969
# 115) center x 581 center y 1124 width 23 height 101 rotation 0.8175578713417053
# 116) center x 316 center y 1080 width 25 height 103 rotation 0.8084856271743774
```
# Ultralytics 8.3.32
## yolo checks
Ultralytics 8.3.32 🚀 Python-3.12.7 torch-2.6.0+cpu CPU (Intel Core(TM) i7-10850H 2.70GHz)
Setup complete ✅ (12 CPUs, 31.7 GB RAM, 613.0/952.5 GB disk)
OS Windows-11-10.0.22631-SP0
Environment Windows
Python 3.12.7
Install git
RAM 31.73 GB
Disk 613.0/952.5 GB
CPU Intel Core(TM) i7-10850H 2.70GHz
CPU count 12
GPU None
GPU count None
CUDA None
numpy ✅ 2.1.1>=1.23.0
matplotlib ✅ 3.10.1>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0>=1.8.0
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
numpy ✅ 2.1.1<2.0.0; sys_platform == "darwin"
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
Running the same code as above produces no output, i.e. no results with width < height, as expected.
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-03-11T14:21:07Z | 2025-03-15T13:03:36Z | https://github.com/ultralytics/ultralytics/issues/19642 | [
"bug",
"OBB"
] | MarcelloCuoghi | 7 |
plotly/dash | flask | 2,690 | provide loading Indicator during plotly rendering | I have created a simple dash app, which renders a scatter plot using plotly express, I have used the ```dcc.Loading``` to wrap the component which displays the plot upon a simple button click.
But during the initial rendering the indicator stops displaying once the component is ready with its data, but the plot is not yet rendered and displayed to the user.
Is there any way to show a loading spinner or any loading indicator which can be displayed until the plot is rendered? Or anything within the ```dcc.Graph``` component which can show this loading indicator until the plot is rendered completly
I have searched around the plotly documentation and not seem find anything suitable.
The CSV file being consumed is around 250 mb and the delay after the loading indicator stops and the plot to render is around a second or 2, but this might increase once the files being consumed are also increasing in size.
```
import time
import dash
from dash import dcc, html
from dash.dependencies import Input, Output
import plotly.express as px
import pandas as pd
import numpy as np
df = pd.read_csv(
'2m Sales Records.csv',
index_col=0)
app = dash.Dash(__name__)
app.layout = html.Div([
html.Button('Update Plot', id='update-button', n_clicks=0),
html.Div([
dcc.Loading(
id="loading",
type="default",
fullscreen=True,
children=[
html.Div(id='graph-container')
]
)
])
])
@app.callback(
Output('graph-container', 'children'),
[Input('update-button', 'n_clicks')]
)
def update_plot(n_clicks):
x_column = 'Units Sold'
y_column = 'Total Profit'
if n_clicks > 0:
fig = px.scatter(df, x=x_column, y=y_column, title='Scatter Plot')
fig.update_traces(textposition='top center')
return dcc.Graph(id='volcano-plot', figure=fig)
return html.Div()
if __name__ == '__main__':
app.run_server(debug=True)
```
The CSV file with about 2million records:
[2m-Sales-Records.zip](https://github.com/plotly/dash/files/13344970/2m-Sales-Records.zip)
Video Sample:
https://github.com/plotly/dash/assets/150319231/07be24af-f482-45cc-903d-2e2d85eece1b
| open | 2023-11-13T11:10:41Z | 2024-08-13T19:42:42Z | https://github.com/plotly/dash/issues/2690 | [
"feature",
"P3"
] | Yokogawa-Akshay | 0 |
plotly/dash | dash | 2,354 | How to access an Iframe from an external source without uncaught DOMexception? | I have some other website I own and I want to embed html from that website as Iframes. I want to access some properties of the actual element (such as scroll height) to adjust the Iframe in dash.
But I get a `Uncaught DOMException: Blocked a frame with origin "http://localhost:8050" from accessing a cross-origin frame.`, which is to be expected. In flask there's a way to whitelist other sites, is there a way to do this in Dash?
thank you! | closed | 2022-12-06T04:29:40Z | 2024-07-24T17:00:26Z | https://github.com/plotly/dash/issues/2354 | [] | matthewyangcs | 2 |
waditu/tushare | pandas | 861 | get_hist_data() result is missing turnover (换手率) data for individual-stock daily and weekly bars | open high close ... v_ma5 v_ma10 v_ma20
date ...
2018-12-07 9.60 9.72 9.64 ... 488711.96 416995.45 476792.38
2018-12-06 9.30 9.95 9.56 ... 459862.07 405640.67 478135.00
2018-12-05 8.25 9.26 9.26 ... 396943.05 355666.80 471003.30 | closed | 2018-12-09T04:26:47Z | 2018-12-09T13:14:13Z | https://github.com/waditu/tushare/issues/861 | [] | simpely545 | 1 |
streamlit/streamlit | python | 10,373 | `audio_input` overflow when recording for more than one hour | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Whenever the audio_input records for more than one hour, the minutes display simply overflows and it looks like the timer is reset.
See the GIF below

### Reproducible Code Example
```Python
import streamlit as st
audio_input = st.audio_input(label="Record")
```
### Steps To Reproduce
1. Start recording
2. Wait for one hour
3. See the timer 'reset'
### Expected Behavior
See `1:00:00` appear.
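(The underlying bug looks like the minutes counter wrapping instead of carrying into hours; hour-safe formatting is just `divmod` — stdlib sketch of the expected display logic:)

```python
def fmt_duration(total_seconds: int) -> str:
    hours, rem = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    if hours:
        return f"{hours}:{minutes:02d}:{seconds:02d}"
    return f"{minutes}:{seconds:02d}"

print(fmt_duration(3599))  # 59:59
print(fmt_duration(3600))  # 1:00:00
```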
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
_No response_
### Additional Information
_No response_ | closed | 2025-02-11T15:38:40Z | 2025-02-20T23:55:24Z | https://github.com/streamlit/streamlit/issues/10373 | [
"type:bug",
"status:confirmed",
"priority:P3",
"feature:st.audio_input"
] | stijnhoskens | 2 |
iterative/dvc | data-science | 9,901 | Alias `dvc mv` to `dvc move` | In being similar to git, it would be nice to be able to use `dvc mv` in addition to `dvc move`, it put me off at first. | closed | 2023-09-01T08:24:26Z | 2023-09-04T11:58:33Z | https://github.com/iterative/dvc/issues/9901 | [
"awaiting response",
"p3-nice-to-have"
] | feefladder | 3 |
deepfakes/faceswap | machine-learning | 437 | convert.py doesn't work with OriginalHighRes trainer | ## Expected behavior
Training with the OriginalHighRes plugin module works like the other trainer plugins.
## Actual behavior
Training with the OriginalHighRes plugin module throws the error:
`TypeError: join() argument must be str or bytes, not 'PosixPath'` with the following
stack trace:
````
Using TensorFlow backend.
Output Directory: /home/agilebean/faceswap/output
Input Directory: /home/agilebean/faceswap/input_images
Loading Extract from Extract_Align plugin...
Using json serializer
Alignments filepath: /home/agilebean/faceswap/input_images/alignments.json
Aligned directory not specified. All faces listed in the alignments file will be converted
Loading Model from Model_OriginalHighRes plugin...
Traceback (most recent call last):
File "faceswap.py", line 36, in <module>
ARGUMENTS.func(ARGUMENTS)
File "/home/agilebean/faceswap/lib/cli.py", line 81, in execute_script
process.process()
File "/home/agilebean/faceswap/scripts/convert.py", line 42, in process
model = self.load_model()
File "/home/agilebean/faceswap/scripts/convert.py", line 70, in load_model
if not model.load(self.args.swap_model):
File "/home/agilebean/faceswap/plugins/Model_OriginalHighRes/Model.py", line 134, in load
state_dir = os.path.join(self.model_dir, 'state_{version_str}_{ENCODER.value}.json'.format(**globals()))
File "/usr/lib/python3.5/posixpath.py", line 89, in join
genericpath._check_arg_types('join', a, *p)
File "/usr/lib/python3.5/genericpath.py", line 143, in _check_arg_types
(funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'PosixPath'
````
## Steps to reproduce
```
TRAINER=OriginalHighRes
python3 faceswap.py train -A $INPUT_FACE_DIR -B $STYLE_FACE_DIR -it $NUM_ITERATIONS -t $TRAINER
```
I tested with all other methods, OriginalHighRes is the only method that doesn't work.
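The failure itself is version skew rather than a faceswap logic bug: `os.path.join` only learned to accept `pathlib.Path` objects in Python 3.6 (PEP 519), and this environment runs 3.5. A minimal sketch of the two usual workarounds (not faceswap code):

```python
import os
from pathlib import Path

model_dir = Path("models")

# On Python 3.5, os.path.join(model_dir, ...) raises TypeError; cast first:
state_file = os.path.join(str(model_dir), "state.json")

# ...or stay in pathlib and skip os.path.join entirely:
state_path = model_dir / "state.json"

print(state_file == str(state_path))  # True
```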
## Other relevant information
- **Operating system and version:** Ubuntu 16.04 LTS
- **Python version:** 3.5
- **Faceswap version:**
- **Faceswap method:** GPU
| closed | 2018-06-21T14:05:26Z | 2018-06-26T12:18:13Z | https://github.com/deepfakes/faceswap/issues/437 | [
"bug"
] | agilebean | 6 |
twopirllc/pandas-ta | pandas | 77 | TA - all features generations | What's the best way to create all relevant features? I tried to use the `df.ta.strategy(name='all')`
Error:
```
Index(['dt', 'open', 'high', 'low', 'close', 'volumn', 'trading_dt', 'stock'], dtype='object')
KeyError Traceback (most recent call last)
D:\Anaconda\envs\py36\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2645 try:
-> 2646 return self._engine.get_loc(key)
2647 except KeyError:
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'high'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-98-ce3fa983de61> in <module>
----> 1 ch_3 = ta_pd_v2(ch)
<ipython-input-77-d2409a027e53> in ta_pd_v2(df)
8 df1["close-1"] = df1["close"].shift(1)
9
---> 10 df.ta.strategy(name='all')
11 return df1
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in strategy(self, **kwargs)
367 if name is None or name == "" or not isinstance(name, str): # Extra check
368 name = "all"
--> 369 self._all(**kwargs) if name == "all" else None
370
371
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in _all(self, **kwargs)
343 for kind in indicators:
344 fn = getattr(self, kind)
--> 345 fn(append=append, **kwargs)
346 print(f"[+] {kind}") if verbose else None
347
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in _wrapper(*class_methods, **method_kwargs)
22 def _wrapper(*class_methods, **method_kwargs):
23 cm = class_methods[0]
---> 24 result = method(cm, **method_kwargs)
25
26 cm._add_prefix_suffix(result, **method_kwargs)
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in aberration(self, high, low, close, length, atr_length, offset, **kwargs)
1067 @finalize
1068 def aberration(self, high=None, low=None, close=None, length=None, atr_length=None, offset=None, **kwargs):
-> 1069 high = self._get_column(high, 'high')
1070 low = self._get_column(low, 'low')
1071 close = self._get_column(close, 'close')
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in _get_column(self, series, default)
229 # Apply default if no series nor a default.
230 elif series is None or default is None:
--> 231 return df[self.adjusted] if self.adjusted is not None else df[default]
232 # Ok. So it's a str.
233 elif isinstance(series, str):
D:\Anaconda\envs\py36\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2798 if self.columns.nlevels > 1:
2799 return self._getitem_multilevel(key)
-> 2800 indexer = self.columns.get_loc(key)
2801 if is_integer(indexer):
2802 indexer = [indexer]
D:\Anaconda\envs\py36\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2646 return self._engine.get_loc(key)
2647 except KeyError:
-> 2648 return self._engine.get_loc(self._maybe_cast_indexer(key))
2649 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
2650 if indexer.ndim > 1 or indexer.size > 1:
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'high'
```
So, I used the customised feature generation script but run into error for certain features:
```
def ta_pd_v2(df):
    df1 = df.copy()
    df1.columns = map(str.lower, df1.columns)
    df1["open-1"] = df1["open"].shift(1)
    df1["high-1"] = df1["high"].shift(1)
    df1["low-1"] = df1["low"].shift(1)
    df1["close-1"] = df1["close"].shift(1)
    df1 = df1.ta.strategy(name='all')
    return df1
```
Example of error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
D:\Anaconda\envs\py36\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2645 try:
-> 2646 return self._engine.get_loc(key)
2647 except KeyError:
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'volume'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-28-5d947df895f7> in <module>
----> 1 ch.ta.strategy(name='all')
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in strategy(self, **kwargs)
367 if name is None or name == "" or not isinstance(name, str): # Extra check
368 name = "all"
--> 369 self._all(**kwargs) if name == "all" else None
370
371
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in _all(self, **kwargs)
343 for kind in indicators:
344 fn = getattr(self, kind)
--> 345 fn(append=append, **kwargs)
346 print(f"[+] {kind}") if verbose else None
347
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in _wrapper(*class_methods, **method_kwargs)
22 def _wrapper(*class_methods, **method_kwargs):
23 cm = class_methods[0]
---> 24 result = method(cm, **method_kwargs)
25
26 cm._add_prefix_suffix(result, **method_kwargs)
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in ad(self, high, low, close, volume, open_, signed, offset, **kwargs)
1161 low = self._get_column(low, 'low')
1162 close = self._get_column(close, 'close')
-> 1163 volume = self._get_column(volume, 'volume')
1164
1165 result = ad(high=high, low=low, close=close, volume=volume, open_=open_, signed=signed, offset=offset, **kwargs)
D:\Anaconda\envs\py36\lib\site-packages\pandas_ta\core.py in _get_column(self, series, default)
229 # Apply default if no series nor a default.
230 elif series is None or default is None:
--> 231 return df[self.adjusted] if self.adjusted is not None else df[default]
232 # Ok. So it's a str.
233 elif isinstance(series, str):
D:\Anaconda\envs\py36\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2798 if self.columns.nlevels > 1:
2799 return self._getitem_multilevel(key)
-> 2800 indexer = self.columns.get_loc(key)
2801 if is_integer(indexer):
2802 indexer = [indexer]
D:\Anaconda\envs\py36\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2646 return self._engine.get_loc(key)
2647 except KeyError:
-> 2648 return self._engine.get_loc(self._maybe_cast_indexer(key))
2649 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
2650 if indexer.ndim > 1 or indexer.size > 1:
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas\_libs\hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'volume'
```
Another attempt:
```
import traceback
def ta_pd(df):
    df1 = df.copy()
    indicators = pd.DataFrame().ta.indicators(as_list=True)
    append = True
    df1.columns = map(str.lower, df1.columns)
    df1["open-1"] = df1["open"].shift(1)
    df1["high-1"] = df1["high"].shift(1)
    df1["low-1"] = df1["low"].shift(1)
    df1["close-1"] = df1["close"].shift(1)
    # print(len(indicators), "will be added.")
    for kind in indicators:
        try:
            print(kind)
            df1.ta(kind=kind, append=append)
        except Exception as e:
            print("\nError with indicator:", kind)
            print("")  # print(".", end="")
    indicators2 = ["log_return", "percent_return", "trend_return"]
    for kind in indicators2:
        try:
            print(kind)
            df1.ta(kind=kind, append=append, cumulative=True)
        except Exception as e:
            print("\nError with indicator:", kind)
            print("")  # print(".", end="")
    df1.drop(columns = ['dt', 'open', 'high', 'low', 'close', 'volumn', 'trading_dt', 'stock', 'open-1', 'high-1', 'low-1', 'close-1',
    return df1
```
and the error:
```
ad
Error with indicator: ad
adosc
Error with indicator: adosc
``` | closed | 2020-07-19T16:14:05Z | 2020-07-31T22:32:43Z | https://github.com/twopirllc/pandas-ta/issues/77 | [] | skwskwskwskw | 11 |
Kludex/mangum | fastapi | 165 | AWS Lambda - Container | Trying out building a quick and simple app to use with the HTTP API and AWS Lambda with a container. I can set up the app locally and hit it locally, however I am not sure if I am missing something but I cannot get it working in Lambda with a container.
Super simple main.py file that looks like the following:
```
import json
import boto3
import os
import uvicorn
from mangum import Mangum
from fastapi import FastAPI
from pydantic import BaseModel
app = FastAPI()
@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: str = None):
    return {"item_id": item_id, "q": q}

handler = Mangum(app)
```
With my Dockerfile it looks like the following:
```
# Define function directory
FROM public.ecr.aws/lambda/python:3.8
RUN pip install poetry
RUN mkdir /app
COPY ./app /app
COPY ./poetry.lock ./pyproject.toml ./
RUN poetry install --no-dev
CMD [ "app.main.handler" ]
```
I created a HTTP API and basically just have a single proxy route that points to my Lambda.
```
ANY /{proxy+}
```
I am not sure if the handler part is correct for the container or does it need to be a uvicorn command to run the app.main:app file? The error I was seeing in Cloudwatch looks like this.
```
[ERROR] Runtime.ImportModuleError: Unable to import module 'app.main': No module named 'app'
```
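One plausible cause (an assumption on my part, not confirmed beyond the import error): the AWS Lambda Python base image imports the handler relative to `LAMBDA_TASK_ROOT` (`/var/task`), but the Dockerfile above copies the package into `/app`, so the `app` package is not importable. A sketch of a Dockerfile that keeps `app.main.handler` resolvable:

```dockerfile
FROM public.ecr.aws/lambda/python:3.8

# Copy the package into the directory Lambda actually imports from
COPY ./app ${LAMBDA_TASK_ROOT}/app
COPY ./poetry.lock ./pyproject.toml ${LAMBDA_TASK_ROOT}/

# Install dependencies into the system site-packages (no virtualenv),
# since Lambda does not activate a Poetry environment at runtime
RUN pip install poetry && \
    poetry config virtualenvs.create false && \
    poetry install --no-dev

CMD [ "app.main.handler" ]
```

The `virtualenvs.create false` step matters because Lambda imports from the system site-packages, not from a Poetry-managed virtualenv.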
| closed | 2021-03-03T21:46:23Z | 2021-03-12T12:45:40Z | https://github.com/Kludex/mangum/issues/165 | [] | stephenbawks | 2 |
dask/dask | scikit-learn | 11,149 | a tutorial for distributed text deduplication | Can you provide an example of distributed text deduplication based on dask, such as:
- https://github.com/xorbitsai/xorbits/blob/main/python/xorbits/experimental/dedup.py
- https://github.com/ChenghaoMou/text-dedup/blob/main/text_dedup/minhash_spark.py
- https://github.com/FlagOpen/FlagData/blob/main/flagdata/deduplication/minhash.py | open | 2024-05-27T07:38:30Z | 2024-06-03T07:29:57Z | https://github.com/dask/dask/issues/11149 | [
"documentation"
] | simplew2011 | 5 |
OpenInterpreter/open-interpreter | python | 839 | answering "I'm an AI language model and cannot directly interact with files" | ### Describe the bug
Hello, I would like to ask how to make the interpreter work on my computer. Why does it answer "I'm an AI language model and cannot directly interact with files"? Thank you so much! (i used local model Mistral 7B Instruct v0.2)
### Reproduce
ask questions like "can you help me summary this doc"
### Expected behavior
Able to interact with the system
### Screenshots
_No response_
### Open Interpreter version
0.1.17
### Python version
3.11.7
### Operating System name and version
Windows 11
### Additional context
_No response_ | closed | 2023-12-17T04:59:08Z | 2024-03-19T18:40:36Z | https://github.com/OpenInterpreter/open-interpreter/issues/839 | [] | aziefp | 2 |
allenai/allennlp | data-science | 5,276 | Add label smoothing to CopyNetSeq2Seq | **Is your feature request related to a problem? Please describe.**
I am wondering if it is possible to add label smoothing to [`CopyNetSeq2Seq`](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/generation/models/copynet_seq2seq.py). Label smoothing is implemented for the other allennlp-models under the [`generation`](https://github.com/allenai/allennlp-models/tree/main/allennlp_models/generation/models) module via [`sequence_cross_entropy_with_logits`](https://github.com/allenai/allennlp/blob/cf113d705b9054d329c67cf9bb29cbc3f191015d/allennlp/nn/util.py#L706).
**Describe the solution you'd like**
I think ideally, `CopyNetSeq2Seq` would be updated to use `sequence_cross_entropy_with_logits`, which would enable label smoothing in addition to some other features (like `alpha` and `gamma`).
**Describe alternatives you've considered**
An alternative solution would be to skip `sequence_cross_entropy_with_logits` and just add label smoothing directly to `CopyNetSeq2Seq`, with a new parameter: `label_smoothing`.
**Additional context**
I've looked through the source code of `CopyNetSeq2Seq`, but I can't quite figure out how to implement either solution, or if they are even feasible given the complications that the copy mechanism introduces.
I would be happy to take a crack at this but I think I might need more guidance (if it's feasible and if so where to start).
| closed | 2021-06-22T00:44:59Z | 2021-08-05T17:48:21Z | https://github.com/allenai/allennlp/issues/5276 | [
"Contributions welcome"
] | JohnGiorgi | 5 |
robotframework/robotframework | automation | 5,238 | Document return codes in `--help` | I regularly use the `robot` and `rebot` scripts, notably in CI. When reading their `--help`, they give no clear indication of what their `exit_code`/`return_status` will be.
They both have a `--nostatusrc` :
> Sets the return code to zero regardless are there failures. Error codes are returned normally.
It's not clear to me what it means that "Error codes are returned normally" when "Sets the return code to zero". I think I don't know the nuance between "error code" and "return code".
And if I am not setting this option, what should be the normal error codes ?
I have found [3.1.3 Test results / Return codes](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#return-codes) in the UserGuide, which states the expected results codes for `robot` :
* 0 = all pass
* 1-249 = corresponding number of failing tests
* 250 = more than 250 tests have failed
* 251 = help
* 252 = bad command
* 253 = ctrl+C
* 254 unused ?
* 255 = unexpected internal error
Could a summary of that be included in the Robot help message ? [robot/run.py#L46](https://github.com/robotframework/robotframework/blob/master/src/robot/run.py#L46)
I can open a PR if you are interested.
Also, the same applies to `rebot`. There is *note* in [3.1.3 Test results / Return codes](https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#return-codes) that indicates that it is the same for `rebot`, so could it be added to its help message too ? [robot/rebot.py#L46](https://github.com/robotframework/robotframework/blob/master/src/robot/rebot.py#L46) | closed | 2024-10-15T12:28:14Z | 2024-12-18T13:33:04Z | https://github.com/robotframework/robotframework/issues/5238 | [
"enhancement",
"priority: low",
"beta 1",
"effort: small"
] | Lenormju | 2 |
BeanieODM/beanie | asyncio | 500 | [BUG] Sorting by ID does not work if limit is same as total number of documents | Hi @roman-right, I hope the below issue is just setup on our end, any help appreciated.
**Describe the bug**
Sorting a query by ID does not work if the limit is the same as the total number of documents.
**To Reproduce**
Assuming X documents in Y collection:
```python
results = await Y.find({}, sort="id", limit=X).to_list() # This does NOT sort by ID
results = await Y.find({}, sort="id", limit=X + 1).to_list() # This correctly sorts by ID
```
**Expected behavior**
We expected the resulting list to be sorted by object ID.
**Additional context**
In our case, we have a Users collection with 10 documents:
```python
users = await User.find({}, sort="id", limit=10).to_list()
```

If we set `limit=11`, sorting by ID works correctly.
If we change sort to a different field like `first_name` then the sort works correctly.
Strangely if we set `limit=3`, sorting by ID works correctly.
Thank you! | closed | 2023-03-13T17:15:50Z | 2023-07-08T02:09:21Z | https://github.com/BeanieODM/beanie/issues/500 | [
"Stale"
] | csanders-rga | 4 |
pmaji/crypto-whale-watching-app | plotly | 56 | building issues | am i the only one not able to build this?? downloaded dash and still can't find out how to build this at all? | closed | 2018-02-20T20:33:47Z | 2018-02-22T01:33:23Z | https://github.com/pmaji/crypto-whale-watching-app/issues/56 | [] | chris56a | 6 |
gevent/gevent | asyncio | 2,030 | 24.2.1: `test_import_from_another_thread` fails | * gevent version: 24.2.1 installed from source
* Python version: 3.9.19 installed from operating system package
* Operating System: OpenIndiana Hipster (rolling release, up to date)
### Description:
I'm running tests for gevent 24.2.1 and I found that the `test_import_from_another_thread` test fails:
```
======================================================================
FAIL: test_import_from_another_thread (__main__.ThreadTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/test_threadingwtswui_n.py", line 859, in test_import_from_another_thread
rc, out, err = assert_python_ok("-c", code)
File "/usr/lib/python3.9/test/support/script_helper.py", line 156, in assert_python_ok
return _assert_python(True, *args, **env_vars)
File "/usr/lib/python3.9/test/support/script_helper.py", line 142, in _assert_python
res.fail(cmd_line)
File "/usr/lib/python3.9/test/support/script_helper.py", line 70, in fail
raise AssertionError("Process return code is %d\n"
AssertionError: Process return code is 1
command line: ['/data/builds/oi-userland/components/python/gevent/build/amd64-3.9/.tox/py39/bin/python', '-X', 'faulthandler', '-I', '-c', "\nimport _thread\nimport sys\n\nevent = _thread.allocate_lock()\nevent.acquire()\n\ndef import_threading():\n import threading\n event.release()\n\nif 'threading' in sys.modules:\n raise Exception('threading is already imported')\n\n_thread.start_new_thread(import_threading, ())\n\n# wait until the threading module is imported\nevent.acquire()\nevent.release()\n\nif 'threading' not in sys.modules:\n raise Exception('threading is not imported')\n\n# don't wait until the thread completes\n"]
stdout:
---
---
stderr:
---
Traceback (most recent call last):
File "<string>", line 13, in <module>
Exception: threading is already imported
---
----------------------------------------------------------------------
```
The problem seems to be that on this platform the `threading` module is always available:
```
$ python3.9 -c 'import sys; import pprint; pprint.pprint(sys.modules)' | grep threading
'threading': <module 'threading' from '/usr/lib/python3.9/threading.py'>,
$
```
Honestly, I'm not sure if this is a bug in `gevent` (assuming `threading` is not available while it could be available) or in the Python on this platform (delivering the `threading` module where it should not be delivered).
BTW, the [threading documentation](https://docs.python.org/3.9/library/threading.html) suggests that the `threading` module is always available. | open | 2024-04-09T18:48:24Z | 2024-04-09T18:48:24Z | https://github.com/gevent/gevent/issues/2030 | [] | mtelka | 0 |
mirumee/ariadne | api | 1,039 | Incorrect typing for values on EnumType.__init__() | Ariadne 0.18.0 has updated the `EnumType` class in a way that causes the `values` argument of `__init__()` to become typed, but the typing is incorrect:
```
def __init__(
    self, name: str, values: Union[Dict[str, Any], enum.Enum, enum.IntEnum]
) -> None:
```
should be:
```
def __init__(
    self, name: str, values: Union[Dict[str, Any], Type[enum.Enum], Type[enum.IntEnum]]
) -> None:
``` | closed | 2023-02-21T17:21:07Z | 2023-02-22T10:29:22Z | https://github.com/mirumee/ariadne/issues/1039 | [
"bug",
"help wanted"
] | markedwards | 4 |
dynaconf/dynaconf | django | 382 | [bug] Django extension incompatible with DJDT | **Describe the bug**
Django Debug Toolbar calls `settings.is_overridden()` which results in `'Settings' object has no attribute 'IS_OVERRIDDEN'`
**To Reproduce**
Steps to reproduce the behavior:
1. Start new django project and add [django debug toolbar](https://django-debug-toolbar.readthedocs.io/en/latest/installation.html) version 2.2
2. Add dynaconf to settings.py as per instructions:
```python
# HERE STARTS DYNACONF EXTENSION LOAD (Keep at the very bottom of settings.py)
# Read more at https://dynaconf.readthedocs.io/en/latest/guides/django.html
import dynaconf # noqa
settings = dynaconf.DjangoDynaconf(__name__, environments=True, load_dotenv=True) # noqa
# HERE ENDS DYNACONF EXTENSION LOAD (No more code below this line)
```
3. Start the development server `python manage.py runserver`
**Expected behavior**
The Django development server should start up with no errors.
**Environment (please complete the following information):**
- OS: Linux
- Dynaconf 3.0.0
| closed | 2020-08-03T15:05:58Z | 2020-08-14T17:05:28Z | https://github.com/dynaconf/dynaconf/issues/382 | [
"bug",
"Pending Release"
] | wengole | 2 |
saleor/saleor | graphql | 17,130 | Upgrade urllib3 from 1.x to 2.x | ## Problem
We currently still depend on an old version of urllib3 (v1.26.x), we should upgrade to the latest version (v2.2.x). `requests` supports urllib3 2.x since `requests==2.30.0`[^1]
Issues:
1. It's currently unclear for how long the v1.x major version will remain supported for.
2. `types-requests` only supports `urllib3>=2.0`[^2] which causes us to have outdated typing
## Steps
<!-- Everything from developer perspective -->
```[tasklist]
### Tasks
- [ ] Check migration guide: https://urllib3.readthedocs.io/en/stable/v2-migration-guide.html
- [ ] Make required changes and upgrade urllib3 to latest v2.x
- [ ] Upgrade `types-requests` to the same version as `requests`
```
[^1]: https://github.com/psf/requests/issues/6432#issuecomment-1676067621
[^2]: https://github.com/python/typeshed/blob/c606b4f8ad36d363a7ec5193f5f8f88d7491bdbc/stubs/requests/METADATA.toml#L4
| open | 2024-12-03T09:22:14Z | 2024-12-03T09:23:40Z | https://github.com/saleor/saleor/issues/17130 | [
"technical debt",
"dependencies"
] | NyanKiyoshi | 0 |
polyaxon/traceml | data-visualization | 15 | PyPI is considering 0.0.41 as latest version | Hello,
Thank you for all the work creating this project. It appears PyPI is considering 0.0.41 as the latest/default version instead of 0.0.5, therefore, 'pip install pandas-summary' is installing the old version.
Ryan | closed | 2018-07-28T14:08:23Z | 2018-07-28T14:23:47Z | https://github.com/polyaxon/traceml/issues/15 | [] | rlshuhart | 1 |
jupyter/nbviewer | jupyter | 242 | External links in notebook collections go to nbviewer URLs, don't get re-directed | Example notebook:
http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/Embedding/Index.ipynb
Click [embed_class_long.py](http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/Embedding/embed_class_long.py), it links to http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/Embedding/embed_class_long.py
rather than what the tree view will link you to:
http://nbviewer.ipython.org/github/ipython/ipython/tree/master/examples/Embedding/
While we could fix the notebook to provide the right links, it's clear the re-direct isn't working properly.
| open | 2014-04-10T17:25:38Z | 2019-07-05T21:47:17Z | https://github.com/jupyter/nbviewer/issues/242 | [
"type:Bug"
] | rgbkrk | 1 |
vitalik/django-ninja | rest-api | 861 | [BUG] happens error with trailing slash only POST Method | **Describe the bug**
**Versions (please complete the following information):**
- Python version: [e.g. 3.6] 3.9.13
- Django version: [e.g. 4.0] 4.2.1
- Django-Ninja version: [e.g. 0.16.2] 0.21
- Pydantic version: [e.g. 1.9.0] 1.10.8
Note you can quickly get this by running this line in `./manage.py shell`:
```
import django; import pydantic; import ninja; django.__version__; ninja.__version__; pydantic.__version__
```
```
@daily_router.post("/bulk/")
def create_bulk_daily_bible(request, bulk: BulkDailyBibleInSchema):
```
This raises an error, and I am not sure why.
```
Method Not Allowed: /api/daily/bulk/
[19/Sep/2023 15:40:25] "POST /api/daily/bulk/ HTTP/1.1" 405 18
```
I tested it with curl, but got the same response.
```
curl -X 'POST' \
'http://127.0.0.1:8000/api/daily/bulk/' \
-H 'accept: */*' \
-H 'Content-Type: application/json' \
-d '{
"group_id": 4,
"daily_bible_list": [
{
"month": 1,
"day": 1,
"book": 1,
"from_chapter": 2,
"from_verse": 2,
"to_chapter": 2,
"to_verse": 3
},
{
"month": 1,
"day": 1,
"book": 1,
"from_chapter": 2,
"from_verse": 4,
"to_chapter": 2,
"to_verse": 5
}
]
}'
```
But without the trailing slash, it works well.
```
@daily_router.post("/bulk")
def create_bulk_daily_bible(request, bulk: BulkDailyBibleInSchema):
```
```
[19/Sep/2023 15:41:56] "POST /api/daily/bulk HTTP/1.1" 200 17
[19/Sep/2023 15:42:00] "POST /api/daily/bulk HTTP/1.1" 200 17
```
Do you guys have any ideas?
| closed | 2023-09-19T06:53:27Z | 2023-09-20T04:41:13Z | https://github.com/vitalik/django-ninja/issues/861 | [] | quroom | 6 |
nvbn/thefuck | python | 1,491 | Python 3.12.4: No module named 'imp' | C:\Users\31009>fuck
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\31009\AppData\Local\Programs\Python\Python312\Scripts\fuck.exe\__main__.py", line 4, in <module>
File "C:\Users\31009\AppData\Local\Programs\Python\Python312\Lib\site-packages\thefuck\entrypoints\not_configured.py", line 13, in <module>
from .. import logs, const # noqa: E402
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\31009\AppData\Local\Programs\Python\Python312\Lib\site-packages\thefuck\logs.py", line 8, in <module>
from .conf import settings
File "C:\Users\31009\AppData\Local\Programs\Python\Python312\Lib\site-packages\thefuck\conf.py", line 1, in <module>
from imp import load_source
ModuleNotFoundError: No module named 'imp' | open | 2025-01-09T06:18:42Z | 2025-03-06T18:29:50Z | https://github.com/nvbn/thefuck/issues/1491 | [] | Zero-Hub | 8 |
man-group/arctic | pandas | 174 | No module named Cython.Build | I am using Windows and I use PuTTY to connect to a Linux host.
When I enter: git+https://github.com/manahl/arctic.git
my result is as follows:
```
Cloning https://github.com/manahl/arctic.git to /tmp/pip-B5xD0j-build
Running setup.py (path:/tmp/pip-B5xD0j-build/setup.py) egg_info for package from git+https://github.com/manahl/arctic.git
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip-B5xD0j-build/setup.py", line 24, in <module>
from Cython.Build import cythonize
ImportError: No module named Cython.Build
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip-B5xD0j-build/setup.py", line 24, in <module>
from Cython.Build import cythonize
ImportError: No module named Cython.Build
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-B5xD0j-build
Storing debug log for failure in /home/mickey/.pip/pip.log
```
How can I fix it?
Thanks for any help in advance.
| closed | 2016-07-18T06:28:53Z | 2017-07-12T10:46:48Z | https://github.com/man-group/arctic/issues/174 | [] | mickeylau | 4 |
PaddlePaddle/PaddleHub | nlp | 1,767 | windows 10: CPU usage is very high when doing inference on the GPU | I first ran tests on my laptop, with the following configuration:
1. A single 2060 GPU
2. CUDA 10.2
3. Package versions from `conda list`:
paddlehub 2.1.0
paddlenlp 2.0.8
paddlepaddle-gpu 2.1.2
paddleslim 1.1.1
paddlex 1.3.11
Running object-detection inference with a model trained in PaddleX 1.2.8, after starting the service with `hub serving start -c .\server-cfg.json`, inference takes roughly 40-50 ms, CPU usage is 6%, and the Task Manager panel shows CUDA compute usage at 7%.
I considered this performance sufficient for my needs,
so I bought a Dell server with the following configuration:
1. Four 3060 GPUs
2. CUDA 11.2
3. Package versions from `conda list`:
paddledet 2.3.0
paddlehub 2.2.0
paddlenlp 2.2.3
paddlepaddle-gpu 2.2.1.post111
paddleslim 2.2.1
paddlex 2.1.0
Running object-detection inference with a model trained in PaddleX 2.1.0, after starting the service with `hub serving start -c .\server-cfg.json`, inference takes more than 140 ms, CPU usage is around 30%, and the Task Manager panel shows CUDA compute usage at 0%, i.e. CUDA is doing no computation, although GPU memory usage has indeed grown. As a result, with 4 detection processes the CPU is saturated while CUDA stays idle the whole time.
Important:
I therefore suspect that inference is not using the GPU at all and everything is being computed on the CPU. Could the experts tell me what is going on here?
Is my environment installed incorrectly, or is there some other cause? | open | 2022-01-12T07:49:45Z | 2022-02-04T15:49:01Z | https://github.com/PaddlePaddle/PaddleHub/issues/1767 | [] | mbsoftlf | 4 |
hankcs/HanLP | nlp | 1,865 | When I runing the example occurred error |
**Describe the bug**
Failed to load https://file.hankcs.com/hanlp/dep/ctb9_dep_electra_small_20220216_100306.zip
If the problem still persists, please submit an issue to https://github.com/hankcs/HanLP/issues
When reporting an issue, make sure to paste the FULL ERROR LOG below.
```python
import hanlp
HanLP = hanlp.pipeline() \
    .append(hanlp.utils.rules.split_sentence, output_key='sentences') \
    .append(hanlp.load('FINE_ELECTRA_SMALL_ZH'), output_key='tok') \
    .append(hanlp.load('CTB9_POS_ELECTRA_SMALL'), output_key='pos') \
    .append(hanlp.load('MSRA_NER_ELECTRA_SMALL_ZH'), output_key='ner', input_key='tok') \
    .append(hanlp.load('CTB9_DEP_ELECTRA_SMALL', conll=0), output_key='dep', input_key='tok') \
    .append(hanlp.load('CTB9_CON_ELECTRA_SMALL'), output_key='con', input_key='tok')
text = HanLP('2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。阿婆主来到北京立方庭参观自然语义科技公司。')
print(text)
```
**Describe the current behavior**
When I run the example, the error above occurs.
**Expected behavior**
The terminal should output the content of the variable 'text'.
**System information**
-- OS: Windows-10-10.0.22621-SP0
-- Python: 3.10.4
-- PyTorch: 2.1.2+cpu
-- HanLP: 2.1.0-beta.54
**Other info / logs**
================================ERROR LOG BEGINS================================
OS: Windows-10-10.0.22621-SP0
Python: 3.10.4
PyTorch: 2.1.2+cpu
HanLP: 2.1.0-beta.54
Traceback (most recent call last):
File "C:\TestProject\python\hanlptest\main.py", line 17, in <module>
.append(hanlp.load('CTB9_DEP_ELECTRA_SMALL', conll=0), output_key='dep', input_key='tok')\
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\hanlp\__init__.py", line 43, in load
return load_from_meta_file(save_dir, 'meta.json', verbose=verbose, **kwargs)
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\hanlp\utils\component_util.py", line 186, in load_from_meta_file
raise e from None
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\hanlp\utils\component_util.py", line 106, in load_from_meta_file
obj.load(save_dir, verbose=verbose, **kwargs)
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\hanlp\common\torch_component.py", line 173, in load
self.load_config(save_dir, **kwargs)
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\hanlp\common\torch_component.py", line 126, in load_config
self.on_config_ready(**self.config, save_dir=save_dir)
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\hanlp\components\parsers\biaffine\biaffine_dep.py", line 562, in on_config_ready
self.build_transformer_tokenizer() # We have to build tokenizer before building the dataloader and model
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\hanlp\components\parsers\biaffine\biaffine_dep.py", line 285, in build_transformer_tokenizer
transformer_tokenizer: PreTrainedTokenizer = AutoTokenizer.from_pretrained(transformer, use_fast=True)
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 752, in from_pretrained
config = AutoConfig.from_pretrained(
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1082, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 644, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 699, in _get_config_dict
resolved_config_file = cached_file(
File "C:\Users\jsyyb\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\utils\hub.py", line 429, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like hfl/chinese-electra-180g-small-discriminator is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
=================================ERROR LOG ENDS=================================
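For reference, a sketch of the offline-mode workaround that the error message points to. This assumes the model files already exist in the local Hugging Face cache (or in a local directory you can point to); the local path shown in the comment is illustrative only:

```python
import os

# Force transformers (and datasets) to use only locally cached files,
# making no network calls to huggingface.co.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

# These variables are typically read at import time, so set them *before*
# importing transformers. Alternatively, download the model once on a
# machine with access and load it from a local directory, e.g.:
#   from transformers import AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("./chinese-electra-180g-small-discriminator")
```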
**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
* [x] I've completed this form and searched the web for solutions.
<!-- ⬆️ You must check the box above, otherwise your issue will be automatically deleted by a bot! --> | closed | 2023-12-21T02:22:02Z | 2023-12-22T02:22:28Z | https://github.com/hankcs/HanLP/issues/1865 | [
"improvement"
] | lianggx | 1 |
openapi-generators/openapi-python-client | rest-api | 842 | readonly model properties not defaulting to UNSET | **Describe the bug**
First of all, I am not sure whether this is a bug or working as intended; I'm just posting in case it is a bug.
The model of the API I am consuming marks a few fields as readonly. In the generated `from_dict` function, most of the properties are retrieved with `d.pop("key", UNSET)`, but the few readonly properties are not given a default.
As a result, the code throws a KeyError.
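To illustrate the difference, here is a minimal sketch of the pop-with-default pattern, using a stand-in sentinel rather than the client's real generated code (the field names are made up):

```python
# Hypothetical payload: the server omitted the readonly field "id".
d = {"name": "widget"}

class _Unset:
    """Stand-in for the generated client's UNSET sentinel (illustrative only)."""
    def __repr__(self):
        return "UNSET"

UNSET = _Unset()

# Non-readonly properties are popped with a default, so a missing key is fine:
name = d.pop("name", UNSET)  # "widget"
age = d.pop("age", UNSET)    # UNSET, no error raised

# Readonly properties are popped without a default, so a missing key raises:
try:
    id_ = d.pop("id")
except KeyError:
    id_ = None  # this is the KeyError described above
```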
**Additional context**
I'm kind of assuming this is intended behavior. If so, what is the appropriate way to handle this scenario?
| closed | 2023-08-25T16:07:07Z | 2023-09-05T13:17:05Z | https://github.com/openapi-generators/openapi-python-client/issues/842 | [] | chrisrrowland | 4 |
keras-rl/keras-rl | tensorflow | 187 | Swimmer doesn't work with DDPG Mujoco | Hi,
I'm trying to train the MuJoCo 'Swimmer-v2' environment with examples/ddpg_mujoco.py, but it doesn't seem to learn. Does anyone know why this might be happening?
Thanks!
| closed | 2018-04-01T19:30:53Z | 2019-04-12T14:24:49Z | https://github.com/keras-rl/keras-rl/issues/187 | [
"bug",
"wontfix"
] | rohit497 | 3 |
donnemartin/data-science-ipython-notebooks | pandas | 39 | Add Keras notebooks | closed | 2016-12-10T13:13:58Z | 2017-03-19T01:12:45Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/39 | [
"feature-request"
] | donnemartin | 2 | |
jmcnamara/XlsxWriter | pandas | 477 | Issue with MINIFS and MAXIFS functions | Python version: 2.7
XlsxWriter version: 1.0.2
Excel version: 1705
Currently, using MINIFS or MAXIFS with `write_formula` produces a "#NAME?" error in the written cell. Evaluating the error shows that the MINIFS/MAXIFS function is the culprit. However, if you click into the cell's formula bar and just hit return, the function works perfectly (but you'd have to do this for every cell). Example code:
```python
import xlsxwriter

wb = xlsxwriter.Workbook(r"testMINIFS.xlsx")
ws = wb.add_worksheet()

for row in range(0, 5):
    ws.write(row, 0, "foo" if row % 2 == 0 else "bar")
    ws.write(row, 1, row)
    ws.write_formula(row, 2, "=MINIFS(B:B, A:A, A{0})".format(row + 1))

wb.close()
```
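For what it's worth, XlsxWriter's formula documentation says that functions added to Excel after 2007 ("future functions", which include MINIFS/MAXIFS) have to be written with an `_xlfn.` prefix so Excel can resolve them when the file is first opened. A sketch of that workaround; the helper function here is my own invention, not part of XlsxWriter:

```python
import re

# Excel "future functions" (added after Excel 2007) that XlsxWriter expects
# to be written with an explicit _xlfn. prefix in the formula string.
FUTURE_FUNCTIONS = ("MINIFS", "MAXIFS")

def prefix_future_functions(formula: str) -> str:
    """Rewrite e.g. '=MINIFS(...)' as '=_xlfn.MINIFS(...)', leaving
    already-prefixed occurrences untouched."""
    for name in FUTURE_FUNCTIONS:
        formula = re.sub(r"(?<!_xlfn\.)\b" + name + r"\(",
                         "_xlfn." + name + "(", formula)
    return formula

# Usage would then be, for example:
#   ws.write_formula(row, 2,
#       prefix_future_functions("=MINIFS(B:B, A:A, A{0})".format(row + 1)))
```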
If I use win32com, I'm able to use the functions without issue. However, I really don't want to have to re-write my entire program.
Using an array formula technically works with XlsxWriter - {MIN(IF(A:A=A1,B:B))} - but is incredibly slow; this is being applied to ~15k rows, and the spreadsheet becomes unresponsive for several minutes when first opening it as well as whenever any value is changed (whereas MINIFS only takes a few seconds). | closed | 2017-10-24T17:23:59Z | 2018-03-16T21:26:48Z | https://github.com/jmcnamara/XlsxWriter/issues/477 | [
"question"
] | JHsai | 2 |