repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
|---|---|---|---|---|---|---|---|---|---|---|---|
skypilot-org/skypilot | data-science | 4,981 | [GCP] Provisioning fails with TPU API error despite TPU not even used (and not wanted) | Skypilot 0.8
As you can see in the log, it successfully gets an L4 instance but still decides to contact the TPU API for reasons I don't understand - then it fails with an error, since TPU is deactivated and we definitely don't want to use it. The log is slightly cleaned:
> I 03-18 13:07:46 optimizer.py:752] Estimated cost: $0.8 / hour
> I 03-18 13:07:46 optimizer.py:752]
> I 03-18 13:07:46 optimizer.py:887] Considered resources (1 node):
> I 03-18 13:07:46 optimizer.py:955] -----------------------------------------------------------------------------------------------
> I 03-18 13:07:46 optimizer.py:955] CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
> I 03-18 13:07:46 optimizer.py:955] -----------------------------------------------------------------------------------------------
> I 03-18 13:07:46 optimizer.py:955] GCP g2-standard-4 4 16 L4:1 europe-west1-b 0.78 ✔
> I 03-18 13:07:46 optimizer.py:955] -----------------------------------------------------------------------------------------------
> I 03-18 13:07:47 authentication.py:201] OS Login is enabled for GCP project PROJECT_ID. Running additional authentication steps.
> I 03-18 13:07:52 cloud_vm_ray_backend.py:1551] ⚙︎ Launching on GCP europe-west1 (europe-west1-b).
> I 03-18 13:08:38 provisioner.py:450] └── Instance is up.
> I 03-18 13:09:12 provisioner.py:624] ✓ Cluster launched: sky-service-XXXX-3. View logs at: ~/sky_logs/sky-2025-03-18-13-07-46-XXXXXX/provision.log
>
> I 03-18 13:09:14 replica_managers.py:121] Failed to launch the sky serve replica cluster with error: googleapiclient.errors.HttpError: <HttpError 403 when requesting https://tpu.googleapis.com/v2alpha1/projects/PROJECT_ID/locations/europe-west1-b/nodes?alt=json returned "Cloud TPU API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/tpu.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.". Details: "[{'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'SERVICE_DISABLED', 'domain': 'googleapis.com', 'metadata': {'service': 'tpu.googleapis.com', 'containerInfo': 'PROJECT_ID', 'activationUrl': 'https://console.developers.google.com/apis/api/tpu.googleapis.com/overview?project=PROJECT_ID', 'serviceTitle': 'Cloud TPU API', 'consumer': 'projects/PROJECT_ID'}}, {'@type': 'type.googleapis.com/google.rpc.LocalizedMessage', 'locale': 'en-US', 'message': 'Cloud TPU API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/tpu.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.'}, {'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Google developers console API activation', 'url': 'https://console.developers.google.com/apis/api/tpu.googleapis.com/overview?project=PROJECT_ID'}]}]">)
> I 03-18 13:09:14 replica_managers.py:124] Traceback: Traceback (most recent call last):
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/serve/replica_managers.py", line 98, in launch_cluster
> I 03-18 13:09:14 replica_managers.py:124] sky.launch(task,
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/utils/common_utils.py", line 385, in _record
> I 03-18 13:09:14 replica_managers.py:124] return f(*args, **kwargs)
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/utils/common_utils.py", line 385, in _record
> I 03-18 13:09:14 replica_managers.py:124] return f(*args, **kwargs)
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/execution.py", line 529, in launch
> I 03-18 13:09:14 replica_managers.py:124] return _execute(
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/execution.py", line 302, in _execute
> I 03-18 13:09:14 replica_managers.py:124] handle = backend.provision(
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/utils/common_utils.py", line 385, in _record
> I 03-18 13:09:14 replica_managers.py:124] return f(*args, **kwargs)
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/utils/common_utils.py", line 365, in _record
> I 03-18 13:09:14 replica_managers.py:124] return f(*args, **kwargs)
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/backends/backend.py", line 84, in provision
> I 03-18 13:09:14 replica_managers.py:124] return self._provision(task, to_provision, dryrun, stream_logs,
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/backends/cloud_vm_ray_backend.py", line 2967, in _provision
> I 03-18 13:09:14 replica_managers.py:124] self._update_after_cluster_provisioned(
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/backends/cloud_vm_ray_backend.py", line 3114, in _update_after_cluster_provisioned
> I 03-18 13:09:14 replica_managers.py:124] self._open_ports(handle)
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/backends/cloud_vm_ray_backend.py", line 3052, in _open_ports
> I 03-18 13:09:14 replica_managers.py:124] provision_lib.open_ports(repr(cloud), handle.cluster_name_on_cloud,
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/provision/__init__.py", line 53, in _wrapper
> I 03-18 13:09:14 replica_managers.py:124] return impl(*args, **kwargs)
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/provision/gcp/instance.py", line 591, in open_ports
> I 03-18 13:09:14 replica_managers.py:124] handler_to_instances = _filter_instances(handlers, project_id, zone,
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/provision/gcp/instance.py", line 38, in _filter_instances
> I 03-18 13:09:14 replica_managers.py:124] instance_dict = instance_handler.filter(
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/provision/gcp/instance_utils.py", line 1272, in filter
> I 03-18 13:09:14 replica_managers.py:124] response = (cls.load_resource().projects().locations().nodes().list(
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
> I 03-18 13:09:14 replica_managers.py:124] return wrapped(*args, **kwargs)
> I 03-18 13:09:14 replica_managers.py:124] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/googleapiclient/http.py", line 938, in execute
> I 03-18 13:09:14 replica_managers.py:124] raise HttpError(resp, content, uri=self.uri)
> I 03-18 13:09:14 replica_managers.py:124] googleapiclient.errors.HttpError: <HttpError 403 when requesting https://tpu.googleapis.com/v2alpha1/projects/PROJECT_ID/locations/europe-west1-b/nodes?alt=json returned "Cloud TPU API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/tpu.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.". Details: "[{'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'SERVICE_DISABLED', 'domain': 'googleapis.com', 'metadata': {'service': 'tpu.googleapis.com', 'containerInfo': 'PROJECT_ID', 'activationUrl': 'https://console.developers.google.com/apis/api/tpu.googleapis.com/overview?project=PROJECT_ID', 'serviceTitle': 'Cloud TPU API', 'consumer': 'projects/PROJECT_ID'}}, {'@type': 'type.googleapis.com/google.rpc.LocalizedMessage', 'locale': 'en-US', 'message': 'Cloud TPU API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/tpu.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.'}, {'@type': 'type.googleapis.com/google.rpc.Help', 'links': [{'description': 'Google developers console API activation', 'url': 'https://console.developers.google.com/apis/api/tpu.googleapis.com/overview?project=PROJECT_ID'}]}]">
> I 03-18 13:09:14 replica_managers.py:124]
> Firewall rule sky-ports-sky-service-XXXX-3-XXXXX not found. Skip cleanup.
> E 03-18 13:09:47 ux_utils.py:117] Failed to run launch_cluster. Details: RuntimeError: Failed to launch the sky serve replica cluster sky-service-XXXX-3 after 3 retries.
> E 03-18 13:09:47 ux_utils.py:120] Traceback:
> E 03-18 13:09:47 ux_utils.py:120] Traceback (most recent call last):
> E 03-18 13:09:47 ux_utils.py:120] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/utils/ux_utils.py", line 115, in run
> E 03-18 13:09:47 ux_utils.py:120] self.func(*args, **kwargs)
> E 03-18 13:09:47 ux_utils.py:120] File "/home/USER_PATH/skypilot-runtime/lib/python3.10/site-packages/sky/serve/replica_managers.py", line 130, in launch_cluster
> E 03-18 13:09:47 ux_utils.py:120] raise RuntimeError('Failed to launch the sky serve replica cluster '
> E 03-18 13:09:47 ux_utils.py:120] RuntimeError: Failed to launch the sky serve replica cluster sky-service-XXXX-3 after 3 retries.
> E 03-18 13:09:47 ux_utils.py:120]
>
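Not part of the original report: a stopgap, assuming it is acceptable to enable the unused API, is to follow the error message's own suggestion from the CLI:

```shell
# Stopgap sketch (assumption: enabling the unused Cloud TPU API is acceptable).
# Substitute your real project ID for PROJECT_ID.
gcloud services enable tpu.googleapis.com --project PROJECT_ID
```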
| closed | 2025-03-18T15:19:33Z | 2025-03-19T18:31:50Z | https://github.com/skypilot-org/skypilot/issues/4981 | [] | shiosai | 3 |
huggingface/datasets | pytorch | 7,267 | Source installation fails on Macintosh with python 3.10 | ### Describe the bug
Hi,
Decord is a dev dependency that has not been maintained for a couple of years.
It does not have an ARM package available, rendering it uninstallable on non-Intel-based Macs.
The suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord), which doesn't have this problem.
Happy to raise a PR.
### Steps to reproduce the bug
Source installation as mentioned in CONTRIBUTING.md
### Expected behavior
Installation should succeed, without decord failing to install.
### Environment info
python=3.10, M3 Mac | open | 2024-10-31T10:18:45Z | 2024-11-04T22:18:06Z | https://github.com/huggingface/datasets/issues/7267 | [] | mayankagarwals | 1 |
twtrubiks/django-rest-framework-tutorial | rest-api | 2 | Question about def get_days_since_created(self, obj) | Sorry, I'm a Python beginner, and there's a concept here I don't quite understand:
class MusicSerializer(serializers.ModelSerializer):
    days_since_created = serializers.SerializerMethodField()

    def get_days_since_created(self, obj):
        return (now() - obj.created).days
My understanding is that class MusicSerializer inherits from serializers.ModelSerializer, and then uses its method get_days_since_created to return a value.
The obj object here refers to the music instance.
What I don't understand:
This def takes a parameter obj, but I don't see this method being called anywhere else, so how does it actually run?
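For context, here is a simplified sketch (not the exact DRF source) of how `SerializerMethodField` finds and calls `get_<field_name>` during serialization:

```python
class SerializerMethodField:
    """Simplified sketch of DRF's SerializerMethodField (not the real source)."""

    def __init__(self, method_name=None):
        self.method_name = method_name

    def bind(self, field_name, parent):
        # DRF binds each declared field to its serializer; the default
        # method name is "get_<field_name>".
        if self.method_name is None:
            self.method_name = f"get_{field_name}"
        self.parent = parent

    def to_representation(self, obj):
        # During serialization the framework calls this for every field,
        # which is where get_days_since_created(obj) actually gets invoked.
        return getattr(self.parent, self.method_name)(obj)
```

So the method is never called by your own code; the serializer machinery calls it once per object being serialized.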
| open | 2018-04-27T12:56:03Z | 2018-05-08T02:53:18Z | https://github.com/twtrubiks/django-rest-framework-tutorial/issues/2 | [] | ekils | 1 |
TencentARC/GFPGAN | deep-learning | 374 | TencentARC demo not working | The TencentARC demo is not working; please fix the TencentARC site. | open | 2023-05-08T02:45:38Z | 2023-05-08T02:45:38Z | https://github.com/TencentARC/GFPGAN/issues/374 | [] | jgfuj | 0
nvbn/thefuck | python | 1,071 | alias fuck="fuck -y" causes error when sourcing .zshrc |
* The output of `thefuck --version`
`The Fuck 3.30 using Python 3.8.2 and ZSH 5.8`
* My system:
` macOS Catalina 10.15.4`
* How to reproduce the bug:
Update the `.zshrc` file. When I try to do a `source ~/.zshrc` I get the following error:
```
(eval):2: defining function based on alias `fuck'
(eval):2: parse error near `()'
```
The error is caused by `eval "$(thefuck --alias)"` line in the file. Note: When I for example open a new tab in the terminal after updating the config zsh initializes normally without errors.
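One possible workaround (a sketch, assuming the pre-existing alias is what breaks the function definition): remove the alias before the `eval`, then recreate it afterwards:

```shell
# ~/.zshrc (sketch): ensure `fuck` is not an alias when `thefuck --alias`
# defines its function, then wrap the function with the -y alias afterwards.
unalias fuck 2>/dev/null
eval "$(thefuck --alias)"
alias fuck='fuck -y'
```

With this ordering, re-sourcing `~/.zshrc` should no longer hit the `defining function based on alias` parse error, since zsh only expands the alias when the command is typed, not while the function is being defined.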
* The output of The Fuck with `THEFUCK_DEBUG=true`
Same as ☝️
| closed | 2020-03-27T16:36:12Z | 2020-03-29T14:05:56Z | https://github.com/nvbn/thefuck/issues/1071 | [] | orthodoX | 6 |
piskvorky/gensim | data-science | 3,081 | Memory leaks when using doc_topics in LdaSeqModel |
#### Problem description
I trained a large LdaSeqModel with 53,000 documents, 30 topics, and 18 timeslices.
I saved the model to disk because it ran for 7 days.
When extracting topic probabilities for 53,000 documents, memory usage rises above 120GB.
However, only extracting probabilities for 1,000 documents works flawlessly.
#### Steps/code/corpus to reproduce
1. Train LdaSeqModel
2. Save LdaSeqModel
2. Load LdaSeqModel from disk
4. Extract document-topic probabilities with `doc_topics`
ldaseq.corpus_len = 53,101 in the below MWE.
```python
import pandas as pd
from gensim.models import LdaSeqModel
ldaseq = LdaSeqModel.load("/ldaseq_model")
prob = (ldaseq.doc_topics(x) for x in range(ldaseq.corpus_len))
df = pd.DataFrame(prob, columns=[f"topic_{i}" for i in range(30)])
```
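A workaround sketch (my own, not verified to fix the underlying leak): materialize the probabilities in fixed-size chunks instead of one giant generator expression, so peak intermediate memory stays bounded. `get_probs` stands in for `ldaseq.doc_topics`:

```python
import numpy as np

def doc_topic_matrix(get_probs, n_docs, n_topics, chunk=1000):
    # Fill a preallocated array chunk by chunk; each chunk's row lists are
    # freed before the next chunk is built.
    out = np.empty((n_docs, n_topics))
    for start in range(0, n_docs, chunk):
        end = min(start + chunk, n_docs)
        out[start:end] = [get_probs(i) for i in range(start, end)]
    return out
```

`pd.DataFrame(doc_topic_matrix(ldaseq.doc_topics, ldaseq.corpus_len, 30))` then gives the same frame as above.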
#### Versions
macOS-10.16-x86_64-i386-64bit
Python 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:12:38)
[Clang 11.0.1 ]
Bits 64
NumPy 1.20.1
SciPy 1.6.1
gensim 3.8.3
FAST_VERSION 1 | open | 2021-03-18T15:29:10Z | 2021-03-18T17:46:06Z | https://github.com/piskvorky/gensim/issues/3081 | [] | nikchha | 0 |
GibbsConsulting/django-plotly-dash | plotly | 81 | Install from requirements.txt | This is an awesome project and I appreciate your work!
When installing with `pip install -r requirements.txt`, the install will fail because setup.py for django-plotly-dash has the line `import django_plotly_dash as dpd`, which requires dash, django, etc. to already be installed.
See point 6. https://packaging.python.org/guides/single-sourcing-package-version/
> Although this technique is common, beware that it will fail if sample/\_\_init\_\_.py imports packages from install_requires dependencies, which will very likely not be installed yet when setup.py is run.
I was able to get it working by removing django-plotly-dash from the requirements.txt file and installing it separately after everything else. I don't know if you like any other methods proposed for version number, but I thought you would like to know about the issue.
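For illustration, a common fix (a sketch following point 3 of that guide; the helper name is mine) is to parse the version string with a regex instead of importing the package:

```python
import re
from pathlib import Path

def get_version(init_path="django_plotly_dash/__init__.py"):
    # Read __version__ straight from the file so setup.py never imports
    # the package (and therefore never needs dash/django pre-installed).
    text = Path(init_path).read_text()
    match = re.search(r"^__version__\s*=\s*['\"]([^'\"]+)['\"]", text, re.MULTILINE)
    if match is None:
        raise RuntimeError(f"Unable to find __version__ in {init_path}")
    return match.group(1)
```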
Thank you again for your work!
| closed | 2018-12-06T16:40:21Z | 2018-12-08T21:24:35Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/81 | [
"bug",
"good first issue"
] | VenturaFranklin | 2 |
TheAlgorithms/Python | python | 12,046 | add quicksort under divide and conquer | ### Feature description
I would like to add a quicksort algorithm under the divide-and-conquer section. | closed | 2024-10-14T00:42:53Z | 2024-10-16T02:52:02Z | https://github.com/TheAlgorithms/Python/issues/12046 | [
"enhancement"
] | OmMahajan29 | 1 |
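A minimal sketch of the quicksort addition requested above (out-of-place version for clarity; the repo may prefer an in-place partition):

```python
def quicksort(items):
    """Divide and conquer: pick a pivot, split into smaller/equal/larger,
    sort the two halves recursively, and stitch the results together."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)
```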
noirbizarre/flask-restplus | flask | 626 | Enforce the order of functions | How do i enforce the order of functions under a certain method? Even if use Order Preservation as described in the link below the order of functions seems random.
https://flask-restplus.readthedocs.io/en/stable/quickstart.html?highlight=order
Thanks!
Isaac | open | 2019-04-12T15:49:50Z | 2019-04-12T15:49:50Z | https://github.com/noirbizarre/flask-restplus/issues/626 | [] | iwainstein | 0 |
automl/auto-sklearn | scikit-learn | 952 | Pipeline Export | Is there a way to export the best pipeline into python code and save to a file? | closed | 2020-09-17T19:56:03Z | 2021-11-17T10:51:41Z | https://github.com/automl/auto-sklearn/issues/952 | [
"question"
] | amybachir | 6 |
huggingface/datasets | deep-learning | 7,072 | nm | closed | 2024-07-25T17:03:24Z | 2024-07-25T20:36:11Z | https://github.com/huggingface/datasets/issues/7072 | [] | brettdavies | 0 | |
BeanieODM/beanie | pydantic | 599 | [BUG] Updating documents with a frozen BaseModel as field raises TypeError | **Describe the bug**
Relating a document X to document Y using `Link[]`, when document Y has a `pydantic.BaseModel` field with `class Config: frozen = True`, triggers the error `TypeError: <field> is immutable` in Pydantic.
**To Reproduce**
**edit**: please check my comment below; there is an easier way to reproduce it.
```python
from pydantic import BaseModel
from beanie import Document, Link, WriteRules, init_beanie


class PersonDetails(BaseModel):
    ssn: int
    dob: str

    class Config:
        frozen = True


class Person(Document):
    first_name: str
    last_name: str
    details: PersonDetails


class House(Document):
    name: str
    people: list[Link[Person]]


# `db` is assumed to be an existing Motor database handle
await init_beanie(database=db, document_models=[House, Person])

person = Person(first_name="john", last_name="doe", details=PersonDetails(ssn=123, dob="01/01/1970"))
house = House(name="my house", people=[person])
await house.insert(link_rule=WriteRules.WRITE)
print(f"Created {house=}")
```
Outputs:
```
TypeError: "PersonDetails" is immutable and does not support item assignment
```
When I first create the `Person` document and then try to link this to the `House`, it fails on the linking action;
```python
person = Person(first_name="john", last_name="doe", details=PersonDetails(ssn=123, dob="01/01/1970"))
await person.insert()
print(f"Created {person=}")
house = House(name="my house", people=[person])
await house.insert(link_rule=WriteRules.WRITE)
print(f"Created {house=}")
```
It successfully performs `person.insert()` and then raises the error at the `house.insert()`:
```
Created person=Person(id=ObjectId('649450d942a9b7dbc587940b'), revision_id=None, first_name='john', last_name='doe', details=PersonDetails(ssn=123, dob='01/01/1970'))
Traceback (most recent call last):
... (ommitted)
File "/usr/local/lib/python3.11/site-packages/beanie/odm/documents.py", line 607, in update
merge_models(self, result)
File "/usr/local/lib/python3.11/site-packages/beanie/odm/utils/parsing.py", line 24, in merge_models
merge_models(left_value, right_value)
File "/usr/local/lib/python3.11/site-packages/beanie/odm/utils/parsing.py", line 35, in merge_models
left.__setattr__(k, right_value)
File "pydantic/main.py", line 359, in pydantic.main.BaseModel.__setattr__
TypeError: "PersonDetails" is immutable and does not support item assignment
```
Ways to not get this error:
* Rewrite to 2-step approach and leave out the `link_rule=WriteRules.WRITE` (viable workaround, but quite verbose)
* Remove `frozen = True` (not a viable workaround in my case)
**Expected behavior**
Same as when `Person` is not a Document but just a BaseModel:
```python
from pydantic import BaseModel
from beanie import Document, WriteRules, init_beanie


class PersonDetails(BaseModel):
    ssn: int
    dob: str

    class Config:
        frozen = True


class Person(BaseModel):
    first_name: str
    last_name: str
    details: PersonDetails


class House(Document):
    name: str
    people: list[Person]


# `db` is assumed to be an existing Motor database handle
await init_beanie(database=db, document_models=[House])

person = Person(first_name="john", last_name="doe", details=PersonDetails(ssn=123, dob="01/01/1970"))
house = House(name="my house", people=[person])
await house.insert(link_rule=WriteRules.WRITE)
print(f"Created {house=}")
```
Which inserts without problems:
```
Created house=House(id=ObjectId('64944f0e5d3bc3eb3a5b495e'), revision_id=None, name='my house', people=[Person(first_name='john', last_name='doe', details=PersonDetails(ssn=123, dob='01/01/1970'))])
```
**Additional context**
n/a | closed | 2023-06-22T13:52:41Z | 2023-08-24T14:43:31Z | https://github.com/BeanieODM/beanie/issues/599 | [] | Mark90 | 6 |
pytorch/pytorch | deep-learning | 149,495 | DISABLED AotInductorTest.FreeInactiveConstantBufferCuda (build.bin.test_aoti_inference) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=AotInductorTest.FreeInactiveConstantBufferCuda&suite=build.bin.test_aoti_inference&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39012167561).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `AotInductorTest.FreeInactiveConstantBufferCuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Expected equality of these values:
initMemory - DATASIZE
Which is: 22508863488
updateMemory2
Which is: 22508797952
/var/lib/jenkins/workspace/test/cpp/aoti_inference/test.cpp:383: C++ failure
```
</details>
Test file path: `` or `test/run_test`
Error: Error retrieving : 400, test/run_test: 404
cc @clee2000 @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1 | open | 2025-03-19T09:43:10Z | 2025-03-21T09:41:37Z | https://github.com/pytorch/pytorch/issues/149495 | [
"module: flaky-tests",
"skipped",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | pytorch-bot[bot] | 11 |
KaiyangZhou/deep-person-reid | computer-vision | 259 | Can't training with GPU | In the version of pytorch 1.3.1, OsNet can't train with GPU, report CUDA out of memory, but the gpu is not occupied。Bug? | closed | 2019-11-20T08:09:24Z | 2020-05-18T10:09:51Z | https://github.com/KaiyangZhou/deep-person-reid/issues/259 | [] | lovekittynine | 9 |
roboflow/supervision | deep-learning | 1,694 | Crash when filtering empty detections: xyxy shape (0, 0, 4). | Reproduction code:
```python
import supervision as sv
import numpy as np
CLASSES = [0, 1, 2]
prediction = sv.Detections.empty()
prediction = prediction[np.isin(prediction["class_name"], CLASSES)]
```
Error:
```
Traceback (most recent call last):
File "/Users/linasko/.settler_workspace/pr/supervision-fresh/run_detections.py", line 7, in <module>
prediction = prediction[np.isin(prediction["class_name"], CLASSES)]
File "/Users/linasko/.settler_workspace/pr/supervision-fresh/supervision/detection/core.py", line 1206, in __getitem__
return Detections(
File "<string>", line 10, in __init__
File "/Users/linasko/.settler_workspace/pr/supervision-fresh/supervision/detection/core.py", line 144, in __post_init__
validate_detections_fields(
File "/Users/linasko/.settler_workspace/pr/supervision-fresh/supervision/validators/__init__.py", line 120, in validate_detections_fields
validate_xyxy(xyxy)
File "/Users/linasko/.settler_workspace/pr/supervision-fresh/supervision/validators/__init__.py", line 11, in validate_xyxy
raise ValueError(
ValueError: xyxy must be a 2D np.ndarray with shape (_, 4), but got shape (0, 0, 4)
```
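Until a fix lands, a hypothetical guard (the helper name and shape are mine, not supervision's API) is to skip the boolean indexing entirely when there are no detections:

```python
import numpy as np

def class_mask(class_names, allowed, n_detections):
    # Hypothetical guard: with zero detections, return an empty 1-D boolean
    # mask so indexing leaves the empty Detections untouched instead of
    # producing a malformed (0, 0, 4) xyxy.
    if n_detections == 0:
        return np.zeros(0, dtype=bool)
    return np.isin(class_names, allowed)
```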
| closed | 2024-11-28T11:31:18Z | 2024-12-04T10:15:33Z | https://github.com/roboflow/supervision/issues/1694 | [
"bug"
] | LinasKo | 0 |
odoo/odoo | python | 202,652 | [16.0] stock: _find_delivery_ids_by_lot is hanging (recursive calls) | ### Odoo Version
- [x] 16.0
- [ ] 17.0
- [ ] 18.0
- [ ] Other (specify)
### Steps to Reproduce
Steps to reproduce:
- Go to Inventory > Products > Variant Product
- Select a product that is managed with lots
- Click on the "on hand" stock quantity button
- Then double-click on a "Lot/Serial Number" column value
It then takes a while (several minutes) to display the view.
Profiling reveals that the code is recursively calling `_find_delivery_ids_by_lot`.
### Log Output
```shell
_find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (for lot_id, delivery_ids_set in next_lots._find_delivery_ids_by_lot(lot_path=lot_path, delivery_by_lot=delivery_by_lot).items():):294)
> _find_delivery_ids_by_lot (called at /odoo/addons/stock/models/stock_lot.py (delivery_ids_by_lot = self._find_delivery_ids_by_lot()):133)
> _compute_delivery_ids (called at /odoo/odoo/fields.py (return needle(*args)):98)
> determine (called at /odoo/odoo/models.py (fields.determine(field.compute, self)):4259)
> _compute_field_value (called at /odoo/addons/mail/models/mail_thread.py (return super()._compute_field_value(field)):403)
> _compute_field_value (called at /odoo/odoo/fields.py (records._compute_field_value(self)):1392)
> compute_value (called at /odoo/odoo/fields.py (self.compute_value(recs)):1210)
> __get__ (called at /odoo/odoo/models.py (return self._fields[key].__get__(self, self.env.registry[self._name])):5975)
> __getitem__ (called at /odoo/odoo/models.py (vals[name] = convert(record[name], record, use_name_get)):3202)
> _read_format (called at /odoo/odoo/models.py (return self._read_format(fnames=fields, load=load)):3021)
> read (called at /odoo/odoo/api.py (result = method(recs, *args, **kwargs)):453)
> _call_kw_multi (called at /odoo/odoo/api.py (result = _call_kw_multi(method, model, args, kwargs)):468)
> call_kw (called at /odoo/addons/web/controllers/dataset.py (return call_kw(request.env[model], method, args, kwargs)):33)
> _call_kw (called at /odoo/addons/web/controllers/dataset.py (return self._call_kw(model, method, args, kwargs)):42)
> call_kw (called at /odoo/odoo/http.py (result = endpoint(self, *args, **params_ok)):734)
> route_wrapper (called at /odoo/odoo/addons/base/models/ir_http.py (result = endpoint(**request.params)):154)
> _dispatch (called at /odoo/addons/website/models/ir_http.py (response = super()._dispatch(endpoint)):237)
> _dispatch (called at /odoo/odoo/http.py (result = self.request.registry['ir.http']._dispatch(endpoint)):1884)
> dispatch (called at /odoo/odoo/http.py (response = self.dispatcher.dispatch(rule.endpoint, args)):1680)
> _serve_ir_http (called at /odoo/odoo/service/model.py (result = func()):133)
> retrying (called at /odoo/odoo/http.py (return service_model.retrying(self._serve_ir_http, self.env)):1653)
> _serve_db
> __call__ (called at /usr/local/lib/python3.10/site-packages/werkzeug/serving.py ():308)
> execute (called at /usr/local/lib/python3.10/site-packages/werkzeug/serving.py ():319)
> run_wsgi (called at /usr/local/lib/python3.10/site-packages/werkzeug/serving.py ():374)
> handle_one_request (called at /usr/local/lib/python3.10/http/server.py ():433)
> handle (called at /usr/local/lib/python3.10/site-packages/werkzeug/serving.py ():342)
> handle (called at /usr/local/lib/python3.10/socketserver.py ():747)
> __init__ (called at /usr/local/lib/python3.10/socketserver.py ():360)
> finish_request (called at /usr/local/lib/python3.10/socketserver.py ():347)
> process_request (called at /odoo/odoo/service/server.py (self.server.process_request(client, addr)):1153)
> process_request (called at /odoo/odoo/service/server.py (self.process_request(client, addr)):1162)
> process_work (called at /odoo/odoo/service/server.py (self.process_work()):1123)
> _runloop (called at /usr/local/lib/python3.10/threading.py ():953)
> run (called at /usr/local/lib/python3.10/threading.py ():1016)
> _bootstrap_inner (called at /usr/local/lib/python3.10/threading.py ():973)
> _bootstrap
```
### Support Ticket
_No response_ | open | 2025-03-20T09:52:54Z | 2025-03-20T10:09:34Z | https://github.com/odoo/odoo/issues/202652 | [] | matmicro | 1 |
jschneier/django-storages | django | 860 | Non-seekable streams can not be uploaded to S3 | I have a large (1gb or so) csv file that I want admin users to be able to upload that will then be processed by an offline task. Unfortunately, uploading that file to Django isn't straightforward - because Nginx has reasonable limits to prevent bad users from uploading massive files.
Long story short, I want users to be able to provide a link (dropbox, google drive, etc) which will then be streamed by django to S3 via django-storages.
Turns out that's not so difficult because the underlying stream of urllib3 provides a file-like interface. The following **almost** works:
```python
import requests
from django.core.files import File
def UrlFile(url):
    response = requests.get(url, stream=True)
    return File(response.raw)

content = UrlFile("https://example.com/large.csv")
my_model.file_field.save("streamed.csv", content)
```
But these streams are not files and do not support seek, which the s3boto3 backend attempts to do without question:
https://github.com/jschneier/django-storages/blob/d827841dd7778a71fd1c529e5e7213dfdfdd5504/storages/backends/s3boto3.py#L545-L546
What it should do (I think..) is:
```python
if content.seekable():
    content.seek(0, os.SEEK_SET)
```
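As a side note, the behavior this guard relies on can be illustrated with the standard `io` module alone (a minimal sketch; `response.raw` from urllib3 behaves like the non-seekable case):

```python
import io

# An in-memory buffer supports seeking, like a real file on disk.
buf = io.BytesIO(b"data")
assert buf.seekable()

# A bare raw stream reports seekable() == False by default, which is
# how a one-way HTTP body stream behaves.
class OneWayStream(io.RawIOBase):
    def readable(self):
        return True

assert not OneWayStream().seekable()
```

So guarding the rewind with `content.seekable()` would skip the `seek(0)` exactly for stream-like inputs.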
I've hacked my way around this by implementing seek and no-op the default case of seek(0) when already at the start of the file. Entire class provided for any interested onlookers.
```python
import os
import pathlib
import re
import typing as t
import uuid

import requests
from django.core.files import File


class UrlFile(File):
    def __init__(self, url: str, name: t.Optional[str] = None, **kwargs):
        """
        Provides a streaming URL File interface for Django storage system.

        Use like so:

        >>> file = UrlFile("https://example.com/my_file.csv", timeout=5)
        >>> my_model.file_field.save(file.name, file)

        `**kwargs` are all passed straight through to requests.get.

        If the filename can not be determined from the `content-disposition`
        header, a uuid.uuid4 value will be provided.
        """
        resp: requests.Response = requests.get(url, stream=True, **kwargs)
        resp.raise_for_status()
        headers = resp.headers
        self._size = headers["Content-Length"]
        content_disposition = headers["content-disposition"]
        self.name = name
        if not self.name:
            self._try_set_filename(content_disposition)
        self.file = resp.raw

    def _try_set_filename(self, content_disposition: str):
        try:
            filename = re.findall(r'filename="([^"]+)', content_disposition)[0]
            # no paths
            filename = pathlib.Path(filename).name
        except IndexError:
            filename = uuid.uuid4().hex
        self.name = filename

    def seek(self, offset, whence=os.SEEK_SET):
        """
        A streaming file can not seek, but s3boto3 storage will seek(0) everything
        coming in, so we fake it for the default case of we're already at the
        beginning of the stream (a no-op).

        This fakeness might break other things, but the goal for now is to work
        with s3boto3 storage.
        """
        if self.file.tell() == offset and whence == os.SEEK_SET:
            return offset
        return self.file.seek(offset, whence)
```
| closed | 2020-03-13T11:39:17Z | 2021-09-19T00:17:40Z | https://github.com/jschneier/django-storages/issues/860 | [] | jarshwah | 8 |
ageitgey/face_recognition | python | 705 | face_encodings make the code slow | * face_recognition version: 1.2.3
* Python version: 3.7
* Operating System: win10
CPU: i5-7300HQ
### Description
When detecting faces from the camera,
`face_encodings` makes the `cv2.imshow()` frame display slow.
How can I make the code faster?
I tested with `model='cnn'`, and it is also slow.
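Not an official fix, but a common mitigation for this kind of slowdown is to run the expensive encoding on a downscaled copy of only every N-th frame and reuse the previous result in between. The sketch below uses a stand-in `encode` callable (in practice it would wrap `face_recognition.face_encodings`) and array slicing as a cheap stand-in for `cv2.resize`:

```python
import numpy as np

def process_stream(frames, encode, scale=4, every_n=2):
    """Encode a shrunken copy of every `every_n`-th frame; reuse the
    previous result for the frames in between."""
    last = None
    results = []
    for i, frame in enumerate(frames):
        if i % every_n == 0:
            small = frame[::scale, ::scale]  # cheap stand-in for cv2.resize
            last = encode(small)
        results.append(last)
    return results

# Demo with a fake encoder that records what it was given.
calls = []
frames = [np.zeros((480, 640, 3), dtype=np.uint8)] * 4
out = process_stream(frames, lambda img: calls.append(img.shape) or img.shape)

assert len(calls) == 2            # the encoder ran on only 2 of the 4 frames
assert calls[0] == (120, 160, 3)  # and on a 16x smaller image each time
```

With the real library, face locations found on the small frame would need to be scaled back up by `scale` before drawing them on the full frame.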
By commenting out lines one at a time, I found the slowdown is caused by `face_encodings`. | open | 2018-12-15T09:35:50Z | 2018-12-25T05:34:57Z | https://github.com/ageitgey/face_recognition/issues/705 | [] | guozhaojian | 2 |
onnx/onnx | pytorch | 6,359 | flash-attention onnx export. | When using Flash-Attention version 2.6.3, there is an issue with the ONNX file saved using torch.onnx.export.
Code:

```python
import torch

from modeling_intern_vit import FlashAttention

qkv = torch.load("/home/qkv.pth")
flash = FlashAttention().eval().cuda()
out = flash(qkv.cpu().cuda())

with torch.no_grad():
    torch.onnx.export(
        flash,
        (qkv,),
        "/home/qkv.onnx",
        input_names=["input0"],
        output_names=["qkv_out"],
        opset_version=11,
    )
```

| closed | 2024-09-10T05:41:26Z | 2024-09-13T14:01:15Z | https://github.com/onnx/onnx/issues/6359 | [
"question"
] | scuizhibin | 3 |
lepture/authlib | django | 283 | authlib 0.15.1 does not send authorization header | **Describe the bug**
authlib 0.15.1 does not seem to send authentication token with the request
**Error Stacks**
with authlib 0.15.1
```
TRACE [2020-10-15 15:08:46] httpx._config - load_verify_locations cafile=/home/bartosz/.pyenv/versions/3.8.3/envs/quetz-heroku-test/lib/python3.8/site-packages/certifi/cacert.pem
TRACE [2020-10-15 15:08:46] httpcore._async.connection_pool - get_connection_from_pool=(b'https', b'api.github.com', 443)
TRACE [2020-10-15 15:08:46] httpcore._async.connection_pool - created connection=<AsyncHTTPConnection http_version=UNKNOWN state=0>
TRACE [2020-10-15 15:08:46] httpcore._async.connection_pool - adding connection to pool=<AsyncHTTPConnection http_version=UNKNOWN state=0>
TRACE [2020-10-15 15:08:46] httpcore._async.connection - open_socket origin=(b'https', b'api.github.com', 443) timeout={'connect': 5.0, 'read': 5.0, 'write': 5.0, 'pool': 5.0}
TRACE [2020-10-15 15:08:46] httpcore._async.connection - create_connection socket=<httpcore._backends.asyncio.SocketStream object at 0x7fa23a78e1c0> http_version='HTTP/1.1'
TRACE [2020-10-15 15:08:46] httpcore._async.connection - connection.arequest method=b'GET' url=(b'https', b'api.github.com', None, b'/user') headers=[(b'Host', b'api.github.com'), (b'Accept', b'*/*'), (b'Accept-Encoding', b'gzip, deflate'), (b'Connection', b'keep-alive'), (b'User-Agent', b'Authlib/0.15.1 (+https://authlib.org/)')]
TRACE [2020-10-15 15:08:46] httpcore._async.http11 - send_request method=b'GET' url=(b'https', b'api.github.com', None, b'/user') headers=[(b'Host', b'api.github.com'), (b'Accept', b'*/*'), (b'Accept-Encoding', b'gzip, deflate'), (b'Connection', b'keep-alive'), (b'User-Agent', b'Authlib/0.15.1 (+https://authlib.org/)')]
TRACE [2020-10-15 15:08:46] httpcore._async.http11 - send_data=Data(<0 bytes>)
DEBUG [2020-10-15 15:08:47] httpx._client - HTTP Request: GET https://api.github.com/user "HTTP/1.1 401 Unauthorized"
TRACE [2020-10-15 15:08:47] httpcore._async.http11 - receive_event=Data(<131 bytes>)
TRACE [2020-10-15 15:08:47] httpcore._async.http11 - receive_event=EndOfMessage(headers=<Headers([])>)
TRACE [2020-10-15 15:08:47] httpcore._async.http11 - response_closed our_state=DONE their_state=DONE
```
with authlib 0.14.3 it's ok (the code was not changed otherwise)
```
TRACE [2020-10-15 15:08:02] httpx._config - load_ssl_context verify=True cert=None trust_env=True http2=False
TRACE [2020-10-15 15:08:02] httpx._config - load_verify_locations cafile=/home/bartosz/.pyenv/versions/3.8.3/envs/quetz-heroku-test/lib/python3.8/site-packages/certifi/cacert.pem
TRACE [2020-10-15 15:08:02] httpcore._async.connection_pool - get_connection_from_pool=(b'https', b'api.github.com', 443)
TRACE [2020-10-15 15:08:02] httpcore._async.connection_pool - created connection=<AsyncHTTPConnection http_version=UNKNOWN state=0>
TRACE [2020-10-15 15:08:02] httpcore._async.connection_pool - adding connection to pool=<AsyncHTTPConnection http_version=UNKNOWN state=0>
TRACE [2020-10-15 15:08:02] httpcore._async.connection - open_socket origin=(b'https', b'api.github.com', 443) timeout={'connect': 5.0, 'read': 5.0, 'write': 5.0, 'pool': 5.0}
TRACE [2020-10-15 15:08:02] httpcore._async.connection - create_connection socket=<httpcore._backends.asyncio.SocketStream object at 0x7f6d43dd3850> http_version='HTTP/1.1'
TRACE [2020-10-15 15:08:02] httpcore._async.connection - connection.arequest method=b'GET' url=(b'https', b'api.github.com', None, b'/user') headers=[(b'Host', b'api.github.com'), (b'Accept', b'*/*'), (b'Accept-Encoding', b'gzip, deflate'), (b'Connection', b'keep-alive'), (b'User-Agent', b'Authlib/0.14.3 (+https://authlib.org/)'), (b'Authorization', b'Bearer 706db6c985d50bc04fb61c27e0c3a327b966d9d0')]
TRACE [2020-10-15 15:08:02] httpcore._async.http11 - send_request method=b'GET' url=(b'https', b'api.github.com', None, b'/user') headers=[(b'Host', b'api.github.com'), (b'Accept', b'*/*'), (b'Accept-Encoding', b'gzip, deflate'), (b'Connection', b'keep-alive'), (b'User-Agent', b'Authlib/0.14.3 (+https://authlib.org/)'), (b'Authorization', b'Bearer 706db6c985d50bc04fb61c27e0c3a327b966d9d0')]
TRACE [2020-10-15 15:08:02] httpcore._async.http11 - send_data=Data(<0 bytes>)
DEBUG [2020-10-15 15:08:02] httpx._client - HTTP Request: GET https://api.github.com/user "HTTP/1.1 200 OK"
TRACE [2020-10-15 15:08:02] httpcore._async.http11 - receive_event=Data(<517 bytes>)
TRACE [2020-10-15 15:08:02] httpcore._async.http11 - receive_event=EndOfMessage(headers=<Headers([])>)
TRACE [2020-10-15 15:08:02] httpcore._async.http11 - response_closed our_state=DONE their_state=DONE
```
**To Reproduce**
this is a piece of code that triggered this issue:
```python
from authlib.integrations.starlette_client import OAuth

oauth = OAuth()
oauth.register(
    name='github',
    client_id=config.github_client_id,
    client_secret=config.github_client_secret,
    access_token_url='https://github.com/login/oauth/access_token',
    access_token_params=None,
    authorize_url='https://github.com/login/oauth/authorize',
    authorize_params=None,
    api_base_url='https://api.github.com/',
    client_kwargs={'scope': 'user:email'},
    quetz_db_url=config.sqlalchemy_database_url,
)

@router.route('/auth/github/authorize', name='authorize_github')
async def authorize(request: Request):
    token = await oauth.github.authorize_access_token(request)
    print(token)
    resp = await oauth.github.get('user', token=token)
```
**Expected behavior**
authentication token is sent and the response from api.github.com/user returns with code 200
**Environment:**
- OS: linux
- Python Version: 3.8.3
- Authlib Version: 0.15.1
**Additional context**
Add any other context about the problem here.
| closed | 2020-10-15T13:28:57Z | 2020-10-18T06:40:07Z | https://github.com/lepture/authlib/issues/283 | [
"bug",
"client",
"httpx"
] | btel | 9 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1275 | After starting training the model, it doesn't work and I only see "learning rate 0.0002000 -> 0.0002000" | I tried to start training the model, but it doesn't work and stays at the same position for a long time. Could you tell me the reason?
My GPU is an RTX 5000 (16 GB); batch size is 1.
| open | 2021-04-22T08:59:35Z | 2021-12-08T21:07:31Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1275 | [] | KnightWin123 | 7 |
axnsan12/drf-yasg | django | 727 | Undocumented TypeError: NetworkError when attempting to fetch resource. | I have an API; when I send a request from the Swagger UI, I get the `NetworkError` from the title:

But if I send the request from the browser, I get status 200 (OK). | closed | 2021-07-21T08:45:45Z | 2021-07-21T13:31:32Z | https://github.com/axnsan12/drf-yasg/issues/727 | [] | Barolina | 1 |
zihangdai/xlnet | tensorflow | 295 | xlnet, transformer xl attention score function problem | The function `tf.einsum('ibnd,jbnd->ijbn', head_q, head_k)` used to obtain the attention score in XLNet / Transformer-XL does not seem (to me) to compute the correlation between all pairs of words; can anyone explain the calculation? For example, if you have a 2x2 tensor [[i, am], [a, boy]], it looks to me as if only some pairs (e.g. 'am' with 'boy') get a correlation score. Please help me.
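For what it is worth, a small NumPy check (with made-up shapes) suggests this einsum does score every query position `i` against every key position `j`; the shared `b` and `d` axes are the batch and feature dimensions, not word positions:

```python
import numpy as np

qlen, klen, bsz, n_head, d_head = 3, 4, 2, 2, 5
rng = np.random.default_rng(0)
q = rng.standard_normal((qlen, bsz, n_head, d_head))
k = rng.standard_normal((klen, bsz, n_head, d_head))

# Same contraction as tf.einsum("ibnd,jbnd->ijbn", rw_head_q, w_head_k)
scores = np.einsum("ibnd,jbnd->ijbn", q, k)

# scores[i, j, b, n] is the dot product of query position i with key
# position j, for batch element b and head n: every (i, j) pair is filled.
assert scores.shape == (qlen, klen, bsz, n_head)
assert np.allclose(scores[1, 3, 0, 1], q[1, 0, 1] @ k[3, 0, 1])
```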
```python
def call(self, w, r, attn_mask, mems, head_mask, output_attentions, training=False):
    qlen, rlen, bsz = shape_list(w)[0], shape_list(r)[0], shape_list(w)[1]

    if mems is not None:
        cat = tf.concat([mems, w], 0)
        if self.pre_lnorm:
            w_heads = self.qkv_net(self.layer_norm(cat))
        else:
            w_heads = self.qkv_net(cat)
        r_head_k = self.r_net(r)

        w_head_q, w_head_k, w_head_v = tf.split(w_heads, 3, axis=-1)
        w_head_q = w_head_q[-qlen:]
    else:
        if self.pre_lnorm:
            w_heads = self.qkv_net(self.layer_norm(w))
        else:
            w_heads = self.qkv_net(w)
        r_head_k = self.r_net(r)

        w_head_q, w_head_k, w_head_v = tf.split(w_heads, 3, axis=-1)

    klen = shape_list(w_head_k)[0]

    w_head_q = tf.reshape(w_head_q, (qlen, bsz, self.n_head, self.d_head))  # qlen x bsz x n_head x d_head
    w_head_k = tf.reshape(w_head_k, (klen, bsz, self.n_head, self.d_head))  # qlen x bsz x n_head x d_head
    w_head_v = tf.reshape(w_head_v, (klen, bsz, self.n_head, self.d_head))  # qlen x bsz x n_head x d_head

    r_head_k = tf.reshape(r_head_k, (rlen, self.n_head, self.d_head))  # qlen x n_head x d_head

    # compute attention score
    rw_head_q = w_head_q + self.r_w_bias  # qlen x bsz x n_head x d_head
    AC = tf.einsum("ibnd,jbnd->ijbn", rw_head_q, w_head_k)  # qlen x klen x bsz x n_head

    rr_head_q = w_head_q + self.r_r_bias
    BD = tf.einsum("ibnd,jnd->ijbn", rr_head_q, r_head_k)  # qlen x klen x bsz x n_head
    BD = self._rel_shift(BD)
```
`attention_score = np.einsum("ijkl,ijml->ijkm", Q, K) / np.sqrt(hidden_size)  # [batch_size, num_head, sequence_length, sequence_length]`  As above, the standard formula for the attention score computes a score between every word and every other word, but `tf.einsum("ibnd,jbnd->ijbn", rw_head_q, w_head_k)` does not seem to get the attention score between all the words as described above, so I want to understand this part. | open | 2023-11-07T02:15:40Z | 2023-11-07T02:15:40Z | https://github.com/zihangdai/xlnet/issues/295 | [] | wonjunchoi-arc | 0 |
JaidedAI/EasyOCR | pytorch | 1,209 | supported optimizer engine for optimization in training | @rkcosmos I just saw list of optimizers currently supported by EasyOCR for training a custom model, is there any reasons that we have just only these 2 optimizers? if not, i can help making a PR to add more optimizers for more robust model optimization of EasyOCR.
https://github.com/JaidedAI/EasyOCR/blob/c999505ef6b43be1c4ee36aa04ad979175178352/trainer/train.py#L138-L145 | open | 2024-02-03T08:17:47Z | 2024-02-03T08:17:47Z | https://github.com/JaidedAI/EasyOCR/issues/1209 | [] | pavaris-pm | 0 |
piskvorky/gensim | data-science | 3543 | The model architecture of word2vec | Excuse me, I would like to understand the model architecture of `gensim.models.Word2Vec` (CBOW).
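For context, here is a plain-NumPy sketch of a CBOW forward pass as I understand it (shapes are illustrative assumptions, and real training uses negative sampling or hierarchical softmax rather than a full softmax): the "hidden layer" is just the average of the context embeddings, with no nonlinearity.

```python
import numpy as np

vocab_size, dim = 10, 4
rng = np.random.default_rng(1)
W_in = rng.standard_normal((vocab_size, dim))   # input embeddings
W_out = rng.standard_normal((vocab_size, dim))  # output-side weights

context_ids = [2, 5, 7]
h = W_in[context_ids].mean(axis=0)  # "hidden" state: just an average

scores = W_out @ h                  # one score per vocabulary word
probs = np.exp(scores - scores.max())
probs /= probs.sum()                # softmax over the vocabulary

assert probs.shape == (vocab_size,)
assert np.isclose(probs.sum(), 1.0)
```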
It seems that the architecture is not as simple as an embedding layer, a hidden (linear) layer, and a softmax. | closed | 2024-07-10T08:18:09Z | 2024-07-10T19:01:13Z | https://github.com/piskvorky/gensim/issues/3543 | [] | LiuBurger | 1 |
microsoft/nni | pytorch | 5483 | TypeError: __new__() missing 1 required positional argument: 'task' | **Describe the issue**: Hello, developers. I'm a newbie just getting started with NAS. When I run the tutorial file in the notebook, I get the error below; I hope you can help me solve it:
```
  File "search_2.py", line 59, in <module>
    max_epochs=5)
  File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\nas\evaluator\pytorch\lightning.py", line 372, in __init__
    weight_decay=weight_decay, optimizer=optimizer, export_onnx=export_onnx)
  File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\common\serializer.py", line 473, in new_init
    **{kw: _argument_processor(arg) for kw, arg in kwargs.items()}
  File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\nas\evaluator\pytorch\lightning.py", line 310, in __init__
    export_onnx=export_onnx)
  File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\nas\evaluator\pytorch\lightning.py", line 224, in __init__
    self.metrics = nn.ModuleDict({name: cls() for name, cls in metrics.items()})
  File "D:\anaconda3\envs\venv_copy\lib\site-packages\nni\nas\evaluator\pytorch\lightning.py", line 224, in <dictcomp>
    self.metrics = nn.ModuleDict({name: cls() for name, cls in metrics.items()})
TypeError: __new__() missing 1 required positional argument: 'task'
```
Here is the demo code from Retiarii_example_multi-trial_NAS; the error is raised when calling `pl.Classification()`:
```python
@model_wrapper
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.LayerChoice([nn.Conv2d(3, 6, 3, padding=1), nn.Conv2d(3, 6, 5, padding=2)])  # input [3, 32, 32], output [6, 32, 32]
        self.pool = nn.MaxPool2d(2, 2)  # output [6, 16, 16]
        self.conv2 = nn.LayerChoice([nn.Conv2d(6, 16, 3, padding=1), nn.Conv2d(6, 16, 5, padding=2)])  # output [16, 16, 16]
        self.conv3 = nn.Conv2d(16, 16, 1)  # output [16, 16, 16]
        self.skipconnect = nn.InputChoice(n_candidates=2)
        self.bn = nn.BatchNorm2d(16)
        self.gap = nn.AdaptiveAvgPool2d(4)  # output [16, 4, 4]
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        bs = x.size(0)
        x = self.pool(F.relu(self.conv1(x)))
        x0 = F.relu(self.conv2(x))
        x1 = F.relu(self.conv3(x0))
        x1 = self.skipconnect([x1, x1 + x0])
        x = self.pool(self.bn(x1))
        x = self.gap(x).view(bs, -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


model = Net()
simple_strategy = strategy.Random()  # choice: Random, GridSearch, RegularizedEvolution, TPEStrategy

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_dataset = serialize(CIFAR10, root="./data_cifar10", train=True, download=True, transform=transform)
test_dataset = serialize(CIFAR10, root="./data_cifar10", train=False, download=True, transform=transform)

trainer = pl.Classification(train_dataloader=pl.DataLoader(train_dataset, batch_size=16),
                            val_dataloaders=pl.DataLoader(test_dataset, batch_size=16),
                            max_epochs=5, gpus=[0])

if __name__ == '__main__':
    exp = RetiariiExperiment(model, trainer, [], simple_strategy)
    exp_config = RetiariiExeConfig('local')
    exp_config.experiment_name = 'search darts example'
    exp_config.trial_concurrency = 1
    exp_config.max_trial_number = 10
    exp_config.trial_gpu_number = 1
    exp_config.max_experiment_duration = '10m'
    exp_config.execution_engine = 'base'
    exp_config.training_service.use_active_gpu = True
    exp.run(exp_config, 8745)

    print('Final model:')
    for model_code in exp.export_top_models():
        print(model_code)
    exp.stop()
```
**Environment**:
- NNI version:2.10
- Training service (local|remote|pai|aml|etc):local
- Client OS:
- Server OS (for remote mode only):
- Python version:3.7.11
- PyTorch/TensorFlow version:pytorch 1.11
- Is conda/virtualenv/venv used?: yes
- Is running in Docker?: no
| closed | 2023-03-28T01:18:25Z | 2023-03-29T02:38:04Z | https://github.com/microsoft/nni/issues/5483 | [] | zzfer490 | 3 |
jazzband/django-oauth-toolkit | django | 1,074 | Add missing migration caused by #1020 | Discovered by @max-wittig:
```sh
poetry run python manage.py makemigrations --check --dry-run
Migrations for 'oauth2_provider':
/Users/max/Library/Caches/pypoetry/virtualenvs/codeapps-wUwe-wZH-py3.10/lib/python3.10/site-packages/oauth2_provider/migrations/0006_alter_application_client_secret.py
- Alter field client_secret on application
```
_Originally posted by @max-wittig in https://github.com/jazzband/django-oauth-toolkit/issues/1067#issuecomment-1005204093_ | closed | 2022-01-07T20:00:27Z | 2022-01-07T23:03:08Z | https://github.com/jazzband/django-oauth-toolkit/issues/1074 | [
"bug"
] | n2ygk | 1 |
horovod/horovod | deep-learning | 3,698 | reducescatter() and grouped_reducescatter() crash for scalar inputs | `hvd.reducescatter(3.14)` currently leads to a C++ assertion failure in debug builds or a segmentation fault in release builds.
There should either be a clear error message or it should behave similarly to `hvd.reducescatter([3.14])`, which returns a one-element tensor on the root rank and an empty tensor on the other ranks. | closed | 2022-09-12T18:00:01Z | 2022-09-20T06:30:14Z | https://github.com/horovod/horovod/issues/3698 | [
"bug"
] | maxhgerlach | 0 |
collerek/ormar | pydantic | 343 | Foreign Key as int | Hello, your library is great, but it has one big drawback: the way foreign keys are represented in GET responses. It is very inconvenient that in POST requests you can send an int but in responses you receive a dict. Is it possible to add a `<field_id>` field next to the relation, as in SQLAlchemy, so that requests and responses have the same shape?
I can output `<field_id>` in GET responses, but I cannot send it in a POST... It is not clear how to build forms on the frontend.

| closed | 2021-09-12T10:51:43Z | 2021-09-14T20:21:57Z | https://github.com/collerek/ormar/issues/343 | [
"enhancement"
] | artel1992 | 1 |
gradio-app/gradio | data-science | 10,144 | autofocus in multimodal chatbot takes focus away from textbox in additional_inputs | ### Describe the bug
When `multimodal=True` is set and I try to type something in the textbox in `additional_inputs` of `gr.ChatInterface`, the focus shifts to the main textbox right after pressing the first key. This doesn't happen with `multimodal=False`.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
https://huggingface.co/spaces/hysts-debug/multimodal-chat-autofocus
```python
import gradio as gr
def fn(message, history, system_prompt):
    return message
gr.ChatInterface(fn=fn, additional_inputs=[gr.Textbox()], multimodal=True).launch()
```
### Screenshot
https://github.com/user-attachments/assets/dee72406-0963-4f57-868a-09f9e9785acb
### Logs
_No response_
### System Info
```shell
gradio==5.8.0
```
### Severity
I can work around it | closed | 2024-12-06T06:07:05Z | 2024-12-11T14:16:22Z | https://github.com/gradio-app/gradio/issues/10144 | [
"bug"
] | hysts | 1 |
benbusby/whoogle-search | flask | 523 | App crashes on fly.io deployment. | The docker image gets deployed successfully and the landing page is loaded, but when a search request is sent it just crashes and throws ERROR 502.

I think the app crashes due to the limited RAM availability. I am using v0.6.0; a screen capture of the RAM usage and the app log are attached below.

[APP LOGS](https://pastebin.mozilla.org/ANtg9Mfb)
| closed | 2021-11-04T21:07:13Z | 2021-11-07T18:26:10Z | https://github.com/benbusby/whoogle-search/issues/523 | [] | doloresjose | 1 |
K3D-tools/K3D-jupyter | jupyter | 40 | Use instancing to render glyphs more efficiently | I see in `vectors.js` that you do:
```js
useHead ? new THREE.Geometry().copy(singleConeGeometry) : null,
```
i.e. you make a copy of the cone head for each glyph.
A possible optimization is to use instanced rendering, avoiding duplication of the glyph model.
https://threejs.org/docs/#api/core/InstancedBufferGeometry
Consider this if deciding to pursue an improved glyph model as I suggested in
https://github.com/K3D-tools/K3D-jupyter/issues/38
| closed | 2017-05-31T07:05:41Z | 2017-05-31T08:41:54Z | https://github.com/K3D-tools/K3D-jupyter/issues/40 | [] | martinal | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 288 | during Encoder train, MemoryError happened sometimes | Hi,
Below is the log:
Average execution time over 10 steps:
Blocking, waiting for batch (threaded) (10/10): mean: 8459ms std: 16934ms
Data to cuda (10/10): mean: 8ms std: 6ms
Forward pass (10/10): mean: 7ms std: 3ms
Loss (10/10): mean: 74ms std: 8ms
Backward pass (10/10): mean: 232ms std: 10ms
Parameter update (10/10): mean: 71ms std: 6ms
Extras (visualizations, saving) (10/10): mean: 584ms std: 1753ms
..........
Step 310 Loss: 2.7407 EER: 0.1718 Step time: mean: 9797ms std: 16272ms
Average execution time over 10 steps:
Blocking, waiting for batch (threaded) (10/10): mean: 8824ms std: 15893ms
Data to cuda (10/10): mean: 8ms std: 6ms
Forward pass (10/10): mean: 6ms std: 0ms
Loss (10/10): mean: 76ms std: 10ms
Backward pass (10/10): mean: 225ms std: 10ms
Parameter update (10/10): mean: 74ms std: 8ms
Extras (visualizations, saving) (10/10): mean: 2ms std: 6ms
Traceback (most recent call last):
File "D:\Anaconda3\envs\RTVD\lib\multiprocessing\queues.py", line 234, in _feed
obj = _ForkingPickler.dumps(obj)
File "D:\Anaconda3\envs\RTVD\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
MemoryError
..Traceback (most recent call last):
File "encoder_train.py", line 46, in <module>
train(**vars(args))
File "E:\RTVC\encoder\train.py", line 70, in train
for step, speaker_batch in enumerate(loader, init_step):
File "D:\Anaconda3\envs\RTVD\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __next__
return self._process_data(data)
File "D:\Anaconda3\envs\RTVD\lib\site-packages\torch\utils\data\dataloader.py", line 846, in _process_data
data.reraise()
File "D:\Anaconda3\envs\RTVD\lib\site-packages\torch\_utils.py", line 369, in reraise
raise self.exc_type(msg)
MemoryError: Caught MemoryError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "D:\Anaconda3\envs\RTVD\lib\site-packages\torch\utils\data\_utils\worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "D:\Anaconda3\envs\RTVD\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
return self.collate_fn(data)
File "E:\RTVC\encoder\data_objects\speaker_verification_dataset.py", line 55, in collate
return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
File "E:\RTVC\encoder\data_objects\speaker_batch.py", line 8, in __init__
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "E:\RTVC\encoder\data_objects\speaker_batch.py", line 8, in <dictcomp>
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "E:\RTVC\encoder\data_objects\speaker.py", line 38, in random_partial
a = [(u,) + u.random_partial(n_frames) for u in utterances]
File "E:\RTVC\encoder\data_objects\speaker.py", line 38, in <listcomp>
a = [(u,) + u.random_partial(n_frames) for u in utterances]
File "E:\RTVC\encoder\data_objects\utterance.py", line 20, in random_partial
frames = self.get_frames()
File "E:\RTVC\encoder\data_objects\utterance.py", line 10, in get_frames
return np.load(self.frames_fpath)
File "D:\Anaconda3\envs\RTVD\lib\site-packages\numpy\lib\npyio.py", line 440, in load
pickle_kwargs=pickle_kwargs)
File "D:\Anaconda3\envs\RTVD\lib\site-packages\numpy\lib\format.py", line 704, in read_array
array = numpy.fromfile(fp, dtype=dtype, count=count)
MemoryError
Hardware: 32 GB CPU memory, 12 GB GPU memory.
Analysis:
Maybe something is wrong with the DataLoader.
`collate_fn` in DataLoader is supposed to provide a way to process items from the dataset (just process them, not create them); however, our code calls `return np.load(self.frames_fpath)` in Utterance.py line 10.
That means the dataset only provides a file path, and the actual payload is created inside the DataLoader. As a result, CPU memory grew from 15 GB to 32 GB, and then the MemoryError happened.
Is it because memory in the DataLoader is not being released? | closed | 2020-02-21T02:28:33Z | 2020-08-13T06:11:55Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/288 | [] | DatanIMU | 2 |
pytest-dev/pytest-cov | pytest | 77 | pytest-cov makes test fail by putting a temporary file into cwd | I have tests that check whether some specific files have been extracted into the cwd.
They fail when using pytest-cov because a `.coverage*` file gets created there.
Is there any way to avoid this?
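One possible workaround (an assumption on my part that pytest-cov picks this up from the coverage config): coverage.py's `data_file` option can relocate the data file out of the tests' cwd, e.g.:

```ini
# .coveragerc (the path below is only an example)
[run]
data_file = /tmp/.coverage
```

If I recall coverage.py's docs correctly, the `COVERAGE_FILE` environment variable serves the same purpose.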
| closed | 2015-08-08T01:10:07Z | 2015-10-02T18:31:55Z | https://github.com/pytest-dev/pytest-cov/issues/77 | [
"bug"
] | ThomasWaldmann | 11 |
cleanlab/cleanlab | data-science | 383 | CI: check docs for newly added source code files | - [ ] Add CI check that documentation index pages on docs.cleanlab.ai include new source code files which have been added in a new commit. Otherwise somebody may push commit with new source code files, but the documentation for them will never appear on docs.cleanlab.ai.
Ideally we also need to edit less docs/ files and index files to make documentation for new source code files appear in docs.cleanlab.ai. Solutions to minimize number of files that need to be touched are welcomed!
- [ ] Add reminder to CONTRIBUTING.md that lists steps needed to ensure new source code files will have their documentation appear in docs.cleanlab.ai
Currently new source code files must be listed in various places like:
- The appropriate index files in here: https://github.com/cleanlab/cleanlab/tree/master/docs/source
- A module docs page in: https://github.com/cleanlab/cleanlab/tree/master/docs/source/cleanlab
- The __init__ file: https://github.com/cleanlab/cleanlab/blob/master/cleanlab/__init__.py | open | 2022-08-31T09:11:19Z | 2024-12-25T20:09:56Z | https://github.com/cleanlab/cleanlab/issues/383 | [
"enhancement",
"good first issue",
"help-wanted"
] | jwmueller | 2 |
idealo/image-super-resolution | computer-vision | 218 | Can't run Dockerfile.gpu. AttributeError: module 'google.protobuf.internal.containers' has no attribute 'MutableMapping' | Running command
```
docker run -v $(pwd)/data/:/home/isr/data -v $(pwd)/weights/:/home/isr/weights -v $(pwd)/config.yml:/home/isr/config.yml -it isr -p -d -c config.yml
```
Gets error message
```Traceback (most recent call last):
File "/home/isr/ISR/assistant.py", line 90, in <module>
prediction=args['prediction'],
File "/home/isr/ISR/assistant.py", line 22, in run
module = _get_module(generator)
File "/home/isr/ISR/assistant.py", line 11, in _get_module
return import_module('ISR.models.' + generator)
File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 944, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/home/isr/ISR/models/__init__.py", line 1, in <module>
from .cut_vgg19 import Cut_VGG19
File "/home/isr/ISR/models/cut_vgg19.py", line 1, in <module>
from tensorflow.keras.models import Model
File "/usr/local/lib/python3.5/dist-packages/tensorflow/__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/__init__.py", line 40, in <module>
from tensorflow.python.eager import context
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/eager/context.py", line 32, in <module>
from tensorflow.core.framework import function_pb2
File "/usr/local/lib/python3.5/dist-packages/tensorflow/core/framework/function_pb2.py", line 7, in <module>
from google.protobuf import descriptor as _descriptor
File "/usr/local/lib/python3.5/dist-packages/google/protobuf/descriptor.py", line 47, in <module>
from google.protobuf.pyext import _message
AttributeError: module 'google.protobuf.internal.containers' has no attribute 'MutableMapping'
```
| open | 2021-11-08T22:39:39Z | 2022-02-13T17:53:46Z | https://github.com/idealo/image-super-resolution/issues/218 | [] | zelkourban | 0 |
proplot-dev/proplot | data-visualization | 322 | Compatible with norm colorbar ticks in matplotlib3.5+ | ### Description
The colorbar ticks for manually specified levels are wrong with matplotlib 3.5+.
### Steps to reproduce
```python
import proplot as pplt
import matplotlib
import numpy as np
crf_bounds = np.array([0, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.97, 0.98, 0.99, 1.0])
norm = matplotlib.colors.BoundaryNorm(boundaries=crf_bounds, ncolors=256)
x = y = np.array([-10, -5, 0, 5, 10])
np.random.seed(20)
data = np.random.uniform(0,1,(4,4))
fig, axs = pplt.subplots()
m = axs.pcolormesh(x, y, data, cmap='RdYlGn', norm=norm)
axs.colorbar([m])
```
**Expected behavior**: [What you expected to happen]

**Actual behavior**: [What actually happened]

### Equivalent steps in matplotlib
```python
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
crf_bounds = np.array([0, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.97, 0.98, 0.99, 1.0])
norm = matplotlib.colors.BoundaryNorm(boundaries=crf_bounds, ncolors=256)
x = y = np.array([-10, -5, 0, 5, 10])
np.random.seed(20)
data = np.random.uniform(0,1,(4,4))
plt.pcolormesh(x, y, data, cmap='RdYlGn', norm=norm)
plt.colorbar()
```

### Proplot version
Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)`here.
```
3.5.1
0.9.5.post105
``` | closed | 2022-01-14T14:16:40Z | 2022-01-15T02:43:15Z | https://github.com/proplot-dev/proplot/issues/322 | [
"bug",
"external issue"
] | zxdawn | 6 |
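The mismatch above comes down to how `BoundaryNorm` maps values into color bins: on a correct colorbar the ticks should land on the boundaries themselves, not on evenly spaced values. A simplified pure-Python sketch of that mapping (an approximation of the idea behind matplotlib's `BoundaryNorm`, not its exact code):

```python
import bisect

def boundary_norm(value, boundaries, ncolors=256):
    # Simplified sketch of matplotlib.colors.BoundaryNorm: map a value to
    # the color index of the bin it falls in, so every value between two
    # boundaries shares one color and ticks belong on the boundaries.
    nbins = len(boundaries) - 1
    bin_index = bisect.bisect_right(boundaries, value) - 1
    bin_index = min(max(bin_index, 0), nbins - 1)
    return int(bin_index * (ncolors - 1) / (nbins - 1))
```

With the `crf_bounds` above, every value in `[0, 0.7)` maps to color 0 and values in `[0.99, 1.0)` map to the top color, which is why evenly spaced ticks misrepresent the scale.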
pydata/pandas-datareader | pandas | 655 | Cannot import name 'StringIO' from 'pandas.compat' | Conflict with Pandas 0.25.0 and pandas-datareader 0.7.0.
On Python 3.7.3
import pandas_datareader as pdr
raises exception:
/usr/local/lib/python3.7/dist-packages/pandas_datareader/base.py in <module>
9 from pandas import read_csv, concat
10 from pandas.io.common import urlencode
---> 11 from pandas.compat import StringIO, bytes_to_str
12
13 from pandas_datareader._utils import (RemoteDataError, SymbolWarning,
ImportError: cannot import name 'StringIO' from 'pandas.compat' (/usr/local/lib/python3.7/dist-packages/pandas/compat/__init__.py)
| closed | 2019-07-21T12:03:56Z | 2019-09-09T06:43:24Z | https://github.com/pydata/pandas-datareader/issues/655 | [] | coulanuk | 14 |
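For context on the traceback in this row: pandas 0.25 removed `StringIO` (and `bytes_to_str`) from `pandas.compat`, which is why pandas-datareader 0.7.0's `base.py` import fails; upgrading pandas-datareader (0.8.0 restored compatibility with pandas 0.25) or pinning `pandas<0.25` resolves it. The standard-library equivalent that newer code imports directly:

```python
from io import StringIO  # stdlib home of StringIO; pandas.compat re-exported it pre-0.25

# In-memory file object, e.g. for feeding CSV text to a parser.
buf = StringIO("Date,Close\n2019-07-19,100.5\n")
header = buf.readline().strip()
```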
man-group/arctic | pandas | 692 | Fix flaky integration test: test_multiprocessing_safety | This test seems to be quite flaky; I have seen it fail about half the time. I will take a look to see whether there is an underlying issue or whether it is just the test being flaky.
LOG:
@pytest.mark.timeout(600)
def test_multiprocessing_safety(mongo_host, library_name):
# Create/initialize library at the parent process, then spawn children, and start them aligned in time
total_processes = 64
total_writes_per_child = 100
register_get_auth_hook(my_auth_hook)
global MY_ARCTIC
MY_ARCTIC = Arctic(mongo_host=mongo_host)
MY_ARCTIC.initialize_library(library_name, VERSION_STORE)
assert isinstance(MY_ARCTIC.get_library(library_name), VersionStore)
processes = [Process(target=f, args=(library_name, total_writes_per_child, True)) for _ in range(total_processes)]
for p in processes:
p.start()
for p in processes:
> p.join()
tests/integration/test_arctic_multithreading.py:66:
self = <multiprocessing.forking.Popen object at 0x7fdc1d3d5350>, flag = 0
def poll(self, flag=os.WNOHANG):
if self.returncode is None:
while True:
try:
> pid, sts = os.waitpid(self.pid, flag)
E Failed: Timeout >600s
Failed
| closed | 2019-01-14T14:32:03Z | 2019-01-30T16:28:00Z | https://github.com/man-group/arctic/issues/692 | [] | shashank88 | 5 |
microsoft/nlp-recipes | nlp | 186 | [BUG] Problem when activating conda with an ADO pipeline on a DSVM | ### Description
<!--- Describe your bug in detail -->
When trying to execute the cpu tests https://github.com/microsoft/nlp/blob/staging/tests/ci/cpu_unit_tests_linux.yml, the conda env can't be activated. In the [documentation](https://docs.microsoft.com/en-us/azure/devops/pipelines/languages/anaconda?view=azure-devops&tabs=ubuntu-16-04) they use `source activate`. But in the new machines, `conda activate` should be used.
### How do we replicate the bug?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for gpu -->
<!--- * Run unit test `test_timer.py` -->
<!--- * ... -->
With the following configuration:
```
jobs:
- job: cpu_unit_tests_linux
pool:
name: nlpagentpool
steps:
- bash: |
echo "##vso[task.prependpath]/data/anaconda/bin"
conda env list
displayName: Add Conda to PATH
- bash: |
conda activate nlp_cpu
pytest tests/unit -m "not notebooks and not gpu" --junitxml=junit/test-unitttest.xml
displayName: 'Run Unit tests'
```
I get:
```
========================== Starting Command Output ===========================
[command]/bin/bash --noprofile --norc /data/home/nlpadmin/myagent/_work/_temp/ca49c18b-7830-4168-9a01-f3dbe515b170.sh
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
Traceback (most recent call last):
File "/data/anaconda/bin/pytest", line 7, in <module>
from py.test import main
ModuleNotFoundError: No module named 'py'
```
| closed | 2019-07-23T12:56:26Z | 2019-07-23T16:28:33Z | https://github.com/microsoft/nlp-recipes/issues/186 | [
"bug"
] | miguelgfierro | 5 |
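A common fix for the `conda activate` failure above is to source conda's shell hook in the same bash step before activating. A sketch of the adjusted pipeline step; the `/data/anaconda` path mirrors the one prepended earlier in this pipeline and is an assumption about this agent:

```yaml
- bash: |
    source /data/anaconda/etc/profile.d/conda.sh   # makes `conda activate` work in non-interactive bash
    conda activate nlp_cpu
    pytest tests/unit -m "not notebooks and not gpu" --junitxml=junit/test-unitttest.xml
  displayName: 'Run Unit tests'
```

`source activate nlp_cpu` is the older equivalent shown in the Azure DevOps docs and also avoids the `conda init` requirement.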
aleju/imgaug | deep-learning | 27 | Image Translation | I am not sure how to ask this question properly, so bear with me.
I am going to **try** to use an illustrative example:
1. Consider a door on its hinges. Let's say the door opens away from you (so that you have to push the door open rather than pull it towards you). Picture the door rotating on its hinge away from you: the door knob now appears further away from you, assuming you pushed the door and stayed in the same spot.
Is there an image translation that does the equivalent of this? So the "door knob" portions of the image would appear further away and the portions of the image closest to the "hinge" would appear closer.
Similar idea to these?

I am not looking for a 3d images I am just wanting some portions of my image closer and some portions further away.
| closed | 2017-04-01T18:05:19Z | 2017-04-05T18:04:27Z | https://github.com/aleju/imgaug/issues/27 | [] | pGit1 | 4 |
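What the question describes is a perspective (projective) warp: a 3x3 homography whose bottom row is not `[0, 0, 1]`, so the scale changes across the image. Recent imgaug versions expose this as `iaa.PerspectiveTransform`, which jitters the four image corners. A minimal pure-Python sketch of the "door on a hinge" effect:

```python
def apply_homography(h, x, y):
    # Warp a single point with a 3x3 homography given as nested lists.
    denom = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / denom,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / denom)

# "Hinge" along the left edge (x = 0): points there stay put, while the
# perspective term shrinks coordinates more and more toward the right
# edge, as if that side of the door swung away from the viewer.
HINGE_LEFT = [[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.001, 0.0, 1.0]]
```

Applied to a 100-pixel-wide image, the right edge lands around x ≈ 90.9 while the left edge is untouched.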
coqui-ai/TTS | pytorch | 2,522 | [Feature request] | <!-- Welcome to the 🐸TTS project!
We are excited to see your interest, and appreciate your support! --->
**🚀 Feature Description**
Hi there,
Is there a chance you could make an angrier and especially louder-sounding voice?
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Solution**
<!-- A clear and concise description of what you want to happen. -->
**Alternative Solutions**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| closed | 2023-04-14T20:05:11Z | 2023-04-21T09:59:06Z | https://github.com/coqui-ai/TTS/issues/2522 | [
"feature request"
] | nicenice134 | 0 |
onnx/onnx | scikit-learn | 6,432 | export T5 model to onnx | # Bug Report
Hello,
I am using the https://huggingface.co/Ahmad/parsT5-base model.
I want to export it to ONNX using "python -m transformers.onnx --model="Ahmad/parsT5-base" onnx/",
but I get this error:
```
/usr/local/lib/python3.10/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/usr/local/lib/python3.10/dist-packages/torchvision/image.so: undefined symbol: _ZN3c104cuda9SetDeviceEi'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
Framework not requested. Using torch to export to ONNX.
/usr/local/lib/python3.10/dist-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Using framework PyTorch: 2.0.0+cu117
Overriding 1 configuration item(s)
- use_cache -> False
/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:1092: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if causal_mask.shape[1] < attention_mask.shape[1]:
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
```
could you please help me? | closed | 2024-10-08T21:08:39Z | 2024-10-08T21:51:41Z | https://github.com/onnx/onnx/issues/6432 | [] | arefekh | 1 |
CTFd/CTFd | flask | 2,687 | Timed Release Challenges | I have wanted to implement this for some time.
Challenges that automatically release themselves at a specific time. | open | 2024-12-30T19:59:53Z | 2024-12-30T19:59:54Z | https://github.com/CTFd/CTFd/issues/2687 | [] | ColdHeat | 0 |
wandb/wandb | tensorflow | 8,623 | [Bug]: wandb fails to log power on MI300x GPUs | ### Describe the bug
When extracting power consumption from `rocm-smi -a --json`, wandb tries to match the field [`Average Graphics Package Power (W)`](https://github.com/wandb/wandb/blob/3f841abaa6d9f4ca814f61901b3f344938a59c3c/core/pkg/monitor/gpu_amd.go#L279). However, some newer AMD GPUs (e.g, MI300x) instead report `Current Socket Graphics Package Power (W)`. This results in power being unreported for jobs on MI300x GPUs.
Ideally, wandb would parse both fields -- the current string for older GPUs and the newer one for MI300x (and onwards).
cc @charis-poag-amd who can provide more insight from the AMD side on current and future power reporting across AMD SKUs. | closed | 2024-10-15T18:29:14Z | 2024-10-17T22:54:58Z | https://github.com/wandb/wandb/issues/8623 | [
"ty:bug",
"a:sdk",
"c:sdk:system-metrics"
] | ntenenz | 3 |
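The fix described above amounts to trying both field names when reading `rocm-smi -a --json` output. A small Python sketch of that fallback; the field names are the real rocm-smi keys quoted in the issue, while the function itself is illustrative rather than wandb's actual Go code:

```python
POWER_KEYS = (
    "Average Graphics Package Power (W)",          # older AMD GPUs
    "Current Socket Graphics Package Power (W)",   # MI300x and newer
)

def extract_power_watts(card_stats):
    # Return the first power reading present, or None if the card
    # reports neither field.
    for key in POWER_KEYS:
        if key in card_stats:
            return float(card_stats[key])
    return None
```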
nl8590687/ASRT_SpeechRecognition | tensorflow | 19 | How can dialects be recognized? | For example, southern Chinese dialects. What do I need to do to recognize them? | closed | 2018-06-12T08:48:46Z | 2022-07-05T06:31:50Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/19 | [] | a2741432 | 3 |
developmentseed/lonboard | data-visualization | 762 | Animate `ColumnLayer` | **Is your feature request related to a problem? Please describe.**
I'm currently using lonboard to animate geospatial data with the `TripsLayer` and it's working great. I'd like to extend this to animate statistics about point locations using the `ColumnLayer`, but it doesn't currently support animation.
**Describe the solution you'd like**
An API similar to `TripsLayer` for the `ColumnLayer` to animate column values, preferably allowing different colored stacks at a single geometry over a time range.
**Additional context**
My end goal is something similar to this video of agent-based travel:
https://www.youtube.com/watch?v=B0v2Wi5t7Go
| open | 2025-02-25T06:44:31Z | 2025-03-05T16:06:24Z | https://github.com/developmentseed/lonboard/issues/762 | [] | Jake-Moss | 1 |
OpenInterpreter/open-interpreter | python | 610 | Unsupported Language: powershell, Win11 | ### Describe the bug
I get this error whenever any of the models outputs a PowerShell script.
```
Traceback (most recent call last):
File
"C:\Users\Dave\AppData\Local\Programs\Python\Python310\lib\site-packages\interpreter\code_interpreters\create_code_i
nterpreter.py", line 8, in create_code_interpreter
CodeInterpreter = language_map
KeyError: 'powershell'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Dave\AppData\Local\Programs\Python\Python310\lib\site-packages\interpreter\core\respond.py", line
105, in respond
interpreter._code_interpreters = create_code_interpreter(language)
File
"C:\Users\Dave\AppData\Local\Programs\Python\Python310\lib\site-packages\interpreter\code_interpreters\create_code_i
nterpreter.py", line 11, in create_code_interpreter
raise ValueError(f"Unknown or unsupported language: {language}")
ValueError: Unknown or unsupported language: powershell
```
### Reproduce
On Windows 11, using the latest release of interpreter, 0.1.7:
Ask the model to output a PowerShell script, even something simple like "list the files in the current working directory".
### Expected behavior
It can execute the PowerShell script and continue.
### Screenshots

### Open Interpreter version
0.1.7
### Python version
3.10
### Operating System name and version
Windows 11
### Additional context
_No response_ | closed | 2023-10-09T19:44:48Z | 2023-10-11T18:26:26Z | https://github.com/OpenInterpreter/open-interpreter/issues/610 | [
"Bug"
] | DaveChini | 2 |
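The traceback shows a plain dictionary lookup (`language_map[language]`) raising `KeyError` for `"powershell"` and being re-raised as `ValueError`. A sketch of that dispatch pattern with an alias-based fallback; the map contents and interpreter names here are illustrative, not Open Interpreter's actual code:

```python
# Illustrative language dispatch with aliases for unmapped shells.
language_map = {
    "python": "PythonInterpreter",
    "shell": "ShellInterpreter",
}
language_aliases = {"powershell": "shell", "bash": "shell", "sh": "shell"}

def create_code_interpreter(language):
    language = language_aliases.get(language, language)
    try:
        return language_map[language]
    except KeyError:
        raise ValueError(f"Unknown or unsupported language: {language}")
```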
python-gitlab/python-gitlab | api | 2,706 | Wiki subpage moved up when updated | ## Description of the problem, including code/CLI snippet
Hi, I'm using the library inside a Python script, and when I save a wiki subpage, it is moved up.
```python
gwuc_page: ProjectWiki = glp.wikis.get('my/slug')
# do some work
gwuc_page.save()
```
## Expected Behavior
The page is saved.
## Actual Behavior
The page is saved and moved to 'slug' instead of 'my/slug'
## Specifications
- python-gitlab version: 3.15.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): on-premise GitLab Enterprise Edition v16.2.8-ee
| closed | 2023-10-26T16:40:40Z | 2025-03-03T01:48:37Z | https://github.com/python-gitlab/python-gitlab/issues/2706 | [
"upstream"
] | trotro | 7 |
tfranzel/drf-spectacular | rest-api | 1,145 | Typing issue when `extend_schema` contained in dict literal | **Describe the bug**
I have a module where I maintain all `extend_schema` definitions like so:
```python
from drf_spectacular.utils import extend_schema
a = extend_schema(summary="a")
b = extend_schema(summary="b")
f = {"a": a, "b": b}
```
Checking types gives following errors:
```
main.py:6: error: Dict entry 0 has incompatible type "str": "Callable[[F], F]"; expected "str": "Callable[[object], object]" [dict-item]
main.py:6: error: Dict entry 1 has incompatible type "str": "Callable[[F], F]"; expected "str": "Callable[[object], object]" [dict-item]
Found 2 errors in 1 file (checked 1 source file)
```
**To Reproduce**
Run `mypy>=1.6.0` on given Python script. The issue does not appear with `mypy==1.5.1`
**Expected behavior**
No errors? | closed | 2024-01-18T18:09:47Z | 2024-07-31T20:54:40Z | https://github.com/tfranzel/drf-spectacular/issues/1145 | [] | realsuayip | 2 |
open-mmlab/mmdetection | pytorch | 11,989 | Can parallel inference be used in dino detection? | Can parallel inference be used in dino detection? I based it on image_demo.py and tried using the batch_size parameter, but it didn't work | open | 2024-10-10T06:34:31Z | 2024-10-10T06:34:46Z | https://github.com/open-mmlab/mmdetection/issues/11989 | [] | zhoulin2545210131 | 0 |
seleniumbase/SeleniumBase | web-scraping | 2,563 | Possibility of integrating SeleniumBase in dotnet? | Normally I have been using the standard Selenium library, but when I needed Google automation, or sites behind Cloudflare, only SB worked for me.
But I mainly write for .NET Framework, with Visual Basic as the programming language.
Wrapping it into PyInstaller looks like a workaround for the moment, but it is harder to control.
Are there any SB bindings for .NET,
or any similar project that has UC mode?
I also tried uc_driver.exe, which is patched by SB, but on URL navigation Cloudflare detects my .NET app.
So I'm sure that on navigation SB does some additional work too?
Any advice or suggestion would be highly appreciated.
Thanks a ton | closed | 2024-03-04T13:05:35Z | 2025-03-06T03:03:12Z | https://github.com/seleniumbase/SeleniumBase/issues/2563 | [
"external",
"UC Mode / CDP Mode"
] | graysuit | 4 |
pallets/flask | flask | 5,275 | Pallets | <!--
Replace this comment with a description of what the feature should do.
Include details such as links to relevant specs or previous discussions.
-->
<!--
Replace this comment with an example of the problem which this feature
would resolve. Is this problem solvable without changes to Flask, such
as by subclassing or using an extension?
--> | closed | 2023-10-02T07:19:26Z | 2023-10-17T00:05:47Z | https://github.com/pallets/flask/issues/5275 | [] | markjpT | 0 |
huggingface/datasets | pandas | 6,729 | Support zipfiles that span multiple disks? | See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream
The dataset viewer gives the following error:
```
Error code: ConfigNamesError
Exception: BadZipFile
Message: zipfiles that span multiple disks are not supported
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 67, in compute_config_names_response
get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 347, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1846, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1240, in get_module
module_name, default_builder_kwargs = infer_module_for_data_files(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 584, in infer_module_for_data_files
split_modules = {
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 585, in <dictcomp>
split: infer_module_for_data_files_list(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 526, in infer_module_for_data_files_list
return infer_module_for_data_files_list_in_archives(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 554, in infer_module_for_data_files_list_in_archives
for f in xglob(extracted, recursive=True, download_config=download_config)[
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 576, in xglob
fs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 622, in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 290, in filesystem
return cls(**storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 79, in __call__
obj = super().__call__(*args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__
self.zip = zipfile.ZipFile(
File "/usr/local/lib/python3.9/zipfile.py", line 1266, in __init__
self._RealGetContents()
File "/usr/local/lib/python3.9/zipfile.py", line 1329, in _RealGetContents
endrec = _EndRecData(fp)
File "/usr/local/lib/python3.9/zipfile.py", line 286, in _EndRecData
return _EndRecData64(fpin, -sizeEndCentDir, endrec)
File "/usr/local/lib/python3.9/zipfile.py", line 232, in _EndRecData64
raise BadZipFile("zipfiles that span multiple disks are not supported")
zipfile.BadZipFile: zipfiles that span multiple disks are not supported
```
The files (https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream/tree/main/data) are:
<img width="629" alt="Capture d’écran 2024-03-11 à 22 07 30" src="https://github.com/huggingface/datasets/assets/1676121/0bb15a51-d54f-4d73-8572-e427ea644b36">
| closed | 2024-03-11T21:07:41Z | 2024-06-26T05:08:59Z | https://github.com/huggingface/datasets/issues/6729 | [
"enhancement",
"question"
] | severo | 6 |
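The traceback bottoms out in the standard library: `zipfile` refuses split (multi-disk) archives outright. A small check that distinguishes an archive the stdlib can read from one it rejects; merging the parts back into a single archive before uploading (e.g. with `zip -s 0 data.zip --out merged.zip`) is one way to sidestep the limitation:

```python
import zipfile

def zipfile_can_read(path):
    # True when the stdlib zipfile module accepts the archive; split
    # multi-disk zips (and plain corrupt files) raise BadZipFile instead.
    try:
        with zipfile.ZipFile(path):
            return True
    except zipfile.BadZipFile:
        return False
```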
iperov/DeepFaceLab | machine-learning | 835 | New bug of train.sh? | I have followed the Linux installation instructions. There was a numpy package in my conda environment, but it still told me "no module named numpy" when I ran train.sh.
I am doing DL work, so I have been using numpy for a long time.
Why did this happen?
Thanks! | closed | 2020-07-15T12:30:07Z | 2020-07-24T07:45:42Z | https://github.com/iperov/DeepFaceLab/issues/835 | [] | Joevaen | 6 |
bmoscon/cryptofeed | asyncio | 95 | book delta support in backends | Support book deltas in the backends as storing complete books for some of the larger exchanges is somewhat unfeasible due to the volume of data (4000+ levels per side, 100s of updates a second). | closed | 2019-05-20T22:45:39Z | 2019-05-23T22:22:05Z | https://github.com/bmoscon/cryptofeed/issues/95 | [] | bmoscon | 1 |
autogluon/autogluon | data-science | 4,808 | [BUG] [timeseries] TimeSeriesPredictor.feature_importance outputting 0 when covariate is used by regressor | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
When `TimeSeriesPredictor.feature_importance` is called given only Chronos with CatBoost covariate regressor, feature importance is computed as 0 even though CatBoost assigns 100% feature importance to one of the covariates.
set up:
```
import numpy as np
import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor
df = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_daily_subset/train.csv")
df.head()
static_features_df = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_daily_subset/metadata.csv")
static_features_df.head()
train_data = TimeSeriesDataFrame.from_data_frame(
df,
id_column="item_id",
timestamp_column="timestamp",
static_features_df=static_features_df,
)
train_data.head()
train_data["log_target"] = np.log(train_data["target"])
WEEKEND_INDICES = [5, 6]
timestamps = train_data.index.get_level_values("timestamp")
train_data["weekend"] = timestamps.weekday.isin(WEEKEND_INDICES).astype(float)
train_data.head()
predictor = TimeSeriesPredictor(
prediction_length=14,
target="target",
known_covariates_names=["weekend"],
).fit(
train_data,
hyperparameters={
"Chronos": {
"model_path": "bolt_small",
"covariate_regressor": "CAT",
"target_scaler": "standard",
"ag_args": {"name_suffix": "WithRegressor"},
},
}
)
predictor.feature_importance(train_data)
```
```python
trainer = predictor._learner.load_trainer()
model = trainer.load_model(
trainer.get_model_best()
)
reg = model._get_model_base().covariate_regressor.fit(train_data).model.model
dict(zip(reg.feature_names_, reg.feature_importances_)) # {'weekend': 100.0}
```
Similar issue: https://github.com/autogluon/autogluon/issues/4322
```python
# Replace this code with the output of the following:
from autogluon.core.utils import show_versions
show_versions()
```
</details>
| closed | 2025-01-17T08:09:53Z | 2025-01-29T16:31:16Z | https://github.com/autogluon/autogluon/issues/4808 | [
"bug",
"module: timeseries"
] | canerturkmen | 1 |
HumanSignal/labelImg | deep-learning | 706 | File explorer is missing for no reason | Hello, I was using labelimg and suddenly the file explorer is missing.

What do I have to do to view this explorer again?
Thanks in advance
- **OS:** Ubuntu 20.04
- **PyQt version:** Python3.8
| closed | 2021-02-11T15:23:15Z | 2021-03-25T02:06:53Z | https://github.com/HumanSignal/labelImg/issues/706 | [] | hdnh2006 | 3 |
rougier/numpy-100 | numpy | 8 | examples 32 and 33 are identical | and this one is really cool BTW!
| closed | 2016-03-08T23:03:33Z | 2016-03-09T06:02:21Z | https://github.com/rougier/numpy-100/issues/8 | [] | ev-br | 1 |
mouredev/Hello-Python | fastapi | 422 | Recommendations for the best reliable global online gambling choices and trustworthy game download APP platforms | Top ten reliable online gambling platform entertainment game URL: 376838.com
Game account opening manager, WeChat: xiaolu460570, Telegram: lc15688. Legitimate physical platforms: Chuanglian Entertainment physical online gambling platform, Chuanglian Games alliance physical gambling platform recommendations, top ten Asian betting and entertainment sites, Asian gambling platforms with online account opening, online gambling APPs, top ten recommended physical gambling platforms, physical online gambling platforms, physical betting entertainment. Chuanglian International, Chuanglian Entertainment, Chuanglian Games: the world's top ten legitimate online gambling platforms, recommended online! Recommended top ten reliable and reputable gambling platforms. International Southeast Asian online gambling platforms; legal online casinos, legal online casinos in China, the best online casinos; top ten fast-3 gambling platforms; the best and biggest gambling sites; top ten trusted gambling platforms; the world's biggest legitimate online gambling platform: top ten, with guaranteed fund safety. A live-streaming platform for large physical casinos in Southeast Asia, ranked among the world's top online entertainment brands for many consecutive years! Your first choice for online betting and entertainment! | closed | 2025-03-02T07:33:33Z | 2025-03-02T11:12:18Z | https://github.com/mouredev/Hello-Python/issues/422 | [] | 376838 | 0 |
PablocFonseca/streamlit-aggrid | streamlit | 132 | Component error: when using double forward slash in JsCode | I have encountered an error when using AgGrid with `allow_unsafe_jscode` set to `True` and passing a `cellRenderer` for one of the columns.
```
Component Error
`` literal not terminated before end of script
```
I am constructing a custom anchor tag based on `params.value`, and using `https://` there was resulting in the error.
I escaped the forward slashes, i.e. `https:\/\/`, to fix the issue,
>return `<a href="https://example.com/${params.value}/" target="_blank">${params.value}</a>`;
to
>return `<a href="https:\/\/example.com/${params.value}/" target="_blank">${params.value}</a>`;
I was wondering if this can be handled more elegantly in the component itself? Thanks.
This used to work in a previous version.
(I switched from `0.2.3`, where it was working, to `0.3.2`, so I am not sure when in between it broke.)
PS: thanks for building an awesome component that works all the time out of the box 💯 | closed | 2022-08-16T09:56:09Z | 2022-08-27T19:55:14Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/132 | [] | DharmeshPandav | 1 |
google-research/bert | nlp | 385 | dose <s> represent whitespace in the chinese pretrained vocabulary? | Because the chinese pretrained vocab does not include all the english words, so I split english words into characters. Then how do I represent whitespace between english words? | open | 2019-01-22T09:17:56Z | 2021-06-29T06:30:07Z | https://github.com/google-research/bert/issues/385 | [] | lorashen | 1 |
statsmodels/statsmodels | data-science | 8,918 | How can I use version 0.15.0? | I'd like to use the fact that SARIMAX in 0.15.0 supports a maxiter argument. When will 0.15.0 be released? In the meantime, is there a way for me to use the code in 0.15.0? | closed | 2023-06-19T17:16:13Z | 2023-10-27T09:57:28Z | https://github.com/statsmodels/statsmodels/issues/8918 | [
"comp-tsa-statespace",
"question"
] | dblim | 1 |
FlareSolverr/FlareSolverr | api | 1,227 | TimeOut during solving | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.19
- Last working FlareSolverr version: a few weeks ago there was no issue
- Operating system: docker under debian12
- Are you using Docker: yes
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- If using captcha solver, which one:
- URL to test this issue:https://www.ygg.re/engine/search?do=search&order=desc&sort=seed&category=all
```
### Description
It seems ygg.re is now broken again. I tried with 60s and 120s but get a timeout each time. I have also disabled IPv6 and done all the checks asked for in the wiki, and I tried connecting to the container and updating Chromium and everything else, but everything is already up to date...
Thanks for your help.
### Logged Error Messages
```text
2024-06-21 18:23:52 INFO FlareSolverr 3.3.19
2024-06-21 18:23:52 INFO Testing web browser installation...
2024-06-21 18:23:52 INFO Platform: Linux-6.1.0-21-amd64-x86_64-with-glibc2.31
2024-06-21 18:23:52 INFO Chrome / Chromium path: /usr/bin/chromium
2024-06-21 18:23:53 INFO Chrome / Chromium major version: 120
2024-06-21 18:23:53 INFO Launching web browser...
version_main cannot be converted to an integer
2024-06-21 18:23:54 INFO FlareSolverr User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
2024-06-21 18:23:54 INFO Test successful!
2024-06-21 18:23:54 INFO Serving on http://0.0.0.0:8291
2024-06-21 18:24:36 INFO Incoming request => POST /v1 body: {'maxTimeout': 60000, 'cmd': 'request.get', 'url': 'https://www.ygg.re/engine/search?do=search&order=desc&sort=seed&category=all'}
version_main cannot be converted to an integer
2024-06-21 18:24:38 INFO Challenge detected. Title found: Just a moment...
2024-06-21 18:25:38 ERROR Error: Error solving the challenge. Timeout after 60.0 seconds.
2024-06-21 18:25:38 INFO Response in 61.715 s
2024-06-21 18:25:38 INFO 127.0.0.1 POST http://127.0.0.1:8291/v1 500 Internal Server Error
2024-06-21 18:26:01 INFO Incoming request => POST /v1 body: {'maxTimeout': 120000, 'cmd': 'request.get', 'url': 'https://www.ygg.re/engine/search?do=search&order=desc&sort=seed&category=all'}
version_main cannot be converted to an integer
2024-06-21 18:26:02 INFO Challenge detected. Title found: Just a moment...
2024-06-21 18:28:02 ERROR Error: Error solving the challenge. Timeout after 120.0 seconds.
2024-06-21 18:28:02 INFO Response in 121.19 s
2024-06-21 18:28:02 INFO 127.0.0.1 POST http://127.0.0.1:8291/v1 500 Internal Server Error
```
### Screenshots
_No response_ | closed | 2024-06-21T16:33:41Z | 2024-06-21T19:22:24Z | https://github.com/FlareSolverr/FlareSolverr/issues/1227 | [
"duplicate"
] | bozoweed | 7 |
seleniumbase/SeleniumBase | pytest | 2,481 | Getting "selenium.common.exceptions.WebDriverException: Message: disconnected: Unable to receive message from renderer" in Driver(uc=True) | I am getting `"selenium.common.exceptions.WebDriverException: Message: disconnected: Unable to receive message from renderer"` in `Driver(uc=True)`
A Google search suggests adding `--no-sandbox` and `--disable-dev-shm-usage`, but I can see they are already added at this [line](https://github.com/seleniumbase/SeleniumBase/blob/77ee5149a34a0bbca17c27a7e745d0754e3ad9e3/seleniumbase/core/browser_launcher.py#L1042).
I am using Python 3.10 with the latest SeleniumBase, and here is a snippet from my code:
```
from seleniumbase import Driver
from sbvirtualdisplay import Display
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
prefs = {"profile.managed_default_content_settings.images": 2}
chrome_options.add_experimental_option("prefs", prefs)
with Display(visible=0, size=(1440, 1880)):
driver = Driver(
uc=True,
browser='chrome',
cap_string=chrome_options.to_capabilities()
)
driver.get('https://www.website.com/')
```
| closed | 2024-02-11T06:28:14Z | 2024-11-05T16:40:58Z | https://github.com/seleniumbase/SeleniumBase/issues/2481 | [
"duplicate",
"invalid usage",
"UC Mode / CDP Mode"
] | iamumairayub | 11 |
Esri/arcgis-python-api | jupyter | 1,755 | Error displaying widget: model not found for arcgis 2.1.0 | **Describe the bug**
When installing arcgis=2.1.0, jupyterlab=2, and nodejs=18.16.0=ha637b67_1, I get an "error displaying widget: model not found" error when trying to display a map.
**To Reproduce**
Steps to reproduce the behavior:
```python
#I have the esri channel in my conda channel list
conda create -n arcgis210env python=3.8
conda activate arcgis210env
conda install -c esri arcgis=2.1.0
conda install jupyterlab=2
conda install nodejs=18.16.0=ha637b67_1
jupyter labextension install @jupyter-widgets/jupyterlab-manager@2
jupyter labextension install arcgis-map-ipywidget@2.1.0
#run jupyter lab, I've been using the following command but there may be a better way
jupyter lab --ip 0.0.0.0 --no-browser --allow-root
```
error:
```python
Error displaying widget: model not found
```
**Screenshots**
**Expected behavior**
interactive map should display
**Platform (please complete the following information):**
 - OS: Linux x64
 - Browser: Chrome
 - Python API Version (`print(arcgis.__version__)`): 2.1.0
**Additional context**
Using the same kind of commands, I can install arcgis=2.0.0, jupyterlab=2 and nodejs=18.16.0=ha637b67_1 and the map widget displays. I installed nodejs=18.16.0=ha637b67_1 because that was the highest version that still used openssl=1.1.1. If I installed a nodejs with higher version, openssl 3.x was installed, and my jupyterlab builds would fail.

| closed | 2024-02-06T20:49:13Z | 2024-02-09T12:45:36Z | https://github.com/Esri/arcgis-python-api/issues/1755 | [
"As-Designed"
] | timhaverland-noaa | 3 |
NullArray/AutoSploit | automation | 664 | Unhandled Exception (d2d3bf199) | Autosploit version: `3.1`
OS information: `Linux-4.17.0-kali3-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `object of type 'NoneType' has no len()`
Error traceback:
```
Traceback (most recent call):
File "/root/Autosploit/autosploit/main.py", line 116, in main
terminal.terminal_main_display(loaded_tokens)
File "/root/Autosploit/lib/term/terminal.py", line 494, in terminal_main_display
if len(choice_data_list) < 4:
TypeError: object of type 'NoneType' has no len()
```
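The crash at `len(choice_data_list)` is a plain missing-`None` check. A defensive guard might look like this (illustrative sketch only — not AutoSploit's actual patch):

```python
def selection_incomplete(choice_data_list):
    # Treat a missing selection the same as a short one,
    # instead of calling len() on None and raising TypeError.
    return choice_data_list is None or len(choice_data_list) < 4

print(selection_incomplete(None))          # True
print(selection_incomplete([1, 2, 3]))     # True
print(selection_incomplete([1, 2, 3, 4]))  # False
```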
Metasploit launched: `False`
| closed | 2019-04-16T19:16:59Z | 2019-04-17T18:33:03Z | https://github.com/NullArray/AutoSploit/issues/664 | [] | AutosploitReporter | 0 |
aimhubio/aim | data-visualization | 3,181 | RuntimeError: dictionary keys changed during iteration | ## 🐛 Bug
When using ray.tune for hyperparameter optimization and invoking the AimLoggerCallback, the following error occurs:
```
Traceback (most recent call last):
File "/root/data/mamba-reid/tst.py", line 92, in <module>
results = tuner.fit()
^^^^^^^^^^^
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/tuner.py", line 377, in fit
return self._local_tuner.fit()
^^^^^^^^^^^^^^^^^^^^^^^
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/impl/tuner_internal.py", line 476, in fit
analysis = self._fit_internal(trainable, param_space)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/impl/tuner_internal.py", line 592, in _fit_internal
analysis = run(
^^^^
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/tune.py", line 994, in run
runner.step()
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py", line 685, in step
if not self._actor_manager.next(timeout=0.1):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/air/execution/_internal/actor_manager.py", line 223, in next
self._actor_task_events.resolve_future(future)
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/air/execution/_internal/event_manager.py", line 118, in resolve_future
on_result(result)
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/air/execution/_internal/actor_manager.py", line 766, in on_result
self._actor_task_resolved(
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/air/execution/_internal/actor_manager.py", line 299, in _actor_task_resolved
tracked_actor_task._on_result(tracked_actor, result)
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py", line 1229, in _on_result
raise TuneError(traceback.format_exc())
ray.tune.error.TuneError: Traceback (most recent call last):
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py", line 1220, in _on_result
on_result(trial, *args, **kwargs)
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py", line 1947, in _on_trial_reset
self._actor_started(tracked_actor, log="REUSED")
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/execution/tune_controller.py", line 1131, in _actor_started
self._callbacks.on_trial_start(
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/callback.py", line 398, in on_trial_start
callback.on_trial_start(**info)
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/logger/logger.py", line 147, in on_trial_start
self.log_trial_start(trial)
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/logger/aim.py", line 110, in log_trial_start
self._trial_to_run[trial] = self._create_run(trial)
^^^^^^^^^^^^^^^^^^^^^^^
File "/root/data/anaconda3/lib/python3.11/site-packages/ray/tune/logger/aim.py", line 91, in _create_run
run = Run(
^^^^
File "/root/data/anaconda3/lib/python3.11/site-packages/aim/ext/exception_resistant.py", line 70, in wrapper
_SafeModeConfig.exception_callback(e, func)
File "/root/data/anaconda3/lib/python3.11/site-packages/aim/ext/exception_resistant.py", line 47, in reraise_exception
raise e
File "/root/data/anaconda3/lib/python3.11/site-packages/aim/ext/exception_resistant.py", line 68, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/data/anaconda3/lib/python3.11/site-packages/aim/sdk/run.py", line 859, in __init__
super().__init__(run_hash, repo=repo, read_only=read_only, experiment=experiment, force_resume=force_resume)
File "/root/data/anaconda3/lib/python3.11/site-packages/aim/sdk/run.py", line 308, in __init__
self._checkins = RunStatusReporter(self.hash, LocalFileManager(self.repo.path))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/data/anaconda3/lib/python3.11/site-packages/aim/sdk/reporter/__init__.py", line 439, in __init__
self.flush(block=True)
File "/root/data/anaconda3/lib/python3.11/site-packages/aim/sdk/reporter/__init__.py", line 671, in flush
for flag_name in flag_names:
RuntimeError: dictionary keys changed during iteration
```
Specifically located in: **/root/data/anaconda3/lib/python3.11/site-packages/aim/sdk/reporter/__init__.py**
```
def flush(
self,
flag_name: Optional[str] = None,
block: bool = True,
) -> None:
"""
Flush the last check-in.
If `flag_name` is specified, only the check-ins for the given flag
will be flushed.
Otherwise, all the check-ins will be flushed. In this case, the order
of (active) check-ins (per flag name) will be preserved.
"""
logger.debug(f"notifying {self}")
with self.reporter_lock:
flag_names = [flag_name] if flag_name is not None else self.timed_tasks
with self.flush_condition:
for flag_name in flag_names:
logger.debug(f"flushing {flag_name}")
# We add a new task with the highest priority to flush the
# last check-in. This task will be processed by the writer
# thread immediately.
self._schedule(TimedTask(when=0, flag_name=flag_name))
# As there may be no flag names at all, the queue may be
# untouched. In this case, we need to notify the writer thread
# explicitly.
with self.refresh_condition:
self.refresh_condition.notify()
# If `block` is set, we wait until the writer thread finishes
# flushing the last check-in.
if block:
logger.debug("blocking until the writer finishes...")
self.flush_condition.wait()
logger.debug("done")
```
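The failure is Python's generic guard against mutating a dict while iterating it: `_schedule` inserts new entries into `self.timed_tasks` while `flush` is still walking it. A stdlib-only sketch of the failure mode and the snapshot fix (hypothetical names — not Aim's actual code):

```python
def flush(timed_tasks, snapshot_keys):
    # Walk the flag names; "scheduling" a flag here adds a new key,
    # mutating the dict mid-iteration unless we snapshot the keys first.
    flags = list(timed_tasks.keys()) if snapshot_keys else timed_tasks
    for flag_name in flags:
        timed_tasks.setdefault(flag_name + "/flush", 0)

tasks = {"new_checkins": 0, "starting": 0}
flush(tasks, snapshot_keys=True)  # fine: iterates a frozen key list

try:
    flush({"new_checkins": 0, "starting": 0}, snapshot_keys=False)
except RuntimeError as exc:
    print(exc)  # dictionary changed size during iteration
```

Snapshotting with `list(...)` is the standard fix; the only behavioral difference is that keys added during the pass aren't visited until the next flush, which is usually the intended behavior anyway.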
### To reproduce
ray=2.31.0
aim=3.22.0
```
import argparse
import logging
import os.path as osp
import tempfile
import time
from functools import partial
from random import random
import numpy as np
import torch
import yaml
from ray import train, tune, init
from ray.train import Checkpoint
from ray.tune.logger.aim import AimLoggerCallback
from ray.tune.schedulers import HyperBandForBOHB
from ray.tune.search.bohb import TuneBOHB
from tabulate import tabulate
from data import build_dataloaders
from utils import setup_logger, setup_seed, str2dict, str2list
def my_train(config):
setup_seed(777)
growth_rate = config['a'] # [1.0, 10.0]
saturation_level = config['b'] # [1.0, 10.0]
noise_level = config['c'] # [0.01, 0.1]
noise_wo = config['d'] # {1.0, 0.0}
x_values = np.linspace(0, 10, 12)
start = 0
checkpoint = train.get_checkpoint()
if checkpoint:
with checkpoint.as_directory() as checkpoint_dir:
checkpoint_dict = torch.load(osp.join(checkpoint_dir, "checkpoint.pt"))
start = checkpoint_dict["epoch"] + 1
best_mAP = -1.0
for epoch in range(start, len(x_values)):
setup_seed(epoch)
x = x_values[epoch]
base_performance = 1 / (1 + np.exp(-growth_rate * (x - saturation_level)))
random_noise = np.random.normal(0, noise_level)
performance = base_performance + random_noise * (1.0 - noise_wo)
time.sleep(random()*5.0)
best_mAP = max(best_mAP, performance)
with tempfile.TemporaryDirectory() as tempdir:
torch.save(
{"epoch": epoch},
osp.join(tempdir, "checkpoint.pt"),
)
train.report(metrics={'mAP': performance, 'best_mAP':best_mAP}, checkpoint=Checkpoint.from_directory(tempdir))
if __name__ == '__main__':
# init(local_mode=True)
lg, savedir = setup_logger(exp='tune_tst')
config = {
"a": tune.uniform(1.0, 10.0),
"b": tune.uniform(1.0, 10.0),
"c": tune.uniform(0.01, 0.1),
"d": tune.choice([1.0, 0.0])
}
algo = TuneBOHB(
points_to_evaluate=[
{"a": 2.59095, "b": 5.6506, "c": 0.0967916, "d": 0.0},
{"a": 1.0, "b": 1.0, "c": 0.01, "d": 0.0},
{"a": 1.0, "b": 1.0, "c": 0.1, "d": 1.0},
],
seed=0,
max_concurrent=4)
sche = HyperBandForBOHB(
time_attr="training_iteration",
max_t=12,
reduction_factor=2)
tuner = tune.Tuner(
tune.with_resources(my_train, {"cpu": 0.1}),
param_space=config,
tune_config=tune.TuneConfig(num_samples=50, metric='best_mAP', mode='max', search_alg=algo, scheduler=sche, reuse_actors=True),
run_config=train.RunConfig(name="tst", storage_path=osp.abspath(savedir), callbacks=[AimLoggerCallback()], failure_config=train.FailureConfig(fail_fast=True)))
results = tuner.fit()
lg.info(results.get_best_result().config)
```
### Expected behavior
I can solve this problem by replacing `self.timed_tasks` with `list(self.timed_tasks.keys())`, but will there be any side effects?
```
with self.reporter_lock:
flag_names = [flag_name] if flag_name is not None else list(self.timed_tasks.keys())#self.timed_tasks
``` | closed | 2024-07-08T01:07:41Z | 2024-12-05T13:58:25Z | https://github.com/aimhubio/aim/issues/3181 | [
"type / bug",
"help wanted"
] | GuHongyang | 2 |
pytorch/pytorch | machine-learning | 149,349 | DISABLED test_unshard_async (__main__.TestFullyShardUnshardMultiProcess) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_unshard_async&suite=TestFullyShardUnshardMultiProcess&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38913895101).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_unshard_async`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 605, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 845, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 899, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 0 terminated or timed out after 300.0343186855316 seconds
```
</details>
Test file path: `distributed/_composable/fsdp/test_fully_shard_comm.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @zhaojuanmao @mrshenli @rohan-varma @chauhang @penguinwu | open | 2025-03-17T21:40:56Z | 2025-03-24T18:47:44Z | https://github.com/pytorch/pytorch/issues/149349 | [
"oncall: distributed",
"triaged",
"module: flaky-tests",
"skipped",
"module: fsdp",
"oncall: pt2"
] | pytorch-bot[bot] | 2 |
donnemartin/system-design-primer | python | 278 | Step 2: the link to the scalability review article won't open | PS: I'm already using a VPN to get around the firewall. | open | 2019-05-08T11:19:47Z | 2019-09-11T06:20:26Z | https://github.com/donnemartin/system-design-primer/issues/278 | [
"needs-review",
"response-needed"
] | baitian77 | 5 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,026 | synthesizer.pt size changes after fine-tuning | Hey,
I'm fine-tuning the synthesizer model using my own dataset, which I organized in the LibriSpeech hierarchy (dataset -> speaker_id -> segment_id -> audio.txt and audio.flac).
However, after running one epoch, the size of the saved synthesizer model (synthesizer.pt) has changed, causing the program to skip this model and download the default model again.
When I'm fine-tuning with one speaker from LibriSpeech, this problem doesn't occur.
What can cause the change in model size?
Thank you.
| open | 2022-02-24T15:05:17Z | 2022-02-24T15:05:17Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1026 | [] | matnshrn | 0 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 501 | feature_pyramid_network.py | When running the feature_pyramid_network.py module, an error occurs and I don't know how to fix it.
The error is:
from .roi_head import RoIHeads
ImportError: attempted relative import with no known parent package | closed | 2022-03-26T07:56:45Z | 2022-03-29T08:34:04Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/501 | [] | pace0120 | 1 |
NullArray/AutoSploit | automation | 539 | Unhandled Exception (a21f12a70) | Autosploit version: `3.0`
OS information: `Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-centos-7.6.1810-Core`
Running context: `autosploit.py -a -q ***`
Error message: `'access_token'`
Error traceback:
```
Traceback (most recent call):
File "/dev/snd/Autosploit/autosploit/main.py", line 110, in main
AutoSploitParser().single_run_args(opts, loaded_tokens, loaded_exploits)
File "/dev/snd/Autosploit/lib/cmdline/cmd.py", line 207, in single_run_args
save_mode=search_save_mode
File "/dev/snd/Autosploit/api_calls/zoomeye.py", line 88, in search
raise AutoSploitAPIConnectionError(str(e))
errors: 'access_token'
```
Metasploit launched: `False`
| closed | 2019-03-05T18:35:52Z | 2019-03-06T01:26:27Z | https://github.com/NullArray/AutoSploit/issues/539 | [] | AutosploitReporter | 0 |
coqui-ai/TTS | deep-learning | 3,143 | [Bug] AttributeError: 'NoneType' object has no attribute 'load_wav' when using tts_with_vc_to_file | ### Describe the bug
Fix #3108 breaks `tts_with_vc_to_file` at least with VITS.
See: https://github.com/coqui-ai/TTS/blob/6fef4f9067c0647258e0cd1d2998716565f59330/TTS/api.py#L463
By changing the line from:
`self.tts_to_file(text=text, speaker=None, language=language, file_path=fp.name,speaker_wav=speaker_wav)`
To its pre-0.19.1 version:
`self.tts_to_file(text=text, speaker=None, language=language, file_path=fp.name)`
The issue is solved.
Please take a look at the script below for reproduction.
### To Reproduce
Clone the Coqui TTS repository and install the dependencies as specified in the README file.
Then, run the following script from TTS's root directory, but replace `speaker_wav` with any audio file you have at hand:
```python3
#!/usr/bin/env python3
import torch
from TTS.api import TTS
device = "cuda" if torch.cuda.is_available() else "cpu"
tts = TTS("tts_models/pt/cv/vits").to(device)
tts.tts_with_vc_to_file(
text="A radiografia apresentou algumas lesões no fêmur esquerdo ponto parágrafo",
speaker_wav="test_audios/1693678335_24253176-processed.wav",
file_path="test_audios/output.wav",
)
```
### Expected behavior
The output audio file defined in `file_path` is generated, saying the sentence in `text` with the voice cloned from `speaker_wav`.
### Logs
```shell
> tts_models/pt/cv/vits is already downloaded.
> Using model: vits
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:0
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:None
| > fft_size:1024
| > power:None
| > preemphasis:0.0
| > griffin_lim_iters:None
| > signal_norm:None
| > symmetric_norm:None
| > mel_fmin:0
| > mel_fmax:None
| > pitch_fmin:None
| > pitch_fmax:None
| > spec_gain:20.0
| > stft_pad_mode:reflect
| > max_norm:1.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:10
| > hop_length:256
| > win_length:1024
> initialization of speaker-embedding layers.
> initialization of language-embedding layers.
/home/probst/.pyenv/versions/coqui-tts/lib/python3.11/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
> Text splitted to sentences.
['A radiografia apresentou algumas lesões no fêmur esquerdo ponto parágrafo']
Traceback (most recent call last):
File "/home/probst/Projects/TTS-iara/./test.py", line 15, in <module>
tts.tts_with_vc_to_file(
File "/home/probst/Projects/TTS-iara/TTS/api.py", line 488, in tts_with_vc_to_file
wav = self.tts_with_vc(text=text, language=language, speaker_wav=speaker_wav)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/probst/Projects/TTS-iara/TTS/api.py", line 463, in tts_with_vc
self.tts_to_file(text=text, speaker=None, language=language, file_path=fp.name, speaker_wav=speaker_wav)
File "/home/probst/Projects/TTS-iara/TTS/api.py", line 403, in tts_to_file
wav = self.tts(text=text, speaker=speaker, language=language, speaker_wav=speaker_wav, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/probst/Projects/TTS-iara/TTS/api.py", line 341, in tts
wav = self.synthesizer.tts(
^^^^^^^^^^^^^^^^^^^^^
File "/home/probst/Projects/TTS-iara/TTS/utils/synthesizer.py", line 362, in tts
speaker_embedding = self.tts_model.speaker_manager.compute_embedding_from_clip(speaker_wav)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/probst/Projects/TTS-iara/TTS/tts/utils/managers.py", line 365, in compute_embedding_from_clip
embedding = _compute(wav_file)
^^^^^^^^^^^^^^^^^^
File "/home/probst/Projects/TTS-iara/TTS/tts/utils/managers.py", line 342, in _compute
waveform = self.encoder_ap.load_wav(wav_file, sr=self.encoder_ap.sample_rate)
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'load_wav'
```
### Environment
```shell
- 🐸TTS Version: 0.19.1
- PyTorch Version: 2.1.0+cu121
- OS: Artix Linux
Not using GPU.
Installed everything through pip in a virtual environment created with pyenv.
```
### Additional context
_No response_ | closed | 2023-11-05T21:22:48Z | 2023-11-24T11:26:39Z | https://github.com/coqui-ai/TTS/issues/3143 | [
"bug"
] | pprobst | 4 |
tartiflette/tartiflette | graphql | 114 | Should execute only the specified operation | A GraphQL request containing multiple operation definitions shouldn't execute all of them, but only the one specified (cf. [GraphQL spec](https://facebook.github.io/graphql/June2018/#ExecuteRequest())):
* **Tartiflette version:** 0.3.6
* **Python version:** 3.6.1
* **Executed in docker:** No
* **Is a regression from a previous version?** No
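For reference, the spec's `GetOperation(document, operationName)` step boils down to the following (stdlib-only sketch of the spec's selection logic — not tartiflette's implementation):

```python
def get_operation(operations, operation_name=None):
    """Pick exactly one operation to execute.

    operations: mapping of operation name -> operation definition.
    """
    if operation_name is None:
        if len(operations) == 1:
            return next(iter(operations.values()))
        raise ValueError(
            "operation_name is required when the document "
            "defines more than one operation"
        )
    try:
        return operations[operation_name]
    except KeyError:
        raise ValueError(f"Unknown operation named {operation_name!r}.") from None
```

Given a document defining both `Dog` and `Human`, only the operation selected by `operation_name` should run; an unknown name should raise an error rather than silently executing everything.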
SDL Schema:
```graphql
type Dog {
name: String!
}
type Human {
name: String!
}
type Query {
  dog: Dog
human: Human
}
```
Following queries:
```graphql
query Dog {
dog {
name
}
}
query Human {
human {
name
}
}
```
Should result in:
```json
{
"data": {
"dog": {
"name": "***"
}
}
}
```
This should be the result when the requested `operation_name` is `Dog`. If the requested `operation_name` doesn't exist, an error should be raised. | closed | 2019-02-05T17:07:40Z | 2019-02-06T12:55:07Z | https://github.com/tartiflette/tartiflette/issues/114 | [
"bug"
] | Maximilien-R | 0 |
biolab/orange3 | scikit-learn | 6609 | Widget help function no longer working | **What's wrong?**
This seems to have started either when I updated to Orange 3.36.1 for Mac or when I upgraded to macOS Sonoma: when clicking on the question mark in the bottom left of any widget's dialog box, nothing happens.
**How can we reproduce the problem?**
Click on the "?" on the bottom left of any widget dialog box, on a system with specifications similar to mine.
**What's your environment?**
- Operating system: Mac OS Sonoma (14.0)
- Orange version: 3.36.1 (Apple ARM)
- How you installed Orange: From DMG
| closed | 2023-10-25T13:44:18Z | 2024-01-26T11:20:17Z | https://github.com/biolab/orange3/issues/6609 | [
"bug report"
] | wvdvegte | 8 |
qubvel-org/segmentation_models.pytorch | computer-vision | 762 | SegFormer | Kindly requesting you add segformer please :) | closed | 2023-05-19T10:12:24Z | 2023-05-21T01:30:59Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/762 | [] | chefkrym | 3 |
noirbizarre/flask-restplus | api | 523 | wrong swagger.json path | http://dev.py.haizb.cn/gifts/1/
This is my development test server; my code and API documentation live there.
As you can see, the page can't load swagger.json, because the absolute path it requests is http://127.0.0.1:10000/gifts/1/swagger.json.
I use nginx to proxy uwsgi.
nginx.conf looks like this:
> location / {
proxy_pass http://127.0.0.1:10000/;
client_max_body_size 1024m;
proxy_connect_timeout 1000;
proxy_set_header X-Real-IP $remote_addr;
}
uwsgi.xml looks like this:
> <uwsgi>
<pythonpath>/var/www/python</pythonpath>
<uid>1000</uid>
<pidfile>/log/python/hzb.pid</pidfile>
<module>runproduct_debug</module>
<callable>app</callable>
<http>127.0.0.1:10000</http>
<py-autoreload>1</py-autoreload>
<uwsgi_read_timeout>180</uwsgi_read_timeout>
<master/>
<processes>4</processes>
<memory-report/>
</uwsgi>
I want to know how to change "http://127.0.0.1:10000/gifts/1/swagger.json" to "http://dev.py.haizb.cn/gifts/1/swagger.json".
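For what it's worth, one common approach (a sketch only — the header names are the conventional ones, not verified against this exact deployment) is to forward the original `Host` so Flask builds external URLs from it instead of from the upstream address:

```nginx
location / {
    proxy_pass http://127.0.0.1:10000/;
    proxy_set_header Host $host;                  # dev.py.haizb.cn instead of 127.0.0.1:10000
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    client_max_body_size 1024m;
    proxy_connect_timeout 1000;
}
```

Depending on the WSGI setup, Werkzeug's `ProxyFix` middleware may also be needed so the application trusts those forwarded headers.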
| closed | 2018-09-13T14:44:18Z | 2019-03-27T09:34:38Z | https://github.com/noirbizarre/flask-restplus/issues/523 | [
"duplicate",
"documentation"
] | dengqianyi | 3 |
thp/urlwatch | automation | 347 | Cache database redesign | I've been thinking about tackling the following issues:
* Support job specific check intervals #148, #171
* Find the latest entry of a job accurately #335
* Avoid cache entries duplication #326
* Export site change/version history #53, #170
I believe a redesign of the cache database can help with all these issues.
#### General idea
* Use one table to store snapshots data, where each row corresponds to a distinct snapshot of retrieved data. Rows are immutable.
* Use another table to store job info, where each row corresponds to a different job, together with its last run state info. Rows are mutable.
Currently, the `CacheEntry` table mixes both snapshot info and last run state info. This makes it inflexible to extend and prone to duplication.
#### Tentative details
* Add a new table called `LastRunState`, with columns `guid`, `name`, `location`, `last_checked_time`, `last_snapshot_id`, `tries`, `etag`...
* Deprecate the `tries` and `etag` columns of the `CacheEntry` table.
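A minimal sqlite sketch of the split (column names taken from the bullets above; purely illustrative, not a proposed migration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Immutable: one row per distinct snapshot of retrieved data.
CREATE TABLE Snapshot (
    id        INTEGER PRIMARY KEY,
    guid      TEXT    NOT NULL,
    timestamp INTEGER NOT NULL,
    data      TEXT    NOT NULL
);
-- Mutable: one row per job, updated in place on every run.
CREATE TABLE LastRunState (
    guid              TEXT PRIMARY KEY,
    name              TEXT,
    location          TEXT,
    last_checked_time INTEGER,
    last_snapshot_id  INTEGER REFERENCES Snapshot(id),
    tries             INTEGER DEFAULT 0,
    etag              TEXT
);
""")
```

Finding the latest entry of a job then becomes a single lookup through `last_snapshot_id` instead of a `MAX(timestamp)` scan, and run-state updates never touch the immutable snapshot rows.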
Then there's the tricky issue of migration... which I'm not yet sure how best to do. | closed | 2019-01-16T05:23:33Z | 2020-07-29T20:25:05Z | https://github.com/thp/urlwatch/issues/347 | [] | cfbao | 2 |
ansible/ansible | python | 84,127 | include_tasks in a block unintentionally inherits block tags | ### Summary
When specifying `tags: always` on a block, the `include_tasks` task within the block unintentionally inherits `tags: always`. This causes tasks that are not supposed to run (based on tag filtering) to be executed, as seen in the example below. Specifically, when the playbook is run with the tag `tag1`, tasks tagged `tag2` are unexpectedly executed.
### Issue Type
Bug Report
### Component Name
tags, block, include_tasks
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.5]
config file = None
configured module search path = ['/Users/usr/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/usr/Library/Python/3.9/lib/python/site-packages/ansible
ansible collection location = /Users/usr/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/usr/Library/Python/3.9/bin/ansible
python version = 3.9.6 (default, Feb 3 2024, 15:58:27) [Clang 15.0.0 (clang-1500.3.9.4)] (/Library/Developer/CommandLineTools/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
macOS 14.6.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
## pb.yml
```yaml (paste below)
- hosts: localhost
gather_facts: false
tasks:
- name: TEST1 tags directly specified
ansible.builtin.include_tasks: foo.yml
tags: always
- block:
- name: TEST2 tags specified using block
ansible.builtin.include_tasks: foo.yml
tags: always
```
## foo.yml
```yaml
- name: "Debug ansible_run_tags when tags: tag2 is applied"
ansible.builtin.debug:
var: ansible_run_tags
tags: tag2
```
## Command to Run:
```
ansible-playbook pb.yml --tags tag1
```
## Result:
```
TASK [TEST1 tags directly specified] ****************************************************************************************
included: /hoge/foo.yml for localhost
TASK [TEST2 tags specified using block] *************************************************************************************
included: /hoge/foo.yml for localhost
TASK [Debug ansible_run_tags when tags: tag2 is applied] ********************************************************************
ok: [localhost] => {
"ansible_run_tags": [
"tag1"
]
}
```
## Expected Behavior:
TEST1: This task should be executed because tags: always is directly applied. However, the tasks in foo.yml (which have tags: tag2) should not run when --tags tag1 is specified, as they do not match the tag.
TEST2: Similar behavior is expected. Even though tags: always is applied to the block, the tasks within foo.yml (with tags: tag2) should not execute when the playbook is run with --tags tag1. However, in the current behavior, the tasks in foo.yml are executed, which is unexpected.
## Problem:
It appears that the include_tasks inside the block is inheriting the tags: always unintentionally. As a result, the tasks within the included file are executed, even though the specified tag filtering (--tags tag1) should prevent it.
### Expected Results
TASK [TEST1 tags directly specified] ****************************************************************************************
included: /hoge/foo.yml for localhost
TASK [TEST2 tags specified using block] *************************************************************************************
included: /hoge/foo.yml for localhost
### Actual Results
```console
TASK [TEST1 tags directly specified] ****************************************************************************************
included: /hoge/foo.yml for localhost
TASK [TEST2 tags specified using block] *************************************************************************************
included: /hoge/foo.yml for localhost
TASK [Debug ansible_run_tags when tags: tag2 is applied] ********************************************************************
ok: [localhost] => {
"ansible_run_tags": [
"tag1"
]
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-10-16T07:15:14Z | 2024-10-31T13:00:02Z | https://github.com/ansible/ansible/issues/84127 | [
"module",
"bug",
"affects_2.15"
] | HIROYUKI-ONODERA1 | 7 |
gee-community/geemap | jupyter | 1336 | Add Landsat Collection 2 cloud masking | https://code.earthengine.google.com/?as_external&scriptPath=Examples%3ACloud%20Masking%2FLandsat457%20Surface%20Reflectance | closed | 2022-11-19T04:28:00Z | 2023-04-17T14:01:18Z | https://github.com/gee-community/geemap/issues/1336 | [
"Feature Request"
] | giswqs | 0 |
aws/aws-sdk-pandas | pandas | 2,220 | wr.s3.to_parquet fails while writing data from Excel file | ### Describe the bug
We are just reading some Excel file using
`df=wr.s3.read_excel("path")`
Immediately, we are writing it as parquet
`wr.s3.to_parquet(df, "path")`
But the above operation fails with the following in Glue 4.0
```
File "<stdin>", line 1, in <module>
File "/opt/amazon/python3.9-ray/lib/python3.9/site-packages/awswrangler/_config.py", line 546, in wrapper
return function(**args)
File "/opt/amazon/python3.9-ray/lib/python3.9/site-packages/awswrangler/s3/_write_parquet.py", line 598, in to_parquet
schema: pa.Schema = _data_types.pyarrow_schema_from_pandas(
File "/opt/amazon/python3.9-ray/lib/python3.9/site-packages/awswrangler/_data_types.py", line 654, in pyarrow_schema_from_pandas
columns_types: Dict[str, Optional[pa.DataType]] = pyarrow_types_from_pandas(
File "/opt/amazon/python3.9-ray/lib/python3.9/site-packages/awswrangler/_distributed.py", line 87, in wrapper
return cls.dispatch_func(func)(*args, **kw)
File "/opt/amazon/python3.9-ray/lib/python3.9/site-packages/awswrangler/distributed/ray/modin/_data_types.py", line 17, in pyarrow_types_from_pandas_distributed
first_block_object_ref = ray.data.from_modin(df).get_internal_block_refs()[0]
File "/opt/amazon/python3.9-ray/lib/python3.9/site-packages/ray/data/read_api.py", line 838, in from_modin
parts = unwrap_partitions(df, axis=0)
File "/opt/amazon/python3.9-ray/lib/python3.9/site-packages/modin/distributed/dataframe/pandas/partitions.py", line 49, in unwrap_partitions
raise ValueError(
ValueError: Only API Layer objects may be passed in here, got <class 'pandas.core.frame.DataFrame'> instead.
```
### How to Reproduce
```
*P.S. Please do not attach files as it's considered a security risk. Add code snippets directly in the message body as much as possible.*
```
### Expected behavior
_No response_
### Your project
_No response_
### Screenshots
_No response_
### OS
Glue
### Python version
3.9.16
### AWS SDK for pandas version
3.0.0rc2
### Additional context
_No response_ | closed | 2023-04-24T07:18:13Z | 2023-05-15T21:26:37Z | https://github.com/aws/aws-sdk-pandas/issues/2220 | [
"bug"
] | kondisettyravi | 7 |
psf/black | python | 4,098 | `hug_parens_with_braces_and_square_brackets`: Recursively hugging parens decrease readability | This is feedback related to the `hug_parens_with_braces_and_square_brackets` preview style.
**Describe the style change**
Only collapse the first set of parens rather than all parens recursively. Having too many opening/closing parens on a single line makes it harder to distinguish the individual parentheses (especially if they are all of the same kind).
**Examples in the current _Black_ style**
Here is an example from Ruff's shade after implementing the `hug_parens_with_braces_and_square_brackets` preview style recursively.
```diff
)
)
if commented_entity:
- entity_map = CommentedMap(
- [(entity["entity"], entity["value"])]
- )
+ entity_map = CommentedMap([(
+ entity["entity"],
+ entity["value"],
+ )])
entity_map.yaml_add_eol_comment(
commented_entity, entity["entity"]
)
entities.append(entity_map)
else:
entities.append(
- OrderedDict(
- [(entity["entity"], entity["value"])]
- )
+ OrderedDict([(
+ entity["entity"],
+ entity["value"],
+ )])
)
else:
entities.append(
```
**Desired style**
```python
)
)
if commented_entity:
entity_map = CommentedMap([
(entity["entity"], entity["value"])
])
entity_map.yaml_add_eol_comment(
commented_entity, entity["entity"]
)
entities.append(entity_map)
else:
entities.append(
OrderedDict([
(entity["entity"], entity["value"])
])
)
else:
entities.append(
```
**Additional context**
This issue is split out from [my stable style 2024 comment](https://github.com/psf/black/issues/4042#issuecomment-1846342231).
Ruff implements `hug_parens_with_braces_and_square_brackets` in preview style but we intentionally haven't implemented recursive hugging yet. | open | 2023-12-10T23:53:54Z | 2024-11-25T09:39:59Z | https://github.com/psf/black/issues/4098 | [
"T: style",
"C: preview style"
] | MichaReiser | 1 |
jmcnamara/XlsxWriter | pandas | 592 | Feature Request: Print legend series array to know what to 'delete_series': [?,?] | Hello, I was wondering if there is a quick and easy way to print out or access the legend series that are available to delete.
Right now I feel blind when deleting series with only 'delete_series': [?,?] | closed | 2019-01-10T22:19:48Z | 2019-01-11T15:59:08Z | https://github.com/jmcnamara/XlsxWriter/issues/592 | [
"question"
] | NikoTumi | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,248 | [Feature Request]: clarify in the readme that webui.sh must be set to executable | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
I know it sounds basic, but beginners get confused by step "3) Set webui.sh to executable and run".
### Proposed workflow
.
### Additional information
_No response_ | open | 2024-07-22T12:10:58Z | 2024-07-23T05:14:51Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16248 | [
"enhancement"
] | RustoMCSpit | 1 |
browser-use/browser-use | python | 640 | Empty <a> tags being sent to the LLM | ### Bug Description
Currently, empty `<a>` tags are being sent to the LLM, and this causes two issues:
1. It increases cost unnecessarily by using more tokens.
2. It confuses the LLM.
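One way to address this (my own sketch, not browser-use code; `strip_empty_anchors` is a hypothetical helper) is to prune whitespace-only `<a>` elements before the DOM is serialized for the LLM:

```python
import re

def strip_empty_anchors(html: str) -> str:
    """Remove <a> elements whose body is empty or whitespace-only.

    A crude regex-based sketch; a real implementation would walk the
    parsed DOM tree instead of using regular expressions.
    """
    pattern = re.compile(r"<a\b[^>]*>\s*</a>", re.IGNORECASE)
    prev = None
    # Repeat until stable so anchors emptied by a previous pass
    # (e.g. <a><a></a></a>) are also removed.
    while prev != html:
        prev = html
        html = pattern.sub("", html)
    return html

print(strip_empty_anchors('<div><a href="#"></a><a href="/x">home</a></div>'))
# -> <div><a href="/x">home</a></div>
```

Dropping such tags before prompt construction would shrink the token count and remove elements the model cannot act on anyway.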
### Reproduction Steps
NA
### Code Sample
```python
NA
```
### Version
0.1.36
### LLM Model
Other (specify in description)
### Operating System
Windows 11
### Relevant Log Output
```shell
logs: https://drive.google.com/file/d/1yPav2S2eWmYOyLCq0jMaEFPiDaorYC7U/view?usp=sharing
``` | closed | 2025-02-09T11:48:23Z | 2025-02-12T06:36:50Z | https://github.com/browser-use/browser-use/issues/640 | [
"bug"
] | PaperBoardOfficial | 0 |
flairNLP/flair | nlp | 2,686 | Replace print with logging in model training |
**Is your feature/enhancement request related to a problem? Please describe.**
When using the ModelTrainer, everything is printed out through a logger, except for when the learning rate is reduced. This is just a normal print to stdout. See picture:
<img width="983" alt="Screenshot 2022-03-23 at 20 22 46" src="https://user-images.githubusercontent.com/22773355/159779701-29d24393-c28e-41c2-92eb-5b2724177610.png">
**Describe the solution you'd like**
For consistency, I would replace the print call with a logging call. For this, the logger has to be passed from the `ModelTrainer` to the `AnnealOnPlateau` object, or we can just use the global `flair` logger.
If this is enough for a PR, I would assign myself to this.
Greetings,
Patrick
| closed | 2022-03-23T19:30:48Z | 2022-09-09T02:02:39Z | https://github.com/flairNLP/flair/issues/2686 | [
"wontfix"
] | HallerPatrick | 2 |
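A minimal sketch of the change proposed in the flair issue above (names are illustrative, not flair's actual code): route the message through a module-level logger instead of print:

```python
import logging

log = logging.getLogger("flair")

def anneal_on_plateau(old_lr: float, factor: float = 0.5) -> float:
    """Reduce the learning rate and report it via the logger."""
    new_lr = old_lr * factor
    # Before: print(f"reducing learning rate to {new_lr}")
    log.info("Reducing learning rate from %s to %s", old_lr, new_lr)
    return new_lr

new_lr = anneal_on_plateau(0.1)
```

Using `logging.getLogger` with lazy `%s` formatting matches the rest of the training output and lets downstream users control verbosity via handlers and levels.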
pyeve/eve | flask | 1,332 | Incorrect values deserialization | ### Description
Given a resource schema that uses `anyof` inside a `list`, when parsing a field value Eve tries all the possible deserializations; if several are possible, it can end up using an invalid deserializer.
### Example
Using this resource definition:
```python
from eve import Eve
MONGO_HOST = 'localhost'
MONGO_PORT = 27017
RESOURCE_METHODS = ['GET', 'POST', 'DELETE']
ITEM_METHODS = ['GET', 'PATCH', 'PUT', 'DELETE']
resource = {
'__description__': __doc__,
'etag_ignore_fields': ['last_executed'],
"schema": {
"test": {
"type": "list",
"schema": {
"anyof": [
{
"type": "dict",
"schema": {
"type": {"type": "string", "allowed": ["number"], "required": True},
"value": {
"type": "number",
"required": True
}
}
},
{
"type": "dict",
"schema": {
"type": {"type": "string", "allowed": ["boolean"], "required": True},
"value": {
"type": "boolean",
"required": True
}
}
},
{
"type": "dict",
"schema": {
"type": {"type": "string", "allowed": ["datetime"], "required": True},
"value": {
"type": "datetime",
"required": True
}
}
},
{
"type": "dict",
"schema": {
"type": {"type": "string", "allowed": ["string"], "required": True},
"value": {
"type": "string",
"required": True
}
}
},
{
"type": "dict",
"schema": {
"type": {"type": "string", "allowed": ["integer"], "required": True},
"value": {
"type": "integer",
"required": True
}
}
}
]
}
}
},
'url': 'test',
'resource_methods': ['GET', 'POST'],
'item_methods': ['GET', 'DELETE', 'PUT', 'PATCH']
}
settings = {'DOMAIN': {'test': resource}}
app = Eve(settings=settings)
app.run()
```
These are valid POST queries that will fail:
```python
# Value is deserialized as an integer
q = {"test": [{ "type": "boolean", "value": True }]}
# Value is deserialized as a boolean
q = { "test": [{ "type": "number", "value": 0 }] }
# Value is deserialized as a datetime
q = {"test": [{"type": "string", "value": "Tue, 02 Apr 2013 10:29:13 GMT"}]}
```
These are invalid POST queries that will not fail:
```python
# Value is deserialized as an integer
q = { "test": [{ "type": "integer", "value": True }] }
q = { "test": [{ "type": "number", "value": True }] }
```
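A likely contributing factor (my interpretation, not stated in the report) is that Python's `bool` is a subclass of `int`, so isinstance-based type dispatch alone cannot tell `True` apart from an integer:

```python
# bool is a subclass of int, so naive isinstance() checks accept
# True/False anywhere an integer is expected.
assert issubclass(bool, int)
assert isinstance(True, int)

def is_strict_int(value) -> bool:
    """Accept real integers but reject booleans, by checking bool first."""
    return isinstance(value, int) and not isinstance(value, bool)

assert is_strict_int(0)
assert not is_strict_int(True)
```

A deserializer that tries candidate types in order therefore needs explicit bool guards (and similar guards for datetime-like strings) before falling through to broader types.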
### Environment
* Python version: 2.7 and 3.7
* Eve version: 0.9.2 and master branch.
| closed | 2019-11-28T16:05:20Z | 2020-06-06T11:42:04Z | https://github.com/pyeve/eve/issues/1332 | [
"stale"
] | jordeu | 1 |
sebastianruder/NLP-progress | machine-learning | 412 | Incomparable results in the WMT 2014 EN-DE table for machine translation | I noticed that some of the results reported in the WMT 2014 EN-DE table are obtained by models trained on data from newer WMT datasets (but they report results on newstest2014), e.g, Edunov et al. (2018) uses WMT’18 and Wu et al. (2019) uses WMT’16 for training.
The few results on WMT 2014 EN-FR that I checked were fine, though. Here are the papers I checked:
| Paper | en-de data | en-fr data |
|--------------------------------------------- |---------- |---------- |
| Transformer (Vaswani et al., 2017) | WMT’2014 | WMT’2014 |
| AdvSoft + Transformer Big (Wang et al., 2019) | WMT’2014 | |
| MUSE (Zhao et al., 2019) | WMT’2014 | WMT’2014 |
| DynamicConv (Wu et al., 2019) | WMT’2016 | WMT’2014 |
| Transformer Big + BT (Edunov et al., 2018) | WMT’2018 | WMT’2014 |
| open | 2020-01-29T09:54:16Z | 2020-02-02T17:03:14Z | https://github.com/sebastianruder/NLP-progress/issues/412 | [] | rihardsk | 1 |
pydantic/pydantic | pydantic | 10,654 | Custom classes serialization schema not evaluated when specified as an `extra` field / Circular reference error when serializing custom passthrough class | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
## Update:
Please see the comment here for the current status of this issue: https://github.com/pydantic/pydantic/issues/10654#issuecomment-2423475533
tl;dr:
the bug: one needs to manually define `__pydantic_serializer__` on custom classes in addition to specifying a serialization schema in `__get_pydantic_core_schema__` when they are used as a type in `__pydantic_extra__`.
the nice-to-have: The serialization warnings are cryptic and make it hard to diagnose when you have provided an invalid serialization schema :( (related to https://github.com/pydantic/pydantic/issues/10495 )
---
### Description
I have an edge case that I swear is an actual problem and not just esoteric bug-hunting.
To support multiple array backends from a single `NDArray` type in [numpydantic](https://github.com/p2p-ld/numpydantic), I use it as an abstract passthrough class that validates the data but then returns one of potentially multiple types (numpy ndarray, dask array, proxy to an hdf5 array, etc., more info in [docs](https://numpydantic.readthedocs.io/en/latest/interfaces.html)).
This works, including roundtrip json serialization, *except* when the type is specified on the `__pydantic_extra__` field like `__pydantic_extra__: dict[str, NDArray]` . Then pydantic seems to lose the connection between the object that's passed through and the `NDArray` object, and attempts to serialize the object directly (rather than using the serialization schema created by `NDArray.__get_pydantic_core_schema__` ).
I have been trying to work around this by giving `__pydantic_serializer__` declarations to the passed through types, but i have been running up against a consistent problem with the recursion checker:
```
pydantic_core._pydantic_core.PydanticSerializationError: Error serializing to JSON: ValueError: Circular reference detected (id repeated)
```
I've dug around in that code a bit, and it seems like it's a very tight check on whether an id has been seen before, but I can't really tell where it might have been repeated. I'm not sure what a fix looks like because I can't tell if this is a bug or if I'm doing something wrong.
I fully acknowledge this is a relatively off-road, nonstandard usage of maybe-not-fully-finished APIs, so I apologize in advance. If it does look like a bug to y'all, I would be happy to help with a patch, as I am always looking for more reasons to learn about bilingual Rust/Python projects <3.
Below is a MWE that reproduces the error so you don't have to dig through all the numpydantic code :)
edit 1: More info: I can't set a Python debugger in the serialization function, so I assume (?) this happens before we actually reach it and try to call it.
edit 2: It appears it's not limited to `extra` fields; it happens for any passthrough class like this. The MWE is updated to show that, along with the other pair of cases for when the serialization function is defined on the validating class, and why it is necessary to declare the serialization function on the passed-through class in the first place.
### Example Code
When the serializer is on the passed-through class:
```Python
from pydantic import BaseModel, ConfigDict
from pydantic_core import SchemaSerializer, core_schema
def jsonize(x):
return [1,2,3]
def passthrough(x):
return x
class SerializableClass:
__pydantic_serializer__ = SchemaSerializer(
core_schema.plain_serializer_function_ser_schema(
jsonize, when_used="json"
)
)
class AnnotationClass:
def __get_pydantic_core_schema__(self, *args):
return core_schema.no_info_plain_validator_function(
passthrough
)
class MyModel(BaseModel):
real_field: AnnotationClass | None = None
__pydantic_extra__: dict[str, AnnotationClass]
model_config = ConfigDict(extra="allow")
# Fails - Circular reference
instance = MyModel(real_field=SerializableClass())
instance.model_dump_json()
# Fails - Circular Reference
instance = MyModel(extra_field=SerializableClass())
instance.model_dump_json()
```
When the serializer is on the validation class:
```python
from pydantic import BaseModel, ConfigDict
from pydantic_core import SchemaSerializer, core_schema
def jsonize(x):
return [1,2,3]
def passthrough(x):
return x
class SerializableClass:
pass
class AnnotationClass:
def __get_pydantic_core_schema__(self, *args):
return core_schema.no_info_plain_validator_function(
passthrough,
serialization=core_schema.plain_serializer_function_ser_schema(
jsonize, when_used="json"
)
)
class MyModel(BaseModel):
a: AnnotationClass | None = None
__pydantic_extra__: dict[str, AnnotationClass]
model_config = ConfigDict(extra="allow")
# Succeeds:
a = MyModel(real_field=SerializableClass())
a.model_dump_json()
# Fails - unable to serialize unknown type
instance = MyModel(extra_field=SerializableClass())
instance.model_dump_json()
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /Users/jonny/git/p2p-ld/numpydantic/.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.6 (main, Oct 2 2024, 16:53:48) [Clang 15.0.0 (clang-1500.1.0.2.5)]
platform: macOS-14.3.1-x86_64-i386-64bit
related packages: mypy-1.11.2 pydantic-settings-2.5.2 typing_extensions-4.12.2
commit: unknown
```
| open | 2024-10-18T06:57:06Z | 2024-10-19T02:04:32Z | https://github.com/pydantic/pydantic/issues/10654 | [
"bug V2",
"pending"
] | sneakers-the-rat | 2 |
yzhao062/pyod | data-science | 119 | SO_GAAL and MO_GAAL decision_function mistake | In the original paper "Generative adversarial active learning for unsupervised outlier detection", the outlier score is defined as OS(x)=1-D(x) (**Page 7, Algorithm 1**). However, in pyod's implementation, the outlier score is defined as D(x), so I hope this mistake can be revised. | open | 2019-06-26T04:43:04Z | 2019-06-27T22:48:05Z | https://github.com/yzhao062/pyod/issues/119 | [] | WangXuhongCN | 2 |
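To illustrate the difference reported in the pyod issue above (my own sketch with a toy discriminator, not pyod code): with OS(x) = 1 - D(x) as in the paper, outliers receive high scores, whereas using D(x) directly inverts the ranking:

```python
import math

def discriminator(x: float) -> float:
    """Toy stand-in for the trained discriminator D(x): the probability
    that a sample is 'real', i.e. an inlier."""
    return 1.0 / (1.0 + math.exp(-x))

def outlier_score_paper(x: float) -> float:
    # Algorithm 1 in the paper: OS(x) = 1 - D(x), so outliers score high.
    return 1.0 - discriminator(x)

def outlier_score_pyod(x: float) -> float:
    # Reported pyod behaviour: D(x) itself, which inverts the ranking.
    return discriminator(x)

inlier, outlier = 2.0, -2.0
assert outlier_score_paper(outlier) > outlier_score_paper(inlier)
assert outlier_score_pyod(outlier) < outlier_score_pyod(inlier)
```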
pytest-dev/pytest-django | pytest | 1,041 | Python 3.11 support | Are there any plans to add support for python 3.11? Or is this supported already?
If everything is already working, I'm not sure if anything needs to be done other than updating the `readme.md` and the `tox.ini` files. | closed | 2023-01-16T06:22:36Z | 2023-01-16T16:52:31Z | https://github.com/pytest-dev/pytest-django/issues/1041 | [] | jacksund | 2 |
mwaskom/seaborn | pandas | 3,379 | AttributeError: 'DataFrame' object has no attribute 'get' | It appears that seaborn somewhat supports `polars.DataFrame` objects, such as:
```
iris = pl.read_csv("data/iris.csv")
p = sns.displot(
data=iris,
x="sepal_width",
hue="species",
col="species",
height=3,
aspect=1,
alpha=1
)
```
...but not fully:
```
p = sns.catplot(
data=iris,
x="species",
y="sepal_width",
kind="box",
height=3,
aspect=1.3
)
```
The error:
```
3185 p = _CategoricalPlotter()
3186 p.require_numeric = plotter_class.require_numeric
-> 3187 p.establish_variables(x_, y_, hue, data, orient, order, hue_order)
3188 if (
3189 order is not None
3190 or (sharex and p.orient == "v")
3191 or (sharey and p.orient == "h")
3192 ):
3193 # Sync categorical axis between facets to have the same categories
3194 order = p.group_names
...
--> 532 x = data.get(x, x)
533 y = data.get(y, y)
534 hue = data.get(hue, hue)
AttributeError: 'DataFrame' object has no attribute 'get'
```
One must convert the polars dataframe to pandas, which requires extra computational overhead:
```
p = sns.catplot(
data=iris.to_pandas(),
x="species",
y="sepal_width",
kind="box",
height=3,
aspect=1.3
)
``` | closed | 2023-06-06T19:15:32Z | 2023-06-06T22:44:53Z | https://github.com/mwaskom/seaborn/issues/3379 | [] | nick-youngblut | 2 |
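A small shim for the seaborn/polars case above (my own sketch, not seaborn API; `ensure_pandas` is a hypothetical helper name) that converts only when needed, so pandas inputs pass through untouched:

```python
def ensure_pandas(data):
    """Return data.to_pandas() for objects exposing that method
    (polars, pyarrow, ...); pass everything else through unchanged."""
    to_pandas = getattr(data, "to_pandas", None)
    return to_pandas() if callable(to_pandas) else data

# Hypothetical usage with the snippet above:
# p = sns.catplot(data=ensure_pandas(iris), x="species",
#                 y="sepal_width", kind="box")
```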
pydantic/bump-pydantic | pydantic | 167 | I have a big pile of improvements to upstream | I upgraded a large private codebase to Pydantic 2, and in the process I made a bunch of improvements to bump-pydantic. See [this branch](https://github.com/camillol/bump-pydantic/tree/camillo/public), and the [list of commits](https://github.com/pydantic/bump-pydantic/compare/main...camillol:bump-pydantic:camillo/public).
Here is a list of issues this branch addresses (possibly slightly incomplete, it's lightly edited from a personal log):
- [x] `allow_mutation` in Config should be converted to `frozen` https://github.com/pydantic/bump-pydantic/pull/161
- [x] `json_encoders` is allowed again, no need for comment https://github.com/pydantic/bump-pydantic/pull/162
- [x] does not handle the `values` param in validators
- [x] when replacing validators, `classmethod` can be duplicated https://github.com/pydantic/bump-pydantic/pull/160
- [x] stops everything on the first parse error instead of continuing
- [x] `validator(always=True)` should be converted to `validate_default=True` on Field
- [x] Add the ability to migrate Ormar models (for [ormar 0.20.0](https://collerek.github.io/ormar/0.20.0b1/migration/))
- [x] Class definition scanner logic is wrong, e.g. if you have A(B) B(C) C(BaseModel) across three modules
- [x] `test_class_def_visitor.py` is commented out
- [x] should ignore `.git`
- [x] tests don’t run `ClassDefVisitor`
- [x] It thinks any nested `Config` class is a pydantic config. It should check that the parent is actually a model.
- [x] `root_validator` gets turned into a `model_validator` without arguments, which is invalid (needs `mode`).
- [x] we handled that for `root_validator`, now also need to handle `root_validator()`
- [x] `smart_union` is now the default. It should be automatically removed.
- [x] `underscore_attrs_are_private` is also now the default and should be removed.
- [x] Scanning for classes takes forever on a large repo, should use Pyre
- [x] In Pydantic 1, the type of fields could be inferred from the default value. In 2, they must have a type annotation.
- [x] Done for simple types of literal.
- [x] Added support for type inference.
- [x] Reduce the FQN for things from the current module
- [x] `BaseModel.json` is deprecated, should use `model_dump_json`.
- [x] `parse_obj` also deprecated. use `model_validate`
- [x] `construct()`, `copy()`, `dict()`, `json_schema()`, `json()`, `parse_obj()`, `update_forward_refs()`
- [x] `__fields__`, `__private_attributes__`, `__validators__`
- [x] `parse_raw` is deprecated and should be replaced with `model_validate_json`
- [x] Pydantic 2 wants TypedDict to be imported from `typing_extensions` if using Python < 3.12, because it needs `__orig_bases__` which 3.12 introduced
- [x] The validator migration does not recognize `pydantic.validator` etc.
- [x] Same for the Field migration
- [x] `pydantic.Extra.forbid` should be replaced with `"forbid"`
- [x] `skip_on_failure=True` should be removed from root_validator
- [x] Batch Pyre requests for speed
- [x] Should replace `json.loads(x.model_dump_json())` with `x.model_dump(mode="json")`.
- [x] `parse_raw_as` has been removed entirely. Use TypeAdapter.
- [x] `parse_obj_as` is not fully removed but should be replaced with TypeAdapter too.
- [x] In `root_validator`s (→ `model_validator`s) with `mode="before"`, the second argument may be an instance of the actual model instead of a dict, in which case you should probably just return it. See [example](https://docs.pydantic.dev/latest/concepts/validators/#model-validators) in docs. In fact it could be anything since you can pass anything to `model_validate` ! Add a TODO comment when migrating
- [x] `model_validator(mode="after")` needs to be an instance method, not a class method
- [x] For debugging, need a way to run bump-pydantic on a single file while still importing the class hierarchy from a full repo
- [x] Add a TODO/warning for BaseModel subclasses that override deprecated method like `dict` or `json`
- [x] If you have an Optional, in `model.dict() if model else {}` Pyre infers the first model as Optional even though None should be excluded there. work around.
- [x] If you have a field starting with `model_`, e.g. `model_name`, Pydantic 2 will complain. You need to add `protected_namespaces=()` to the ConfigDict
- [x] `model_config` cannot be used as a field name at all
- [x] we need to add a model_config if the original class had no Config
- [x] Sometimes it generates `ConfigDict` with two `extra` args
- [x] It does not handle `__root__` with a type annotation
- [x] Some migrations break if you have nested BaseModel classes due to incorrect tracking logic.
Most of these changes should be generally useful: they migrate things that were not migrated, or they fix bugs in existing migrations. A few changes are needed to enable running bump_pydantic on a large repository. A couple of things depend on improvements to LibCST which I am also upstreaming.
I sent three of these as separate PRs about two months ago, but there has been no activity. Let's find a way to coordinate on how to get these changes upstreamed. I won't need this code again, but I'd like others to be able to benefit from it. | open | 2024-06-04T22:59:14Z | 2024-06-20T11:42:35Z | https://github.com/pydantic/bump-pydantic/issues/167 | [] | camillol | 2 |
faif/python-patterns | python | 390 | Shortened URL is spam now | https://github.com/faif/python-patterns/blob/79d12755010a33a5195d5475a2c8853cda674c29/patterns/creational/factory.py#L15
Just wanted to point out that this link is now spam which tries to fire a WhatsApp message on your behalf... I'm assuming this is the URL shortening company up to some hijinks? Google recommends Bit.ly or Ow.ly as alternatives to their previous goo.gl. | closed | 2022-05-31T16:16:16Z | 2022-05-31T17:33:22Z | https://github.com/faif/python-patterns/issues/390 | [] | rachtsingh | 2 |
comfyanonymous/ComfyUI | pytorch | 6,820 | prolbme | ### Expected Behavior
fg
### Actual Behavior
fd
### Steps to Reproduce
fd
### Debug Logs
```powershell
fd
```
### Other
fd | open | 2025-02-15T16:26:25Z | 2025-02-15T16:26:25Z | https://github.com/comfyanonymous/ComfyUI/issues/6820 | [
"Potential Bug"
] | szymektm | 0 |
activeloopai/deeplake | data-science | 2,663 | [BUG] ds.visualize cannot work offline in jupyter notebook with local dataset | ### Severity
P1 - Urgent, but non-breaking
### Current Behavior
I tried ds.visualize in a Jupyter notebook with a local dataset, and it failed to visualize the dataset, like this:

It reported a failure to connect to app.activeloop.ai. Why is it necessary to be online in order to visualize a local dataset?
### Steps to Reproduce
```python
import deeplake

ds = deeplake.load('./animals_complex_deeplake')
ds.visualize()
```
### Expected/Desired Behavior
ds.visualize should work offline with a local dataset
### Python Version
3.8.18
### OS
ubuntu 20.04
### IDE
Jupyter
### Packages
deeplake 3.8.0
### Additional Context
_No response_
### Possible Solution
_No response_
### Are you willing to submit a PR?
- [ ] I'm willing to submit a PR (Thank you!) | closed | 2023-10-18T04:03:10Z | 2023-10-18T19:23:00Z | https://github.com/activeloopai/deeplake/issues/2663 | [
"bug"
] | alphabet-lgtm | 7 |
onnx/onnx | deep-learning | 6,649 | [Feature request] Can SpaceToDepth also add mode attribute? | ### System information
ONNX 1.17
### What is the problem that this feature solves?
The current SpaceToDepth op https://github.com/onnx/onnx/blob/main/docs/Operators.md#spacetodepth has no attribute to select DCR/CRD and supports only CRD in its computation.
But the DepthToSpace op https://github.com/onnx/onnx/blob/main/docs/Operators.md#depthtospace has such a mode attribute, which makes it more flexible for converting models from TensorFlow.
### Alternatives considered
_No response_
### Describe the feature
_No response_
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
_No response_
### Are you willing to contribute it (Y/N)
None
### Notes
_No response_ | open | 2025-01-22T03:33:51Z | 2025-02-20T03:56:36Z | https://github.com/onnx/onnx/issues/6649 | [
"module: spec"
] | vera121 | 0 |
hankcs/HanLP | nlp | 567 | CoreDictionary contains the word "机收", causing "手机收邮件" to be segmented as "手 机收 邮件" | ## Checklist
Please confirm the following:
* I have carefully read the following documentation and found no answer:
- [README](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and found no answer.
* I understand that the open-source community is a voluntary community of enthusiasts that assumes no responsibility or obligation. I will be polite and thank everyone who helps me.
* [x] I put an x between these brackets to confirm that all of the above is done.
## Version
The current latest version is: 1.3.4
The version I am using is: 1.3.2
## My problem
When segmenting the sentence "手机收邮件的问题", the result is "手 机收 邮件 的 问题", even after adding "手机" to CustomDictionary. I tried several segmenters (NotionalTokenizer, HanLP.segment(), HanLP.newSegment()) and all of them show this problem.
Debugging shows that CoreDictionary contains the word "机收", which causes "手机收邮件" to be segmented as "手 机收 邮件".
## Reproducing the problem
### Steps
1. First…
2. Then…
3. Next…
### Triggering code
```java
static void testSeg() {
    Segment segment = HanLP.newSegment().enableCustomDictionary(true);
    String str = "手机收邮件的问题";
    List<Term> res = segment.seg(str);
    StringBuilder sb = new StringBuilder();
    for (Term term : res) {
        sb.append(term.word).append("\t");
    }
    System.out.println(sb.toString());
}
```
### Expected output
```
手机 收 邮件 的 问题
```
### Actual output
```
手 机收 邮件 的 问题
```
## Other information
| closed | 2017-06-22T12:43:28Z | 2017-06-22T14:08:04Z | https://github.com/hankcs/HanLP/issues/567 | [
"improvement"
] | sjturan1 | 1 |
docarray/docarray | fastapi | 1,040 | v2: filter query languague | # Context
We need to implement the filter query language equivalent to docarray v1 : https://docarray.jina.ai/fundamentals/documentarray/find/#query-by-conditions
One of the goals is to keep compatibility with Jina filtering in topology: https://docs.jina.ai/concepts/flow/add-conditioning/
Under the hood, the framework that handles the operation should rely on Python operators. If we want to support a new type in the filtering language, we should overload that type's operators so that it works with the input from the query language.
At first, filtering will only work on string and numeric fields. We don't close the door to filtering on tensors, but it is not planned for the initial v2 release. One limitation is that the query must be JSON-compatible, so it cannot contain torch or numpy objects. | closed | 2023-01-19T13:30:29Z | 2023-01-27T13:38:33Z | https://github.com/docarray/docarray/issues/1040 | [] | samsja | 1 |
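A minimal sketch of the operator-overloading idea from the docarray issue above (my own illustration, not docarray code; all names are made up): comparison operators on a field proxy emit JSON-compatible, MongoDB-style filter fragments instead of booleans:

```python
class Field:
    """Proxy whose comparison operators build JSON-compatible filter
    fragments rather than returning booleans."""

    def __init__(self, name: str):
        self.name = name

    def __eq__(self, other):
        return {self.name: {"$eq": other}}

    def __gt__(self, other):
        return {self.name: {"$gt": other}}

    def __lt__(self, other):
        return {self.name: {"$lt": other}}

price = Field("price")
assert (price > 10) == {"price": {"$gt": 10}}
assert (price == "book") == {"price": {"$eq": "book"}}
```

Supporting a new field type then reduces to giving it the right operator overloads, while the emitted fragments stay plain dicts and therefore remain JSON-serializable.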