| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
axnsan12/drf-yasg | django | 720 | api/swagger/?format=openapi response 500 |

| open | 2021-06-04T09:25:46Z | 2025-03-07T12:13:02Z | https://github.com/axnsan12/drf-yasg/issues/720 | [
"triage"
] | dpreal | 2 |
liangliangyy/DjangoBlog | django | 77 | Saving a new article fails if tags are specified at the same time | The error is probably caused by writing to the two tables blog_article and blog_article_tags in the wrong order. Presumably blog_article_tags is written first, which fails because the foreign key target does not exist yet. | closed | 2018-01-12T08:28:27Z | 2018-01-14T02:43:27Z | https://github.com/liangliangyy/DjangoBlog/issues/77 | [] | xmyangz | 2 |
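The ordering constraint described above can be demonstrated with the standard library alone. The sketch below uses a hypothetical sqlite schema mirroring the two tables (not DjangoBlog's actual code): inserting the tag relation before its parent article violates the foreign key, while the reverse order succeeds — which is why, in Django terms, the parent object is typically saved before many-to-many tags are assigned.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite only enforces FKs when enabled
conn.execute("CREATE TABLE blog_article (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute(
    "CREATE TABLE blog_article_tags ("
    "article_id INTEGER REFERENCES blog_article(id), tag TEXT)"
)

wrong_order_failed = False
try:
    # Wrong order: tag relation first -> FK violation, matching the report.
    conn.execute("INSERT INTO blog_article_tags VALUES (1, 'django')")
except sqlite3.IntegrityError as exc:
    wrong_order_failed = True
    print("wrong order:", exc)

# Correct order: parent article first, then its tag rows.
conn.execute("INSERT INTO blog_article VALUES (1, 'hello')")
conn.execute("INSERT INTO blog_article_tags VALUES (1, 'django')")
print("correct order: ok")
```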
reloadware/reloadium | flask | 178 | Reloadium fails to start | ## Describe the bug
'RW_IDE_NAME': 'PyCharm 2023.3.2',
'RW_IDE_PLUGINVERSION': '1.3.4',
```Traceback (most recent call last):
File "C:\Users\Administrator\.reloadium\package\3.9\reloadium\corium\l111ll1111ll1l11Il1l1.py", line 189, in ll1l1ll11l1lllllIl1l1
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\ll111111llll1l1lIl1l1.py", line 69, in exec_module
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\ll1lll11l1l11l1lIl1l1\llllllll11l1l11lIl1l1.py", line 559, in l1l1ll1ll111111lIl1l1
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\ll1lll11l1l11l1lIl1l1\llllllll11l1l11lIl1l1.py", line 605, in ll1111l11l1l1ll1Il1l1
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\ll1lll11l1l11l1lIl1l1\llllllll11l1l11lIl1l1.py", line 116, in lllll1l111111ll1Il1l1
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\l1lll11111lll111Il1l1\ll111l1l1llll11lIl1l1.py", line 80, in visit
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\ll1lll11l1l11l1lIl1l1\llllllll11l1l11lIl1l1.py", line 122, in visit_Module
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\l1lll11111lll111Il1l1\ll111l1l1llll11lIl1l1.py", line 88, in generic_visit
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\l1lll11111lll111Il1l1\ll111l1l1llll11lIl1l1.py", line 80, in visit
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\ll1lll11l1l11l1lIl1l1\llllllll11l1l11lIl1l1.py", line 156, in visit_FunctionDef
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\ll1lll11l1l11l1lIl1l1\llllllll11l1l11lIl1l1.py", line 129, in visit_ClassDef
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\l1lll11111lll111Il1l1\ll111l1l1llll11lIl1l1.py", line 88, in generic_visit
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\l1lll11111lll111Il1l1\ll111l1l1llll11lIl1l1.py", line 80, in visit
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\ll1lll11l1l11l1lIl1l1\llllllll11l1l11lIl1l1.py", line 149, in visit_FunctionDef
File "D:\a\reloadware\reloadware\reload\package\__obfuscated__\reloadium\fast\ll1lll11l1l11l1lIl1l1\llllllll11l1l11lIl1l1.py", line 106, in ll1ll1ll111lll11Il1l1
File "<string>", line 9, in __init__
File "C:\Users\Administrator\.reloadium\package\3.9\reloadium\corium\ll1lll11l1l11l1lIl1l1\l1ll1l11111111llIl1l1.py", line 513, in __post_init__
File "C:\Users\Administrator\.reloadium\package\3.9\reloadium\corium\ll1lll11l1l11l1lIl1l1\l1ll1l11111111llIl1l1.py", line 375, in __post_init__
File "C:\Users\Administrator\.reloadium\package\3.9\reloadium\corium\ll1lll11l1l11l1lIl1l1\l1ll1l11111111llIl1l1.py", line 428, in l1l1llll1l111l1lIl1l1
File "C:\ProgramData\anaconda3\envs\xbrl\lib\tokenize.py", line 512, in _tokenize
raise IndentationError(
File "<tokenize>", line 4
_parser = XMLParser(recover=True, huge_tree=True, target=checkFileType())
IndentationError: unindent does not match any outer indentation level
(4.5398) Critical reloader Error
Reloadium experienced a fatal error and has to quit.
To see the exception run Reloadium with environmental variable RW_DEBUG=True
Please submit a github issue to let us know at https://github.com/reloadware/reloadium``` | closed | 2024-01-10T10:04:22Z | 2024-02-20T14:16:08Z | https://github.com/reloadware/reloadium/issues/178 | [] | hyabean | 2 |
RomelTorres/alpha_vantage | pandas | 190 | Add GitHub Actions for builds to run automatically | So that tests run automatically on every PR, and PRs that don't pass the tests can't be approved.
GitHub Actions is new (as of a few months ago) but looks really powerful. | closed | 2020-02-13T19:42:24Z | 2021-11-19T18:45:13Z | https://github.com/RomelTorres/alpha_vantage/issues/190 | [
"enhancement"
] | PatrickAlphaC | 1 |
lyhue1991/eat_tensorflow2_in_30_days | tensorflow | 53 | Suggest a virtual environment. | I suggest a virtual environment for this tutorial
For one thing, it decouples changes in new releases of TF from the development environment we use. It saves the authors' effort in answering TF-version-related problems and delegates them back to the TF developers.
It also saves readers the effort of figuring out missing packages. E.g., when I run 5-1, it tells me I am missing the package pillow, which is not explicitly imported.
Best
Neil | open | 2020-06-11T13:58:49Z | 2020-06-11T14:01:39Z | https://github.com/lyhue1991/eat_tensorflow2_in_30_days/issues/53 | [] | neilteng | 0 |
unionai-oss/pandera | pandas | 1,169 | Allow checks based on data types | **Is your feature request related to a problem? Please describe.**
Imagine you have this schema:
```
schema = pa.DataFrameSchema({
"a": pa.Column(int, checks=pa.Check.le(10)),
"b": pa.Column(float, checks=pa.Check.lt(-1.2)),
"c": pa.Column(str, checks=pa.Check.le(20)),
})
```
In the above schema, column `c` has the wrong check. It will still flow through the entire process and may eventually fail on the data side of validation, but in other situations the check may not fail at all and slip through the cracks.
Such checks shouldn't be allowed for an incompatible data type.
**Describe the solution you'd like**
There are 2 ways to solve this:
1. write a decorator to match the checks with respective allowed data types only. We are building it in forked branch here for `pyspark.sql` - `https://github.com/NeerajMalhotra-QB/pandera`.
2. Another (much better) solution will be to enhance `register_checks` and `register_dtype` to validate if a check should be allowed for a given type of the field.
We haven't adopted the 2nd option yet, as it requires changes in a common area (used by other frameworks), but we will look into it in a future release unless someone wants to take a stab at it first.
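For illustration, the first option — a guard that rejects check/dtype mismatches before validation runs — can be sketched in plain Python. The names below are hypothetical and not pandera's actual API:

```python
# Hypothetical registry mapping check names to the dtypes they support.
ALLOWED_DTYPES = {
    "le": {int, float},      # numeric comparisons only
    "lt": {int, float},
    "str_length": {str},
}

def dtype_guard(check_name, dtype):
    """Raise early if a check is attached to a dtype it doesn't support."""
    allowed = ALLOWED_DTYPES.get(check_name)
    if allowed is not None and dtype not in allowed:
        raise TypeError(f"check {check_name!r} is not allowed for dtype {dtype.__name__}")

dtype_guard("le", int)       # fine: column "a" in the example
try:
    dtype_guard("le", str)   # the column "c" case: rejected up front
except TypeError as exc:
    print(exc)
```

The key design point is that the mismatch surfaces at schema-definition time rather than somewhere in the data-validation pipeline.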
cc: @cosmicBboy | open | 2023-04-27T19:26:52Z | 2023-06-12T16:55:50Z | https://github.com/unionai-oss/pandera/issues/1169 | [
"enhancement"
] | NeerajMalhotra-QB | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 379 | Can't resume training from checkpoint | Hi there. I'm training on some photos of an alleyway. I've done 7k iterations in one session (loss 0.18) and 30k iterations in the next session (loss 0.09). So obviously I'd like to keep training, BUT this time I'd like to use the checkpoint saved at 30k iterations as my starting point, to save training time.
Unfortunately this gives the following error message. Can anybody help me? Much appreciated all!
PS: I'm also unsure whether --iterations 40000 will stop at the 40000th iteration (meaning it does 10k iterations this time) or whether it means to train for a further 40000 iterations, stopping at 70k.
=====================
(gaussian_splatting) C:\Users\myUserName\gaussian-splatting>python train.py -s C:\Users\myUserName\gaussian-splatting\data\AlleywayAllFootage -m ./output/AlleyWayAll_40k --start_checkpoint C:\Users\myUserName\gaussian-splatting\output\AlleyWayAll_30k\point_cloud\iteration_30000\point_cloud.ply --iterations 40000
Optimizing ./output/AlleyWayAll_40k
Output folder: ./output/AlleyWayAll_40k [24/10 11:25:45]
Tensorboard not available: not logging progress [24/10 11:25:45]
Reading camera 241/241 [24/10 11:25:46]
Loading Training Cameras [24/10 11:25:47]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K.
If this is not desired, please explicitly specify '--resolution/-r' as 1 [24/10 11:25:47]
Loading Test Cameras [24/10 11:26:07]
Number of points at initialisation : 159302 [24/10 11:26:07]
Traceback (most recent call last):
File "train.py", line 216, in <module>
training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
File "train.py", line 38, in training
(model_params, first_iter) = torch.load(checkpoint)
File "C:\Users\myUserName\AppData\Local\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\serialization.py", line 713, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "C:\Users\myUserName\AppData\Local\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\serialization.py", line 920, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: unpickling stack underflow | open | 2023-10-24T00:37:32Z | 2024-01-17T23:16:45Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/379 | [] | shokomon | 2 |
huggingface/datasets | pytorch | 7,399 | Synchronize parameters for various datasets | ### Describe the bug
[IterableDatasetDict](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.IterableDatasetDict.map) map function is missing the `desc` parameter. You can see the equivalent map function for [Dataset here](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.Dataset.map).
There might be other parameters missing - I haven't checked.
### Steps to reproduce the bug
```python
from datasets import Dataset, IterableDataset, IterableDatasetDict

ds = IterableDatasetDict({"train": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3),
                          "validate": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3)})
for d in ds["train"]:
    print(d)
ds = ds.map(lambda x: {k: v+1 for k, v in x.items()}, desc="increment")
for d in ds["train"]:
    print(d)
```
### Expected behavior
The description parameter should be available for all datasets (or none).
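Parameter drift like this can be caught mechanically by diffing the two signatures. A minimal sketch with stand-in functions (the real `map` signatures are much larger; these are illustrative only):

```python
import inspect

def missing_params(reference, other):
    """Return parameter names accepted by `reference` but not by `other`."""
    ref = set(inspect.signature(reference).parameters)
    oth = set(inspect.signature(other).parameters)
    return sorted(ref - oth)

# Stand-ins for Dataset.map and IterableDatasetDict.map (hypothetical subsets).
def dataset_map(function, batched=False, num_proc=None, desc=None):
    pass

def iterable_map(function, batched=False):
    pass

print(missing_params(dataset_map, iterable_map))  # ['desc', 'num_proc']
```

A test of this shape in the library's CI would keep the variants synchronized automatically.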
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.28.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.9.0 | open | 2025-02-14T09:15:11Z | 2025-02-19T11:50:29Z | https://github.com/huggingface/datasets/issues/7399 | [] | grofte | 2 |
benbusby/whoogle-search | flask | 277 | [BUG] Captcha request on every page | **Describe the bug**
This page is now showing every time.

**Deployment Method**
- [X] Heroku (one-click deploy) --> Europe server
- [ ] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [X] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
| closed | 2021-04-10T08:10:12Z | 2021-04-11T03:53:18Z | https://github.com/benbusby/whoogle-search/issues/277 | [
"bug"
] | federicotorrielli | 5 |
yeongpin/cursor-free-vip | automation | 100 | I cannot use Cursor: it seems something is wrong with the temporary mailbox |  | closed | 2025-02-25T08:19:48Z | 2025-03-06T04:23:34Z | https://github.com/yeongpin/cursor-free-vip/issues/100 | [
"bug"
] | kitaharam | 12 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 795 | Error: cannot access local variable 'browser' where it is not associated with a value | **Describe the bug**
Whereas locally my code works on the bare machine and in a Docker container, in Kubernetes I get some very weird errors related to "cannot access local variable 'browser' where it is not associated with a value".
**To Reproduce**
Steps to reproduce the behavior:
- I guess just run a simple scrape of a page in a pod??
**Expected behavior**
The scrape should work in Kubernetes just as it does locally on the bare machine and in a Docker container.
**Desktop (please complete the following information):**
- OS: Ubuntu. This is a Docker container running in a Kubernetes pod.
- Browser: headless browser.
**Additional context**
Here are logs:
```
[2024-11-11, 15:30:56 UTC] {pod_manager.py:418} INFO - [base] Attempt 1 failed:
[2024-11-11, 15:30:56 UTC] {pod_manager.py:418} INFO - [base] 2024-11-11 15:30:56,719 - ERROR - **Error scraping swim-spa-abdeckung.de. Error: cannot access local variable 'browser' where it is not associated with a value**
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] Future exception was never retrieved
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] future: <Future finished exception=TargetClosedError('Target page, context or browser has been closed')>
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] Traceback (most recent call last):
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] File "/usr/local/lib/python3.11/site-packages/playwright/_impl/_connection.py", line 518, in wrap_api_call
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] return await cb()
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] ^^^^^^^^^^
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] File "/usr/local/lib/python3.11/site-packages/playwright/_impl/_connection.py", line 85, in inner_send
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] callback = self._connection._send_message_to_server(
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] File "/usr/local/lib/python3.11/site-packages/playwright/_impl/_connection.py", line 322, in _send_message_to_server
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] raise self._closed_error
[2024-11-11, 15:31:05 UTC] {pod_manager.py:418} INFO - [base] playwright._impl._errors.TargetClosedError: Target page, context or browser has been closed
```
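This error usually means the browser launch itself raised an exception before the `browser` variable was ever bound, and a later cleanup path then referenced it. A minimal, hypothetical reproduction of the pattern (not ScrapeGraphAI's actual code):

```python
def scrape_page(launch_ok):
    """Reproduce the 'unbound browser' cleanup bug pattern."""
    try:
        if not launch_ok:
            # e.g. headless browser failing to start inside the pod
            raise RuntimeError("browser launch failed")
        browser = "browser-handle"
        return browser
    finally:
        try:
            print("closing", browser)  # cleanup assumes `browser` was bound
        except UnboundLocalError as exc:
            print(exc)  # "cannot access local variable 'browser' ..." on 3.11+

scrape_page(launch_ok=True)
try:
    scrape_page(launch_ok=False)
except RuntimeError:
    pass
```

In Kubernetes this often points at the underlying launch failure (missing browser dependencies, sandbox permissions, or shared-memory limits in the pod) rather than at the variable itself.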
| open | 2024-11-11T16:05:21Z | 2025-01-08T03:36:25Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/795 | [] | aleenprd | 6 |
dask/dask | pandas | 11,260 | Dask-expr - Extremely slow with using .compute | I installed the basic dask version using "pip install dask". When running, I receive a FutureWarning:
> Dask dataframe query planning is disabled because dask-expr is not installed. You can install it with 'pip install dask[dataframe]' or 'conda install dask'. This will raise in a future version.
I proceeded to install using the provided pip command. However, upon doing so, my .compute() went from under 1 second to 22 seconds (sometimes longer in later parts of the code). These are small slices/excerpts from the entire dataframe, used to build a summary of certain events and run some calculations. The slices are only about 15 rows, so I am just extracting them to pandas to get rid of any partitioning.
Code excerpt (part of a for loop) --> I split the dataframe-slice assignment and the compute step apart to figure out where the problem lies. For background, ddf is a dask dataframe that, in its final state, will consist of about 250,000 partitions, one for each CSV file read in.
```
event_a = ddf.loc[start_index:end_index]
print('Partial dataframe into event_i.', '---- Process Time: %s ----' % (datetime.now() - time_start_j))
event_i = event_a.compute()
print('Computed event_i to pandas dataframe.', '---- Process Time: %s ----' % (datetime.now() - time_start_j))
event_i['Event ID'] = eventid
print('Finished collecting Event DataFrame.', '---- Process Time: %s ----' % (datetime.now() - time_start_j))
```
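For isolating which step regresses, a small timing helper keeps the measurements uniform. This is a sketch with stand-in operations — substitute the real `ddf.loc[...]` and `.compute()` calls:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print the wall-clock time of the enclosed block."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed:.3f}s")

with timed("slice"):
    event_a = list(range(10))            # stand-in for ddf.loc[start_index:end_index]
with timed("compute"):
    event_i = [x * 2 for x in event_a]   # stand-in for event_a.compute()
```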
Output (without dask-expr):
> Starting ID 1
> Partial dataframe into event_i. ---- Process Time: 0:00:00.017307 ----
> Computed event_i to pandas dataframe. ---- Process Time: 0:00:00.133990 ----
> Finished collecting Event DataFrame. ---- Process Time: 0:00:00.134989 ----
Output (with dask-expr):
> Starting ID 1
> Partial dataframe into event_i. ---- Process Time: 0:00:00.041680 ----
> Computed event_i to pandas dataframe. ---- Process Time: 0:00:21.894825 ----
> Finished collecting Event DataFrame. ---- Process Time: 0:00:21.894825 ---- | open | 2024-07-29T18:46:35Z | 2024-07-31T09:16:55Z | https://github.com/dask/dask/issues/11260 | [
"dataframe",
"dask-expr"
] | NCSUFeNiX | 3 |
scrapy/scrapy | python | 5,981 | Document the wrong reactor problem | We need a doc section about the "wrong reactor is installed" problem and how to debug and fix/work around it. | closed | 2023-07-20T09:19:18Z | 2023-08-04T11:26:52Z | https://github.com/scrapy/scrapy/issues/5981 | [
"enhancement",
"docs",
"asyncio"
] | wRAR | 0 |
taverntesting/tavern | pytest | 214 | Parametrize absence of parameter | Hello,
We want to test the absence of a parameter in combination with the rest of the parameters, so if we have:
```
marks:
- parametrize:
key: p1
vals:
- "a"
- "b"
- parametrize:
key: optional_parameter
vals:
- "1"
-
```
The tests performed would be:
```
{"p1":"a", "optional_parameter":"1"}`
{"p1":"b", "optional_parameter":"1"}`
{"p1":"a"}
{"p1":"b"}
```
Is it possible to perform this check in tavern? What I'm seeing happening now is:
```
{"p1":"a", "optional_parameter":"1"}
{"p1":"b", "optional_parameter":"1"}
{"p1":"a", "optional_parameter":None}
{"p1":"b", "optional_parameter":None}
```
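The desired expansion is a plain Cartesian product where the empty value drops the key entirely instead of sending `None`. In Python terms (an illustration of the expected behaviour, not Tavern internals):

```python
from itertools import product

p1_vals = ["a", "b"]
optional_vals = ["1", None]  # None stands for "omit the key entirely"

cases = []
for p1, opt in product(p1_vals, optional_vals):
    case = {"p1": p1}
    if opt is not None:
        case["optional_parameter"] = opt  # only include the key when a value exists
    cases.append(case)

for case in cases:
    print(case)
```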
Thanks in advance. | closed | 2018-11-28T16:36:27Z | 2018-12-09T13:54:55Z | https://github.com/taverntesting/tavern/issues/214 | [] | elurisoto | 1 |
huggingface/peft | pytorch | 2,063 | question about training time | ### System Info
Dear authors,
I have a question regarding the training time utilizing the peft package. I tried using LoRA with a swin transformer to reduce the parameter size.
```
model = SwinModel.from_pretrained('./swin-large-patch4-window7-224-in22k').cuda()
config = LoraConfig(
r=16,
lora_alpha=16,
target_modules=["query", "value"],
lora_dropout=0.1,
bias="none",
modules_to_save=["classifier"],
)
lora_model = get_peft_model(model, config)
```
And finally, train on the lora_model.
My question is: in my tests, training 'model' and training 'lora_model' take almost the same time, even though the trainable parameter count is reduced from 200M to 1M. Is that normal, or did I do something wrong?
Thanks a lot for your reply.
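This is generally expected: LoRA shrinks the number of *trainable* parameters, not the amount of computation per step, since the forward pass (and the backward pass through the frozen trunk) still runs through all the base weights. A back-of-the-envelope illustration with hypothetical layer sizes:

```python
# Hypothetical parameter inventory: (name, parameter count, requires_grad)
layers = [
    ("swin.encoder (frozen)", 195_000_000, False),
    ("lora_A.query", 500_000, True),
    ("lora_B.value", 500_000, True),
]

trainable = sum(n for _, n, grad in layers if grad)
total = sum(n for _, n, _ in layers)
print(f"trainable: {trainable:,} / total: {total:,}")
# Every step still multiplies through all `total` weights, so wall-clock
# time barely changes; the savings show up mainly in optimizer state and
# gradient memory, not in forward/backward FLOPs.
```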
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model = SwinModel.from_pretrained('./swin-large-patch4-window7-224-in22k').cuda()

config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["query", "value"],
    lora_dropout=0.1,
    bias="none",
    modules_to_save=["classifier"],
)
lora_model = get_peft_model(model, config)
```
### Expected behavior
Please give an explanation about this situation | closed | 2024-09-12T07:02:48Z | 2024-10-25T15:03:38Z | https://github.com/huggingface/peft/issues/2063 | [] | harborsarah | 5 |
google/seq2seq | tensorflow | 146 | Nr. of steps vs. nr. of epochs | I was wondering if there is a way to define the number of epochs you want a certain model to run. After reading the tutorial, I am only aware of a way to easily change the number of training steps (which isn't the same as training for a number of epochs, unless I am mistaken).
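If only a step count is configurable, an epoch budget can be converted into a step budget from the dataset size and batch size. A sketch with hypothetical numbers:

```python
import math

def steps_for_epochs(num_examples, batch_size, epochs):
    """Translate a desired epoch count into the equivalent training-step count."""
    steps_per_epoch = math.ceil(num_examples / batch_size)
    return steps_per_epoch * epochs

print(steps_for_epochs(100_000, 32, 10))  # 31250 steps ~= 10 epochs
```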
Thanks in advance | closed | 2017-04-05T15:39:15Z | 2017-04-10T14:44:24Z | https://github.com/google/seq2seq/issues/146 | [] | ghost | 3 |
piskvorky/gensim | data-science | 3,039 | Documentation Notebooks | Hello, I was going through some documentation notebooks, and noticed that many of them ([Poincare Embeddings](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Poincare%20Tutorial.ipynb), [WikiNews](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/wikinews-bigram-en.ipynb), [Varembed](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Varembed.ipynb)) have been uploaded without being run to the end, they fail with errors halfway through.
@piskvorky (and others), would it be useful to have these updated? Or is someone from the RaRe / gensim team in charge of documentation? | open | 2021-02-02T01:27:23Z | 2021-02-03T08:52:25Z | https://github.com/piskvorky/gensim/issues/3039 | [
"bug",
"documentation",
"difficulty easy",
"impact MEDIUM",
"reach LOW"
] | bhargavvader | 5 |
ranaroussi/yfinance | pandas | 1,986 | incorrect 52 week high and low (compare with Yahoo) | ### Describe bug
incorrect 52 week high and low values (compare with Yahoo). It is the same as regular market day high and low values.
### Simple code that reproduces your problem
```
import yfinance as yf
msft = yf.Ticker("PTT.BK")
hist = msft.history(period="1mo")
msft.history_metadata
```
### Debug log
N/A
### Bad data proof
The result returned from yfinance is
`{
...,
"dataGranularity": "1d",
"exchangeName": "SET",
"exchangeTimezoneName": "Asia/Bangkok",
"fiftyTwoWeekHigh": 32.75,
"fiftyTwoWeekLow": 32.0,
"symbol": "PTT.BK",
"timezone": "ICT",
...
]
}`
But the 52 Week High and Low in Yahoo Finance (https://finance.yahoo.com/quote/PTT.BK/key-statistics/) are 36.5 and 31.25 respectively.

### `yfinance` version
0.2.40
### Python version
3.9.7
### Operating system
Windows 10 Pro | closed | 2024-07-16T09:51:23Z | 2024-07-19T03:43:54Z | https://github.com/ranaroussi/yfinance/issues/1986 | [] | leaderdevil | 4 |
google-research/bert | tensorflow | 960 | convert tf1 pretrained bert checkpoint to tf2 | I've trained a custom BERT on my own data on TF1. Now that I've updated to TF2, I'm facing the issue of converting my checkpoint to something compatible with TF2. I couldn't find any converter that works (or at least one that generates a checkpoint rather than an h5 file).
I tried converting the checkpoint to PyTorch and back to TF2, but this gave me an h5 as well. I also tried the tf2_encoder_checkpoint_converter from the Google repo; that didn't work either.
Does anyone have suggestions, or has anyone found a way to do this properly? | closed | 2019-12-11T15:22:40Z | 2024-10-16T18:49:03Z | https://github.com/google-research/bert/issues/960 | [] | fadybaly | 0 |
django-import-export/django-import-export | django | 1,760 | Failing test in different python environments | **Describe the bug**
In some environments one test fails:
```
(.venv) t14 ➜ django-import-export git:(feat/improve-docker-tests) python -V
Python 3.11.6
(.venv) t14 ➜ django-import-export git:(feat/improve-docker-tests) ./tests/manage.py test core --settings=settings -k test_import_data_error_saving_model
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
F
======================================================================
FAIL: test_import_data_error_saving_model (core.tests.test_resources.test_modelresource.test_data_handling.DataHandlingTest.test_import_data_error_saving_model)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/bmihelac/dev/django/django-import-export/tests/core/tests/test_resources/test_modelresource/test_data_handling.py", line 115, in test_import_data_error_saving_model
self.assertIn(
AssertionError: "Invalid literal for Decimal: 'foo'" not found in {'could not convert string to float', "[<class 'decimal.ConversionSyntax'>]"}
----------------------------------------------------------------------
Ran 1 test in 0.012s
FAILED (failures=1)
Destroying test database for alias 'default'...
```
There is a Python issue (https://github.com/python/cpython/issues/70396) that describes different exception messages for the decimal module used in this particular test.
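The divergence is reproducible with the standard library alone, which suggests a fix: the exception *type* is stable across builds, so the test can assert on `decimal.InvalidOperation` rather than matching the message string.

```python
import decimal

try:
    decimal.Decimal("foo")
except decimal.InvalidOperation as exc:
    # Only the message text varies between the C and pure-Python
    # implementations of the decimal module; the type does not.
    print(type(exc).__name__, exc.args)
```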
| closed | 2024-02-28T07:13:24Z | 2024-03-13T10:15:41Z | https://github.com/django-import-export/django-import-export/issues/1760 | [
"bug"
] | bmihelac | 0 |
matplotlib/matplotlib | data-visualization | 29,778 | [Bug]: interpolation_stage="data" removes too many pixels in the vicinity of nans in upsampled, interpolated images | ### Bug summary
Currently, when upsampling images with interpolation_stage="data", upsampled pixels are set to nan if *any* of the underlying data points is nan. This leads to much wider "nan-propagation" than interpolation_stage="rgba".
### Code for reproduction
```Python
from pylab import *
a = tril(arange(1., 26.).reshape(5, 5))
a[a == 0] = np.nan
axs = figure(layout="constrained").subplots(2, 2)
axs[0, 0].imshow(a, interpolation_stage="data", interpolation="none")
axs[0, 0].set_title("stage=data, interp=none")
axs[0, 1].imshow(a, interpolation_stage="data", interpolation="bilinear")
axs[0, 1].set_title("stage=data, interp=bilinear")
axs[1, 0].imshow(a, interpolation_stage="rgba", interpolation="none")
axs[1, 0].set_title("stage=rgba, interp=none")
axs[1, 1].imshow(a, interpolation_stage="rgba", interpolation="bilinear")
axs[1, 1].set_title("stage=rgba, interp=bilinear")
show()
```
### Actual outcome

Note how the blank area is much wider in the bilinear, data-stage interpolation case.
### Expected outcome
Although I'm not sure the choice is objective, I think a blurred boundary (similarly to the bottom right case) would make sense.
Implementation-wise, I suspect this arises from a similar issue as https://github.com/matplotlib/matplotlib/issues/29711#issuecomment-2729139906: it should indeed be possible to interpolate in data space even with nans if we interpret the data array as a single-channel image with an additional alpha channel (0-1, depending on whether the data is nan) and correctly weighting the data by the alpha channel (similarly to the premultiplied alpha filtering suggested in the comment). Without setting a zero weight on the nans, it becomes of course impossible to upsample pixels for which any underlying data points are nan (so setting the upsampled pixel to nan is the only reasonable choice).
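The alpha-weighting idea can be sketched in one dimension: treat validity as a weight and renormalise, so a NaN neighbour simply stops contributing instead of poisoning the result. This illustrates the proposed behaviour only — it is not matplotlib's implementation:

```python
import math

def weighted_midpoint(a, b, t=0.5):
    """Interpolate between two samples, zero-weighting NaN (invalid) samples."""
    samples = [(a, 1.0 - t), (b, t)]
    valid = [(v, w) for v, w in samples if not math.isnan(v)]
    weight = sum(w for _, w in valid)
    if weight == 0.0:
        return math.nan  # no valid neighbours at all
    return sum(v * w for v, w in valid) / weight

print(weighted_midpoint(2.0, 4.0))       # both neighbours valid
print(weighted_midpoint(2.0, math.nan))  # NaN neighbour down-weighted, not propagated
```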
### Additional information
_No response_
### Operating system
_No response_
### Matplotlib Version
3.11.0.dev525+g9f7b3dd205
### Matplotlib Backend
_No response_
### Python version
3.13
### Jupyter version
_No response_
### Installation
git checkout | open | 2025-03-19T00:06:01Z | 2025-03-21T02:34:02Z | https://github.com/matplotlib/matplotlib/issues/29778 | [
"topic: images"
] | anntzer | 1 |
ultralytics/yolov5 | deep-learning | 13,419 | How to generate the proper yolo style yaml? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Recently I've been working on something with YOLO, and I have developed my own model and trained it in this framework. When I tried to prune my model, I found that, because of the particularities of the framework, every time you need to train a model you must first have a yaml file describing the detailed structure of the network. Since pruned models have different channel counts, I have to modify the yaml file manually, as discussed in [this issue](https://github.com/ultralytics/yolov5/issues/13077#issuecomment-2479563063). So I'm now trying to design a script to automatically generate the yaml file for the network, though it can only be used for my network. I have tried a lot and failed to dump output in the same format as the original yaml. Here is my code:
```python
import yaml
import torch
import argparse
from models.common import *
from models.yolo import *
from ruamel.yaml import YAML
from ruamel.yaml.comments import CommentedSeq
def make_compact_list(data):
result = CommentedSeq(data)
result.fa.set_flow_style()
return result
def parse_model(model_dict, model):
# parse the model and generate yaml dict
for i in range(len(model.model)):
if i < 22:
part = "backbone"
else:
part = "head"
if isinstance(model.model[i], Conv):
model_dict[part].append([-1, 1, "Conv", [model.model[i].conv.out_channels, model.model[i].conv.kernel_size[0], model.model[i].conv.stride[0]]])
elif isinstance(model.model[i], DWConv):
model_dict[part].append([-1, 1, "DWConv", [model.model[i].conv.out_channels, model.model[i].conv.kernel_size[0], model.model[i].conv.stride[0]]])
elif isinstance(model.model[i], Bottleneck3):
model_dict[part].append([-1, 1, "Bottleneck3", [model.model[i].cv3.conv.out_channels, model.model[i].cv1.conv.out_channels]])
elif isinstance(model.model[i], nn.Sequential):
for j in range(len(model.model[i])):
# all Bottleneck3
model_dict[part].append([-1, 1, "Bottleneck3", [model.model[i][j].cv3.conv.out_channels, model.model[i][j].cv1.conv.out_channels]])
elif isinstance(model.model[i], Concat):
if i == 23:
model_dict[part].append([[-1, -5], 1, "Concat", [1]])
elif i == 29:
model_dict[part].append([[-1, 12], 1, "Concat", [1]])
elif i == 35:
model_dict[part].append([[-1, 7], 1, "Concat", [1]])
else:
# error
print(f"Error: Concat layer position ({i}) is wrong")
elif isinstance(model.model[i], nn.Upsample):
model_dict[part].append([-5, 1, "nn.Upsample", [None, 2, "nearest"]])
elif isinstance(model.model[i], Detect):
model_dict[part].append([[44, 38, 32], 1, "Detect", ["nc", "anchors"]])
else:
# error
print(f"Error: Layer type is not supported: {model.model[i]}")
model_dict['backbone'] = make_compact_list(model_dict['backbone'])
model_dict['head'] = make_compact_list(model_dict['head'])
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Generate yaml file for the pruned model")
parser.add_argument("--model", type=str, help="Path to the pruned model")
opt = parser.parse_args()
model = torch.load(opt.model)['model'] # Load the pruned model
print(model)
# Create the yaml file
model_dict = {}
model_dict['nc'] = 1
model_dict['depth_multiple'] = 1.0
model_dict['width_multiple'] = 1.0
anchors = [[4, 6, 7, 10, 11, 15], [16, 24, 33, 25, 26, 41], [47, 60, 83, 97, 141, 149]]
model_dict['anchors'] = make_compact_list(anchors)
model_dict['backbone'] = []
model_dict['head'] = []
parse_model(model_dict, model)
yaml_save = YAML()
yaml.PreserveAnchor = False
yaml_save.default_block_style = True
yaml_save.indent(sequence=4, offset=2)
with open("pruned_model.yaml", "w") as f:
yaml_save.dump(model_dict, f)
```
Here is what it looks like:
```yaml
nc: 1
depth_multiple: 1.0
width_multiple: 1.0
anchors: [[4, 6, 7, 10, 11, 15], [16, 24, 33, 25, 26, 41], [47, 60, 83, 97, 141, 149]]
backbone: [[-1, 1, Conv, [2, 3, 2]], [-1, 1, Conv, [2, 3, 1]], [-1, 1, Conv, [7, 1,
1]], [-1, 1, Conv, [19, 1, 1]], [-1, 1, Conv, [6, 1, 1]], [-1, 1, Bottleneck3,
[6, 34]], [-1, 1, Conv, [32, 1, 1]], [-1, 1, Conv, [32, 3, 2]], [-1, 1, Conv,
[8, 1, 1]], [-1, 1, Bottleneck3, [8, 33]], [-1, 1, Bottleneck3, [8, 44]], [
-1, 1, Conv, [38, 1, 1]], [-1, 1, Conv, [38, 3, 2]], [-1, 1, Conv, [12, 1, 1]],
[-1, 1, Bottleneck3, [12, 78]], [-1, 1, Bottleneck3, [12, 89]], [-1, 1, Bottleneck3,
[12, 88]], [-1, 1, Conv, [83, 1, 1]], [-1, 1, Conv, [83, 3, 1]], [-1, 1, Conv,
[24, 1, 1]], [-1, 1, Bottleneck3, [24, 113]], [-1, 1, Bottleneck3, [24, 132]],
[-1, 1, Conv, [115, 1, 1]], [-1, 1, Conv, [115, 3, 2]], [-1, 1, Conv, [25, 1, 1]],
[-1, 1, Bottleneck3, [25, 130]], [-1, 1, Bottleneck3, [25, 218]]]
head: [[-1, 1, Conv, [64, 1, 1]], [[-1, -5], 1, Concat, [1]], [-1, 1, Conv, [39, 1,
1]], [-1, 1, Conv, [39, 3, 1]], [-1, 1, Conv, [33, 1, 1]], [-1, 1, Conv,
[18, 1, 1]], [-5, 1, nn.Upsample, [null, 2, nearest]], [[-1, 12], 1, Concat,
[1]], [-1, 1, Conv, [19, 1, 1]], [-1, 1, Conv, [19, 3, 1]], [-1, 1, Conv, [
21, 1, 1]], [-1, 1, Conv, [18, 1, 1]], [-5, 1, nn.Upsample, [null, 2, nearest]],
[ [-1, 7], 1, Concat, [1]], [-1, 1, Conv, [13, 1, 1]], [-1, 1, Conv, [13, 3, 1]],
[-1, 1, Conv, [16, 1, 1]], [-1, 1, Conv, [18, 1, 1]], [[44, 38, 32], 1, Detect,
[nc, anchors]]]
```
And I just want each member of `backbone` and `head` to be on a single row, just like this:
```yaml
nc: 1 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [4, 6, 7, 10, 11, 15]
- [16, 24, 33, 25, 26, 41]
- [47, 60, 83, 97, 141, 149]
backbone:
# [from, number, module, args]
# args: out_channels, size, stride
[
[-1, 1, Conv, [2, 3, 2]], # 0 [batch, 8, size/2, size/2]
[-1, 1, DWConv, [2, 3, 1]], # 1 [320]
[-1, 1, Conv, [7, 1, 1 ]], # 2 [320]
[-1, 1, Conv, [19, 1, 1]], # 3 [-1, 1, DWConv, [24, 3, 2]] # 4
[-1, 1, Conv, [6, 1, 1]], # 4
[-1, 1, Bottleneck3, [6, 34]], # 5
[-1, 1, Conv, [32, 1, 1]], # 6
[-1, 1, DWConv, [32, 3, 2]], # 7 [160]
[-1, 1, Conv, [8, 1, 1]], # 8
[-1, 1, Bottleneck3, [8, 33]], # 9
[-1, 1, Bottleneck3, [8, 44]], # 10
[-1, 1, Conv, [38, 1, 1]], # 11
[-1, 1, DWConv, [38, 3, 2]], # 12 [80]
[-1, 1, Conv, [12, 1, 1]], # 13
[-1, 1, Bottleneck3, [12, 78]], # 14
[-1, 1, Bottleneck3, [12, 89]], # 15
[-1, 1, Bottleneck3, [12, 88]], # 16
[-1, 1, Conv, [83, 1, 1]], # 17
[-1, 1, DWConv, [83, 3, 1]], # 18
[-1, 1, Conv, [24, 1, 1]], # 19
[-1, 1, Bottleneck3, [24, 113]], # 20
[-1, 1, Bottleneck3, [24, 132]], # 21
[-1, 1, Conv, [115, 1, 1]], # 22 [80]
[-1, 1, DWConv, [115, 3, 2]], # 23 [80] -> [40]
[-1, 1, Conv, [25, 1, 1]], # 24
[-1, 1, Bottleneck3, [25, 130]], # 25 [batch, 40, size/16, size/16]
[-1, 1, Bottleneck3, [25, 218]], # 26 [batch, 40, size/16, size/16]
]
head: [
[-1, 1, Conv, [64, 1, 1]], # 27 [40]
[[-1, -5], 1, Concat, [1]], # 28 [batch, 224, size/16, size/16] [40] # to line 40 # changed from -4 to -5
[-1, 1, Conv, [39, 1, 1]], # 29
[-1, 1, DWConv, [39, 3, 1]], # 30
[-1, 1, Conv, [33, 1, 1]], # 31
[-1, 1, Conv, [18, 1, 1]], # 32 [batch, 18, size/8, size/8] -> [40] ###
[-5, 1, nn.Upsample, [None, 2, "nearest"]], # 33 [80]
[[-1, 12], 1, Concat, [1]], # 34 [80] ch = 272 # to line 27 # changed from 11 to 12
[-1, 1, Conv, [19, 1, 1]], # 35
[-1, 1, DWConv, [19, 3, 1]], # 36
[-1, 1, Conv, [21, 1, 1]], # 37
[-1, 1, Conv, [18, 1, 1]], # 38 [batch, 18, 160, 160] -> [80] ###
[-5, 1, nn.Upsample, [None, 2, "nearest"]], # 39 [1, 272, 320, 320] -> [160]
[[-1, 7], 1, Concat, [1]], # 40 # to line 21
[-1, 1, Conv, [13, 1, 1]], # 41
[-1, 1, DWConv, [13, 3, 1]], # 42
[-1, 1, Conv, [16, 1, 1]], # 43
[-1, 1, Conv, [18, 1, 1]], # 44 [batch, 18, 320, 320] -> [160] ###
[[44, 38, 32], 1, Detect, [nc, anchors]],
]
```
FYI, the reason why I used `ruamel.yaml` instead of `yaml` is that I had already tried `yaml` with:
```python
def dump_yaml(data, file_path):
class MyDumper(yaml.Dumper):
def increase_indent(self, flow=False, indentless=False):
return super(MyDumper, self).increase_indent(flow=flow, indentless=indentless)
with open(file_path, 'w') as f:
yaml.dump(data, f, Dumper=MyDumper, default_flow_style=None)
```
But it turns out this only works for `anchors`, since it contains only one level of nested lists instead of two:
```yaml
anchors:
- [4, 6, 7, 10, 11, 15]
- [16, 24, 33, 25, 26, 41]
- [47, 60, 83, 97, 141, 149]
backbone:
- - -1
- 1
- Conv
- [2, 3, 2]
- - -1
- 1
- Conv
- [2, 3, 1]
...
```
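For what it's worth, since the target layout is simple — one flow-style list per layer inside a block `[ ... ]` wrapper, each with a trailing `# index` comment — one pragmatic option is to emit that text directly instead of fighting the dumper. A minimal stdlib sketch (the helper names `layer_to_flow` and `dump_section` are hypothetical, not part of the YOLOv5 codebase):

```python
def layer_to_flow(layer):
    """Render one layer entry ([from, number, module, args]) as a
    single-line YAML flow sequence, e.g. [-1, 1, Conv, [2, 3, 2]]."""
    def render(v):
        if isinstance(v, list):
            return "[" + ", ".join(render(x) for x in v) + "]"
        if v is None:
            return "null"
        return str(v)  # module names stay unquoted, matching YOLO yaml style
    return render(layer)

def dump_section(name, layers):
    """Emit a section like `backbone: [ ... ]` with one layer per line."""
    lines = [f"{name}: ["]
    for i, layer in enumerate(layers):
        lines.append(f"    {layer_to_flow(layer)},  # {i}")
    lines.append("  ]")
    return "\n".join(lines)

backbone = [
    [-1, 1, "Conv", [2, 3, 2]],
    [-1, 1, "DWConv", [2, 3, 1]],
]
print(dump_section("backbone", backbone))
```

This sidesteps the dumper's line-wrapping entirely and lets you keep the `# index` comments. Alternatively, ruamel.yaml can produce the same shape by making only the inner per-layer lists flow-style (a `CommentedSeq` with `fa.set_flow_style()` per layer) while leaving the outer sequence in block style.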
### Additional
_No response_ | open | 2024-11-18T14:15:23Z | 2024-11-18T21:04:29Z | https://github.com/ultralytics/yolov5/issues/13419 | [
"question",
"detect"
] | tobymuller233 | 2 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 90 | Table engine is not reflected | I'm trying to reflect an existing table using the SQL Expression Language:
```python
from sqlalchemy import create_engine, MetaData, Table
engine = create_engine('clickhouse://default:@localhost:8123/MyDatabase')
metadata = MetaData()
MyTable = Table('MyTable', metadata, autoload=True, autoload_with=engine)
```
But the newly created Table() object does not contain any info about the table engine (the actual table's engine is ReplacingMergeTree). I tried to get the engine info with the Reflection Inspector:
```python
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.engine import reflection
engine = create_engine('clickhouse://default:@localhost:8123/MyDatabase')
insp = reflection.Inspector.from_engine(engine)
print(insp.get_table_options('MyTable'))
```
It's possible to get, e.g., the MySQL table engine this way (get_table_options() returns {'mysql_engine': 'InnoDB'}), but I can't do this for ClickHouse. Is there any way?
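One workaround, in case reflection never exposes it: ClickHouse itself reports the engine in its `system.tables` catalog, so a raw query (run through `engine.execute()` or any client) returns it. A sketch — the helper name is hypothetical, and the naive string interpolation is for illustration only:

```python
def table_engine_query(database, table):
    """Build a lookup against ClickHouse's system.tables catalog.
    NOTE: naive string interpolation -- use bound parameters in real code."""
    return (
        "SELECT engine FROM system.tables "
        f"WHERE database = '{database}' AND name = '{table}'"
    )

# e.g. engine.execute(table_engine_query("MyDatabase", "MyTable")).scalar()
# would return a string like 'ReplacingMergeTree'
print(table_engine_query("MyDatabase", "MyTable"))
```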
I'm new to SQLAlchemy and clickhouse-sqlalchemy, so please forgive if I misunderstood something) | closed | 2020-03-12T10:42:43Z | 2022-05-28T11:45:27Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/90 | [] | IlyaBel | 2 |
s3rius/FastAPI-template | graphql | 25 | Fix kubernetes configs. | Currently we have several problems in kubernetes configs.
- [x] migrator job doesn't have limits on cpu and ram;
- [x] redis env value is converted to a boolean after formatting;
- [x] wrong indent in yamls;
- [x] invalid format for CMD in Dockerfile.
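For the first item, a hedged sketch of what the missing block might look like — the container name and values here are placeholders, not the template's actual config:

```yaml
# Hypothetical resource requests/limits for the migrator Job's container.
spec:
  template:
    spec:
      containers:
        - name: migrator
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```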
If you find more, please feel free to add comments. | closed | 2021-10-01T00:22:48Z | 2021-10-02T13:06:05Z | https://github.com/s3rius/FastAPI-template/issues/25 | [] | s3rius | 4
noirbizarre/flask-restplus | api | 160 | Proper OAuth 1/2 support | Flask-RESTPlus needs to provide proper OAuth 1/2 support:
- [ ] OAuth security definition support
- [ ] Swagger-UI OAuth configuration
- [ ] Automatic parameters extraction from oauthlib/flask-oauthlib (definition + scopes)
- [ ] Postman export
| open | 2016-04-21T11:22:02Z | 2018-09-27T11:40:33Z | https://github.com/noirbizarre/flask-restplus/issues/160 | [
"enhancement"
] | noirbizarre | 2 |
ExpDev07/coronavirus-tracker-api | rest-api | 167 | Added simple Telegram BOT | I'm using this tracker as a data source for a Telegram BOT: @CovidWORLDbot
Thanks. | closed | 2020-03-24T16:42:41Z | 2020-04-19T18:09:41Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/167 | [
"user-created"
] | tonjo | 1 |
Kav-K/GPTDiscord | asyncio | 446 | [BUG] Bot not responding to mentions in reply | **Describe the bug**
Bot does not detect/respond to @ mentions in replies to itself
**To Reproduce**
Steps to reproduce the behaviour:
1. Set BOT_TAGGABLE=True in .env
2. Send a message mentioning the bot (@bot)
3. When the bot replies, click Reply on its message in Discord
4. See error
**Expected behaviour**
When the bot is mentioned (e.g. "I think today is Monday, right? @bot" or "@bot Is today Monday?"), the bot replies as expected. | closed | 2023-12-11T07:07:05Z | 2023-12-31T10:08:29Z | https://github.com/Kav-K/GPTDiscord/issues/446 | [
"bug"
] | jeffe | 3 |
pyppeteer/pyppeteer | automation | 459 | newPage() causes infinite await loop | I'm running the following code:
```
import asyncio
from pyppeteer import launch
# Get page
url = 'https://quotes.toscrape.com/'
async def main():
browser = await launch(headless=False)
page = await browser.newPage()
await page.goto(url)
await page.screenshot({"path": "example.png"})
await browser.close()
asyncio.get_event_loop().run_until_complete(main())
```
When I run it I get a blank page, and the program enters an infinite loop at the `await browser.newPage()` line; it never breaks out of the loop to get to the next line. I'm using the latest version of pyppeteer and I updated Chromium.
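One way to confirm the hang (a diagnostic sketch, not a fix — `open_page` is a hypothetical wrapper, not a pyppeteer API) is to bound the await with `asyncio.wait_for`, so it raises instead of blocking forever:

```python
import asyncio

async def open_page(browser, timeout=30):
    # If this raises asyncio.TimeoutError, the newPage() coroutine really
    # never resolves, rather than just being slow.
    return await asyncio.wait_for(browser.newPage(), timeout=timeout)
```

If it times out consistently, the hang is in the browser connection rather than in the rest of the script.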
| open | 2024-01-04T10:47:55Z | 2024-02-10T13:43:12Z | https://github.com/pyppeteer/pyppeteer/issues/459 | [] | Ollenmire | 5 |
piskvorky/gensim | machine-learning | 2,942 | Segfault when training doc2vec | #### Problem description
When attempting to train doc2vec, gensim segfaults.
#### Steps/code/corpus to reproduce
I run the code:
```
import faulthandler
import gensim
faulthandler.enable()
model = gensim.models.doc2vec.Doc2Vec(corpus_file = "yelp_tripadvisor_linesentence.txt", vector_size=250, min_count=10, epochs=40, workers = 5)
```
I get the output:
```
Fatal Python error: Segmentation fault
Current thread 0x00007f2d9effd700 (most recent call first):
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec.py", line 431 in _do_train_epoch
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py", line 172 in _worker_loop_corpusfile
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f2d9f7fe700 (most recent call first):
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec.py", line 431 in _do_train_epoch
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py", line 172 in _worker_loop_corpusfile
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f2d9ffff700 (most recent call first):
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec.py", line 431 in _do_train_epoch
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py", line 172 in _worker_loop_corpusfile
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f2da48df700 (most recent call first):
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec.py", line 431 in _do_train_epoch
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py", line 172 in _worker_loop_corpusfile
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f2da50e0700 (most recent call first):
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec.py", line 431 in _do_train_epoch
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py", line 172 in _worker_loop_corpusfile
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f3055bd1740 (most recent call first):
File "/usr/lib/python3.8/threading.py", line 302 in wait
File "/usr/lib/python3.8/queue.py", line 170 in get
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py", line 345 in _log_epoch_progress
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py", line 430 in _train_epoch_corpusfile
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py", line 554 in train
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py", line 1063 in train
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec.py", line 554 in train
File "/home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec.py", line 360 in __init__
File "reproduce_segfault.py", line 4 in <module>
Segmentation fault (core dumped)
```
When run in gdb I get:
```
Thread 36 "python3" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffd450ca700 (LWP 112905)]
0x00007fffc9347737 in saxpy_kernel_16 ()
from /home/paul/.local/lib/python3.8/site-packages/scipy/spatial/../../scipy.libs/libopenblasp-r0-085ca80a.3.9.so
```
The backtrace I get is:
```
(gdb) backtrace
#0 0x00007fffc9347737 in saxpy_kernel_16 ()
from /home/paul/.local/lib/python3.8/site-packages/scipy/spatial/../../scipy.libs/libopenblasp-r0-085ca80a.3.9.so
#1 0x00007fffc934792f in saxpy_k_ZEN ()
from /home/paul/.local/lib/python3.8/site-packages/scipy/spatial/../../scipy.libs/libopenblasp-r0-085ca80a.3.9.so
#2 0x00007fffc84402cb in saxpy_ ()
from /home/paul/.local/lib/python3.8/site-packages/scipy/spatial/../../scipy.libs/libopenblasp-r0-085ca80a.3.9.so
#3 0x00007fffa0e81782 in ?? ()
from /home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec_corpusfile.cpython-38-x86_64-linux-gnu.so
#4 0x00007fffa0e8243f in ?? ()
from /home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec_corpusfile.cpython-38-x86_64-linux-gnu.so
#5 0x00000000005f17e5 in cfunction_call_varargs (kwargs=<optimized out>, args=<optimized out>,
func=<built-in function d2v_train_epoch_dm>) at ../Objects/call.c:772
#6 PyCFunction_Call (func=<built-in function d2v_train_epoch_dm>, args=<optimized out>, kwargs=<optimized out>) at ../Objects/call.c:772
#7 0x00000000005f2406 in _PyObject_MakeTpCall (callable=<built-in function d2v_train_epoch_dm>, args=<optimized out>,
nargs=<optimized out>, keywords=<optimized out>) at ../Include/internal/pycore_pyerrors.h:13
#8 0x000000000056cfd4 in _PyObject_Vectorcall (kwnames=('doctag_vectors', 'doctag_locks'), nargsf=<optimized out>,
args=<optimized out>, callable=<built-in function d2v_train_epoch_dm>) at ../Include/cpython/abstract.h:125
#9 _PyObject_Vectorcall (kwnames=('doctag_vectors', 'doctag_locks'), nargsf=<optimized out>, args=<optimized out>,
callable=<built-in function d2v_train_epoch_dm>) at ../Include/cpython/abstract.h:115
#10 call_function (kwnames=('doctag_vectors', 'doctag_locks'), oparg=<optimized out>, pp_stack=<synthetic pointer>,
tstate=<optimized out>) at ../Python/ceval.c:4987
#11 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3515
#12 0x0000000000565972 in PyEval_EvalFrameEx (throwflag=0,
f=Frame 0x7ffd34001710, for file /home/paul/.local/lib/python3.8/site-packages/gensim/models/doc2vec.py, line 1199, in _do_train_epoch (self=<Doc2Vec(sg=0, alpha=<float at remote 0x7fffa19bd8d0>, window=5, random=<numpy.random.mtrand.RandomState at remote 0x7fffa09f4640>, min_alpha=<float at remote 0x7fffa19bd910>, hs=0, negative=5, ns_exponent=<float at remote 0x7fffa19bd930>, cbow_mean=1, compute_loss=False, running_training_loss=<float at remote 0x7fffa19bd870>, min_alpha_yet_reached=<float at remote 0x7fffa19bd8d0>, corpus_count=9643078, corpus_total_words=1099181249, vector_size=250, workers=5, epochs=40, train_count=0, total_train_time=0, batch_words=10000, model_trimmed_--Type <RE--Type <RET> for more, q to quit, c to contin--Type <RET> for more, q to quit, c to continue without--Type <RET> for more, q --Type <RET> fo--Typ--Typ--Type <RET> for more, q to quit, c to continue without paging--
post_training=False, callbacks=(), load=<function at remote 0x7ffff412f310>, dbow_words=0, dm_concat=0, dm_tag_count=1, vocabulary=<Doc2VecVocab(max_vocab_size=None, min_count=10, sample=<float at remote 0x7fffa12f9670>, sorted_vocab=True, null_word=0, cum_table=<numpy.ndarray at remote 0x7fffa0998c10>, raw_vocab={}, max_final_vocab=None,...(truncated)) at ../Python/ceval.c:741
#13 _PyEval_EvalCodeWithName (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>,
kwnames=<optimized out>, kwargs=0x7fffa0a01d68, kwcount=<optimized out>, kwstep=1, defs=0x7fffa12ad0f8, defcount=4, kwdefs=0x0, closure=0x0, name='_do_train_epoch',
qualname='Doc2Vec._do_train_epoch') at ../Python/ceval.c:4298
#14 0x00000000005f1d85 in _PyFunction_Vectorcall (func=<optimized out>, stack=0x7fffa0a01d30, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:435
#15 0x0000000000507729 in _PyObject_Vectorcall (
kwnames=('total_examples', 'total_words', 'start_alpha', 'end_alpha', 'word_count', 'compute_loss', 'offsets', 'start_doctags'), nargsf=7, args=0x7fffa0a01d30,
callable=<function at remote 0x7fffa12ba3a0>) at ../Include/cpython/abstract.h:127
#16 method_vectorcall (method=<optimized out>, args=<optimized out>, nargsf=<optimized out>,
kwnames=('total_examples', 'total_words', 'start_alpha', 'end_alpha', 'word_count', 'compute_loss', 'offsets', 'start_doctags')) at ../Objects/classobject.c:89
#17 0x00000000005f1107 in PyVectorcall_Call (kwargs=<optimized out>, tuple=<optimized out>, callable=<method at remote 0x7fff9d8c7600>) at ../Objects/call.c:199
#18 PyObject_Call (callable=<method at remote 0x7fff9d8c7600>, args=<optimized out>, kwargs=<optimized out>) at ../Objects/call.c:227
#19 0x0000000000568e1f in do_call_core (
kwdict={'total_examples': 9643078, 'total_words': 1099181249, 'start_alpha': <float at remote 0x7fffa19bd8d0>, 'end_alpha': <float at remote 0x7fffa19bd910>, 'word_count': 0, 'compute_loss': False, 'offsets': [0, 1186792315, 2373585688, 3560378663, 4747171525], 'start_doctags': [0, 1296629, 3235497, 5388103, 7520884]},
callargs=('yelp_tripadvisor_linesentence.txt', 4, <float at remote 0x7fff98254b10>, <gensim.models.word2vec_corpusfile.CythonVocab at remote 0x7fff9dde38e0>, (<numpy.ndarray at remote 0x7fff9de26170>, <numpy.ndarray at remote 0x7fff9de26990>), 0), func=<method at remote 0x7fff9d8c7600>, tstate=<optimized out>)
at ../Python/ceval.c:5034
#20 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3559
#21 0x0000000000565972 in PyEval_EvalFrameEx (throwflag=0,
f=Frame 0x7ffd34000ba0, for file /home/paul/.local/lib/python3.8/site-packages/gensim/models/base_any2vec.py, line 940, in _worker_loop_corpusfile (self=<Doc2Vec(sg=0, alpha=<float at remote 0x7fffa19bd8d0>, window=5, random=<numpy.random.mtrand.RandomState at remote 0x7fffa09f4640>, min_alpha=<float at remote 0x7fffa19bd910>, hs=0, negative=5, ns_exponent=<float at remote 0x7fffa19bd930>, cbow_mean=1, compute_loss=False, running_training_loss=<float at remote 0x7fffa19bd870>, min_alpha_yet_reached=<float at remote 0x7fffa19bd8d0>, corpus_count=9643078, corpus_total_words=1099181249, vector_size=250, workers=5, epochs=40, train_count=0, total_train_time=0, batch_words=10000, model_trimmed_post_training=False, callbacks=(), load=<function at remote 0x7ffff412f310>, dbow_words=0, dm_concat=0, dm_tag_count=1, vocabulary=<Doc2VecVocab(max_vocab_size=None, min_count=10, sample=<float at remote 0x7fffa12f9670>, sorted_vocab=True, null_word=0, cum_table=<numpy.ndarray at remote 0x7fffa0998c10>, raw_vocab={}, max_final...(truncated)) at ../Python/ceval.c:741
#22 _PyEval_EvalCodeWithName (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, args=<optimized out>, argcount=<optimized out>,
kwnames=<optimized out>, kwargs=0x7fffa0a01ce0, kwcount=<optimized out>, kwstep=1, defs=0x7fffa178ba58, defcount=3, kwdefs=0x0, closure=0x0,
name='_worker_loop_corpusfile', qualname='BaseAny2VecModel._worker_loop_corpusfile') at ../Python/ceval.c:4298
#23 0x00000000005f1d85 in _PyFunction_Vectorcall (func=<optimized out>, stack=0x7fffa0a01cb0, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:435
#24 0x0000000000507729 in _PyObject_Vectorcall (
kwnames=('start_alpha', 'end_alpha', 'word_count', 'compute_loss', 'offsets', 'start_doctags', 'cur_epoch', 'total_examples', 'total_words'), nargsf=6,
args=0x7fffa0a01cb0, callable=<function at remote 0x7fffa17970d0>) at ../Include/cpython/abstract.h:127
#25 method_vectorcall (method=<optimized out>, args=<optimized out>, nargsf=<optimized out>,
kwnames=('start_alpha', 'end_alpha', 'word_count', 'compute_loss', 'offsets', 'start_doctags', 'cur_epoch', 'total_examples', 'total_words'))
at ../Objects/classobject.c:89
#26 0x00000000005f1107 in PyVectorcall_Call (kwargs=<optimized out>, tuple=<optimized out>, callable=<method at remote 0x7fff9d84cdc0>) at ../Objects/call.c:199
#27 PyObject_Call (callable=<method at remote 0x7fff9d84cdc0>, args=<optimized out>, kwargs=<optimized out>) at ../Objects/call.c:227
#28 0x0000000000568e1f in do_call_core (
kwdict={'start_alpha': <float at remote 0x7fffa19bd8d0>, 'end_alpha': <float at remote 0x7fffa19bd910>, 'word_count': 0, 'compute_loss': False, 'offsets': [0, 1186792315, 2373585688, 3560378663, 4747171525], 'start_doctags': [0, 1296629, 3235497, 5388103, 7520884], 'cur_epoch': 0, 'total_examples': 9643078, 'total_words': 1099181249},
callargs=('yelp_tripadvisor_linesentence.txt', 4, <float at remote 0x7fff98254b10>, <gensim.models.word2vec_corpusfile.CythonVocab at remote 0x7fff9dde38e0>, <Queue(maxsize=0, queue=<collections.deque at remote 0x7fff9dde3d00>, mutex=<_thread.lock at remote 0x7fff98710420>, not_empty=<Condition(_lock=<_thread.lock at remote 0x7fff98710420>, acquire=<built-in method acquire of _thread.lock object at remote 0x7fff98710420>, release=<built-in method release of _thread.lock object at remote 0x7fff98710420>, _waiters=<collections.deque at remote 0x7fff9dde3ca0>) at remote 0x7fff98710460>, not_full=<Condition(_lock=<_thread.lock at remote 0x7fff98710420>, acquire=<built-in method acquire of _thread.lock object at remote 0x7fff98710420>, release=<built-in method release of _thread.lock object at remote 0x7fff98710420>, _waiters=<collections.deque at remote 0x7fff9dde3c40>) at remote 0x7fff987104c0>, all_tasks_done=<Condition(_lock=<_thread.lock at remote 0x7fff98710420>, acquire=<built-in method acquire of _thre--Type <RET> for more, q to quit, c to continue without paging--
ad.lock objec...(truncated), func=<method at remote 0x7fff9d84cdc0>, tstate=<optimized out>) at ../Python/ceval.c:5034
#29 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3559
#30 0x00000000005f1b8b in PyEval_EvalFrameEx (throwflag=0,
f=Frame 0x7fff9f3ee740, for file /usr/lib/python3.8/threading.py, line 870, in run (self=<Thread(_target=<method at remote 0x7fff9d84cdc0>, _name='Thread-5', _args=('yelp_tripadvisor_linesentence.txt', 4, <float at remote 0x7fff98254b10>, <gensim.models.word2vec_corpusfile.CythonVocab at remote 0x7fff9dde38e0>, <Queue(maxsize=0, queue=<collections.deque at remote 0x7fff9dde3d00>, mutex=<_thread.lock at remote 0x7fff98710420>, not_empty=<Condition(_lock=<_thread.lock at remote 0x7fff98710420>, acquire=<built-in method acquire of _thread.lock object at remote 0x7fff98710420>, release=<built-in method release of _thread.lock object at remote 0x7fff98710420>, _waiters=<collections.deque at remote 0x7fff9dde3ca0>) at remote 0x7fff98710460>, not_full=<Condition(_lock=<_thread.lock at remote 0x7fff98710420>, acquire=<built-in method acquire of _thread.lock object at remote 0x7fff98710420>, release=<built-in method release of _thread.lock object at remote 0x7fff98710420>, _waiters=<collections.deque at remote 0x7fff9dd...(truncated)) at ../Python/ceval.c:741
#31 function_code_fastcall (globals=<optimized out>, nargs=<optimized out>, args=<optimized out>, co=<optimized out>) at ../Objects/call.c:283
#32 _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:410
#33 0x00000000005677c7 in _PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x7fff9f35c7b8, callable=<function at remote 0x7ffff732e9d0>)
at ../Include/cpython/abstract.h:127
#34 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xac0530) at ../Python/ceval.c:4987
#35 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3486
#36 0x00000000005f1b8b in PyEval_EvalFrameEx (throwflag=0,
f=Frame 0x7fff9f35c640, for file /usr/lib/python3.8/threading.py, line 932, in _bootstrap_inner (self=<Thread(_target=<method at remote 0x7fff9d84cdc0>, _name='Thread-5', _args=('yelp_tripadvisor_linesentence.txt', 4, <float at remote 0x7fff98254b10>, <gensim.models.word2vec_corpusfile.CythonVocab at remote 0x7fff9dde38e0>, <Queue(maxsize=0, queue=<collections.deque at remote 0x7fff9dde3d00>, mutex=<_thread.lock at remote 0x7fff98710420>, not_empty=<Condition(_lock=<_thread.lock at remote 0x7fff98710420>, acquire=<built-in method acquire of _thread.lock object at remote 0x7fff98710420>, release=<built-in method release of _thread.lock object at remote 0x7fff98710420>, _waiters=<collections.deque at remote 0x7fff9dde3ca0>) at remote 0x7fff98710460>, not_full=<Condition(_lock=<_thread.lock at remote 0x7fff98710420>, acquire=<built-in method acquire of _thread.lock object at remote 0x7fff98710420>, release=<built-in method release of _thread.lock object at remote 0x7fff98710420>, _waiters=<collections.deque at rem...(truncated)) at ../Python/ceval.c:741
#37 function_code_fastcall (globals=<optimized out>, nargs=<optimized out>, args=<optimized out>, co=<optimized out>) at ../Objects/call.c:283
#38 _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:410
#39 0x00000000005677c7 in _PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x7fff9f3ee6f8, callable=<function at remote 0x7ffff732eca0>)
at ../Include/cpython/abstract.h:127
#40 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0xac0530) at ../Python/ceval.c:4987
#41 _PyEval_EvalFrameDefault (f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3486
#42 0x00000000005f1b8b in PyEval_EvalFrameEx (throwflag=0,
f=Frame 0x7fff9f3ee580, for file /usr/lib/python3.8/threading.py, line 890, in _bootstrap (self=<Thread(_target=<method at remote 0x7fff9d84cdc0>, _name='Thread-5', _args=('yelp_tripadvisor_linesentence.txt', 4, <float at remote 0x7fff98254b10>, <gensim.models.word2vec_corpusfile.CythonVocab at remote 0x7fff9dde38e0>, <Queue(maxsize=0, queue=<collections.deque at remote 0x7fff9dde3d00>, mutex=<_thread.lock at remote 0x7fff98710420>, not_empty=<Condition(_lock=<_thread.lock at remote 0x7fff98710420>, acquire=<built-in method acquire of _thread.lock object at remote 0x7fff98710420>, release=<built-in method release of _thread.lock object at remote 0x7fff98710420>, _waiters=<collections.deque at remote 0x7fff9dde3ca0>) at remote 0x7fff98710460>, not_full=<Condition(_lock=<_thread.lock at remote 0x7fff98710420>, acquire=<built-in method acquire of _thread.lock object at remote 0x7fff98710420>, release=<built-in method release of _thread.lock object at remote 0x7fff98710420>, _waiters=<collections.deque at remote 0x...(truncated)) at ../Python/ceval.c:741
#43 function_code_fastcall (globals=<optimized out>, nargs=<optimized out>, args=<optimized out>, co=<optimized out>) at ../Objects/call.c:283
#44 _PyFunction_Vectorcall (func=<optimized out>, stack=<optimized out>, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/call.c:410
#45 0x000000000050722c in _PyObject_Vectorcall (kwnames=<optimized out>, nargsf=<optimized out>, args=<optimized out>, callable=<optimized out>)
at ../Include/cpython/abstract.h:127
#46 method_vectorcall (method=<optimized out>, args=0x7ffff7634058, nargsf=<optimized out>, kwnames=<optimized out>) at ../Objects/classobject.c:89
#47 0x00000000005f1107 in PyVectorcall_Call (kwargs=<optimized out>, tuple=<optimized out>, callable=<method at remote 0x7fff9d8c7540>) at ../Objects/call.c:199
#48 PyObject_Call (callable=<method at remote 0x7fff9d8c7540>, args=<optimized out>, kwargs=<optimized out>) at ../Objects/call.c:227
#49 0x000000000064fb98 in t_bootstrap (boot_raw=boot_raw@entry=0x7fff9f33a150) at ../Modules/_threadmodule.c:1002
#50 0x000000000066ee14 in pythread_wrapper (arg=<optimized out>) at ../Python/thread_pthread.h:237
#51 0x00007ffff7d96609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#52 0x00007ffff7ed2103 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
```
I can provide the corpus upon request.
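Not a confirmed fix, but since the crash is inside OpenBLAS's `saxpy` kernel, a common first experiment is to rule out a BLAS-threading interaction by pinning OpenBLAS to one thread before anything imports numpy/scipy/gensim:

```python
import os

# Must run before numpy/scipy/gensim are first imported, otherwise the
# OpenBLAS thread pool is already initialized.
os.environ["OPENBLAS_NUM_THREADS"] = "1"

# ... then import gensim and run the training exactly as above.
```

If the segfault disappears with a single BLAS thread, that narrows the problem to the OpenBLAS/threading combination rather than gensim's Cython code.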
#### Versions
```
Linux-5.4.0-45-generic-x86_64-with-glibc2.29
Python 3.8.2 (default, Jul 16 2020, 14:00:26)
[GCC 9.3.0]
Bits 64
NumPy 1.19.2
SciPy 1.5.2
gensim 3.8.3
FAST_VERSION 1
```
| closed | 2020-09-10T22:59:30Z | 2020-09-15T15:06:46Z | https://github.com/piskvorky/gensim/issues/2942 | [] | Paul-E | 5 |
coqui-ai/TTS | deep-learning | 4,017 | VITS model gives bad results (training an italian tts model) | ### Describe the bug
Hi everyone. I'm new to the world of ML, so I'm not used to training AI models...
I really want to create my own TTS model using Coqui's VITS trainer, so I've done a lot of research about it. I configured some dataset parameters and configuration functions and then started training. For the training I used almost 10 hours of audio spoken in Italian. After training I tried the model, but the result is not bad, it's FAIRLY bad... The model doesn't even "speak" a language. Here is an example sentence:
`"input_text": "Oh, finalmente sei arrivato fin qui. Non è affatto comune che un semplice essere umano riesca a penetrare così profondamente nella mia dimora. Scarlet Devil Mansion non è un posto per i deboli di cuore, lo sapevi?"` (English: "Oh, you finally made it all the way here. It's not at all common for a mere human to get this deep into my home. The Scarlet Devil Mansion is no place for the faint of heart, you know?")
(I do not recommend listening to the audio at full volume.)
https://github.com/user-attachments/assets/b4039119-2666-455f-8ed7-6a0b05179f8f
The voice in the audio actually comes from an RVC model. I imported the model into a program that first runs TTS and then applies the weights of an RVC model to the generated audio. It's not an RVC problem, because I used this program with the same RVC and other TTS models (mostly in English and one in Italian) and they work well, especially the English ones.
### To Reproduce
Here's my configuration:
Dataset config:
```python
output_path = "/content/gdrive/MyDrive/tts"

dataset_config = BaseDatasetConfig(
    formatter="ljspeech",
    meta_file_train="test.txt",
    path=os.path.join(output_path, "Dataset/"),
    language="it",
)
```
Dataset format:
```
wav_file|text|text
imalavoglia_00_verga_f000053|Milano, diciannove gennaio mille ottocento ottantuno.|Milano, diciannove gennaio mille ottocento ottantuno.
```
Audio:
```python
audio_config = VitsAudioConfig(
    sample_rate=22050,
    win_length=1024,
    hop_length=256,
    num_mels=80,
    mel_fmin=0,
    mel_fmax=None,
)
```
Characters:
```python
character_config = CharactersConfig(
    characters_class="TTS.tts.models.vits.VitsCharacters",
    characters="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz1234567890àèìòùÀÈÌÒÙáéíóúÁÉÍÓÚî",
    punctuations=" !,.?-'",
    pad="<PAD>",
    eos="<EOS>",
    bos="<BOS>",
    blank="<BLNK>",
)
```
General config:
```python
config = VitsConfig(
    audio=audio_config,
    characters=character_config,
    run_name="vits_vctk",
    batch_size=16,
    eval_batch_size=4,
    num_loader_workers=4,
    num_eval_loader_workers=4,
    run_eval=True,
    test_delay_epochs=0,
    epochs=10,
    text_cleaner="multilingual_cleaners",
    use_phonemes=False,
    phoneme_language="it",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    compute_input_seq_cache=True,
    print_step=25,
    print_eval=False,
    save_best_after=1000,
    save_checkpoints=True,
    save_all_best=True,
    mixed_precision=True,
    max_text_len=250,
    output_path=output_path,
    datasets=[dataset_config],
    cudnn_benchmark=False,
    test_sentences=[
        "Qualcosa non va? Mi dispiace, hai voglia di parlarne a riguardo?",
        "Il mio nome è Remilia Scarlet. come posso aiutarti oggi?",
    ],
)
```
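One thing worth ruling out (my own assumption, not confirmed against the Coqui tokenizer internals) is transcript coverage: characters in `test.txt` that fall outside the `characters` and `punctuations` sets above cannot be represented by the model, and my understanding is they are skipped during tokenization. A quick sanity check over the metadata:

```python
# Characters and punctuation copied from the CharactersConfig above.
CHARACTERS = ("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
              "1234567890àèìòùÀÈÌÒÙáéíóúÁÉÍÓÚî")
PUNCTUATIONS = " !,.?-'"

def out_of_vocab(metadata_lines):
    """Return every transcript character not covered by the config."""
    allowed = set(CHARACTERS) | set(PUNCTUATIONS)
    unknown = set()
    for line in metadata_lines:
        # ljspeech format: wav_file|text|normalized_text
        for text in line.rstrip("\n").split("|")[1:]:
            unknown |= set(text) - allowed
    return unknown

# ';' is not in the configured punctuation set, so it is flagged.
print(out_of_vocab(["clip|Perché no; davvero!|Perché no; davvero!"]))  # {';'}
```

Running this over the real `test.txt` would tell me whether any Italian punctuation or accented letters slipped past the configured vocabulary.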
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
- TTS version: 0.22.0
- Python version: 3.10.9
- OS: Windows
- CUDA version: 11.8
- GPU: GTX 1650 with 4GB of VRAM
All the libraries were installed via pip command
```
### Additional context
Additionally, after a few days I tried using espeak phonemes, but the `trainer.fit()` function gets stuck at the beginning with this output:
> > EPOCH: 0/10
--> /content/gdrive/MyDrive/tts/vits_vctk-October-09-2024_08+23PM-0000000
> DataLoader initialization
| > Tokenizer:
| > add_blank: True
| > use_eos_bos: False
| > use_phonemes: True
| > phonemizer:
| > phoneme language: it
| > phoneme backend: espeak
| > Number of instances : 5798
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:557: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
> TRAINING (2024-10-09 20:23:45)
| > Preprocessing samples
| > Max text length: 167
| > Min text length: 12
| > Avg text length: 82.22473266643671
|
| > Max audio length: 183618.0
| > Min audio length: 24483.0
| > Avg audio length: 82634.87443946188
| > Num. instances discarded samples: 0
| > Batch group size: 0.
/usr/local/lib/python3.10/dist-packages/torch/functional.py:666: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at ../aten/src/ATen/native/SpectralOps.cpp:873.)
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined] | closed | 2024-10-09T20:29:06Z | 2024-12-28T11:58:24Z | https://github.com/coqui-ai/TTS/issues/4017 | [
"bug",
"wontfix"
] | iDavide | 6 |
GibbsConsulting/django-plotly-dash | plotly | 105 | Using Bootstraps grid within a dash app | I'm attempting to place a formatted dash application into my django site. For the rest of my web application I'm using bootstrap 4.1.3 via CDN.
`<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">`
I figured that if I use `{%plotly_direct name="SimpleExample" %}` to embed my app, I should be able to use Bootstrap's grid system to position the various parts of my application.
Here I have two divs, a graph and a dropdown menu, each meant to take up half the space:
```
app.layout = html.Div(children=[
html.Div([
html.Div([
dcc.Graph(
id='graph_one',
style={"height": "90%", "width": "98%"},
config={'modeBarButtonsToRemove':['sendDataToCloud','zoom2d','pan2d','select2d','lasso2d','toggleSpikelines',
'zoomIn2d','zoomOut2d','autoScale2d','resetScale2d','hoverClosestCartesian','hoverCompareCartesian'],
'displaylogo':False})],className='col-md-6')
,
html.Div([
dcc.Dropdown(
options=[
{'label': 'New York City', 'value': 'NYC'},
{'label': 'Montréal', 'value': 'MTL'},
{'label': 'San Francisco', 'value': 'SF'}
],
value='MTL'
)],className='col-md-6')
],className='row',)
])
```
This is the result: it seems the height of the entire div is being set really small by something.

I then removed the div with className='row' from my code and got the following result:
```
app.layout = html.Div(children=[
html.Div([
html.Div([
dcc.Graph(
id='graph_one',
style={"height": "90%", "width": "98%"},
config={'modeBarButtonsToRemove':['sendDataToCloud','zoom2d','pan2d','select2d','lasso2d','toggleSpikelines',
'zoomIn2d','zoomOut2d','autoScale2d','resetScale2d','hoverClosestCartesian','hoverCompareCartesian'],
'displaylogo':False})],className='col-md-6')
,
html.Div([
dcc.Dropdown(
options=[
{'label': 'New York City', 'value': 'NYC'},
{'label': 'Montréal', 'value': 'MTL'},
{'label': 'San Francisco', 'value': 'SF'}
],
value='MTL'
)],className='col-md-6')
],)
])
```

This time the graph shows up completely, but the requirement for the components to sit next to each other is ignored.
Then I thought it might just need to be wrapped in a 'row' class within the django template:
```
<div class='row'>
{%plotly_direct name="SimpleExample" %}
</div>
```
When I do this, this is the result:

I'm just starting out with django-plotly-dash, so I'm not sure if this is an actual issue or if I'm just not following best practices here.
If it's not possible to use the Bootstrap grid to position components of my Dash app, how would I go about doing that? | closed | 2019-01-11T01:14:18Z | 2019-07-29T18:12:45Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/105 | [] | vantaka2 | 7 |
d2l-ai/d2l-en | data-science | 1,738 | Typo in 5.2.2.1. Built-in Initialization | The TensorFlow tab for the final block in this section of 5.2.2.1 has a typo in the initialization.

The text says the constant initialization is 42 while the code uses a constant of 1. | closed | 2021-04-27T00:10:46Z | 2021-04-27T01:46:49Z | https://github.com/d2l-ai/d2l-en/issues/1738 | [] | bgreenawald | 0 |
zalandoresearch/fashion-mnist | computer-vision | 50 | benchmark: update on GRU+SVM with Dropout | Hey @hanxiao, it's me again. I saw an update to the dataset regarding duplicate samples, so I did another training run using my GRU+SVM (with Dropout) model (from #8) on the updated dataset. Here's the result:
```
Epoch : 0 completed out of 100, loss : 316.9036560058594, accuracy : 0.734375
Epoch : 1 completed out of 100, loss : 201.2646026611328, accuracy : 0.83984375
Epoch : 2 completed out of 100, loss : 253.3709259033203, accuracy : 0.796875
Epoch : 3 completed out of 100, loss : 257.7744140625, accuracy : 0.8359375
Epoch : 4 completed out of 100, loss : 179.52682495117188, accuracy : 0.8671875
Epoch : 5 completed out of 100, loss : 224.97421264648438, accuracy : 0.83984375
Epoch : 6 completed out of 100, loss : 212.19381713867188, accuracy : 0.859375
Epoch : 7 completed out of 100, loss : 200.80978393554688, accuracy : 0.859375
Epoch : 8 completed out of 100, loss : 187.77052307128906, accuracy : 0.85546875
Epoch : 9 completed out of 100, loss : 190.96389770507812, accuracy : 0.86328125
Epoch : 10 completed out of 100, loss : 185.72314453125, accuracy : 0.85546875
Epoch : 11 completed out of 100, loss : 189.3765411376953, accuracy : 0.8515625
Epoch : 12 completed out of 100, loss : 130.086669921875, accuracy : 0.89453125
Epoch : 13 completed out of 100, loss : 151.38232421875, accuracy : 0.8828125
Epoch : 14 completed out of 100, loss : 159.71595764160156, accuracy : 0.88671875
Epoch : 15 completed out of 100, loss : 218.80592346191406, accuracy : 0.84375
Epoch : 16 completed out of 100, loss : 131.5895233154297, accuracy : 0.9140625
Epoch : 17 completed out of 100, loss : 162.96995544433594, accuracy : 0.8671875
Epoch : 18 completed out of 100, loss : 155.52630615234375, accuracy : 0.890625
Epoch : 19 completed out of 100, loss : 159.76901245117188, accuracy : 0.88671875
Epoch : 20 completed out of 100, loss : 137.74642944335938, accuracy : 0.890625
Epoch : 21 completed out of 100, loss : 162.48875427246094, accuracy : 0.890625
Epoch : 22 completed out of 100, loss : 179.6526336669922, accuracy : 0.8828125
Epoch : 23 completed out of 100, loss : 127.58981323242188, accuracy : 0.8984375
Epoch : 24 completed out of 100, loss : 185.6982421875, accuracy : 0.8671875
Epoch : 25 completed out of 100, loss : 159.8983612060547, accuracy : 0.8828125
Epoch : 26 completed out of 100, loss : 160.69525146484375, accuracy : 0.89453125
Epoch : 27 completed out of 100, loss : 173.42813110351562, accuracy : 0.859375
Epoch : 28 completed out of 100, loss : 166.0702667236328, accuracy : 0.87890625
Epoch : 29 completed out of 100, loss : 157.59085083007812, accuracy : 0.87109375
Epoch : 30 completed out of 100, loss : 127.72993469238281, accuracy : 0.9140625
Epoch : 31 completed out of 100, loss : 136.65415954589844, accuracy : 0.90234375
Epoch : 32 completed out of 100, loss : 172.4806365966797, accuracy : 0.8515625
Epoch : 33 completed out of 100, loss : 139.81488037109375, accuracy : 0.8984375
Epoch : 34 completed out of 100, loss : 144.55099487304688, accuracy : 0.85546875
Epoch : 35 completed out of 100, loss : 122.90949249267578, accuracy : 0.8984375
Epoch : 36 completed out of 100, loss : 150.0441131591797, accuracy : 0.890625
Epoch : 37 completed out of 100, loss : 153.2085723876953, accuracy : 0.88671875
Epoch : 38 completed out of 100, loss : 143.91455078125, accuracy : 0.8984375
Epoch : 39 completed out of 100, loss : 117.63712310791016, accuracy : 0.91796875
Epoch : 40 completed out of 100, loss : 93.80998229980469, accuracy : 0.92578125
Epoch : 41 completed out of 100, loss : 136.52537536621094, accuracy : 0.87109375
Epoch : 42 completed out of 100, loss : 137.24530029296875, accuracy : 0.90625
Epoch : 43 completed out of 100, loss : 108.73893737792969, accuracy : 0.921875
Epoch : 44 completed out of 100, loss : 106.48686218261719, accuracy : 0.9296875
Epoch : 45 completed out of 100, loss : 104.41219329833984, accuracy : 0.92578125
Epoch : 46 completed out of 100, loss : 101.19454956054688, accuracy : 0.94140625
Epoch : 47 completed out of 100, loss : 127.536376953125, accuracy : 0.91015625
Epoch : 48 completed out of 100, loss : 109.94172668457031, accuracy : 0.9296875
Epoch : 49 completed out of 100, loss : 85.25288391113281, accuracy : 0.94140625
Epoch : 50 completed out of 100, loss : 112.01800537109375, accuracy : 0.91796875
Epoch : 51 completed out of 100, loss : 107.6760482788086, accuracy : 0.91015625
Epoch : 52 completed out of 100, loss : 121.9848403930664, accuracy : 0.921875
Epoch : 53 completed out of 100, loss : 101.01953887939453, accuracy : 0.9375
Epoch : 54 completed out of 100, loss : 69.95838165283203, accuracy : 0.94921875
Epoch : 55 completed out of 100, loss : 119.3257827758789, accuracy : 0.91796875
Epoch : 56 completed out of 100, loss : 102.73481750488281, accuracy : 0.921875
Epoch : 57 completed out of 100, loss : 89.11821746826172, accuracy : 0.94921875
Epoch : 58 completed out of 100, loss : 110.71992492675781, accuracy : 0.9140625
Epoch : 59 completed out of 100, loss : 105.85194396972656, accuracy : 0.9375
Epoch : 60 completed out of 100, loss : 114.6805648803711, accuracy : 0.921875
Epoch : 61 completed out of 100, loss : 99.33323669433594, accuracy : 0.92578125
Epoch : 62 completed out of 100, loss : 128.26809692382812, accuracy : 0.90625
Epoch : 63 completed out of 100, loss : 117.59638214111328, accuracy : 0.9140625
Epoch : 64 completed out of 100, loss : 86.27313995361328, accuracy : 0.9453125
Epoch : 65 completed out of 100, loss : 114.16581726074219, accuracy : 0.92578125
Epoch : 66 completed out of 100, loss : 102.78227233886719, accuracy : 0.94921875
Epoch : 67 completed out of 100, loss : 88.23193359375, accuracy : 0.9375
Epoch : 68 completed out of 100, loss : 60.24769592285156, accuracy : 0.953125
Epoch : 69 completed out of 100, loss : 97.67103576660156, accuracy : 0.94140625
Epoch : 70 completed out of 100, loss : 86.58494567871094, accuracy : 0.91796875
Epoch : 71 completed out of 100, loss : 98.33272552490234, accuracy : 0.921875
Epoch : 72 completed out of 100, loss : 77.44849395751953, accuracy : 0.94921875
Epoch : 73 completed out of 100, loss : 114.52888488769531, accuracy : 0.9296875
Epoch : 74 completed out of 100, loss : 94.6647720336914, accuracy : 0.9453125
Epoch : 75 completed out of 100, loss : 106.62199401855469, accuracy : 0.921875
Epoch : 76 completed out of 100, loss : 116.0970230102539, accuracy : 0.91015625
Epoch : 77 completed out of 100, loss : 78.5435791015625, accuracy : 0.953125
Epoch : 78 completed out of 100, loss : 125.43787384033203, accuracy : 0.91796875
Epoch : 79 completed out of 100, loss : 112.84344482421875, accuracy : 0.9296875
Epoch : 80 completed out of 100, loss : 65.7440185546875, accuracy : 0.95703125
Epoch : 81 completed out of 100, loss : 115.66653442382812, accuracy : 0.91796875
Epoch : 82 completed out of 100, loss : 76.14566040039062, accuracy : 0.9375
Epoch : 83 completed out of 100, loss : 72.91943359375, accuracy : 0.95703125
Epoch : 84 completed out of 100, loss : 56.55884552001953, accuracy : 0.95703125
Epoch : 85 completed out of 100, loss : 87.09599304199219, accuracy : 0.93359375
Epoch : 86 completed out of 100, loss : 80.97771453857422, accuracy : 0.93359375
Epoch : 87 completed out of 100, loss : 94.14187622070312, accuracy : 0.9453125
Epoch : 88 completed out of 100, loss : 80.44708251953125, accuracy : 0.94140625
Epoch : 89 completed out of 100, loss : 52.18363952636719, accuracy : 0.96875
Epoch : 90 completed out of 100, loss : 93.15214538574219, accuracy : 0.9296875
Epoch : 91 completed out of 100, loss : 97.51387023925781, accuracy : 0.9296875
Epoch : 92 completed out of 100, loss : 82.44243621826172, accuracy : 0.9375
Epoch : 93 completed out of 100, loss : 60.52445983886719, accuracy : 0.96484375
Epoch : 94 completed out of 100, loss : 57.100406646728516, accuracy : 0.96484375
Epoch : 95 completed out of 100, loss : 89.62207794189453, accuracy : 0.94140625
Epoch : 96 completed out of 100, loss : 86.14447784423828, accuracy : 0.9375
Epoch : 97 completed out of 100, loss : 75.90823364257812, accuracy : 0.953125
Epoch : 98 completed out of 100, loss : 65.80587768554688, accuracy : 0.9609375
Epoch : 99 completed out of 100, loss : 114.98580169677734, accuracy : 0.92578125
Accuracy : 0.897300124168396
```
The hyper-parameters used were as follows:
```
BATCH_SIZE = 256
CELL_SIZE = 256
DROPOUT_P_KEEP = 0.85
EPOCHS = 100
LEARNING_RATE = 1e-3
SVM_C = 1
```
Trained using `tf.train.AdamOptimizer()`, with `tf.nn.dynamic_rnn()`. The source may still be found [here](https://gist.githubusercontent.com/AFAgarap/92c1c4a5dd771999b0201ec0e7edfee0/raw/58dbe7cd8b0d83e4386cd6896766113b1a9af096/gru_svm_zalando_dropout.py).
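For readers who do not want to dig through the gist: the SVM head replaces softmax cross-entropy with a squared-hinge (L2-SVM) objective on the GRU's final output. The sketch below restates that objective in NumPy (variable names and the exact reductions are mine, not copied from the gist):

```python
import numpy as np

def l2_svm_loss(logits, labels_pm1, weights, svm_c=1.0):
    """Squared-hinge (L2-SVM) loss over a batch.

    logits:     (batch, classes) raw scores from the GRU's output layer
    labels_pm1: (batch, classes) one-hot labels recoded to {-1, +1}
    weights:    output-layer weight matrix, L2-regularized
    """
    margins = np.maximum(0.0, 1.0 - labels_pm1 * logits)
    hinge = np.mean(np.sum(margins ** 2, axis=1))
    reg = 0.5 * np.sum(weights ** 2)
    return reg + svm_c * hinge

# A perfectly separated example incurs only the regularization term.
logits = np.array([[2.0, -2.0]])
labels = np.array([[1.0, -1.0]])
w = np.zeros((4, 2))
print(l2_svm_loss(logits, labels, w))  # 0.0
```

With `SVM_C = 1`, something of this shape is roughly what the Adam optimizer minimizes in the run above.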
The graph from TensorBoard, tracking the training (accuracy at the top, loss at the bottom):

The improved accuracy may not be too much, but I suppose it's still a considerable difference, i.e. ~85.5% v. ~89.7%. | closed | 2017-09-01T14:57:58Z | 2017-09-01T15:24:35Z | https://github.com/zalandoresearch/fashion-mnist/issues/50 | [] | AFAgarap | 2 |
healthchecks/healthchecks | django | 772 | API providing project information | Hi,
At the moment there is no way, using the API, to get information about the project if only the `API key` is known.
It would be great to have access to basic project information, like:
- Project Name
- Team Access
Best
m42e | closed | 2023-01-07T07:38:05Z | 2023-08-04T07:05:05Z | https://github.com/healthchecks/healthchecks/issues/772 | [
"feature"
] | m42e | 2 |
scikit-learn/scikit-learn | machine-learning | 30,397 | Unknown TypeError after updating to 1.5.2 | ### Describe the bug
I am not sure if this is an update bug or a compatibility issue between an older Python version and scikit-learn.
### Steps/Code to Reproduce
```
from sklearn.model_selection import train_test_split
```
### Expected Results
no output
### Actual Results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[13], line 4
2 import pandas as pd
3 from transformers import GPTNeoForCausalLM, GPT2Tokenizer
----> 4 from sklearn.model_selection import train_test_split
5 from sklearn.metrics import f1_score
6 import torch
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/sklearn/__init__.py:84
70 # We are not importing the rest of scikit-learn during the build
71 # process, as it may not be compiled yet
72 else:
(...)
78 # later is linked to the OpenMP runtime to make it possible to introspect
79 # it and importing it first would fail if the OpenMP dll cannot be found.
80 from . import (
81 __check_build, # noqa: F401
82 _distributor_init, # noqa: F401
83 )
---> 84 from .base import clone
85 from .utils._show_versions import show_versions
87 __all__ = [
88 "calibration",
89 "cluster",
(...)
130 "show_versions",
131 ]
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/sklearn/base.py:19
17 from ._config import config_context, get_config
18 from .exceptions import InconsistentVersionWarning
---> 19 from .utils._estimator_html_repr import _HTMLDocumentationLinkMixin, estimator_html_repr
20 from .utils._metadata_requests import _MetadataRequester, _routing_enabled
21 from .utils._param_validation import validate_parameter_constraints
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/sklearn/utils/__init__.py:11
9 from . import _joblib, metadata_routing
10 from ._bunch import Bunch
---> 11 from ._chunking import gen_batches, gen_even_slices
12 from ._estimator_html_repr import estimator_html_repr
14 # Make _safe_indexing importable from here for backward compat as this particular
15 # helper is considered semi-private and typically very useful for third-party
16 # libraries that want to comply with scikit-learn's estimator API. In particular,
17 # _safe_indexing was included in our public API documentation despite the leading
18 # `_` in its name.
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/sklearn/utils/_chunking.py:8
5 import numpy as np
7 from .._config import get_config
----> 8 from ._param_validation import Interval, validate_params
11 def chunk_generator(gen, chunksize):
12 """Chunk generator, ``gen`` into lists of length ``chunksize``. The last
13 chunk may have a length less than ``chunksize``."""
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/sklearn/utils/_param_validation.py:14
11 from scipy.sparse import csr_matrix, issparse
13 from .._config import config_context, get_config
---> 14 from .validation import _is_arraylike_not_scalar
17 class InvalidParameterError(ValueError, TypeError):
18 """Custom exception to be raised when the parameter of a class/method/function
19 does not have a valid type or value.
20 """
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/sklearn/utils/validation.py:26
24 from .. import get_config as _get_config
25 from ..exceptions import DataConversionWarning, NotFittedError, PositiveSpectrumWarning
---> 26 from ..utils._array_api import _asarray_with_order, _is_numpy_namespace, get_namespace
27 from ..utils.fixes import ComplexWarning, _preserve_dia_indices_dtype
28 from ._isfinite import FiniteStatus, cy_isfinite
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/sklearn/utils/_array_api.py:11
8 import scipy.special as special
10 from .._config import get_config
---> 11 from .fixes import parse_version
13 _NUMPY_NAMESPACE_NAMES = {"numpy", "array_api_compat.numpy"}
16 def yield_namespaces(include_numpy_namespaces=True):
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/sklearn/utils/fixes.py:21
19 import scipy
20 import scipy.sparse.linalg
---> 21 import scipy.stats
23 try:
24 import pandas as pd
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/scipy/stats/__init__.py:606
1 """
2 .. _statsrefmanual:
3
(...)
601
602 """ # noqa: E501
604 from ._warnings_errors import (ConstantInputWarning, NearConstantInputWarning,
605 DegenerateDataWarning, FitError)
--> 606 from ._stats_py import *
607 from ._variation import variation
608 from .distributions import *
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/scipy/stats/_stats_py.py:49
47 import scipy.special as special
48 from scipy import linalg
---> 49 from . import distributions
50 from . import _mstats_basic as mstats_basic
51 from ._stats_mstats_common import (_find_repeats, linregress, theilslopes,
52 siegelslopes)
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/scipy/stats/distributions.py:11
8 from ._distn_infrastructure import (rv_discrete, rv_continuous, rv_frozen) # noqa: F401
10 from . import _continuous_distns
---> 11 from . import _discrete_distns
13 from ._continuous_distns import * # noqa: F403
14 from ._levy_stable import levy_stable
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/scipy/stats/_discrete_distns.py:10
8 from scipy.special import entr, logsumexp, betaln, gammaln as gamln, zeta
9 from scipy._lib._util import _lazywhere, rng_integers
---> 10 from scipy.interpolate import interp1d
12 from numpy import floor, ceil, log, exp, sqrt, log1p, expm1, tanh, cosh, sinh
14 import numpy as np
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/scipy/interpolate/__init__.py:167
1 """
2 ========================================
3 Interpolation (:mod:`scipy.interpolate`)
(...)
165 (should not be used in new code).
166 """
--> 167 from ._interpolate import *
168 from ._fitpack_py import *
170 # New interface to fitpack library:
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/scipy/interpolate/_interpolate.py:14
11 from scipy._lib._util import copy_if_needed
12 from scipy.special import comb
---> 14 from . import _fitpack_py
15 from . import dfitpack
16 from ._polyint import _Interpolator1D
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/scipy/interpolate/_fitpack_py.py:8
5 import numpy as np
7 # These are in the API for fitpack even if not used in fitpack.py itself.
----> 8 from ._fitpack_impl import bisplrep, bisplev, dblint # noqa: F401
9 from . import _fitpack_impl as _impl
10 from ._bsplines import BSpline
File /opt/homebrew/anaconda3/envs/p39/lib/python3.9/site-packages/scipy/interpolate/_fitpack_impl.py:103
52 _iermess = {
53 0: ["The spline has a residual sum of squares fp such that "
54 "abs(fp-s)/s<=0.001", None],
(...)
68 'unknown': ["An error occurred", TypeError]
69 }
71 _iermess2 = {
72 0: ["The spline has a residual sum of squares fp such that "
73 "abs(fp-s)/s<=0.001", None],
(...)
99 'unknown': ["An error occurred", TypeError]
100 }
102 _parcur_cache = {'t': array([], float), 'wrk': array([], float),
--> 103 'iwrk': array([], dfitpack_int), 'u': array([], float),
104 'ub': 0, 'ue': 1}
107 def splprep(x, w=None, u=None, ub=None, ue=None, k=3, task=0, s=None, t=None,
108 full_output=0, nest=None, per=0, quiet=1):
109 # see the docstring of `_fitpack_py/splprep`
110 if task <= 0:
TypeError:
```
### Versions
```shell
Same error occurred.
```
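Since the Versions section could not be produced (the same error occurs when importing), the installed versions can still be read without importing the packages, via `importlib.metadata` (this only reads package metadata, so it cannot trigger the failing SciPy import):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Map each distribution name to its installed version, or None."""
    out = {}
    for pkg in packages:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            out[pkg] = None
    return out

print(installed_versions(("scikit-learn", "scipy", "numpy")))
```

A NumPy/SciPy binary mismatch is one possible cause of a bare `TypeError` inside `scipy.interpolate._fitpack_impl`, so comparing these versions against SciPy's supported NumPy range would be my first check (a hypothesis rather than a confirmed diagnosis).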
| closed | 2024-12-03T10:12:46Z | 2024-12-13T14:09:23Z | https://github.com/scikit-learn/scikit-learn/issues/30397 | [] | krishpy99 | 4 |
LibreTranslate/LibreTranslate | api | 746 | Chinese (zh) is not available as a target language from English (en) | Chinese (zh) is not available as a target language from English (en)
```shell
curl -X 'POST' \
  'http://localhost:5000/translate' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'q=hello&source=en&target=zh&format=text&alternatives=3&api_key=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
```

This returns the following 400 response (the error message translates to "Simplified Chinese (zh) cannot be used as a target language for English (en)"):

```json
{
    "error": "簡體中文(zh)不能作為 英語 (en)的目標語言"
}
```
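To check which models the server actually has installed, the `GET /languages` endpoint lists the reachable target codes for each source language. A small helper for inspecting that payload (the sample data below is illustrative, not taken from my server):

```python
import json

def targets_for(languages, source_code):
    """Given the parsed JSON from GET /languages, return the set of
    target codes reachable from source_code."""
    for lang in languages:
        if lang["code"] == source_code:
            return set(lang.get("targets", []))
    return set()

# Fetch the real payload with: curl http://localhost:5000/languages
sample = json.loads('[{"code": "en", "name": "English", "targets": ["es", "fr"]}]')
print("zh" in targets_for(sample, "en"))  # False -> no en->zh model installed
```

If "zh" is missing from the targets of "en" on the server, the 400 above is expected until the en->zh model is installed.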
| open | 2025-02-22T03:36:05Z | 2025-02-26T03:29:02Z | https://github.com/LibreTranslate/LibreTranslate/issues/746 | [
"possible bug"
] | joulong | 1 |
wkentaro/labelme | deep-learning | 1,459 | When i create ai polygon met the error | ### Provide environment information
python --version: 3.8; labelme --version: 5.5
2024-06-18 13:49:28,599 [INFO ] __init__:get_config:67- Loading config file from:
2024-06-18 13:49:36,430 [DEBUG ] canvas:initializeAiModel:139- Initializing AI model: 'EfficientSam (accuracy)'
/home/anaconda3/envs/labelme/lib/python3.8/site-packages/gdown/cached_download.py:102: FutureWarning: md5 is deprecated in favor of hash. Please use hash='md5:xxx...' instead.
### What OS are you using?
ubuntu20.04
### Describe the Bug
The AI polygon can't be used.
### Expected Behavior
_No response_
### To Reproduce
_No response_ | open | 2024-06-18T05:52:43Z | 2024-06-18T05:52:43Z | https://github.com/wkentaro/labelme/issues/1459 | [
"issue::bug"
] | Jll0716 | 0 |
sgl-project/sglang | pytorch | 4,417 | [Bug] MTP and cuda graph stuck at initialization on 2 h100 nodes | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
Hello.
I am trying to run DeepSeek-R1 on an sglang cluster of 2 nodes with 8 x H100 GPUs each.
```
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-R1 --tp 16 --dist-init-addr 172.16.1.68:5000 --nnodes 2 --node-rank 0 --trust-remote-code --host 0.0.0.0 --enable-cache-report --enable-metrics --watchdog-timeout=3000 \
--speculative-algorithm EAGLE --speculative-draft-model-path lmsys/DeepSeek-R1-NextN --speculative-num-steps 1 --speculative-eagle-topk 1 --speculative-num-draft-tokens 2 \
--reasoning-parser deepseek-r1 \
```
I tried with `--attention-backend triton` or `--enable-flashinfer-mla`, and either way it gets stuck here:
```
[2025-03-14 07:50:56 TP4] Capture cuda graph begin. This can take up to several minutes. avail mem=15.99 GB
[2025-03-14 07:50:56 TP3] Capture cuda graph begin. This can take up to several minutes. avail mem=15.99 GB
[2025-03-14 07:50:56 TP0] Capture cuda graph begin. This can take up to several minutes. avail mem=15.99 GB
[2025-03-14 07:50:56 TP7] Capture cuda graph begin. This can take up to several minutes. avail mem=17.10 GB
[2025-03-14 07:50:56 TP2] Capture cuda graph begin. This can take up to several minutes. avail mem=15.99 GB
Capturing batches (avail_mem=15.93 GB):   0%| | 0/32 [00:00<?, ?it/s]
[2025-03-14 07:50:56 TP5] Capture cuda graph begin. This can take up to several minutes. avail mem=15.97 GB
[2025-03-14 07:50:56 TP6] Capture cuda graph begin. This can take up to several minutes. avail mem=17.08 GB
[2025-03-14 07:50:56 TP1] Capture cuda graph begin. This can take up to several minutes. avail mem=15.99 GB
```
This is the full log
```
INFO 03-14 06:54:54 __init__.py:190] Automatically detected platform cuda.
[2025-03-14 06:54:58] server_args=ServerArgs(model_path='deepseek-ai/DeepSeek-R1', tokenizer_path='deepseek-ai/DeepSeek-R1', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', trust_remote_code=True, dtype='auto', kv_cache_dtype='auto', quantization=None, quantization_param_path=None, context_length=None, device='cuda', served_model_name='deepseek-ai/DeepSeek-R1', chat_template=None, is_embedding=False, revision=None, host='0.0.0.0', port=30000, mem_fraction_static=0.79, max_running_requests=32, max_total_tokens=None, chunked_prefill_size=8192, max_prefill_tokens=16384, schedule_policy='fcfs', schedule_conservativeness=1.0, cpu_offload_gb=0, page_size=1, tp_size=16, stream_interval=1, stream_output=False, random_seed=1070836707, constrained_json_whitespace_pattern=None, watchdog_timeout=3000.0, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, log_level='info', log_level_http=None, log_requests=False, log_requests_level=0, show_time_cost=False, enable_metrics=True, decode_log_interval=40, api_key=None, file_storage_path='sglang_storage', enable_cache_report=True, reasoning_parser='deepseek-r1', dp_size=1, load_balance_method='round_robin', ep_size=1, dist_init_addr='172.16.1.68:5000', nnodes=2, node_rank=0, json_model_override_args='{}', lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend='flashinfer', sampling_backend='flashinfer', grammar_backend='outlines', speculative_algorithm='EAGLE', speculative_draft_model_path='lmsys/DeepSeek-R1-NextN', speculative_num_steps=1, speculative_eagle_topk=1, speculative_num_draft_tokens=2, speculative_accept_threshold_single=1.0, speculative_accept_threshold_acc=1.0, speculative_token_map=None, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, disable_cuda_graph=False, disable_cuda_graph_padding=True, 
enable_nccl_nvls=False, disable_outlines_disk_cache=False, disable_custom_all_reduce=False, disable_mla=False, disable_overlap_schedule=True, enable_mixed_chunk=False, enable_dp_attention=False, enable_ep_moe=False, enable_torch_compile=False, torch_compile_max_bs=32, cuda_graph_max_bs=160, cuda_graph_bs=None, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, tool_call_parser=None, enable_hierarchical_cache=False, enable_flashinfer_mla=True, flashinfer_mla_disable_ragged=False, warmups=None, debug_tensor_dump_output_folder=None, debug_tensor_dump_input_file=None, debug_tensor_dump_inject=False)
A new version of the following files was downloaded from https://huggingface.co/deepseek-ai/DeepSeek-R1:
- configuration_deepseek.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
INFO 03-14 06:55:02 __init__.py:190] Automatically detected platform cuda.
INFO 03-14 06:55:02 __init__.py:190] Automatically detected platform cuda.
INFO 03-14 06:55:03 __init__.py:190] Automatically detected platform cuda.
INFO 03-14 06:55:04 __init__.py:190] Automatically detected platform cuda.
INFO 03-14 06:55:04 __init__.py:190] Automatically detected platform cuda.
INFO 03-14 06:55:04 __init__.py:190] Automatically detected platform cuda.
INFO 03-14 06:55:04 __init__.py:190] Automatically detected platform cuda.
INFO 03-14 06:55:04 __init__.py:190] Automatically detected platform cuda.
INFO 03-14 06:55:04 __init__.py:190] Automatically detected platform cuda.
[2025-03-14 06:55:09 TP1] MLA optimization is turned on. Use flashinfer mla backend.
[2025-03-14 06:55:09 TP1] Init torch distributed begin.
[2025-03-14 06:55:09 TP5] MLA optimization is turned on. Use flashinfer mla backend.
[2025-03-14 06:55:09 TP5] Init torch distributed begin.
[2025-03-14 06:55:09 TP2] MLA optimization is turned on. Use flashinfer mla backend.
[2025-03-14 06:55:09 TP2] Init torch distributed begin.
[2025-03-14 06:55:10 TP3] MLA optimization is turned on. Use flashinfer mla backend.
[2025-03-14 06:55:10 TP3] Init torch distributed begin.
[2025-03-14 06:55:10 TP7] MLA optimization is turned on. Use flashinfer mla backend.
[2025-03-14 06:55:10 TP7] Init torch distributed begin.
[2025-03-14 06:55:10 TP0] MLA optimization is turned on. Use flashinfer mla backend.
[2025-03-14 06:55:10 TP0] Init torch distributed begin.
[2025-03-14 06:55:10 TP4] MLA optimization is turned on. Use flashinfer mla backend.
[2025-03-14 06:55:10 TP4] Init torch distributed begin.
[2025-03-14 06:55:11 TP6] MLA optimization is turned on. Use flashinfer mla backend.
[2025-03-14 06:55:11 TP6] Init torch distributed begin.
[2025-03-14 06:55:20 TP0] sglang is using nccl==2.21.5
[2025-03-14 06:55:20 TP2] sglang is using nccl==2.21.5
[2025-03-14 06:55:20 TP3] sglang is using nccl==2.21.5
[2025-03-14 06:55:20 TP1] sglang is using nccl==2.21.5
[2025-03-14 06:55:20 TP6] sglang is using nccl==2.21.5
[2025-03-14 06:55:20 TP5] sglang is using nccl==2.21.5
[2025-03-14 06:55:20 TP7] sglang is using nccl==2.21.5
[2025-03-14 06:55:20 TP4] sglang is using nccl==2.21.5
NCCL version 2.21.5+cuda12.4
[2025-03-14 06:55:23 TP1] Custom allreduce is disabled because this process group spans across nodes.
[2025-03-14 06:55:23 TP2] Custom allreduce is disabled because this process group spans across nodes.
[2025-03-14 06:55:23 TP3] Custom allreduce is disabled because this process group spans across nodes.
[2025-03-14 06:55:23 TP4] Custom allreduce is disabled because this process group spans across nodes.
[2025-03-14 06:55:23 TP5] Custom allreduce is disabled because this process group spans across nodes.
[2025-03-14 06:55:23 TP6] Custom allreduce is disabled because this process group spans across nodes.
[2025-03-14 06:55:23 TP7] Custom allreduce is disabled because this process group spans across nodes.
[2025-03-14 06:55:23 TP0] Custom allreduce is disabled because this process group spans across nodes.
[2025-03-14 06:55:24 TP0] Init torch distributed ends. mem usage=2.69 GB
[2025-03-14 06:55:24 TP1] Init torch distributed ends. mem usage=2.69 GB
[2025-03-14 06:55:24 TP1] Load weight begin. avail mem=75.98 GB
[2025-03-14 06:55:24 TP5] Init torch distributed ends. mem usage=2.71 GB
[2025-03-14 06:55:24 TP0] Load weight begin. avail mem=75.98 GB
[2025-03-14 06:55:24 TP2] Init torch distributed ends. mem usage=2.69 GB
[2025-03-14 06:55:24 TP6] Init torch distributed ends. mem usage=1.60 GB
[2025-03-14 06:55:24 TP7] Init torch distributed ends. mem usage=1.58 GB
[2025-03-14 06:55:24 TP5] Load weight begin. avail mem=75.97 GB
[2025-03-14 06:55:24 TP3] Init torch distributed ends. mem usage=2.69 GB
[2025-03-14 06:55:24 TP6] Load weight begin. avail mem=77.08 GB
[2025-03-14 06:55:24 TP2] Load weight begin. avail mem=75.98 GB
[2025-03-14 06:55:24 TP7] Load weight begin. avail mem=77.09 GB
[2025-03-14 06:55:24 TP3] Load weight begin. avail mem=75.98 GB
[2025-03-14 06:55:24 TP4] Init torch distributed ends. mem usage=2.69 GB
[2025-03-14 06:55:24 TP4] Load weight begin. avail mem=75.98 GB
[2025-03-14 06:55:24 TP1] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-14 06:55:24 TP5] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-14 06:55:24 TP3] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-14 06:55:24 TP7] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-14 06:55:24 TP4] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-14 06:55:24 TP6] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-14 06:55:24 TP2] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-14 06:55:24 TP0] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-14 06:55:24 TP1] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
[2025-03-14 06:55:24 TP5] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
[2025-03-14 06:55:24 TP7] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
[2025-03-14 06:55:24 TP4] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
[2025-03-14 06:55:24 TP6] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
[2025-03-14 06:55:24 TP2] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
[2025-03-14 06:55:24 TP3] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
[2025-03-14 06:55:24 TP0] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
[2025-03-14 06:55:25 TP5] Using model weights format ['*.safetensors']
[2025-03-14 06:55:25 TP0] Using model weights format ['*.safetensors']
[2025-03-14 06:55:25 TP6] Using model weights format ['*.safetensors']
[2025-03-14 06:55:25 TP4] Using model weights format ['*.safetensors']
[2025-03-14 06:55:25 TP7] Using model weights format ['*.safetensors']
[2025-03-14 06:55:25 TP1] Using model weights format ['*.safetensors']
[2025-03-14 06:55:25 TP2] Using model weights format ['*.safetensors']
[2025-03-14 06:55:25 TP3] Using model weights format ['*.safetensors']
[2025-03-14 06:56:42 TP1] Load weight end. type=DeepseekV3ForCausalLM, dtype=torch.bfloat16, avail mem=35.52 GB, mem usage=40.46 GB.
[2025-03-14 06:56:42 TP0] Load weight end. type=DeepseekV3ForCausalLM, dtype=torch.bfloat16, avail mem=35.52 GB, mem usage=40.46 GB.
[2025-03-14 06:56:42 TP3] Load weight end. type=DeepseekV3ForCausalLM, dtype=torch.bfloat16, avail mem=35.52 GB, mem usage=40.46 GB.
[2025-03-14 06:56:46 TP7] Load weight end. type=DeepseekV3ForCausalLM, dtype=torch.bfloat16, avail mem=36.63 GB, mem usage=40.46 GB.
[2025-03-14 06:56:46 TP0] Memory pool end. avail mem=16.08 GB
[2025-03-14 06:56:46 TP3] Memory pool end. avail mem=16.08 GB
[2025-03-14 06:56:46 TP5] Memory pool end. avail mem=16.07 GB
[2025-03-14 06:56:46 TP2] Memory pool end. avail mem=16.08 GB
[2025-03-14 06:56:46 TP6] Memory pool end. avail mem=17.18 GB
[2025-03-14 06:56:46 TP4] Memory pool end. avail mem=16.08 GB
[2025-03-14 06:56:46 TP7] Memory pool end. avail mem=17.19 GB
[2025-03-14 06:56:46 TP1] Memory pool end. avail mem=16.08 GB
[2025-03-14 06:56:46 TP2] Capture cuda graph begin. This can take up to several minutes. avail mem=15.57 GB
[2025-03-14 06:56:46 TP6] Capture cuda graph begin. This can take up to several minutes. avail mem=16.67 GB
[2025-03-14 06:56:46 TP1] Capture cuda graph begin. This can take up to several minutes. avail mem=15.57 GB
[2025-03-14 06:56:46 TP7] Capture cuda graph begin. This can take up to several minutes. avail mem=16.68 GB
[2025-03-14 06:56:46 TP5] Capture cuda graph begin. This can take up to several minutes. avail mem=15.56 GB
[2025-03-14 06:56:46 TP4] Capture cuda graph begin. This can take up to several minutes. avail mem=15.57 GB
[2025-03-14 06:56:46 TP0] Capture cuda graph begin. This can take up to several minutes. avail mem=15.57 GB
[2025-03-14 06:56:46 TP3] Capture cuda graph begin. This can take up to several minutes. avail mem=15.57 GB
Capturing batches (avail_mem=15.53 GB):   0%| | 0/32 [00:00<?, ?it/s]
2025-03-14 06:56:46,880 - INFO - flashinfer.jit: Loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:56:46,974 - INFO - flashinfer.jit: Loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:56:47,005 - INFO - flashinfer.jit: Loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:56:47,006 - INFO - flashinfer.jit: Loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:56:47,007 - INFO - flashinfer.jit: Loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:56:47,017 - INFO - flashinfer.jit: Loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:56:47,018 - INFO - flashinfer.jit: Loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:56:47,018 - INFO - flashinfer.jit: Loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:57:08,943 - INFO - flashinfer.jit: Finished loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:57:08,995 - INFO - flashinfer.jit: Finished loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:57:09,047 - INFO - flashinfer.jit: Finished loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:57:09,099 - INFO - flashinfer.jit: Finished loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:57:09,162 - INFO - flashinfer.jit: Finished loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:57:09,230 - INFO - flashinfer.jit: Finished loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:57:09,282 - INFO - flashinfer.jit: Finished loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
2025-03-14 06:57:09,334 - INFO - flashinfer.jit: Finished loading JIT ops: batch_mla_attention_dtype_q_bf16_dtype_kv_bf16_dtype_o_bf16_dtype_idx_i32_head_dim_ckv_512_head_dim_kpe_64_profiler_False_sm90
```
### Reproduction
```bash
docker run -dti \
  --name sglang \
  --device /dev/infiniband \
  --cap-add=IPC_LOCK \
  --pids-limit=-1 \
  --shm-size 16g \
  --ulimit memlock=-1 \
  --network=host \
  --runtime=nvidia \
  --gpus all \
  --privileged \
  --entrypoint /bin/bash \
  lmsysorg/sglang:v0.4.4.post1-cu124-srt /root/run_sglang.sh
```
### Environment
docker image lmsysorg/sglang:v0.4.4.post1-cu124-srt | closed | 2025-03-14T08:01:56Z | 2025-03-14T08:31:31Z | https://github.com/sgl-project/sglang/issues/4417 | [] | victorserbu2709 | 1 |
jonaswinkler/paperless-ng | django | 895 | [BUG] Contents are not recognized | Hello everyone!
I'm currently testing paperless-ng and I've come across a strange thing, but I'm not sure whether it's a bug or not. I have set up a mailbox that I retrieve via IMAP, and I forwarded a mail with 2 attachments to it. The attachments were extracted; so far so good. But when I looked at the documents in Paperless-ng, I noticed that the metadata is not correct. When I look at what was recognized as "Content", I only see this:
```
(cid:67)(cid:105)(cid:93)(cid:100)(cid:26)(cid:478)(cid:390)(cid:117)(cid:466)(cid:462)(cid:26)(cid:478)(cid:478)(cid:105)(cid:37)(cid:93)(cid:37)(cid:74)(cid:478)(cid:67)(cid:105)(cid:61)(cid:3)(cid:478)
(cid:117)(cid:168)(cid:517)(cid:517)(cid:1)(cid:93)(cid:495)(cid:168)(cid:1)(cid:492)(cid:168)(cid:517)(cid:1)(cid:116)(cid:168)(cid:525)(cid:250)(cid:525)(cid:482)(cid:184)(cid:1)(cid:267)(cid:495)(cid:492)(cid:168)(cid:525)(cid:525)(cid:529)(cid:179)(cid:168)(cid:517)(cid:1)(cid:267)(cid:224)(cid:510)(cid:510)(cid:168)(cid:517)(cid:304)(cid:1)(cid:492)(cid:482)(cid:517)(cid:517)(cid:1)(cid:179)(cid:533)(cid:510)(cid:510)(cid:168)(cid:517)(cid:1)(cid:93)(cid:495)(cid:168)(cid:1)(cid:157)(cid:495)(cid:250)(cid:250)(cid:168)(cid:1)(cid:492)(cid:495)(cid:168)(cid:243)(cid:168)(cid:243)(cid:1)(cid:37)(cid:224)(cid:525)(cid:516)(cid:529)(cid:510)(cid:482)(cid:525)(cid:1)(cid:482)(cid:529)(cid:243)(cid:1)(cid:529)(cid:517)(cid:492)(cid:1)(cid:243)(cid:168)(cid:517)(cid:492)(cid:168)(cid:517)(cid:1)(cid:93)(cid:495)(cid:168)(cid:1)(cid:168)(cid:243)(cid:1)(cid:279)(cid:529)(cid:525)(cid:533)(cid:158)(cid:207)(cid:1)(cid:482)(cid:517)(cid:307)
(cid:43)(cid:495)(cid:168)(cid:525)(cid:516)(cid:495)(cid:250)(cid:1)(cid:267)(cid:495)(cid:492)(cid:168)(cid:525)(cid:525)(cid:529)(cid:179)(cid:168)(cid:380)(cid:517)(cid:381)(cid:1)(cid:495)(cid:158)(cid:189)(cid:302)(cid:267)(cid:495)(cid:525)(cid:1)(cid:380)(cid:318)(cid:381)(cid:1)(cid:492)(cid:168)(cid:517)(cid:1)(cid:266)(cid:224)(cid:517)(cid:1)(cid:516)(cid:495)(cid:525)(cid:302)(cid:529)(cid:517)(cid:243)(cid:1)(cid:380)(cid:318)(cid:381)(cid:1)(cid:482)(cid:157)(cid:184)(cid:168)(cid:243)(cid:158)(cid:189)(cid:510)(cid:224)(cid:243)(cid:243)(cid:168)(cid:517)(cid:168)(cid:517)(cid:1)(cid:116)(cid:168)(cid:525)(cid:250)(cid:525)(cid:482)(cid:184)(cid:1)(cid:533)(cid:157)(cid:168)(cid:525)(cid:1)(cid:492)(cid:168)(cid:517)(cid:1)(cid:59)(cid:482)(cid:529)(cid:179)(cid:1)(cid:492)(cid:168)(cid:525)(cid:1)(cid:179)(cid:224)(cid:510)(cid:184)(cid:168)(cid:517)(cid:492)(cid:168)(cid:517)(cid:1)(cid:117)(cid:482)(cid:525)(cid:168)(cid:517)(cid:1)(cid:380)(cid:318)(cid:381)(cid:302)(cid:1)
(cid:492)(cid:495)(cid:168)(cid:1)(cid:26)(cid:525)(cid:157)(cid:525)(cid:495)(cid:517)(cid:184)(cid:529)(cid:517)(cid:184)(cid:1)(cid:492)(cid:168)(cid:525)(cid:1)(cid:179)(cid:224)(cid:510)(cid:184)(cid:168)(cid:517)(cid:492)(cid:168)(cid:517)(cid:1)(cid:462)(cid:495)(cid:168)(cid:517)(cid:243)(cid:250)(cid:510)(cid:168)(cid:495)(cid:243)(cid:250)(cid:529)(cid:517)(cid:184)(cid:1)(cid:380)(cid:318)(cid:381)
(cid:61)(cid:495)(cid:492)(cid:510)(cid:1)(cid:462)(cid:495)(cid:184)(cid:495)(cid:250)(cid:482)(cid:510)(cid:1)(cid:466)(cid:517)(cid:250)(cid:168)(cid:525)(cid:517)(cid:482)(cid:250)(cid:495)(cid:224)(cid:517)(cid:482)(cid:510)(cid:1)(cid:38)(cid:516)(cid:157)(cid:43)(cid:1)(cid:316)(cid:1)(cid:16)(cid:224)(cid:305)(cid:1)(cid:59)(cid:38)
(cid:93)(cid:250)(cid:495)(cid:179)(cid:250)(cid:243)(cid:157)(cid:168)(cid:525)(cid:184)(cid:243)(cid:250)(cid:525)(cid:482)(cid:249)(cid:168)(cid:1)(cid:396)(cid:1)
(cid:402)(cid:399)(cid:396)(cid:402)(cid:397)(cid:1)(cid:68)(cid:168)(cid:158)(cid:207)(cid:482)(cid:525)(cid:243)(cid:529)(cid:510)(cid:516)
(cid:1)
(cid:26)(cid:390)(cid:67)(cid:482)(cid:495)(cid:510)(cid:307)(cid:1)(cid:525)(cid:529)(cid:168)(cid:158)(cid:207)(cid:243)(cid:168)(cid:517)(cid:492)(cid:529)(cid:517)(cid:184)(cid:317)(cid:510)(cid:495)(cid:492)(cid:510)(cid:390)(cid:243)(cid:189)(cid:224)(cid:524)(cid:305)(cid:492)(cid:168)
(cid:100)(cid:168)(cid:510)(cid:305)(cid:307)(cid:1)(cid:395)(cid:403)(cid:395)(cid:395)(cid:1)(cid:392)(cid:1)(cid:399)(cid:398)(cid:1)(cid:400)(cid:398)(cid:1)(cid:398)(cid:401)(cid:396)(cid:1)
(cid:37)(cid:482)(cid:272)(cid:307)(cid:1)(cid:395)(cid:403)(cid:395)(cid:395)(cid:1)(cid:392)(cid:1)(cid:396)(cid:395)(cid:395)(cid:1)(cid:396)(cid:400)(cid:400)(cid:395)
(cid:461)(cid:168)(cid:243)(cid:250)(cid:168)(cid:510)(cid:510)(cid:250)(cid:1)(cid:482)(cid:516)(cid:1)(cid:380)(cid:318)(cid:381)(cid:302)(cid:168)(cid:525)(cid:189)(cid:482)(cid:510)(cid:250)(cid:168)(cid:517)(cid:1)(cid:482)(cid:516)(cid:1)(cid:380)(cid:318)(cid:381)(cid:307)
(cid:68)(cid:482)(cid:516)(cid:168)(cid:1)(cid:492)(cid:168)(cid:243)(cid:302)(cid:492)(cid:168)(cid:525)(cid:1)(cid:116)(cid:168)(cid:525)(cid:157)(cid:525)(cid:482)(cid:529)(cid:158)(cid:189)(cid:168)(cid:525)(cid:380)(cid:243)(cid:381)(cid:307)
(cid:3)(cid:517)(cid:243)(cid:158)(cid:189)(cid:525)(cid:495)(cid:179)(cid:250)(cid:1)(cid:492)(cid:168)(cid:243)(cid:302)(cid:492)(cid:168)(cid:525)(cid:1)(cid:116)(cid:168)(cid:525)(cid:157)(cid:525)(cid:482)(cid:529)(cid:158)(cid:189)(cid:168)(cid:525)(cid:380)(cid:243)(cid:381)(cid:307)
(cid:105)(cid:517)(cid:250)(cid:168)(cid:525)(cid:243)(cid:158)(cid:189)(cid:525)(cid:495)(cid:179)(cid:250)(cid:1)(cid:492)(cid:168)(cid:243)(cid:302)(cid:492)(cid:168)(cid:525)(cid:1)(cid:116)(cid:168)(cid:525)(cid:157)(cid:525)(cid:482)(cid:529)(cid:158)(cid:189)(cid:168)(cid:525)(cid:380)(cid:243)(cid:381)
(cid:462)(cid:482)(cid:250)(cid:529)(cid:516)
(cid:380)(cid:318)(cid:381)(cid:1)(cid:105)(cid:517)(cid:279)(cid:529)(cid:250)(cid:525)(cid:168)(cid:179)(cid:179)(cid:168)(cid:517)(cid:492)(cid:168)(cid:243)(cid:1)(cid:157)(cid:495)(cid:250)(cid:250)(cid:168)(cid:1)(cid:243)(cid:250)(cid:525)(cid:168)(cid:495)(cid:158)(cid:189)(cid:168)(cid:517)(cid:305)
(cid:93)(cid:168)(cid:495)(cid:250)(cid:168)(cid:1)(cid:396)(cid:1)(cid:302)(cid:1)(cid:396)
```
This is the case with both attachments. I am able to provide the documents as well; they do not contain any personal data.
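(For what it's worth, `(cid:NNN)` runs like the above are the classic symptom of a PDF text layer whose embedded font lacks a usable ToUnicode map, so the extractor emits raw glyph IDs instead of characters. A hedged sketch of a heuristic that could flag such output and trigger a forced OCR pass instead; the function name and the 30% threshold are assumptions, not paperless-ng internals:)

```python
import re

# raw glyph-ID tokens emitted when a PDF font has no usable ToUnicode map
CID_TOKEN = re.compile(r"\(cid:\d+\)")

def looks_cid_garbled(extracted_text, threshold=0.3):
    """Heuristic: treat the text layer as unusable if a large share of it
    is raw (cid:NNN) glyph references, so OCR can be forced instead."""
    if not extracted_text.strip():
        return False
    cid_chars = sum(len(m.group()) for m in CID_TOKEN.finditer(extracted_text))
    return cid_chars / len(extracted_text) >= threshold

sample = "(cid:67)(cid:105)(cid:93)(cid:100) normal words"
print(looks_cid_garbled(sample))  # True
```

A consumer could run this check on the extracted text and, when it fires, re-run OCRmyPDF without `skip_text` so the garbled layer is replaced.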
This is the log output:
```
[2021-04-11 16:02:55,975] [INFO] [paperless.consumer] Consuming AGB Lidl-Onlineshop.pdf
[2021-04-11 16:02:55,978] [DEBUG] [paperless.consumer] Detected mime type: application/pdf
[2021-04-11 16:02:56,323] [INFO] [paperless.consumer] Consuming Widerrufsbelehrung Lidl-Onlineshop.pdf
[2021-04-11 16:02:56,325] [DEBUG] [paperless.consumer] Detected mime type: application/pdf
[2021-04-11 16:02:56,468] [DEBUG] [paperless.consumer] Parser: RasterisedDocumentParser
[2021-04-11 16:02:56,468] [DEBUG] [paperless.consumer] Parser: RasterisedDocumentParser
[2021-04-11 16:02:56,477] [DEBUG] [paperless.consumer] Parsing Widerrufsbelehrung Lidl-Onlineshop.pdf...
[2021-04-11 16:02:56,477] [DEBUG] [paperless.consumer] Parsing AGB Lidl-Onlineshop.pdf...
[2021-04-11 16:02:57,522] [DEBUG] [paperless.parsing.tesseract] Extracted text from PDF file /tmp/paperless/paperless-mail-a5yctz0u
[2021-04-11 16:02:59,372] [DEBUG] [paperless.parsing.tesseract] Extracted text from PDF file /tmp/paperless/paperless-mail-ayk3pf42
[2021-04-11 16:03:00,752] [DEBUG] [paperless.parsing.tesseract] Calling OCRmyPDF with args: {'input_file': '/tmp/paperless/paperless-mail-ayk3pf42', 'output_file': '/tmp/paperless/paperless-yehha7lz/archive.pdf', 'use_threads': True, 'jobs': 2, 'language': 'deu', 'output_type': 'pdfa', 'progress_bar': False, 'skip_text': True, 'clean': True, 'deskew': True, 'rotate_pages': True, 'rotate_pages_threshold': 12.0, 'sidecar': '/tmp/paperless/paperless-yehha7lz/sidecar.txt'}
[2021-04-11 16:03:00,752] [DEBUG] [paperless.parsing.tesseract] Calling OCRmyPDF with args: {'input_file': '/tmp/paperless/paperless-mail-a5yctz0u', 'output_file': '/tmp/paperless/paperless-9wqe9soa/archive.pdf', 'use_threads': True, 'jobs': 2, 'language': 'deu', 'output_type': 'pdfa', 'progress_bar': False, 'skip_text': True, 'clean': True, 'deskew': True, 'rotate_pages': True, 'rotate_pages_threshold': 12.0, 'sidecar': '/tmp/paperless/paperless-9wqe9soa/sidecar.txt'}
[2021-04-11 16:03:07,311] [DEBUG] [paperless.parsing.tesseract] Incomplete sidecar file: discarding.
[2021-04-11 16:03:07,408] [DEBUG] [paperless.parsing.tesseract] Extracted text from PDF file /tmp/paperless/paperless-9wqe9soa/archive.pdf
[2021-04-11 16:03:07,409] [DEBUG] [paperless.consumer] Generating thumbnail for Widerrufsbelehrung Lidl-Onlineshop.pdf...
[2021-04-11 16:03:07,418] [DEBUG] [paperless.parsing] Execute: convert -density 300 -scale 500x5000> -alpha remove -strip -auto-orient /tmp/paperless/paperless-9wqe9soa/archive.pdf[0] /tmp/paperless/paperless-9wqe9soa/convert.png
[2021-04-11 16:03:09,865] [DEBUG] [paperless.parsing.tesseract] Execute: optipng -silent -o5 /tmp/paperless/paperless-9wqe9soa/convert.png -out /tmp/paperless/paperless-9wqe9soa/thumb_optipng.png
[2021-04-11 16:03:11,603] [DEBUG] [paperless.parsing.tesseract] Incomplete sidecar file: discarding.
[2021-04-11 16:03:12,769] [DEBUG] [paperless.classifier] Document classification model does not exist (yet), not performing automatic matching.
[2021-04-11 16:03:12,782] [DEBUG] [paperless.consumer] Saving record to database
[2021-04-11 16:03:13,231] [DEBUG] [paperless.consumer] Deleting file /tmp/paperless/paperless-mail-a5yctz0u
[2021-04-11 16:03:13,355] [DEBUG] [paperless.parsing.tesseract] Deleting directory /tmp/paperless/paperless-9wqe9soa
[2021-04-11 16:03:13,357] [INFO] [paperless.consumer] Document 2021-04-11 Fwd: Vielen Dank für Ihre Bestellung consumption finished
[2021-04-11 16:03:13,683] [DEBUG] [paperless.parsing.tesseract] Extracted text from PDF file /tmp/paperless/paperless-yehha7lz/archive.pdf
[2021-04-11 16:03:13,684] [DEBUG] [paperless.consumer] Generating thumbnail for AGB Lidl-Onlineshop.pdf...
[2021-04-11 16:03:13,693] [DEBUG] [paperless.parsing] Execute: convert -density 300 -scale 500x5000> -alpha remove -strip -auto-orient /tmp/paperless/paperless-yehha7lz/archive.pdf[0] /tmp/paperless/paperless-yehha7lz/convert.png
[2021-04-11 16:03:18,073] [DEBUG] [paperless.parsing.tesseract] Execute: optipng -silent -o5 /tmp/paperless/paperless-yehha7lz/convert.png -out /tmp/paperless/paperless-yehha7lz/thumb_optipng.png
[2021-04-11 16:03:31,086] [DEBUG] [paperless.classifier] Document classification model does not exist (yet), not performing automatic matching.
[2021-04-11 16:03:31,099] [DEBUG] [paperless.consumer] Saving record to database
[2021-04-11 16:03:31,701] [DEBUG] [paperless.consumer] Deleting file /tmp/paperless/paperless-mail-ayk3pf42
[2021-04-11 16:03:31,775] [DEBUG] [paperless.parsing.tesseract] Deleting directory /tmp/paperless/paperless-yehha7lz
[2021-04-11 16:03:31,777] [INFO] [paperless.consumer] Document 2021-04-11 Fwd: Vielen Dank für Ihre Bestellung consumption finished
[2021-04-11 16:13:01,417] [DEBUG] [paperless.classifier] Document classification model does not exist (yet), not performing automatic matching.
[2021-04-11 16:13:40,763] [DEBUG] [paperless.classifier] Document classification model does not exist (yet), not performing automatic matching.
[2021-04-11 16:13:58,368] [DEBUG] [paperless.classifier] Document classification model does not exist (yet), not performing automatic matching.
``` | closed | 2021-04-11T14:24:59Z | 2021-04-17T20:17:58Z | https://github.com/jonaswinkler/paperless-ng/issues/895 | [] | prodigy7 | 3 |
BeanieODM/beanie | pydantic | 121 | Read and Write concerns | Any plans to implement https://pymongo.readthedocs.io/en/3.12.0/api/pymongo/write_concern.html
Seems fairly straightforward to do, though I'm unsure how you would want to handle the API for it. Happy to mock something up. | closed | 2021-09-30T09:54:36Z | 2023-04-04T02:21:37Z | https://github.com/BeanieODM/beanie/issues/121 | [
"Stale"
] | zrothberg | 3 |
browser-use/browser-use | python | 963 | Handling pop-up windows from websites | ### Problem Description
I was testing browser-use and was impressed by its performance.
However, I noticed that browser-use may potentially get stuck on websites that show pop-up windows after the page is loaded.
(For example, the cookie acknowledgement pop-up window on this web page: https://www.mdpi.com/2673-947X/5/1/5)
Pop-up windows can be a common challenge for web browsing agents (e.g., "how do you like our website" rating prompts, cookie-use acknowledgements, pop-up ads, etc.).
### Proposed Solution
It would be great if browser-use could auto-resolve pop-up windows so that the web browsing process does not get stuck on them.
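As a rough illustration only (the helper name and label list are assumptions, not part of browser-use), a first-pass auto-dismisser could scan the visible buttons for common consent/close labels before the agent continues:

```python
# common labels on cookie banners, rating prompts, and ad overlays
# (illustrative list; a real implementation would need localization)
COMMON_DISMISS_LABELS = [
    "accept all", "accept", "agree", "i agree",
    "got it", "ok", "no thanks", "close", "dismiss", "×",
]

def pick_dismiss_candidate(visible_button_texts):
    """Return the first visible button whose text looks like a pop-up
    dismissal control, or None if nothing matches.

    `visible_button_texts` would come from the DOM snapshot the agent
    already builds; matching is case-insensitive.
    """
    for text in visible_button_texts:
        if text.strip().lower() in COMMON_DISMISS_LABELS:
            return text
    return None

# e.g. the cookie banner on the MDPI page mentioned above
print(pick_dismiss_candidate(["Settings", "Accept All", "Read more"]))  # Accept All
```

If a candidate is found, the agent could click it once before re-evaluating the page; if none is found, it would fall through to the normal LLM-driven action loop.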
### Alternative Solutions
_No response_
### Additional Context
_No response_ | open | 2025-03-06T20:21:58Z | 2025-03-11T08:42:22Z | https://github.com/browser-use/browser-use/issues/963 | [
"enhancement"
] | HengyueL | 1 |
donnemartin/system-design-primer | python | 542 | Calculation of attorney's fees | open | 2021-05-29T13:52:01Z | 2022-04-23T13:17:40Z | https://github.com/donnemartin/system-design-primer/issues/542 | [
"needs-review"
] | yezhuoying | 1 | |
jmcnamara/XlsxWriter | pandas | 465 | worksheet.repeat_rows(last) not functioning as worksheet.repeat_rows(0, last) | Code in worksheet.py line 3142, sucked down from PIP as of a few minutes ago:
```python
if last_row is None:
    last_row = first_row

# Convert rows to 1 based.
```
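For illustration, a hedged sketch of the normalization being described (a standalone function mirroring the logic, not the actual XlsxWriter source):

```python
def repeat_rows(first_row, last_row=None):
    """Sketch of the argument-normalization logic for repeat_rows.

    When called with a single argument, that argument is the *last*
    row and the repeated range should start at row 0 -- so first_row
    must be reset, not just last_row defaulted.
    """
    if last_row is None:
        last_row = first_row
        first_row = 0  # the reset the report says is missing
    return first_row, last_row

print(repeat_rows(3))     # (0, 3)
print(repeat_rows(0, 3))  # (0, 3)
```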
I believe you need to then set `first_row = 0` in that if-clause. Not sure how this happened; it used to work fine in previous versions. | closed | 2017-09-07T17:44:39Z | 2017-09-07T18:37:54Z | https://github.com/jmcnamara/XlsxWriter/issues/465 | [] | kjosib | 4 |
benbusby/whoogle-search | flask | 593 | [BUG] xml.etree.ElementTree.ParseError: not well-formed (invalid token) | **Describe the bug**
```
ERROR:app:Exception on /autocomplete [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/whoogle/app/routes.py", line 275, in autocomplete
g.user_request.autocomplete(q) if not g.user_config.tor else []
File "/whoogle/app/request.py", line 232, in autocomplete
root = ET.fromstring(response)
File "/usr/local/lib/python3.8/xml/etree/ElementTree.py", line 1320, in XML
parser.feed(text)
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 2, column 11
```

**To Reproduce**
Steps to reproduce the behavior:
1. install latest tag using docker
2. use it and out of the blue it will show an error
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [X] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [X] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: Windows 10, Mac, Android etc
- Browser: Chrome, firefox, safari
- Version: Different versions
**Additional context**
Add any other context about the problem here.
| closed | 2021-12-27T11:00:23Z | 2021-12-28T18:38:35Z | https://github.com/benbusby/whoogle-search/issues/593 | [
"bug"
] | bruvv | 1 |
deepset-ai/haystack | machine-learning | 8,310 | AutoMerging-Retriever: support more document stores | - Pinecone doesn't support it because apparently, one cannot filter by `id`:
- https://docs.pinecone.io/guides/data/query-data#querying-by-record-id
- https://community.pinecone.io/t/does-pinecone-support-filtering-by-vector-id/3039/2
- Weaviate needs to be properly tested
- All the other document stores work, with the exception of Chroma, which has a known bug in the filtering mechanism | closed | 2024-08-29T13:32:10Z | 2024-09-19T07:44:02Z | https://github.com/deepset-ai/haystack/issues/8310 | [
"P2"
] | davidsbatista | 2 |
stanfordnlp/stanza | nlp | 523 | Introducing an external Chinese tokenizer into the pipeline makes 'tokenize_no_ssplit=True' stop working | Hi, after a careful comparison of the jieba tokenizer with your Chinese tokenizer, I am using jieba for my downstream NER task, as it shows better performance at identifying Chinese names and locations.
I do NER in batches in order to speed things up, so I add "\n\n" between all the segmented sentences as mentioned in the [tutorial](https://stanfordnlp.github.io/stanza/tokenize.html#tokenization-without-sentence-segmentation) and enable "tokenize_no_ssplit=True" in the pipeline.
When using your tokenizer in the pipeline, "tokenize_no_ssplit=True" works perfectly well.
```
ZH_DOC = "北京是中国的首都。\n\n北京有2100万人口。是一个直辖市。"
nlp_zh = stanza.Pipeline(lang='zh-hans', dir=r'C:\Users\WT.YX\stanza_resources', processors='tokenize,ner',
use_gpu=False, tokenize_no_ssplit=True)
doc = nlp_zh(ZH_DOC)
for item in doc.sentences:
print(item._text)
北京是中国的首都。
北京有2100万人口。是一个直辖市。
```
However, when introducing the jieba tokenizer into the pipeline, it no longer works.
```
ZH_DOC = "北京是中国的首都。\n\n北京有2100万人口。是一个直辖市。"
nlp_zh = stanza.Pipeline(lang='zh-hans',
processors={'tokenize': 'jieba', 'ner': 'ontonotes'}, package=None,
use_gpu=False, tokenize_no_ssplit=True)
doc = nlp_zh(ZH_DOC)
for item in doc.sentences:
print(item._text)
北京是中国的首都。
北京有2100万人口。
是一个直辖市。
```
```
ZH_DOC = ZH_DOC = "北京是中国的首都 \n\n北京有2100万人口。是一 \n\n 个直辖市,"
nlp_zh = stanza.Pipeline(lang='zh-hans',
processors={'tokenize': 'jieba', 'ner': 'ontonotes'}, package=None,
use_gpu=False, tokenize_no_ssplit=True)
doc = nlp_zh(ZH_DOC)
for item in doc.sentences:
print(item._text)
北京是中国的首都
北京有2100万人口。
是一
个直辖市,
len(doc.sentences) gives 2
doc.sentences
[[
{
"id": 1,
"text": "北京",
"misc": "start_char=0|end_char=2",
"ner": "S-GPE"
},
{
"id": 2,
"text": "是",
"misc": "start_char=2|end_char=3",
"ner": "O"
},
{
"id": 3,
"text": "中国",
"misc": "start_char=3|end_char=5",
"ner": "S-GPE"
},
{
"id": 4,
"text": "的",
"misc": "start_char=5|end_char=6",
"ner": "O"
},
{
"id": 5,
"text": "首都",
"misc": "start_char=6|end_char=8",
"ner": "O"
},
{
"id": 6,
"text": "北京",
"misc": "start_char=11|end_char=13",
"ner": "S-GPE"
},
{
"id": 7,
"text": "有",
"misc": "start_char=13|end_char=14",
"ner": "O"
},
{
"id": 8,
"text": "2100",
"misc": "start_char=14|end_char=18",
"ner": "B-CARDINAL"
},
{
"id": 9,
"text": "万",
"misc": "start_char=18|end_char=19",
"ner": "E-CARDINAL"
},
{
"id": 10,
"text": "人口",
"misc": "start_char=19|end_char=21",
"ner": "O"
},
{
"id": 11,
"text": "。",
"misc": "start_char=21|end_char=22",
"ner": "O"
}
], [
{
"id": 1,
"text": "是",
"misc": "start_char=22|end_char=23",
"ner": "O"
},
{
"id": 2,
"text": "一",
"misc": "start_char=23|end_char=24",
"ner": "O"
},
{
"id": 3,
"text": "个",
"misc": "start_char=28|end_char=29",
"ner": "O"
},
{
"id": 4,
"text": "直辖市",
"misc": "start_char=29|end_char=32",
"ner": "O"
},
{
"id": 5,
"text": ",",
"misc": "start_char=32|end_char=33",
"ner": "O"
}
]]
```
Is this a bug or am I implementing the pipeline correctly? Thanks for your insights!
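(A possible stopgap, independent of whether this is a bug: do the batching split yourself before calling the pipeline, so you no longer depend on `tokenize_no_ssplit` being honored by the external tokenizer path. A plain-Python sketch:)

```python
SEP = "\n\n"

def split_batch(text):
    """Recover the intended one-document-per-chunk batching by splitting
    on the blank-line separator before calling the pipeline, instead of
    relying on tokenize_no_ssplit."""
    return [chunk.strip() for chunk in text.split(SEP) if chunk.strip()]

batch = "北京是中国的首都。\n\n北京有2100万人口。是一个直辖市。"
for chunk in split_batch(batch):
    print(chunk)
```

Each chunk can then be passed to `nlp_zh` on its own; since every chunk corresponds to exactly one intended "sentence", any extra splits jieba introduces inside a chunk can be merged back afterwards.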
Besides, regarding NER performance: for a neural network, Chinese word segmentation might not be needed; both research and BERT NER model performance support this. Moreover, there is currently no good Chinese tokenizer, and all the segmentation errors affect downstream NER performance. Are you considering bypassing Chinese word segmentation and focusing on a variety of language model embeddings (n-gram, n-char, BERT embeddings, etc.) with user-defined vocabs & dictionaries incorporated? | closed | 2020-11-18T08:54:00Z | 2021-02-02T00:33:24Z | https://github.com/stanfordnlp/stanza/issues/523 | [
"bug",
"fixed on dev"
] | twang18 | 6 |
mirumee/ariadne | graphql | 1,234 | convert_names_case on default resolvers doesn't work well with directives | For example, in the current test https://github.com/mirumee/ariadne/blob/e03d333244d1521aad4ba9a5a811d9ade117d410/tests/test_directives.py#L60
For simplicity, if I change the test case to have both `convert_names_case=True` and `@upper`, it will fail
```python
def test_field_definition_directive_replaces_field_resolver_with_custom_one():
type_defs = """
directive @upper on FIELD_DEFINITION
type Query {
test: Custom
}
type Custom {
nodeField: String @upper
}
"""
query = QueryType()
query.set_field("test", lambda *_: {"node_field": "custom"})
schema = make_executable_schema(
type_defs,
[query],
directives={"upper": UpperDirective},
convert_names_case=True,
)
result = graphql_sync(schema, "{ test { nodeField }}")
assert result.errors is None
assert result.data == {"test": {"nodeField": "CUSTOM"}}
```
```bash
python -m pytest tests/test_directives.py::test_field_definition_directive_replaces_field_resolver_with_custom_one
```
```log
FAILED tests/test_directives.py::test_field_definition_directive_replaces_field_resolver_with_custom_one - assert [GraphQLError("'NoneType' object has no attribute 'upper'", locations=[SourceLocation(line=1, column=10)], path=['test', 'nodeField'])] is None
```
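My understanding of the mechanism, as a dependency-free sketch (the resolver names below are illustrative, not Ariadne internals): `convert_names_case=True` installs a default resolver that maps `nodeField` to `node_field`, but when a directive replaces the field resolver, the field apparently falls back to a plain default resolver that looks up `nodeField` directly, gets `None`, and then `None.upper()` raises:

```python
import re

def to_snake_case(name: str) -> str:
    # nodeField -> node_field
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

obj = {"node_field": "custom"}

def case_converting_default_resolver(obj, field_name):
    return obj.get(to_snake_case(field_name))

def plain_default_resolver(obj, field_name):
    return obj.get(field_name)

print(case_converting_default_resolver(obj, "nodeField"))  # custom
print(plain_default_resolver(obj, "nodeField"))            # None -> .upper() would raise
```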
But if I change the test case to have only `convert_names_case=True` it will succeed
```python
def test_field_definition_directive_replaces_field_resolver_with_custom_one():
type_defs = """
type Query {
test: Custom
}
type Custom {
nodeField: String
}
"""
query = QueryType()
query.set_field("test", lambda *_: {"node_field": "custom"})
schema = make_executable_schema(
type_defs,
[query],
convert_names_case=True,
)
result = graphql_sync(schema, "{ test { nodeField }}")
assert result.errors is None
assert result.data == {"test": {"nodeField": "custom"}}
```
```bash
python -m pytest tests/test_directives.py::test_field_definition_directive_replaces_field_resolver_with_custom_one
```
```log
1 passed, 2 warnings in 0.04s
```
It will also succeed if I explicitly provide a resolver when having both `convert_names_case=True` and `@upper`
```python
def test_field_definition_directive_replaces_field_resolver_with_custom_one():
type_defs = """
directive @upper on FIELD_DEFINITION
type Query {
test: Custom
}
type Custom {
nodeField: String @upper
}
"""
query = QueryType()
test = ObjectType('Custom')
query.set_field("test", lambda *_: {"node_field": "custom"})
test.set_field("nodeField", lambda obj, *_: obj["node_field"])
schema = make_executable_schema(
type_defs,
[query, test],
directives={"upper": UpperDirective},
convert_names_case=True,
)
result = graphql_sync(schema, "{ test { nodeField }}")
assert result.errors is None
assert result.data == {"test": {"nodeField": "CUSTOM"}}
```
```bash
python -m pytest tests/test_directives.py::test_field_definition_directive_replaces_field_resolver_with_custom_one
```
```log
1 passed, 2 warnings in 0.05s
``` | open | 2025-02-19T23:45:51Z | 2025-02-20T00:02:44Z | https://github.com/mirumee/ariadne/issues/1234 | [] | zwangBLP | 0 |
dpgaspar/Flask-AppBuilder | flask | 1,643 | Issue get user data in class inherits modelview body | ### Issue getting user data in the body of a class that inherits ModelView
- I want to access user data in `class Customers(ModelView)` to set the `show_fieldsets` columns or `base_permissions` based on user data
- when I reference `get_user_type` like this (without calling it), `usertype` is just the function object, not its result
- when I call `get_user_type()` I get the error: **RuntimeError: working outside of application context**
- I found `session['user_id']`, but when I call it in `class Customers(ModelView)` I get the error: **RuntimeError: working outside of request context**
```
def get_user_type():
return g.user.user_type
class Customers(ModelView):
datamodel = SQLAInterface(Customer)
usertype = get_user_type
if usertype==4 :
base_permissions = ['can_get', 'can_list', 'can_show']
else:
base_permissions = ['can_get', 'can_list', 'can_show', 'can_add', 'can_edit']
user1 = session['user_id']
```
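To illustrate what I think is happening, here is a plain-Python sketch with no Flask involved (`g` below is a stand-in for `flask.g`, and the hook points in real Flask-AppBuilder differ; this only shows the timing problem). The class body runs once at import time, before any request context exists, so request-dependent values have to be read from code that runs per request, e.g. a property or method:

```python
# Sketch: class bodies execute at import time, before any request exists,
# which is why g.user / session raise "working outside of ... context".
class _RequestGlobals:
    user_type = None

g = _RequestGlobals()  # in Flask this is flask.g, populated per request

class Customers:
    # Evaluated once at import time -> g.user_type is still None here.
    import_time_value = g.user_type

    @property
    def base_permissions(self):
        # Evaluated on each access, i.e. after a request context would exist.
        if g.user_type == 4:
            return ["can_get", "can_list", "can_show"]
        return ["can_get", "can_list", "can_show", "can_add", "can_edit"]

g.user_type = 4  # simulate a request populating g
view = Customers()
print(view.import_time_value)  # None - captured too early
print(view.base_permissions)   # ['can_get', 'can_list', 'can_show']
```

In other words, moving the `g.user` lookup into a method or per-request hook avoids both RuntimeErrors.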
| closed | 2021-05-19T11:07:28Z | 2022-04-17T16:24:31Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1643 | [
"question",
"stale"
] | Basma18 | 4 |
satwikkansal/wtfpython | python | 206 | Prehistory | Hi Satwik! If you are interested in antediluvian times, check out https://github.com/MarcinCiura/6-gotchas :)
Cheers! | closed | 2020-05-22T09:35:34Z | 2024-10-11T11:46:16Z | https://github.com/satwikkansal/wtfpython/issues/206 | [] | MarcinCiura | 2 |
flairNLP/flair | nlp | 3,033 | TypeError: RobertaModel.__init__() got an unexpected keyword argument 'force_max_length' when loading model with SequenceTagger | When loading a fine-tuned model, I am getting the above error locally and in a Docker container. I am using flair==0.11.3, torch==1.11 and transformers==4.21.3.
When loading the model within a notebook hosted on a cloud platform, it loads just fine. But locally (I am using an M1 MacBook) I get that error. The same happens when I build a Docker container from my laptop (maybe it has something to do with the operating system or chip?).
The weirdest part, though, is that I cannot find the argument ```force_max_length``` anywhere in the flair, torch, or transformers code. So what is even passing that argument?
Any help is highly appreciated.
Thank you!
Same question was asked here but was never answered. #2935 | closed | 2022-12-16T12:36:54Z | 2022-12-16T14:22:58Z | https://github.com/flairNLP/flair/issues/3033 | [
"question"
] | agademic | 1 |
pytest-dev/pytest-django | pytest | 755 | pytest.ini is ignored when using manage.py test with pytest-django | When I run pytest with coverage from the command line, I get correct results.
But when I try to run it with `manage.py test`, the coverage results are wrong.
Model fields are ignored.
What I figured out is that `manage.py test` ignores the pytest.ini file. If I delete it, the tests won't fail.
It says "Django settings: seabattle.settings (from environment variable)".
But I don't have such an environment variable.
If I try to run pytest from the command line without the pytest.ini file, it fails.
Here is my runner.py:
```
class PytestTestRunner:
def __init__(self, verbosity=1, failfast=False, keepdb=False, **kwargs):
self.verbosity = verbosity
self.failfast = failfast
self.keepdb = keepdb
def run_tests(self, test_labels):
import pytest
argv = []
if self.verbosity == 0:
argv.append("--quiet")
if self.verbosity == 2:
argv.append("--verbose")
if self.verbosity == 3:
argv.append("-vv")
if self.failfast:
argv.append("--exitfirst")
if self.keepdb:
argv.append("--reuse-db")
argv.extend(test_labels)
return pytest.main(argv)
```
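For completeness, my pytest.ini looks something like this (a sketch; the settings path is taken from the message above), which is presumably what makes the plain `pytest` invocation work:

```ini
[pytest]
DJANGO_SETTINGS_MODULE = seabattle.settings
python_files = tests.py test_*.py *_tests.py
```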
pytest==5.0.1
pytest-django==3.5.1 | closed | 2019-08-16T19:23:59Z | 2020-10-16T19:36:27Z | https://github.com/pytest-dev/pytest-django/issues/755 | [] | nvishnya | 2 |
plotly/dash | dash | 2,878 | [BUG] `id` passed through `dcc.Loading` not visible in DOM | **Describe your context**
Hello guys 👋
I am currently trying to pass an `id` to the dcc.Loading component or its parent container and I would like the `id` to be visible in the DOM such that I can target the CSS of the components inside the `dcc.Loading` via ID.
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.17.0
dash-bootstrap-components 1.5.0
dash-core-components 2.0.0
dash-html-components 2.0.0
```
- if frontend related, tell us your Browser, Version and OS
- Browser: Chrome
**Describe the bug**
Let's take the example app below. What I would have expected is an HTML div visible with `className="loading"` and `id="loading-id"`. However, if I provide `className="loading"`, I see a div, but it has neither `className="loading"` nor `id="loading-id"` in the DOM.
When I switch this to `parent_className="loading"`, I do see a div with `className="loading"`, but I cannot attach an id to this parent container.
I am not a react expert, but from the source I can see that the `id` doesn't seem to be passed on in the return of the react component and is therefore not visible in the DOM? Is there any reason for that?
https://github.com/plotly/dash/blob/09252f8d2f690480cc468b2e015f9e2417dc90ad/components/dash-core-components/src/components/Loading.react.js#L128-L133
```
from dash import Dash, html, dcc, callback, Output, Input
import plotly.express as px
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminder_unfiltered.csv')
app = Dash()
app.layout = [
html.H1(children='Title of Dash App', style={'textAlign':'center'}),
dcc.Dropdown(df.country.unique(), 'Canada', id='dropdown-selection'),
dcc.Loading(dcc.Graph(id='graph-content'), color='grey', id="loading-id", parent_className="loading")
]
@callback(
Output('graph-content', 'figure'),
Input('dropdown-selection', 'value')
)
def update_graph(value):
dff = df[df.country==value]
return px.line(dff, x='year', y='pop')
if __name__ == "__main__":
app.run(debug=True)
```
**Expected behavior**
I would expect the `id` to be passed on to the React component and to be visible in the DOM, i.e. having a `<div class="loading" id="loading-id"></div>` visible in the DOM.
**Screenshots**

| closed | 2024-06-07T10:41:21Z | 2024-06-18T13:22:13Z | https://github.com/plotly/dash/issues/2878 | [
"good first issue"
] | huong-li-nguyen | 4 |
tortoise/tortoise-orm | asyncio | 1,742 | Timezone-aware datetime returns incorrect timezone for SQLite | **Describe the bug**
When I create a new object in a SQLite database, the timezone stored in the database and the timezone retrieved from it are different from the value I originally supplied.
**To Reproduce**
```py
import datetime
from tortoise import Tortoise, Model, fields, run_async
class TestModel(Model):
dt = fields.DatetimeField()
async def main() -> None:
await Tortoise.init(db_url="sqlite://:memory:", modules={"models": ["__main__"]}, use_tz=True)
await Tortoise.generate_schemas()
now = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=1)))
print(now.tzinfo)
test = await TestModel.create(dt=now)
print(test.dt.tzinfo)
run_async(main())
```
Result:
```
UTC+01:00
UTC
```
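For context, here is a plain-`datetime` sketch, with no Tortoise involved, of what I suspect is happening: SQLite has no timezone-aware column type, so the value appears to be normalized to UTC on write, and the original fixed offset cannot be recovered on read; the instant is preserved but the tzinfo is not:

```python
import datetime

tz_plus1 = datetime.timezone(datetime.timedelta(hours=1))
now = datetime.datetime.now(tz_plus1)

# What a UTC-normalizing storage layer effectively does:
stored = now.astimezone(datetime.timezone.utc)

print(now.tzinfo)     # UTC+01:00
print(stored.tzinfo)  # UTC
print(stored == now)  # True - same instant, different representation
```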
**Expected behavior**
Expected result:
```
UTC+01:00
UTC+01:00
```
| closed | 2024-10-17T11:07:36Z | 2024-10-19T23:44:54Z | https://github.com/tortoise/tortoise-orm/issues/1742 | [] | seriaati | 3 |
gee-community/geemap | jupyter | 1,358 | GEEMAP installation not possible on Mac M1 Max using Anaconda | Hi Qiusheng. I'm trying to install geemap, but I am unable to install the most recent version because Anaconda does not support Python versions below 3.8 on Apple M1 chips. From what I understand, geemap was built on Python 3.7, which is why I believe I am having issues installing the latest version of geemap. Do you have any advice? Or plans to release another version of geemap built on a more recent version of Python? Thanks! | closed | 2022-12-02T23:29:02Z | 2022-12-05T19:46:09Z | https://github.com/gee-community/geemap/issues/1358 | [] | melrohde | 7 |
microsoft/unilm | nlp | 1,622 | How can I have dit document layout analysis checkpoints? | **Describe**
The model I am using is DiT object detection; when I try to run inference, I find that the checkpoint is unavailable.
Is it possible for us to have it? Is there any suggestion for how to fine-tune the object detection model?
Thank you in advance.
| open | 2024-09-10T20:36:25Z | 2024-09-10T20:36:25Z | https://github.com/microsoft/unilm/issues/1622 | [] | WYY220062 | 0 |
pbugnion/gmaps | jupyter | 80 | How do you just set the origin/center longitude and latitude to a particular area? | The documentation doesn't show a basic example for this?
| closed | 2016-08-27T14:42:08Z | 2016-09-04T06:35:37Z | https://github.com/pbugnion/gmaps/issues/80 | [] | CMCDragonkai | 4 |
voila-dashboards/voila | jupyter | 1,441 | Extension registration requires a kernel? |
### Problem
We're migrating from voila 0.4 to 0.5. We have an extension we've built for jupyter that we had been using in voila 0.4. However, in voila 0.5, it seems that it requires a kernel now? Is that intentional?
https://github.com/voila-dashboards/voila/blob/main/packages/voila/src/plugins/widget/index.ts#L58
Is there any documentation around how we need to modify our extension to actually successfully register? The `kernelid` at this point seems to need an actual running kernel, so I'm not sure how the extension would have that info.
| closed | 2024-02-05T15:30:30Z | 2024-02-05T21:55:38Z | https://github.com/voila-dashboards/voila/issues/1441 | [
"documentation"
] | ClaytonAstrom | 9 |
abhiTronix/vidgear | dash | 303 | [Bug]:NetGear is not OPENCV 4.5.5 Compatible | closed | 2022-05-06T18:04:16Z | 2022-05-07T09:18:03Z | https://github.com/abhiTronix/vidgear/issues/303 | [
"INVALID :stop_sign:"
] | rubar-tech | 5 | |
ydataai/ydata-profiling | jupyter | 1,303 | Feature Request - Add support for Pandas 2 | ### Missing functionality
Add support for Pandas 2
### Proposed feature
I'd just like to be able to install ydata-profiling and Pandas 2 into the same environment
### Alternatives considered
_No response_
### Additional context
_No response_ | open | 2023-04-05T01:03:14Z | 2023-06-12T17:29:44Z | https://github.com/ydataai/ydata-profiling/issues/1303 | [
"feature request 💬"
] | owenlamont | 5 |
horovod/horovod | deep-learning | 3,921 | the repuirements installing Horovod in Conda in CUDA12 | **Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet)
2. Framework version:PyTorch
3. Horovod version:
4. MPI version:4.1.5
5. CUDA version:12
6. NCCL version:
7. Python version:
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:9+
12. CMake version:
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
yes
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
My CUDA is 12.1, with Ubuntu 20 and an RTX 3090. The tutorial about installing Horovod in Conda is based on cudatoolkit 10.1, which is outdated. So I installed all the dependencies with no version specified. However, it seems something is wrong. I wonder: is there any limitation on the CUDA version for installing Horovod in Conda?
| open | 2023-05-11T06:13:11Z | 2023-05-11T06:13:11Z | https://github.com/horovod/horovod/issues/3921 | [
"bug"
] | CoconutSweet999 | 0 |
ShishirPatil/gorilla | api | 774 | [BFCL] Error when consecutively generate from multiple oss_models with vllm | I have plenty of models and I want to test them all. So I run a bash script like this:
```
bfcl generate --model "model_1" --test-category simple
bfcl generate --model "model_2" --test-category simple
...
```
The first generation was good. But the second run and thereafter encountered the following error:

I suspect the reason is that VLLM_PORT is still occupied by the previous run, so I captured the output of `ps -aux` just as the second run began. However, it seems the previous vLLM server had already ended.

Now I have no idea why this isn't working, could you help me with this?
Or what's the recommended way if I want to run many models automatically?
Update:
Just after opening this issue, I found that the error disappears if I wait long enough before the next run (`sleep 60` after each line).
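If it helps anyone, a polling alternative to the fixed sleep (a sketch; the host, port, and timeout values are assumptions on my part):

```python
import socket
import time

def wait_for_port_release(port: int, host: str = "127.0.0.1", timeout: float = 120.0) -> bool:
    """Return True once nothing is listening on (host, port) any more."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            if s.connect_ex((host, port)) != 0:  # connection refused -> port is free
                return True
        time.sleep(1.0)
    return False
```

Calling this between `bfcl generate` invocations would block only as long as the previous vLLM server actually holds the port.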
But still, I wonder if there's a better way to do this? | closed | 2024-11-21T09:28:01Z | 2024-11-27T08:05:49Z | https://github.com/ShishirPatil/gorilla/issues/774 | [
"BFCL-General"
] | YifanHao | 2 |
ultralytics/ultralytics | machine-learning | 19,111 | YOLO with dinov2 as backbone | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello @Y-T-G ,
I saw your code to support different backbones from torchvision. Could you please provide me with some guidance on how to implement YOLO with DINOv2?
### Additional
_No response_ | open | 2025-02-06T23:57:42Z | 2025-02-14T13:34:43Z | https://github.com/ultralytics/ultralytics/issues/19111 | [
"enhancement",
"question"
] | SebastianJanampa | 6 |
amdegroot/ssd.pytorch | computer-vision | 202 | Add pretrained weights for COCO dataset [feature request] | It would be great if there were weights for the model trained on the COCO dataset. | open | 2018-07-11T09:37:56Z | 2018-07-13T09:45:58Z | https://github.com/amdegroot/ssd.pytorch/issues/202 | [] | sotte | 1 |
keras-team/keras | deep-learning | 20,106 | Unrecognized keyword arguments passed to LSTM: {'batch_input_shape' | ```python
model = Sequential()
model.add(LSTM(4, batch_input_shape=(1, X_train.shape[1], X_train.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train, y_train, epochs=100, batch_size=1, verbose=1, shuffle=False)
```
ValueError: Unrecognized keyword arguments passed to LSTM: {'batch_input_shape': (1, 1, 7)}
My versions:
TensorFlow version: 2.17.0
Keras version: 3.4.1
I've seen a similar issue raised on Stack Overflow. I was able to run the code two weeks ago without error. What new keyword argument should I use?
https://stackoverflow.com/questions/78805181/valueerror-unrecognized-keyword-arguments-passed-to-lstm-batch-input-shape | closed | 2024-08-09T19:19:55Z | 2025-03-22T12:16:24Z | https://github.com/keras-team/keras/issues/20106 | [
"type:support",
"stat:awaiting response from contributor"
] | Ineedsomehelpah | 7 |
microsoft/unilm | nlp | 1,342 | [Kosmos-G] Docker failed to run demo | I tried to run the Docker demo to test Kosmos-G, but it still fails with an error from torchscale.
Can anyone provide a Docker fix?
Thanks | open | 2023-10-26T01:55:55Z | 2023-10-26T01:55:55Z | https://github.com/microsoft/unilm/issues/1342 | [] | trangtv57 | 0 |
deepspeedai/DeepSpeed | deep-learning | 6,737 | [BUG] CUDA out of memory error when using a customized model at deepspeed.initialize(). | **Describe the bug**
In my own implementation, I combine a large language model and a speculator model. My goal is to train the speculator model to make it better at predicting the n+2, n+3, ... tokens. I have read the DeepSpeed docs, and I understand that it supports any customized model built on top of nn.Module, but I encounter a CUDA OOM error when initializing the customized model with deepspeed.initialize().
**To Reproduce**
Here is my main code
```
import torch
import torch.backends.cudnn as cudnn
import torch.nn as nn
import torch.utils.data
from time import time
import json
from transformers import AutoTokenizer, AutoModelForCausalLM, PreTrainedTokenizer
from datasets import load_dataset
from torch.nn.utils.rnn import pad_sequence
import deepspeed
import argparse
import random
import numpy as np
import os
from torch.utils.data import DataLoader, Dataset
from speculator import MLPSpeculator
class CombinedModel(nn.Module):
def __init__(self, base_model, speculator):
super(CombinedModel, self).__init__()
self.base_model = base_model
self.speculator = speculator
for param in self.base_model.parameters():
param.requires_grad = False
def forward_base_model(self, *args, **kwargs):
return self.base_model(*args, **kwargs)
def forward_speculator(self, *args, **kwargs):
return self.speculator(*args, **kwargs)
def print_model_parameters(model):
for name, param in model.named_parameters():
print(f"Parameter: {name}, requires_grad: {param.requires_grad}")
def get_argument_parser():
parser = argparse.ArgumentParser(description="GAN for NLP Task using Alpaca Dataset")
# Other parameters
parser.add_argument('--backend', type=str, default='nccl', help='distributed backend')
parser.add_argument('--batchSize', type=int, default=64, help='input batch size')
parser.add_argument('--epochs', type=int, default=1, help='number of epochs to train for')
parser.add_argument('--lr', type=float, default=0.0002, help='learning rate, default=0.0002')
parser.add_argument('--beta1', type=float, default=0.5, help='beta1 for adam. default=0.5')
parser.add_argument('--cuda', action='store_true', help='enables cuda')
parser.add_argument('--ngpu', type=int, default=16, help='number of GPUs to use')
parser.add_argument('--outf', default='./gan_output', help='folder to output model checkpoints')
parser.add_argument('--manualSeed', type=int, default=999, help='manual seed')
# parser.add_argument('--tensorboard_path', default='./runs/deepspeed', help='tensorboard log dir')
parser.add_argument("--local_rank", type=int, default=-1, help="local_rank for distributed training on gpus")
return parser
def set_seed(value):
print("Random Seed: ", value)
random.seed(value)
torch.manual_seed(value)
torch.cuda.manual_seed_all(value)
np.random.seed(value)
def create_folder(folder):
try:
os.makedirs(folder)
except OSError:
pass
class AlpacaDataset(Dataset):
def __init__(self, json_path: str, tokenizer: PreTrainedTokenizer, max_length: int):
self.data = []
self.tokenizer = tokenizer
self.max_length = max_length
# Load and filter data
with open(json_path, 'r', encoding='utf-8') as f:
raw_data = json.load(f)
for entry in raw_data:
instruction = entry.get("instruction", "")
input_text = entry.get("input", "")
output_text = entry.get("output", "")
if not output_text:
continue # Skip if output is empty
# Combine instruction, input, and output
combined_text = f"Instruction: {instruction} Input: {input_text} Output: {output_text}"
self.data.append(combined_text)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
text = self.data[idx]
encoded = self.tokenizer(
text,
max_length=self.max_length,
padding='max_length',
truncation=True,
return_tensors='pt'
)
input_ids = encoded['input_ids'].squeeze(0) # Shape: (max_length,)
attention_mask = encoded['attention_mask'].squeeze(0)
# Create target data by shifting input_ids
target_ids = input_ids.clone()
target_ids[:-1] = input_ids[1:] # Shift left by one
target_ids[-1] = -100 # Set the last token to be ignored
return input_ids, target_ids, attention_mask
def get_alpaca_dataloader(json_path: str, tokenizer: PreTrainedTokenizer, batch_size: int, max_length: int):
dataset = AlpacaDataset(json_path, tokenizer, max_length)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
return dataloader
def train_stage1(cfg, combined_model, base_model_input, input_texts, criterion, ddp_stats):
with torch.no_grad():
outputs = combined_model.forward_base_model(input_ids=base_model_input, attention_mask=torch.ones_like(base_model_input),
output_hidden_states=True, use_cache=False)
embeds = outputs.hidden_states[-1]
# print(embeds.shape) # torch.Size([8, 124, 4096])
preds = combined_model.forward_speculator(embeds.detach(), input_texts)
# print("preds", preds, preds.shape) # 3, 8, 124, 128256
# assert 1==0
losses = []
for i in range(preds.size(0)):
targ = input_texts[:, i + 1: preds.size(2) + i + 1]
# print(targ)
loss = criterion(preds[i].reshape(-1, preds.size(3)), targ.long().reshape(-1))
losses.append(loss)
ddp_stats[2 + i] += loss.item()
total_loss = sum(losses)
return total_loss, ddp_stats, input_texts.numel()
def train_stage2(cfg, combined_model, base_model_input, input_texts, criterion, ddp_stats):
with torch.no_grad():
        outputs = combined_model.base_model.generate(input_ids=base_model_input, return_dict_in_generate=True, output_hidden_states=True)  # .generate lives on base_model, not on the bound forward_base_model method
targs = outputs.sequences
embeds = outputs.hidden_states[-1]
preds = combined_model.forward_speculator(embeds.detach(), targs[:, :-1].detach())
losses = []
for i in range(preds.size(0)):
targ = targs[:, i + 1: preds.size(2) + i + 1]
loss = criterion(preds[i].reshape(-1, preds.size(3)), targ.long().reshape(-1))
losses.append(loss)
ddp_stats[2 + i] += loss.item()
total_loss = sum(losses)
return total_loss, ddp_stats, targs.numel()
def train(args):
# writer = SummaryWriter(log_dir=args.tensorboard_path)
create_folder(args.outf)
set_seed(args.manualSeed)
cudnn.benchmark = True
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank) if args.cuda else torch.device("cpu")
# tokenizer = AutoTokenizer.from_pretrained("/path_to/LLama3/8B-ins-hf/")
tokenizer = AutoTokenizer.from_pretrained("/path_to/Mistral-Large-Instruct-2407/")
# tokenizer = AutoTokenizer.from_pretrained("/path_to/Mistral-Small-Instruct-2409")
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'left'
# model = AutoModelForCausalLM.from_pretrained("/path_to/LLama3/8B-ins-hf/")
model = AutoModelForCausalLM.from_pretrained(
"/path_to/Mistral-Large-Instruct-2407/",
torch_dtype=torch.bfloat16,
# device_map="auto",
low_cpu_mem_usage=True
)
# model = AutoModelForCausalLM.from_pretrained("/path_to/Mistral-Small-Instruct-2409")
emb_dim = model.config.hidden_size
vocab_size = model.config.vocab_size
speculator = MLPSpeculator(
emb_dim,
4096, # speculator_width
vocab_size,
3, # n_speculator_heads
tie_weights=True,
scale_input=True,
)
speculator.reset_parameters()
combined_model = CombinedModel(model, speculator)
del model
del speculator
criterion = nn.CrossEntropyLoss()
model_engine_combined, optimizer, _, _ = deepspeed.initialize(args=args, model=combined_model,
model_parameters=combined_model.speculator.parameters(),
# optimizer=optimizer
)
deepspeed.init_distributed()
torch.cuda.synchronize()
dataloader = get_alpaca_dataloader('/path_to/alpaca_data.json',
tokenizer,
batch_size=8,
max_length=128)
ddp_stats = torch.zeros(2 + combined_model.speculator.n_predict).to(device)
start = time()
for epoch in range(args.epochs):
for i, (input_ids, target_ids, attention_mask) in enumerate(dataloader, 0):
input_texts = input_ids.to(device)
base_model_input = input_texts[:, :-combined_model.speculator.n_predict - 1]
optimizer.zero_grad()
if i < len(dataloader) // 2: # First half of the training: Stage 1
loss, ddp_stats, _ = train_stage1(args, combined_model, base_model_input, input_texts, criterion,
ddp_stats)
else: # Second half of the training: Stage 2
assert 1==0
loss, ddp_stats, _ = train_stage2(args, combined_model, base_model_input, input_texts, criterion,
ddp_stats)
# print(loss)
model_engine_combined.backward(loss)
# optimizer.step()
model_engine_combined.step()
if i % 10 == 0:
print('EPOCH [%d/%d] ITER [%d/%d] Loss: %.4f' % (epoch, args.epochs, i, len(dataloader), loss.item()))
# writer.add_scalar("Loss", loss.item(), epoch * len(dataloader) + i)
torch.cuda.synchronize()
stop = time()
print(f"total wall clock time for {args.epochs} epochs is {stop - start} secs")
def main():
parser = get_argument_parser()
parser = deepspeed.add_config_arguments(parser)
args = parser.parse_args()
train(args)
if __name__ == "__main__":
main()
```
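For scale, here is the back-of-envelope estimate I used for the optimizer-state memory that only the trainable speculator should incur (the parameter count below is hypothetical, just to show the arithmetic):

```python
# Adam in fp32 keeps roughly 3 extra fp32 tensors per trainable parameter
# (an fp32 master copy plus two moment buffers) -> ~12 extra bytes/parameter.
def adam_state_gib(n_params: int, bytes_per_state: int = 4, n_states: int = 3) -> float:
    return n_params * bytes_per_state * n_states / 2**30

print(round(adam_state_gib(1_600_000_000), 1))  # ~17.9 GiB for a hypothetical 1.6B-param speculator
```

Since the base model is frozen, none of this state should be allocated for it, which is why the OOM at `deepspeed.initialize()` surprises me.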
And the speculator.py is adapted from https://github.com/foundation-model-stack/fms-extras/blob/main/fms_extras/models/speculator.py
```
import torch.nn as nn
import torch
import math
from typing import Dict, List, Tuple, Set, Any, Optional
import torch.nn.functional as F
class LayerNormParameterized(nn.Module):
"""
A generalized LayerNorm implementation. With all optional arguments set to True, equivalent to nn.LayerNorm up to epsilon stabilization term
(this class divides inputs by min(norm, eps), while nn.LayerNorm divides by norm + eps).
...
Args
----
normalized_shape : int
Dimensionality of input data (size of final tensor axis)
eps : float
Safety term to prevent division by zero. Make sure the chosen value fits in the range of your encoding scheme (i.e. fp16 requires eps >= 6e-8).
elementwise_scale : bool
Include a learned scaling term after normalization?
elementwise_shift : bool
Include a learned bias term after normalization?
use_mean : bool
Recenter inputs around zero before normalizing, or just rescale?
"""
def __init__(
self,
normalized_shape,
eps=1e-06,
elementwise_scale=True,
elementwise_shift=False,
use_mean=False,
use_high_precision_pow=False,
):
super(LayerNormParameterized, self).__init__()
self.normalized_shape = normalized_shape
self.eps = eps
self.elementwise_scale = elementwise_scale
self.elementwise_shift = elementwise_shift
self.use_mean = use_mean
self.use_high_precision_pow = use_high_precision_pow
if self.elementwise_scale:
self.weight = nn.Parameter(torch.empty(self.normalized_shape))
# else:
# self.register_parameter("weight", None)
if self.elementwise_shift:
self.bias = nn.Parameter(torch.empty(self.normalized_shape))
# else:
# self.register_parameter("bias", None)
def reset_parameters(self):
if self.elementwise_scale:
self.weight.data.fill_(1)
if self.elementwise_shift:
self.bias.data.zero_()
def forward(self, x):
if self.use_mean:
x = x - x.mean(-1, keepdim=True)
# x = F.normalize(x, dim=-1)*math.sqrt(x.size(-1))
xf = x
if self.use_high_precision_pow:
xf = x.float()
xf = xf * torch.rsqrt(xf.pow(2).mean(-1, keepdim=True) + self.eps)
x = xf.type_as(x)
if self.elementwise_scale:
x = self.weight * x
if self.elementwise_shift:
x = x + self.bias
return x
class MLPSpeculator(nn.Module):
"""
This is a simple MLP-based speculator that functions similarly to Medusa
(https://arxiv.org/abs/2401.10774), ingesting context via the final embedding
vector from the base model. However, this model also conditions on previously
predicted tokens, similarly to an RNN, allowing it to generate better-quality n-grams.
The architecture is as flat and simple as possible: for each prediction head,
the current state vector is projected into a new latent space and added to the
previous token's embedding. This sum goes through layernorm and activation, forming
the new state vector. This state predicts the next token (or set of candidate tokens)
for the current head, and then is passed on to the next.
...
Args
----
emb_dim : int
Dimensionality of the input vector from the base model.
inner_dim : int
Latent dimensionality of the speculator model.
vocab_size : int
Number of entries in the tokenizer associated with the base model.
n_predict : int
Number of heads / number of tokens to guess ahead. Model size and speed scale with this value.
tie_weights : bool
If true, use a single set of weights for every model head/stage after the first.
The initial projection from the base model may have a different size, so that stays separate.
scale_input: bool
If true, apply an extra layernorm to the initial state vector input.
Helps training dynamics, particularly when base model output has unusual scale.
"""
def __init__(
self,
emb_dim=4096,
inner_dim=0,
vocab_size=32000,
n_predict=3,
tie_weights=False,
scale_input=False,
):
super().__init__()
self.n_predict = n_predict
self.emb_dim = emb_dim
inner_dim = inner_dim if inner_dim != 0 else emb_dim
self.inner_dim = inner_dim
self.vsize = vocab_size
self.scale_input = scale_input
self.emb = nn.ModuleList(
[nn.Embedding(vocab_size, inner_dim) for _ in range(n_predict)]
)
self.proj = nn.ModuleList(
[
nn.Linear((emb_dim if i == 0 else inner_dim), inner_dim, bias=False)
for i in range(n_predict)
]
)
self.head = nn.ModuleList(
[nn.Linear(inner_dim, vocab_size, bias=False) for _ in range(n_predict)]
)
self.ln = nn.ModuleList(
[
LayerNormParameterized(
inner_dim, elementwise_shift=True, elementwise_scale=True
)
for _ in range(n_predict)
]
)
if self.scale_input:
self.ln0 = LayerNormParameterized(
emb_dim, elementwise_shift=False, elementwise_scale=False
)
# Weights ensure that state_0 accounts for 50% of state magnitude by final head in expectation
self.state_weight = 0.5 ** (0.5 / n_predict)
self.emb_weight = math.sqrt((1 - self.state_weight**2) * (self.inner_dim / 2))
self.activation = nn.GELU()
# Handle weight tying as specified
if tie_weights:
assert (
n_predict > 1
), "You cannot tie weights between stages when only 1 exists"
for emb in self.emb:
emb.weight = self.emb[0].weight
for head in self.head:
head.weight = self.head[0].weight
for ln in self.ln:
ln.weight = self.ln[0].weight
ln.bias = self.ln[0].bias
# Since first proj has different size, allow different initial proj from base into model
for i in range(2, n_predict):
self.proj[i].weight = self.proj[1].weight
def reset_parameters(self):
for m in self.modules():
if isinstance(m, nn.Embedding) or isinstance(m, nn.Linear):
nn.init.normal_(m.weight, 0, 1 / math.sqrt(self.inner_dim))
elif isinstance(m, LayerNormParameterized) and hasattr(m, "weight"):
m.weight.data.fill_(1)
m.bias.data.zero_()
def generate_suffixes(
self,
state: torch.Tensor,
ind: torch.Tensor,
topk: List[int] = [5, 4, 3],
n: int = 5,
) -> torch.Tensor:
"""
FOR INFERENCE
Generate tree of candidate sequences.
...
Args
----
state : torch.Tensor
Most recent embedding vector from the base model (pre-classification head).
Expects size [b 1 d] where b is batch size and d is model width.
ind : torch.Tensor
Token indices of the base model's most recent predicted token(s).
Expects size [b 1] where b is batch size.
topk : List(int)
Number of tokens to consider from each head when forming the candidate tree.
For each candidate branch in the tree, head n produces topk[n] additional sub-branches.
n : int
Given the final tree of prod(topk) candidates, return only the top n most confident.
...
Output : torch.Tensor
The tensor of most likely candidate sequences.
Has size [b n self.n_predict], where b is batch size and n is provided above.
"""
# k indicates # of candidates
# h indicates # of generated tokens
b = state.size(0)
k = math.prod(topk)
out = torch.empty(
b, 1, k, self.n_predict, device=state.device
).int() # b 1 k h -> b k 1 h
log_probs = torch.zeros(b, 1, k, device=state.device) # b 1 k -> b k 1
assert (
len(topk) == self.n_predict
), f"You must provide a topk number for each head ({self.n_predict} heads, {len(topk)} provided)"
if self.scale_input:
state = self.ln0(state) / (2**0.5)
for i in range(self.n_predict):
# Project and predict
z = self.emb[i](ind) # b k d
state = self.proj[i](state)
# Weighted add of state_weight*state and emb_weight*z
# Let subsequent LN take care of denominator
# state_weight is close to 1, so shouldn't be any precision issues
state = torch.add(state, z, alpha=self.emb_weight / self.state_weight)
state = self.activation(self.ln[i](state)) # b k d
probs = F.log_softmax(self.head[i](state), dim=2) # b k v
probs, preds = probs.topk(topk[i], dim=2) # b k k'
# Update candidate set with new predictions, repeating shared prefixes as needed
out = out.view(b, preds.size(1) * preds.size(2), -1, self.n_predict)
out[:, :, :, i] = preds.view(b, -1, 1)
# Update state, log_probs and ind for new predictions
state = state.unsqueeze(2).expand(-1, -1, topk[i], -1) # b k k' d
state = state.reshape(b, -1, state.size(3)) # b kk' d
ind = preds.view(b, -1) # b kk'
log_probs = log_probs.view(b, probs.size(1) * probs.size(2), -1)
log_probs = log_probs.add(probs.view(b, -1, 1))
# Take only top n best guesses
out = out.view(b, k, self.n_predict)
log_probs = log_probs.view(b, k)
best_guesses = log_probs.topk(n, dim=1)[1] # b k
return out.gather(
1, best_guesses.unsqueeze(2).expand(-1, -1, self.n_predict)
) # b n h
def forward(
self,
state: torch.Tensor,
inds: torch.Tensor,
) -> torch.Tensor:
"""
FOR TRAINING
A parallel forward pass on pre-existing ground-truth tokens in pretraining contexts.
Produces self.n_predict predicted tokens for each token embedding in state.
Inds requires self.n_predict extra tokens on the right to "simulate" recursive
behavior for end positions.
...
Args
----
state : torch.Tensor
Embedding vectors from the base model for a given sequence.
Expects size [b n d] where b is batch size, n is seq len, and d is model width.
inds : torch.Tensor
Ground-truth token indices. inds[:,i] is the prediction coming from state[:,i]
(or the legal fiction ground truth corresponding to that prediction).
Expects size [b n+self.n_predict].
...
Output : torch.Tensor
Prediction logits at each position, for each head of the speculator.
Has size [self.n_predict b n v] where v is vocab size.
"""
out = []
if self.scale_input:
state = self.ln0(state) / (2**0.5)
for i in range(self.n_predict):
z = self.emb[i](inds[:, i : i + state.size(1)]) # b n d
state = self.proj[i](state)
# Weighted add of state_weight*state and emb_weight*z
# Let subsequent LN take care of denominator
# state_weight is close to 1, so shouldn't be any precision issues
state = torch.add(state, z, alpha=self.emb_weight / self.state_weight)
state = self.activation(self.ln[i](state)) # b n d
out.append(self.head[i](state)) # b n v
return torch.stack(out, dim=0) # h b n v
```
My command is
```
deepspeed --hostfile=hostfile --num_nodes=12 --num_gpus=8 main_new.py --cuda --deepspeed_config deepspeed_config.json
```
The error is (from one rank):
```
worker-5: [rank22]: File "/nvfile-data/thu/hehaowei/codellama-main/fms_hhw/main_new.py", line 307, in <module>
worker-5: [rank22]: main()
worker-5: [rank22]: File "/nvfile-data/thu/hehaowei/codellama-main/fms_hhw/main_new.py", line 303, in main
worker-5: [rank22]: train(args)
worker-5: [rank22]: File "/nvfile-data/thu/hehaowei/codellama-main/fms_hhw/main_new.py", line 242, in train
worker-5: [rank22]: model_engine_combined, optimizer, _, _ = deepspeed.initialize(args=args, model=combined_model,
worker-5: [rank22]: File "/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/deepspeed/__init__.py", line 181, in initialize
worker-5: [rank22]: engine = DeepSpeedEngine(args=args,
worker-5: [rank22]: File "/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 262, in __init__
worker-5: [rank22]: self._configure_distributed_model(model)
worker-5: [rank22]: File "/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1103, in _configure_distributed_model
worker-5: [rank22]: self.module.to(self.device)
worker-5: [rank22]: File "/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1174, in to
worker-5: [rank22]: return self._apply(convert)
worker-5: [rank22]: File "/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
worker-5: [rank22]: module._apply(fn)
worker-5: [rank22]: File "/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
worker-5: [rank22]: module._apply(fn)
worker-5: [rank22]: File "/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
worker-5: [rank22]: module._apply(fn)
worker-5: [rank22]: [Previous line repeated 3 more times]
worker-5: [rank22]: File "/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/torch/nn/modules/module.py", line 805, in _apply
worker-5: [rank22]: param_applied = fn(param)
worker-5: [rank22]: File "/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in convert
worker-5: [rank22]: return t.to(
worker-5: [rank22]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 672.00 MiB. GPU 6 has a total capacity of 79.33 GiB of which 211.81 MiB is free. Process 1600879 has 79.11 GiB memory in use. Of the allocated memory 78.70 GiB is allocated by PyTorch, and 592.50 KiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
**Expected behavior**
I found other code suggesting that a full Mistral-Large-2407 (a 123B model) can be trained on 12 nodes of 8x80 GB GPUs using DeepSpeed ZeRO-3, with plenty of free GPU memory left over. So when I add this speculator, which is a very small customized model, there should not be a CUDA OOM error.
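A back-of-envelope sketch supporting that expectation (the byte counts are my assumptions: bf16 weights and gradients, fp32 Adam master weights plus two moments):

```python
def zero3_bytes_per_gpu(n_params, world_size,
                        param_bytes=2,    # bf16 weights (assumption)
                        grad_bytes=2,     # bf16 gradients (assumption)
                        optim_bytes=12):  # fp32 Adam master + two moments
    """ZeRO-3 shards weights, grads and optimizer state across all ranks."""
    return n_params * (param_bytes + grad_bytes + optim_bytes) / world_size

world = 12 * 8  # 12 nodes x 8 GPUs
base = zero3_bytes_per_gpu(123e9, world)
# ~0.84B-parameter speculator: emb_dim=inner_dim=4096, vocab=32000, 3 heads
spec = zero3_bytes_per_gpu(3 * (32000 * 4096 * 2 + 4096 * 4096), world)
print(f"base model ~{base / 2**30:.1f} GiB/GPU")  # roughly 19 GiB
print(f"speculator ~{spec / 2**20:.0f} MiB/GPU")  # only on the order of 100 MiB more
```

By this estimate the speculator adds only a tiny sharded footprint, so the OOM is more likely caused by activations, non-sharded buffers, or the combined model not actually being partitioned under ZeRO-3.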
**ds_report output**
```
[2024-11-11 14:38:53,778] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
def forward(ctx, input, weight, bias=None):
/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
def backward(ctx, grad_output):
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/torch']
torch version .................... 2.4.0+cu121
deepspeed install path ........... ['/root/miniconda/envs/hurry_up_hhw_h/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.14.4, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.4
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 1007.84 GB
```
| closed | 2024-11-11T14:50:56Z | 2024-11-15T01:36:13Z | https://github.com/deepspeedai/DeepSpeed/issues/6737 | [
"bug",
"training"
] | 962086838 | 4 |
python-gino/gino | asyncio | 559 | Inserting millions of rows. Much much slower than SQLite. Am I doing it wrong? How can I improve throughput? | * GINO version: 0.8.3
* Python version: 3.7.3
* asyncpg version: 0.18.3
* aiocontextvars version: 0.2.2
* PostgreSQL version: 9.6
### Description
I am building something similar/based on: https://github.com/p3pperp0tts/leaks_parser.
Parsing GBs of text files and inserting them into a database. The difference between what I am doing and p3pperp0tts/leaks_parser is that I am only inserting emails/passwords, and I have a unique constraint on the email/password combination (**could that be what is causing such a slowdown?**).
The p3pperp0tts/leaks_parser parser goes *much* *much* faster, e.g. parsing a 500 MB compressed .tar.gz archive of txt files into a 7.9 GB SQLite database takes maybe a few minutes.
### What I Did
```
for line in read_file.read().splitlines():
email, password = parseline(line)
if email:
save_credentials = await Credentials.create(id_data_archive_file=data_archive_file.id, email=email, password=password)
```
Whereas with a single text file from an archive, inserting into PostgreSQL with the loop above manages maybe 10k rows in 20-30 seconds (guesstimating). It's a really large difference when I am trying to go through millions of rows.
_Just wondering why SQLite is so much faster_. **Is there something I am doing wrong? Is there another way I should be trying to accomplish this?**
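Batching is the usual answer here: one awaited network round-trip per row is what makes per-row `create()` slow, not PostgreSQL itself. A sketch of the batching side (the commented database calls are illustrative and assume direct asyncpg access, not the issue's actual code; COPY cannot skip duplicates, so with the unique constraint a batched `INSERT ... ON CONFLICT DO NOTHING` may fit better):

```python
import itertools

def chunked(iterable, size):
    """Yield successive lists of at most `size` items."""
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical wiring (names are illustrative, not from the issue's code):
#   conn = ...  # a raw asyncpg connection
#   for batch in chunked(rows, 1000):   # rows of (archive_id, email, password)
#       await conn.copy_records_to_table(
#           "credentials", records=batch,
#           columns=["id_data_archive_file", "email", "password"])
# or, to respect the unique constraint:
#       await conn.executemany(
#           "INSERT INTO credentials (id_data_archive_file, email, password)"
#           " VALUES ($1, $2, $3) ON CONFLICT DO NOTHING", batch)
```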
I was expecting PostgreSQL to be on par with SQLite ... but I'm pretty newb to working with databases. | closed | 2019-10-07T00:29:48Z | 2019-10-12T11:47:59Z | https://github.com/python-gino/gino/issues/559 | [
"question"
] | brizzbane | 8 |
identixone/fastapi_contrib | pydantic | 170 | Why does Serializer not output the "_id" field? | My model needs to output the `_id` field to the frontend of my project.
```python
def dict(self, *args, **kwargs) -> dict:
"""
Removes excluded fields based on `Meta` and `kwargs`
:return: dict of serializer data fields
"""
exclude = kwargs.get("exclude")
if not exclude:
exclude = set()
exclude.update({"_id"})  # <-- this unconditionally adds "_id" to the excluded set
if hasattr(self.Meta, "exclude") and self.Meta.exclude:
exclude.update(self.Meta.exclude)
if (
hasattr(self.Meta, "write_only_fields")
and self.Meta.write_only_fields
):
exclude.update(self.Meta.write_only_fields)
kwargs.update({"exclude": exclude})
original = super().dict(*args, **kwargs)
return original
``` | open | 2021-05-10T06:43:33Z | 2021-06-03T14:06:33Z | https://github.com/identixone/fastapi_contrib/issues/170 | [] | ChandlerBent | 1 |
ludwig-ai/ludwig | computer-vision | 3,960 | Dependency issue | **Describe the bug**
When importing ludwig.backend and initializing the ray cluster I am getting the following error:
/Users/robertheise/Documents/SD/accelator/venv/lib/python3.11/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
Traceback (most recent call last):
File "/Users/robertheise/Documents/SD/accelator/model.py", line 46, in <module>
backend = initialize_backend(backend_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/robertheise/Documents/SD/accelator/venv/lib/python3.11/site-packages/ludwig/backend/__init__.py", line 109, in initialize_backend
backend = create_backend(**backend)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/robertheise/Documents/SD/accelator/venv/lib/python3.11/site-packages/ludwig/backend/__init__.py", line 103, in create_backend
return backend_registry[type](**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/robertheise/Documents/SD/accelator/venv/lib/python3.11/site-packages/ludwig/backend/__init__.py", line 79, in create_ray_backend
from ludwig.backend.ray import RayBackend
File "/Users/robertheise/Documents/SD/accelator/venv/lib/python3.11/site-packages/ludwig/backend/ray.py", line 23, in <module>
import dask
ModuleNotFoundError: No module named 'dask'
**To Reproduce**
backend_config = {
"type": "ray",
"processor": {
"parallelism": 6,
"type": "dask",
},
"trainer": {
"use_gpu": False,
"num_workers": 3,
"resources_per_worker": {
"CPU": 2,
"GPU": 0,
},
},
}
backend = initialize_backend(backend_config)
**Expected behavior**
I would expect all required dependencies to be installed as part of `pip install ludwig`.
**Environment (please complete the following information):**
- OS: macOS (MacBook Pro, Apple M1)
- OS version: Ventura 13.3.1 (a)
- Python version: 3.11
- Ludwig version: v0.10.0
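A small diagnostic sketch (the package list and the extra name are my assumptions, not from Ludwig's setup metadata) to check for the missing optional dependencies before initializing the Ray backend:

```python
import importlib.util

def missing_ray_backend_deps():
    """Names of optional packages the Ray backend needs but that are not
    importable. The package list here is an assumption, not Ludwig's
    official extras definition."""
    wanted = ["dask", "ray"]
    return [name for name in wanted if importlib.util.find_spec(name) is None]

missing = missing_ray_backend_deps()
if missing:
    print("install before using backend='ray':", ", ".join(missing))
    # e.g.  pip install 'ludwig[distributed]'  (extra name is an assumption)
```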
| closed | 2024-03-08T21:22:40Z | 2024-03-11T18:31:29Z | https://github.com/ludwig-ai/ludwig/issues/3960 | [] | robhheise | 8 |
BeanieODM/beanie | pydantic | 574 | Search operators without fields | ### Discussed in https://github.com/roman-right/beanie/discussions/570
<div type='discussions-op-text'>
<sup>Originally posted by **akriese** May 23, 2023</sup>
Hi, I want to be able to use query operators like Eq, GT etc. without fields. One simple use case is, that I want to use ElemMatch on a list of numbers. Like so:
```python
some_id: PydanticObjectId = ...
ElemMatch(UserRelation.users, Eq(some_id)) # UserRelation.users is a list of ids
# instead of
ElemMatch(UserRelation.users, {"$eq": some_id}) # this doesnt seem very beanie'ish to me :)
# or
threshold: int = 50
ElemMatch(User.bucket, GT(50))
# instead of
ElemMatch(User.bucket, {"$gt": 50}) # same here as above
```
In both cases, we can omit the field name, as it is about a simple element. The MongoDB queries would look like this:
```
{ "users": { "$elemMatch": { "$eq": some_id }}}
and
{ "bucket": { "$elemMatch": { "$gt": 50 }}}
```
where both times the operator part doesn't include a field name.
So finally, my question is: how would you tackle this small problem? Would you just go with the inline query dict, or is there another way of doing this in Beanie, or even in the MongoDB query itself?
Alternatively, I am thinking of making the field parameter of `BaseFindComparisonOperator` optional, and if it is not given, then the query dict is constructed without it:
```python
class BaseFindComparisonOperator(BaseFindOperator):
operator = ""
def __init__(
self,
field = None,
other = None,
):
self.field = field
self.other = other
@property
def query(self):
inner = {self.operator: self.other}
if self.field is None:
return inner
return {self.field: inner}
```
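A self-contained sketch of the idea (plain Python, no Beanie imports; these classes are stand-ins, not Beanie's real ones). Note that in the snippet above a call like `Eq(some_id)` would bind `some_id` to `field`, so a single positional argument needs to be re-interpreted as the comparison value:

```python
class BaseFindComparisonOperator:
    operator = ""

    def __init__(self, field=None, other=None):
        # Single positional argument -> field-less form (caveat: this
        # sketch cannot express "field compared to None" positionally).
        if other is None and field is not None:
            field, other = None, field
        self.field = field
        self.other = other

    @property
    def query(self):
        inner = {self.operator: self.other}
        if self.field is None:
            return inner                 # e.g. {"$gt": 50}
        return {str(self.field): inner}  # e.g. {"age": {"$gt": 21}}


class Eq(BaseFindComparisonOperator):
    operator = "$eq"


class GT(BaseFindComparisonOperator):
    operator = "$gt"


class ElemMatch(BaseFindComparisonOperator):
    operator = "$elemMatch"

    def __init__(self, field, inner_op):
        super().__init__(field, inner_op.query)


print(ElemMatch("bucket", GT(50)).query)
# -> {'bucket': {'$elemMatch': {'$gt': 50}}}
```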
What do you think? Happy to hear your opinion :)</div> | open | 2023-05-25T19:24:05Z | 2024-12-08T21:53:54Z | https://github.com/BeanieODM/beanie/issues/574 | [
"feature request"
] | roman-right | 0 |
PablocFonseca/streamlit-aggrid | streamlit | 259 | Row selection events not reported | Row selection events are not reported. I took the examples found on the web; they do not report the selection events they are supposed to. | closed | 2024-03-30T18:29:46Z | 2024-04-23T01:17:21Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/259 | [] | mmf431 | 14 |
ets-labs/python-dependency-injector | flask | 240 | Add Python 3.8 support | Python 3.8.0 has been available since October 2019, and support for it needs to be added.
Links:
- https://www.python.org/downloads/release/python-380/ | closed | 2020-01-24T02:09:13Z | 2020-01-29T18:33:53Z | https://github.com/ets-labs/python-dependency-injector/issues/240 | [
"enhancement"
] | rmk135 | 0 |
keras-team/keras | data-science | 21,076 | score calculation within callback | ```python
import keras
def get_model():
model = keras.Sequential()
model.add(keras.layers.Dense(1))
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
loss="mean_squared_error",
metrics=["mean_absolute_error"],
)
return model
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
x_train = x_train[:1000]
y_train = y_train[:1000]
x_test = x_test[:1000]
y_test = y_test[:1000]
class CustomCallback(keras.callbacks.Callback):
def __init__(self, x, y):
super().__init__()
self.x = x
self.y = y
def on_epoch_end(self, epoch, logs=None):
y_pred = self.model.predict(self.x, verbose=0)
score = self.model.compute_metrics(self.x, self.y, y_pred, sample_weight=None)
print()
print(score)
model = get_model()
model.fit(
x_train,
y_train,
batch_size=256,
epochs=5,
verbose=1,
callbacks=[CustomCallback(x_train, y_train)],
)
```
```bash
Epoch 1/5
1/4 ━━━━━━━━━━━━━━━━━━━━ 1s 526ms/step - loss: 25.3436 - mean_absolute_error: 4.2441
{'loss': 242.523193359375, 'mean_absolute_error': 6.016280174255371}
4/4 ━━━━━━━━━━━━━━━━━━━━ 1s 57ms/step - loss: 256.4755 - mean_absolute_error: 10.3880
Epoch 2/5
1/4 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step - loss: 6.6658 - mean_absolute_error: 2.1646
{'loss': 6.03378438949585, 'mean_absolute_error': 1.8999731540679932}
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 40ms/step - loss: 6.2043 - mean_absolute_error: 2.0708
Epoch 3/5
1/4 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step - loss: 4.1691 - mean_absolute_error: 1.6324
{'loss': 4.564587593078613, 'mean_absolute_error': 1.7218464612960815}
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 42ms/step - loss: 4.4746 - mean_absolute_error: 1.7039
Epoch 4/5
1/4 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step - loss: 4.4333 - mean_absolute_error: 1.7299
{'loss': 4.227972030639648, 'mean_absolute_error': 1.6317805051803589}
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 43ms/step - loss: 4.2346 - mean_absolute_error: 1.6612
Epoch 5/5
1/4 ━━━━━━━━━━━━━━━━━━━━ 0s 24ms/step - loss: 3.7971 - mean_absolute_error: 1.5549
{'loss': 5.39981746673584, 'mean_absolute_error': 2.682666063308716}
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 42ms/step - loss: 4.6834 - mean_absolute_error: 1.7220
<keras.src.callbacks.history.History at 0x7fa5642bba60>
```
1. About the log printing: why does the callback's output appear in the middle of the epoch's progress bar, when `on_epoch_end` should run only after the epoch finishes?
```bash
Epoch 1/5
1/4 ━━━━━━━━━━━━━━━━━━━━ 1s 526ms/step - loss: 25.3436 - mean_absolute_error: 4.2441
{'loss': 242.523193359375, 'mean_absolute_error': 6.016280174255371}
4/4 ━━━━━━━━━━━━━━━━━━━━ 1s 57ms/step - loss: 256.4755 - mean_absolute_error: 10.3880
```
2. About the score: it should match, but the callback gives loss 242 while the logs give 256, and the callback gives MAE 6.01 while the log gives 10.3. | open | 2025-03-20T20:18:57Z | 2025-03-21T11:10:29Z | https://github.com/keras-team/keras/issues/21076 | [
"keras-team-review-pending"
] | pure-rgb | 2 |
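An aside on the Keras issue above: the mismatch is consistent with how the `verbose=1` progress bar works. It reports a running average of per-batch values over the epoch, while the callback evaluates once with the weights at that moment (and calling `compute_metrics` in a callback likely also mixes in stateful metric accumulators unless they are reset). A minimal, framework-free sketch of the running-average effect:

```python
# The verbose=1 progress bar reports a running average of per-batch values
# accumulated over the epoch; an end-of-epoch evaluation is a single pass,
# so the two numbers need not agree.
batch_losses = [25.3, 310.0, 340.0, 350.0]  # illustrative numbers only

def running_average(values):
    total = 0.0
    for i, v in enumerate(values, 1):
        total += v
        yield total / i

bar_values = list(running_average(batch_losses))
print(bar_values[0])   # what the bar shows at 1/4
print(bar_values[-1])  # what the bar shows at 4/4: the epoch-wide average
```

The interleaved printing has the same flavor: the bar line for the last batch is only finalized after all callbacks run, so a plain `print` from `on_epoch_end` lands visually "inside" the bar.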
hankcs/HanLP | nlp | 1,009 | pyhanlp: garbled Chinese filenames when extracting data-for-1.6.8.zip under Python 3 | <!--
The checklist and version number are required; issues without them will not be answered. If you want a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the following documents and found no answer in them:
  - [Home page documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that this open-source community is formed by volunteers out of personal interest and bears no responsibilities or obligations. I will be polite and thank everyone who helps me.
* [x] I type an x inside the brackets to confirm the items above.
## Version
<!-- For release builds, state the jar filename without the extension; for the GitHub repo version, state whether it is the master or portable branch -->
The current latest version is: 1.6.8
The version I am using is: 1.6.8
<!-- Everything above is required; feel free to elaborate below -->
## My question
On its first import, pyhanlp downloads and extracts the data files. In practice, under Python 3, extracting data-for-1.6.8 directly with `zipfile` produces garbled Chinese filenames (several files under the custom directory). Python 2 does not have this problem; see the two links at the end.
## Reproducing the problem
Manually run the code in `static.__init__.py`:
https://github.com/hankcs/pyhanlp/blob/master/pyhanlp/static/__init__.py#L241
### Trigger code
```python
with zipfile.ZipFile("data-for-1.6.8.zip", "r") as f:
    for fn in f.namelist():
        print(fn)
```
### Expected output
The correct Chinese filenames
### Actual output
Garbled text
## Other information
See these two StackOverflow questions:
https://stackoverflow.com/questions/41019624/python-zipfile-module-cant-extract-filenames-with-chinese-characters
https://stackoverflow.com/questions/37723505/namelist-from-zipfile-returns-strings-with-an-invalid-encoding
By the way, after extracting I re-zipped the data on macOS with the system zip (`zip -r data data`) and the result was still garbled; compressing with 7z instead worked fine: `7z a -tzip data.zip data`. 7z also produces smaller archives, recommended.
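A sketch of a workaround on the extraction side (the GBK assumption is mine; the archive's actual encoding may differ): for entries without the UTF-8 flag, `zipfile` decodes names as cp437, so re-encoding to cp437 recovers the original bytes, which can then be decoded with the right codec.

```python
def fix_zip_name(raw_name: str, encoding: str = "gbk") -> str:
    """Undo zipfile's cp437 default for entries lacking the UTF-8 flag.

    Python 3's zipfile decodes such names as cp437; re-encoding to cp437
    recovers the original bytes, which are then decoded as GBK (an
    assumption here - the archive was presumably built on a GBK system).
    """
    try:
        return raw_name.encode("cp437").decode(encoding)
    except UnicodeError:
        return raw_name  # already correct, or not a cp437 round-trip

# Round-trip demo: a GBK name mis-read as cp437 is recovered intact.
garbled = "自定义词典.txt".encode("gbk").decode("cp437")
print(fix_zip_name(garbled))  # -> 自定义词典.txt
```

On Python 3.11+ the stdlib can also do this at open time when reading: `zipfile.ZipFile(path, metadata_encoding="gbk")`.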
| closed | 2018-11-02T13:31:40Z | 2020-01-01T10:56:11Z | https://github.com/hankcs/HanLP/issues/1009 | [
"ignored"
] | passerbythesun | 2 |
arogozhnikov/einops | tensorflow | 252 | Support JAX's distributed arrays | **Describe the bug**
Using `...` in `einops.rearrange` introduces extraneous reshape operations, where multiple dimensions are flattened into 1D and then reshaped back.
This is typically fine, but can be problematic in (at least) two contexts:
1. When using JAX's [distributed arrays](https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html), the reshape operation removes axis identity. This makes it harder (impossible?) for XLA to preserve the sharding of distributed arrays.
2. With non-C-contiguous arrays, `np.reshape` can entail a memory copy.
**Reproduction steps**
Using JAX:
```python
import einops
import jax
import numpy as np
x = np.zeros((2, 3, 4))
jax.make_jaxpr(lambda x: einops.rearrange(x, '... -> ...'))(x)
# { lambda ; a:f32[2,3,4]. let
# b:f32[24] = reshape[dimensions=None new_sizes=(24,)] a
# c:f32[2,3,4] = reshape[dimensions=None new_sizes=(2, 3, 4)] b
# in (c,) }
```
Using NumPy:
```python
import einops
import numpy as np
x = np.zeros((2, 3, 4), order='F')
y = einops.rearrange(x, '... -> ...')
x[...] = 1
print(y) # all zeros
````
**Expected behavior**
This operation should just be the identity, preserving the original array shapes and memory views:
- For the JAX example, the JAXpr would be just `{ lambda ; a:f32[2,3,4]. let in (a,) }`.
- For the NumPy example, the array `y` would be all ones after modifying `x`.
This is what happens currently if you use explicitly named dimensions:
```python
import einops
import jax
import numpy as np
x = np.zeros((2, 3, 4))
jax.make_jaxpr(lambda x: einops.rearrange(x, 'x y z -> x y z '))(x)
# { lambda ; a:f32[2,3,4]. let in (a,) }
```
```python
import einops
import numpy as np
x = np.zeros((2, 3, 4), order='F')
y = einops.rearrange(x, 'x y z -> x y z')
x[...] = 1
print(y) # all ones
```
More generally `...` should generate code equivalent to fully explicit dimension names.
**Your platform**
einops 0.6.1 | closed | 2023-04-21T01:21:52Z | 2023-10-02T03:50:52Z | https://github.com/arogozhnikov/einops/issues/252 | [
"enhancement"
] | shoyer | 4 |
collerek/ormar | pydantic | 632 | poetry add fails | When I run `poetry add ormar` I see this error:
```
$ poetry add ormar
Using version ^0.11.0 for ormar
Updating dependencies
Resolving dependencies... (0.0s)
SolverProblemError
Because no versions of ormar match >0.11.0,<0.12.0
and ormar (0.11.0) depends on SQLAlchemy (>=1.3.18,<=1.4.31), ormar (>=0.11.0,<0.12.0) requires SQLAlchemy (>=1.3.18,<=1.4.31).
So, because python-template depends on both SQLAlchemy (^1.4.36) and ormar (^0.11.0), version solving failed.
at venv/lib/python3.8/site-packages/poetry/puzzle/solver.py:241 in _solve
237│ packages = result.packages
238│ except OverrideNeeded as e:
239│ return self.solve_in_compatibility_mode(e.overrides, use_latest=use_latest)
240│ except SolveFailure as e:
→ 241│ raise SolverProblemError(e)
242│
243│ results = dict(
244│ depth_first_search(
245│ PackageNode(self._package, packages), aggregate_package_nodes
``` | closed | 2022-04-30T00:04:42Z | 2022-05-04T07:59:20Z | https://github.com/collerek/ormar/issues/632 | [
"bug"
] | mturilin | 1 |
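An aside on the ormar/poetry conflict above: the solver message says ormar 0.11.0 caps SQLAlchemy at <=1.4.31 while the project pins ^1.4.36, so relaxing the project's own pin lets resolution succeed. A hedged sketch (versions mirror the solver message, not a recommendation):

```toml
# pyproject.toml sketch
[tool.poetry.dependencies]
python = "^3.8"
ormar = "^0.11.0"
SQLAlchemy = ">=1.4,<=1.4.31"  # was "^1.4.36", which conflicts with ormar's cap
```

Alternatively, running `poetry add "SQLAlchemy<=1.4.31"` before adding ormar should have the same effect.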
brightmart/text_classification | tensorflow | 113 | tensorflow.python.framework.errors_impl.NotFoundError: Key is_training not found in checkpoint | Hi Mr.Brightmart
@brightmart, when I try to run FastText I get an error like this.
I am trying to run a01_FastText.
Steps:
1. python p6_fastTextB_train_multilabel.py
everything works fine, then I got fast_text_checkpoint_multi folder with file tree like following: fast_text_checkpoint_multi
├── checkpoint
├── model.ckpt-5.data-00000-of-00001
├── model.ckpt-5.index
├── model.ckpt-5.meta
├── model.ckpt-6.data-00000-of-00001
├── model.ckpt-6.index
├── model.ckpt-6.meta
├── model.ckpt-7.data-00000-of-00001
├── model.ckpt-7.index
├── model.ckpt-7.meta
├── model.ckpt-8.data-00000-of-00001
├── model.ckpt-8.index
├── model.ckpt-8.meta
├── model.ckpt-9.data-00000-of-00001
├── model.ckpt-9.index
└── model.ckpt-9.meta
2. I try to run
` python p5_fastTextB_predict_multilabel.py`
I got following errors:
```
started...
ended...
('cache_path:', 'cache_vocabulary_label_pik/_word_voabulary.pik', 'file_exists:', True)
('vocab_size:', 142040)
('create_voabulary_label_sorted.started.traning_data_path:', 'train-zhihu4-only-title-all.txt')
('length of total question lists:', 0)
('number_examples:', 0)
start padding....
end padding...
2019-03-21 12:12:45.623531: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
Restoring Variables from Checkpoint
$$$ fast_text_checkpoint_multi/model.ckpt-9
2019-03-21 12:12:45.669992: W tensorflow/core/framework/op_kernel.cc:1318] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key is_training not found in checkpoint
Traceback (most recent call last):
File "p5_fastTextB_predict_multilabel.py", line 101, in <module>
tf.app.run()
File "/usr/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "p5_fastTextB_predict_multilabel.py", line 66, in main
saver.restore(sess,tf.train.latest_checkpoint(FLAGS.ckpt_dir))
File "/usr/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1802, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/usr/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/usr/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/usr/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key is_training not found in checkpoint
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_INT32, DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_BOOL], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
Caused by op u'save/RestoreV2', defined at:
File "p5_fastTextB_predict_multilabel.py", line 101, in <module>
tf.app.run()
File "/usr/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "p5_fastTextB_predict_multilabel.py", line 62, in main
saver=tf.train.Saver()
File "/usr/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1338, in __init__
self.build()
File "/usr/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1347, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/usr/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1384, in _build
build_save=build_save, build_restore=build_restore)
File "/usr/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 835, in _build_internal
restore_sequentially, reshape)
File "/usr/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 472, in _AddRestoreOps
restore_sequentially)
File "/usr/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 886, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/usr/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1463, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/usr/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/usr/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
NotFoundError (see above for traceback): Key is_training not found in checkpoint
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_INT32, DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_BOOL], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
```
Could you help me with this? Thanks in advance.
| open | 2019-03-21T04:37:53Z | 2019-03-21T04:37:53Z | https://github.com/brightmart/text_classification/issues/113 | [] | luhawk803 | 0 |
python-gino/gino | asyncio | 238 | Can't load plugin: sqlalchemy.dialects:postgresql.asyncpg | * GINO version: 0.7.3
* Python version: Python 3.6.3
* asyncpg version: 0.15.0
* aiocontextvars version: 0.1.2
* PostgreSQL version: postgresql-9.2.23-3.el7_4.x86_64
I have an app using gino that works well on two Fedora machines running Python 3.6.5.
However, on a CentOS 7 machine running Python 3.6.3, gino refuses to work:
```
Traceback (most recent call last):
File "test.py", line 46, in <module>
asyncio.get_event_loop().run_until_complete(main())
File "/usr/lib64/python3.6/asyncio/base_events.py", line 467, in run_until_complete
return future.result()
File "test.py", line 15, in main
await db.set_bind('postgresql://localhost/gino')
File "/home/test/.local/lib/python3.6/site-packages/gino/api.py", line 386, in set_bind
bind = await create_engine(bind, loop=loop, **kwargs)
File "/home/test/.local/lib/python3.6/site-packages/gino/strategies.py", line 22, in create
dialect_cls = u.get_dialect()
File "/home/test/.local/lib/python3.6/site-packages/sqlalchemy/engine/url.py", line 171, in get_dialect
entrypoint = self._get_entrypoint()
File "/home/test/.local/lib/python3.6/site-packages/sqlalchemy/engine/url.py", line 156, in _get_entrypoint
cls = registry.load(name)
File "/home/test/.local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 221, in load
(self.group, name))
sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgresql.asyncpg
```
To make sure it's not an issue with my code, I tried the code in the gino readme, and it too produced the same error.
Both the working setup and the non-working setup have `~/.local/lib/python3.6/site-packages/gino-0.7.3.dist-info/entry_points.txt` containing
```
[sqlalchemy.dialects]
postgresql.asyncpg = gino.dialects.asyncpg:AsyncpgDialect
asyncpg = gino.dialects.asyncpg:AsyncpgDialect
```
(with the same newlines and indentation)
Any advice?
cc @salty-horse
see also: https://github.com/elad661/curlbus/issues/6 | closed | 2018-06-02T17:52:20Z | 2025-03-12T15:09:02Z | https://github.com/python-gino/gino/issues/238 | [
"invalid"
] | elad661 | 3 |
benbusby/whoogle-search | flask | 186 | [FEATURE] Config file for preset Configuration | **Describe the feature you'd like to see added**
It would be wonderful if there were a config file in the repo where Whoogle's default configuration could be defined and set before deploying.
**Additional context**
For example, variables such as Country, City, IsDarkModeOn, etc. I know there is a Load function for loading previously saved settings; still, this would also be helpful IMO. | closed | 2021-01-29T20:37:08Z | 2021-03-28T18:37:23Z | https://github.com/benbusby/whoogle-search/issues/186 | [
"enhancement"
] | mizzunet | 4 |
plotly/dash-table | dash | 579 | [FEATURE] Sorting/Filtering of selected rows | This is basically asking for https://github.com/plotly/dash-table-experiments/issues/60
I face a similar issue - a large data table where the selected rows are not necessarily visible in the current page. This means the users need to jump through significant hoops to locate the selected rows. | open | 2019-09-10T07:47:00Z | 2021-03-02T14:09:50Z | https://github.com/plotly/dash-table/issues/579 | [
"dash-type-enhancement"
] | orenbenkiki | 3 |
NVIDIA/pix2pixHD | computer-vision | 110 | CUDA out of memory. PyTorch 1.0 - CUDA 10.0 | I am testing pix2pixHD. It works on my local machine, but it raises an error on a cloud server machine. The strange thing is that the server machine is more powerful.
Here are the details:
**LOCAL MACHINE**
Ubuntu 16.04
GPU: Geforce GTX 1050 - 4GB GPU Memory
Pytorch version: 0.4.0
Cuda 9.0
**SERVER MACHINE**
Ubuntu 16.04
GPU: Geforce GTX 1080 - 8GB GPU Memory
Pytorch version: 1.0
Cuda 10.0
On the server machine I get:
`RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 7.93 GiB total capacity; 7.27 GiB already allocated; 92.19 MiB free; 33.05 MiB cached)`
I run nvidia-smi before launching the script and I get:
```
No running process found
Memory usage: 0Mib/8119Mib
```
How is that possible?
Could the reason be the difference in **PyTorch** and **CUDA** versions? | open | 2019-03-22T14:52:35Z | 2019-05-06T15:43:46Z | https://github.com/NVIDIA/pix2pixHD/issues/110 | [] | ghost | 3 |
oegedijk/explainerdashboard | dash | 162 | TypeError: can only concatenate str (not "int") to str | from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train_norm, y_train)
explainer = ClassifierExplainer(model, X_test_norm, y_test,
shap='linear',
X_background=X_train,
model_output='logodds',labels=['0', '1'])
ExplainerDashboard(explainer).run()
And I'm getting this error:
TypeError: can only concatenate str (not "int") to str | closed | 2021-12-06T12:34:17Z | 2021-12-23T19:15:57Z | https://github.com/oegedijk/explainerdashboard/issues/162 | [] | andrecasotti | 1 |
onnx/onnx | scikit-learn | 6,267 | [1.16.2/1.17?] ONNX build Windows | # Bug Report
### Is the issue related to model conversion?
No. I can't even perform imports.
### Describe the bug
My projects are permissive with respect to which `onnx` PyPI package version is installed. `onnx 1.16.2` came out this morning and broke my projects.
For example, in a turnkeyml environment that used to work:
```
pip install --upgrade onnx
turnkey -h
```
results in:
```
(tkml) PS C:\work\turnkeyml> turnkey -h
Traceback (most recent call last):
File "C:\Users\jefowers\AppData\Local\miniconda3\envs\tkml\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\jefowers\AppData\Local\miniconda3\envs\tkml\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\jefowers\AppData\Local\miniconda3\envs\tkml\Scripts\turnkey.exe\__main__.py", line 4, in <module>
File "C:\work\turnkeyml\src\turnkeyml\__init__.py", line 3, in <module>
from .files_api import evaluate_files
File "C:\work\turnkeyml\src\turnkeyml\files_api.py", line 8, in <module>
from turnkeyml.sequence import Sequence
File "C:\work\turnkeyml\src\turnkeyml\sequence\__init__.py", line 1, in <module>
from .sequence import Sequence
File "C:\work\turnkeyml\src\turnkeyml\sequence\sequence.py", line 11, in <module>
import turnkeyml.common.status as status
File "C:\work\turnkeyml\src\turnkeyml\common\status.py", line 10, in <module>
import turnkeyml.common.analyze_model as analyze_model
File "C:\work\turnkeyml\src\turnkeyml\common\analyze_model.py", line 4, in <module>
import onnx
File "C:\Users\jefowers\AppData\Local\miniconda3\envs\tkml\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: A dynamic link library (DLL) initialization routine failed.
```
Setting `onnx<1.16.2` resolves the issue.
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): Windows 11
- ONNX version (*e.g. 1.13*): 1.16.2
- Python version: 3.10 (works on Python 3.8)
- Protobuf version: 3.20.2
### Reproduction instructions
```
conda create -n otest python=3.10
conda activate otest
pip install turnkeyml==3.0.1
turnkey -h
```
### Expected behavior
Patch version increases to packages (1.16.1 -> 1.16.2) should not include breaking changes.
| open | 2024-08-01T14:26:01Z | 2025-03-14T07:51:06Z | https://github.com/onnx/onnx/issues/6267 | [
"bug",
"announcement"
] | jeremyfowers | 45 |
joouha/euporie | jupyter | 31 | [Feature request] Toggle top menu | Thanks for the great project!
Is there an option to toggle whether the top menu (`File`, `Edit`, etc.) is displayed, and can this be activated by a keyboard shortcut? Having the ability to remove the top menu at times would aid a minimalist setup greatly. | closed | 2022-08-31T03:59:46Z | 2022-08-31T17:03:26Z | https://github.com/joouha/euporie/issues/31 | [] | jjeffrey | 2 |
plotly/dash | jupyter | 2,423 | Add loading attribute to html.Img component | **Is your feature request related to a problem? Please describe.**
I'm trying to lazy load images using the in built browser functionality, but I can't because that's not exposed in the html.Img component.
**Describe the solution you'd like**
I'd like the loading attribute to be added to the html.Img built in component, so I can use
```
html.Img(src=..., loading="lazy")
```
**Describe alternatives you've considered**
I tried using dangerously-set HTML from the dcc Markdown component and from the dash-dangerously-set-html library. The former didn't work (I'm assuming something to do with the async nature of the Markdown loading process). The latter works, but this component doesn't support serialisation like other Dash components, and it broke some caching (standard Flask-Caching stuff) required for my particular use case.
**Additional context**
Discussed briefly on the plotly forum https://community.plotly.com/t/html-img-browser-based-lazy-loading/72637/3
| open | 2023-02-13T12:15:58Z | 2024-08-13T19:26:45Z | https://github.com/plotly/dash/issues/2423 | [
"feature",
"P3"
] | LiamLombard | 1 |
chatanywhere/GPT_API_free | api | 73 | Why can't ChatGPT be found when searching in IDEA? | closed | 2023-08-02T22:48:44Z | 2023-08-11T02:24:39Z | https://github.com/chatanywhere/GPT_API_free/issues/73 | [] | ghost | 1 |
microsoft/nni | tensorflow | 5,656 | InputChoice raises issues with TPE search strategy in NAS | **Describe the issue**:
In version 3.0rc1, TPE does not seem to be compatible with the InputChoice primitive. In addition, I found that the TPE tuner defaults to minimizing (I'm assuming because HPO tuners all minimize), but this isn't coherent with https://github.com/microsoft/nni/issues/5626#issuecomment-1615350440 ; perhaps it should be initialized with optimize_mode='maximize'.
Relevant code:
```
class Block(nn.Module):
def __init__(...) -> None:
super().__init__()
(...)
self.skip_connection = InputChoice(n_candidates=2, n_chosen=1,
label='block_skip_connection' + str(index))
def forward(self, x: Tensor) -> Tensor:
x_input = x
(...)
x = self.skip_connection([x, x + x_input])
return x
```
Raises:
```[2023-08-03 17:59:50] Creating experiment, Experiment ID: zdquyoek
[2023-08-03 17:59:50] Starting web server...
[2023-08-03 17:59:51] Setting up...
[2023-08-03 17:59:52] Web portal URLs: http://169.254.250.24:9120 http://10.0.0.1:9120 http://192.168.56.1:9120 http://169.254.81.199:9120 http://169.254.164.10:9120 http://169.254.20.238:9120 http://169.254.178.165:9120 http://192.168.1.68:9120 http://169.254.160.23:9120 http://10.0.4.52:9120 http://169.254.168.66:9120 http://127.0.0.1:9120
[2023-08-03 17:59:52] WARNING: Cannot convert CategoricalMultiple([0, 1], n_chosen=1, label='dnn/block_skip_connection0') to legacy format. It will not show on WebUI.
[2023-08-03 17:59:52] WARNING: Cannot convert CategoricalMultiple([0, 1], n_chosen=1, label='dnn/block_skip_connection1') to legacy format. It will not show on WebUI.
[2023-08-03 17:59:52] WARNING: Cannot convert CategoricalMultiple([0, 1], n_chosen=1, label='dnn/block_skip_connection2') to legacy format. It will not show on WebUI.
[2023-08-03 17:59:52] WARNING: Cannot convert CategoricalMultiple([0, 1], n_chosen=1, label='dnn/block_skip_connection3') to legacy format. It will not show on WebUI.
[2023-08-03 17:59:52] WARNING: Cannot convert CategoricalMultiple([0, 1], n_chosen=1, label='dnn/block_skip_connection4') to legacy format. It will not show on WebUI.
[2023-08-03 17:59:52] Successfully update searchSpace.
[2023-08-03 17:59:52] Experiment initialized successfully. Starting exploration strategy...
[2023-08-03 17:59:52] ERROR: Strategy failed to execute.
[2023-08-03 17:59:52] Stopping experiment, please wait...
Traceback (most recent call last):
File "C:\Users\Leonardo\Documents\Universidade Leo\5º ano\tese\Omnia\omnia\omnia\examples\single_drug.py", line 97, in <module>
nni_predictor.fit(refit_best=True)
File "C:\Users\Leonardo\Documents\Universidade Leo\5º ano\tese\Omnia\omnia\omnia\src\omnia\generics\nas\nni_predictor.py", line 309, in fit
experiment.start_experiment()
File "C:\Users\Leonardo\Documents\Universidade Leo\5º ano\tese\Omnia\omnia\omnia\src\omnia\generics\nas\experiment.py", line 107, in start_experiment
experiment.run(port=self.port)
File "C:\Users\Leonardo\AppData\Local\pypoetry\Cache\virtualenvs\omnia-local-1fEoJYjW-py3.9\lib\site-packages\nni\experiment\experiment.py", line 236, in run
return self._run_impl(port, wait_completion, debug)
File "C:\Users\Leonardo\AppData\Local\pypoetry\Cache\virtualenvs\omnia-local-1fEoJYjW-py3.9\lib\site-packages\nni\experiment\experiment.py", line 205, in _run_impl
self.start(port, debug)
File "C:\Users\Leonardo\AppData\Local\pypoetry\Cache\virtualenvs\omnia-local-1fEoJYjW-py3.9\lib\site-packages\nni\nas\experiment\experiment.py", line 270, in start
self._start_engine_and_strategy()
File "C:\Users\Leonardo\AppData\Local\pypoetry\Cache\virtualenvs\omnia-local-1fEoJYjW-py3.9\lib\site-packages\nni\nas\experiment\experiment.py", line 230, in _start_engine_and_strategy
self.strategy.run()
File "C:\Users\Leonardo\AppData\Local\pypoetry\Cache\virtualenvs\omnia-local-1fEoJYjW-py3.9\lib\site-packages\nni\nas\strategy\base.py", line 170, in run
self._run()
File "C:\Users\Leonardo\AppData\Local\pypoetry\Cache\virtualenvs\omnia-local-1fEoJYjW-py3.9\lib\site-packages\nni\nas\strategy\hpo.py", line 69, in _run
tuner_search_space = {label: mutable.as_legacy_dict() for label, mutable in self.model_space.simplify().items()}
File "C:\Users\Leonardo\AppData\Local\pypoetry\Cache\virtualenvs\omnia-local-1fEoJYjW-py3.9\lib\site-packages\nni\nas\strategy\hpo.py", line 69, in <dictcomp>
tuner_search_space = {label: mutable.as_legacy_dict() for label, mutable in self.model_space.simplify().items()}
File "C:\Users\Leonardo\AppData\Local\pypoetry\Cache\virtualenvs\omnia-local-1fEoJYjW-py3.9\lib\site-packages\nni\mutable\mutable.py", line 356, in as_legacy_dict
raise NotImplementedError(f'as_legacy_dict is not implemented for this type of mutable: {type(self)}.')
NotImplementedError: as_legacy_dict is not implemented for this type of mutable: <class 'nni.mutable.mutable.CategoricalMultiple'>.
[2023-08-03 17:59:52] Experiment stopped
Process finished with exit code 1
```
**Environment**:
- NNI version: 3.0rc1
- Training service (local|remote|pai|aml|etc): local
- Client OS: windows 10
- Python version: 3.9.13
- PyTorch/TensorFlow version: 1.13.0
- Is conda/virtualenv/venv used?: pypoetry
- Is running in Docker?: No
| open | 2023-08-03T17:17:37Z | 2023-08-17T04:52:55Z | https://github.com/microsoft/nni/issues/5656 | [] | sw33zy | 1 |
piskvorky/gensim | data-science | 3,377 | Install gensim fails because C code is not ANSI-compliant | #### Problem description
Can't use `pip install gensim` to install the latest `gensim`
#### Steps/code/corpus to reproduce
```
building 'gensim.similarities.fastss' extension
creating build/temp.linux-x86_64-3.6/gensim/similarities
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switch$s -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/nas/home/hstk/gitProject/venv/centaur/include -I/usr/include/python3.6m -I/nas/home/hstk/gitProject/venv/centaur/lib64/pyth$n3.6/site-packages/numpy/core/include -c gensim/similarities/fastss.c -o build/temp.linux-x86_64-3.6/gensim/similarities/fastss.o
gensim/similarities/fastss.c: In function ‘ceditdist’:
gensim/similarities/fastss.c:725:9: error: ‘for’ loop initial declarations are only allowed in C99 mode
     for (WIDTH tmpi = 0; tmpi <= len_s1; tmpi++) row2[tmpi] = tmpi;
     ^
gensim/similarities/fastss.c:725:9: note: use option -std=c99 or -std=gnu99 to compile your code
gensim/similarities/fastss.c:727:9: error: ‘for’ loop initial declarations are only allowed in C99 mode
     for (WIDTH i2 = 0; i2 < len_s2; i2++) {
     ^
gensim/similarities/fastss.c:738:13: error: ‘for’ loop initial declarations are only allowed in C99 mode
     for (WIDTH i1 = 0; i1 < len_s1; i1++) {
     ^
error: command 'gcc' failed with exit status 1
```
Sorry, parts of the error output were originally printed in Chinese, but I think it's easy to figure out the error.
For now, I tried installing `gensim==4.0.0` instead, and that works.
#### Versions
```
Linux-3.10.0-1127.18.2.el7.x86_64-x86_64-with-centos-7.8.2003-Core
Python 3.6.8 (default, Apr 2 2020, 13:34:55)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Bits 64
NumPy 1.19.5
SciPy 1.5.4
```
| closed | 2022-08-10T09:49:06Z | 2022-08-22T12:51:26Z | https://github.com/piskvorky/gensim/issues/3377 | [] | hstk30 | 1 |
assafelovic/gpt-researcher | automation | 609 | No module named 'gpt_researcher.retrievers.custom' | I am trying to run the multi-agent researcher, but I am getting the following error:
> Traceback (most recent call last):
File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\main.py", line 32, in <module>
asyncio.run(main())
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\asyncio\base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\main.py", line 27, in main
research_report = await chief_editor.run_research_task()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\agents\master.py", line 57, in run_research_task
result = await chain.ainvoke({"task": self.task})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\site-packages\langgraph\pregel\__init__.py", line 1504, in ainvoke
async for chunk in self.astream(
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\site-packages\langgraph\pregel\__init__.py", line 1333, in astream
_panic_or_proceed(done, inflight, step)
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\site-packages\langgraph\pregel\__init__.py", line 1537, in _panic_or_proceed
raise exc
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\site-packages\langgraph\pregel\retry.py", line 120, in arun_with_retry
await task.proc.ainvoke(task.input, task.config)
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\site-packages\langchain_core\runnables\base.py", line 2540, in ainvoke
input = await step.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\site-packages\langgraph\utils.py", line 117, in ainvoke
ret = await asyncio.create_task(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\agents\researcher.py", line 36, in run_initial_research
return {"task": task, "initial_research": await self.research(query=query, verbose=task.get("verbose"),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\agents\researcher.py", line 13, in research
researcher = GPTResearcher(query=query, report_type=research_report, parent_query=parent_query,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\site-packages\gpt_researcher\master\agent.py", line 55, in __init__
self.retriever = get_retriever(self.cfg.retriever)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\site-packages\gpt_researcher\master\actions.py", line 42, in get_retriever
from gpt_researcher.retrievers import TavilySearch
File "C:\Users\Tomas\anaconda3\envs\gpt\Lib\site-packages\gpt_researcher\retrievers\__init__.py", line 8, in <module>
from .custom.custom import CustomRetriever
ModuleNotFoundError: No module named 'gpt_researcher.retrievers.custom' | closed | 2024-06-18T17:39:09Z | 2024-06-19T06:13:34Z | https://github.com/assafelovic/gpt-researcher/issues/609 | [] | JustUser1410 | 3 |
jupyterlab/jupyter-ai | jupyter | 1,138 | Dev install on CI times out | ## Description
See the 42-minute workflow run in #1129: https://github.com/jupyterlab/jupyter-ai/actions/runs/12167591800/job/33939176577?pr=1129
Relevant logs below:
```
@jupyter-ai/core: INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
@jupyter-ai/core: INFO: pip is still looking at multiple versions of langchain-nvidia-ai-endpoints to determine which version is compatible with other requirements. This could take a while.
@jupyter-ai/core: Downloading langchain_nvidia_ai_endpoints-0.2.1-py3-none-any.whl.metadata (9.3 kB)
@jupyter-ai/core: INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
@jupyter-ai/core: Downloading langchain_nvidia_ai_endpoints-0.2.0-py3-none-any.whl.metadata (9.4 kB)
@jupyter-ai/core: ERROR: Exception:
@jupyter-ai/core: Traceback (most recent call last):
@jupyter-ai/core: File "/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 105, in _run_wrapper
@jupyter-ai/core: status = _inner_run()
@jupyter-ai/core: File "/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 96, in _inner_run
@jupyter-ai/core: return self.run(options, args)
@jupyter-ai/core: File "/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
@jupyter-ai/core: return func(self, options, args)
@jupyter-ai/core: File "/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/pip/_internal/commands/install.py", line 379, in run
@jupyter-ai/core: requirement_set = resolver.resolve(
@jupyter-ai/core: File "/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 95, in resolve
@jupyter-ai/core: result = self._result = resolver.resolve(
@jupyter-ai/core: File "/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
@jupyter-ai/core: state = resolution.resolve(requirements, max_rounds=max_rounds)
@jupyter-ai/core: File "/opt/hostedtoolcache/Python/3.9.20/x64/lib/python3.9/site-packages/pip/_vendor/resolvelib/resolvers.py", line 457, in resolve
@jupyter-ai/core: raise ResolutionTooDeep(max_rounds)
@jupyter-ai/core: pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 200000
@jupyter-ai/core:
```
| closed | 2024-12-04T21:59:16Z | 2024-12-05T14:59:45Z | https://github.com/jupyterlab/jupyter-ai/issues/1138 | [
"bug"
] | dlqqq | 7 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 737 | MAP和AP50 | 如果我的数据集只有一个类别,这时候输出的指标里MAP和AP50应该差不多吧?为什么MAP才0.3,AP50倒是有0.7。怎么修改相应指标呢,如果我想输出其他的指标,例如准确率,召回率或者自定义的一些指标 | open | 2023-05-20T06:12:37Z | 2023-05-20T06:12:37Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/737 | [] | thestars-maker | 0 |
cvat-ai/cvat | tensorflow | 8,723 | Can't communicate with the CVAT API from my dockerized application | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1) Docker compose up - CVAT application
2) Docker compose up - my application
3) Call CVAT REST API from my application
### Expected Behavior
{
status:200,
message:"successfully",
}
### Possible Solution
We should be able to reach the CVAT REST API from different networks.
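For what it's worth, the kind of cross-network base URL involved can be sketched like this (the service name and port below are assumptions for a typical compose setup, not taken from my actual configuration):

```python
# Inside another compose network, CVAT is not reachable as "localhost";
# it has to be addressed by its service name on a shared network.
CVAT_HOST = "cvat_server"  # assumed compose service name
CVAT_PORT = 8080           # assumed published port

def api_url(path: str) -> str:
    """Build a CVAT REST API URL relative to the assumed base."""
    return f"http://{CVAT_HOST}:{CVAT_PORT}/api/{path.lstrip('/')}"

print(api_url("/tasks"))  # -> http://cvat_server:8080/api/tasks
```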
### Context
I am planning to integrate the CVAT application with my MERN application. While doing so, I am facing this issue when my application runs inside its dockerized container.
### Environment
_No response_ | closed | 2024-11-20T07:46:52Z | 2024-11-20T07:55:33Z | https://github.com/cvat-ai/cvat/issues/8723 | [
"bug",
"invalid"
] | Nishanth-KR | 1 |
huggingface/datasets | deep-learning | 6,695 | Support JSON file with an array of strings | Support loading a dataset from a JSON file with an array of strings.
See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 | closed | 2024-02-26T12:35:11Z | 2024-03-08T14:16:25Z | https://github.com/huggingface/datasets/issues/6695 | [
"enhancement"
] | albertvillanova | 1 |
pydantic/pydantic-settings | pydantic | 300 | Validation error for 3 levels of nested dicts in v2.3.0 | Hello,
My model does not work anymore with the latest version of pydantic-settings.
Here is a test that reproduces my issue (`env` is the fixture from the pydantic-settings tests):
```python
def test_nested_dicts(env):
class Settings(BaseSettings):
nested: Dict[str, Dict[str, Dict[str, str]]]
model_config = SettingsConfigDict(env_nested_delimiter='__')
env.set('nested__foo__a__b', 'bar')
s = Settings()
assert s.model_dump() == {'nested': {'foo': {'a': {'b': 'bar'}}}}
```
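For reference, the nesting I expect `env_nested_delimiter` to produce can be sketched in plain Python (illustrative only — this is not pydantic-settings internals):

```python
def explode(key: str, value: str, delimiter: str = "__") -> dict:
    """Turn a delimited env-style key into a nested dict."""
    out: dict = {}
    node = out
    *parents, leaf = key.split(delimiter)
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value
    return out

print(explode("nested__foo__a__b", "bar"))
# -> {'nested': {'foo': {'a': {'b': 'bar'}}}}
```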
This test passes in version 2.2.1 of pydantic-settings but fails in version 2.3.0 with error:
```
E pydantic_settings.sources.SettingsError: error parsing value for field "nested" from source "EnvSettingsSource"
.venv/lib/python3.11/site-packages/pydantic_settings/sources.py:377: SettingsError
```
This happens with Python 3.11.9 on Linux, and the following packages:
```
Package Version
----------------- -------
annotated-types 0.7.0
iniconfig 2.0.0
packaging 24.0
pip 24.0
pluggy 1.5.0
pydantic 2.7.3
pydantic_core 2.18.4
pydantic-settings 2.3.0
pytest 8.2.2
python-dotenv 1.0.1
setuptools 69.5.1
typing_extensions 4.12.1
wheel 0.43.0
```
Is this considered a regression or should I find an alternative way to solve my issue? | closed | 2024-06-05T09:48:33Z | 2024-06-05T15:16:50Z | https://github.com/pydantic/pydantic-settings/issues/300 | [
"bug"
] | bpicardat | 7 |
alteryx/featuretools | scikit-learn | 2,284 | Add primitive for 2 digit Postal Code Prefix (US-only) | - As a user of Featuretools, I would like to do feature engineering for postal codes in the USA.
- I would like to extract the 2 digit prefix:

| closed | 2022-09-12T14:38:50Z | 2022-11-29T20:08:15Z | https://github.com/alteryx/featuretools/issues/2284 | [] | gsheni | 0 |
pytest-dev/pytest-mock | pytest | 405 | Failing tests for Python 3.12 | When running the contribution tests in a Python 3.12 env:
```bash
tox -e py312
```
I get the following output:
```
py312: install_package> python -I -m pip install --force-reinstall --no-deps /home/brandon/remotes/pytest-mock/.tox/.tmp/package/3/pytest-mock-3.12.1.dev10+g3d48ff9.tar.gz
py312: commands[0]> coverage run --append --source=/home/brandon/remotes/pytest-mock/.tox/py312/lib/python3.12/site-packages/pytest_mock -m pytest tests --color=yes
Traceback (most recent call last):
File "/home/brandon/remotes/pytest-mock/.tox/py312/bin/coverage", line 5, in <module>
from coverage.cmdline import main
File "/home/brandon/remotes/pytest-mock/.tox/py312/lib/python3.12/site-packages/coverage/__init__.py", line 24, in <module>
from coverage.control import (
File "/home/brandon/remotes/pytest-mock/.tox/py312/lib/python3.12/site-packages/coverage/control.py", line 28, in <module>
from coverage.collector import Collector, HAS_CTRACER
File "/home/brandon/remotes/pytest-mock/.tox/py312/lib/python3.12/site-packages/coverage/collector.py", line 19, in <module>
from coverage.data import CoverageData
File "/home/brandon/remotes/pytest-mock/.tox/py312/lib/python3.12/site-packages/coverage/data.py", line 24, in <module>
from coverage.sqldata import CoverageData
File "/home/brandon/remotes/pytest-mock/.tox/py312/lib/python3.12/site-packages/coverage/sqldata.py", line 16, in <module>
import sqlite3
File "/usr/local/lib/python3.12/sqlite3/__init__.py", line 57, in <module>
from sqlite3.dbapi2 import *
File "/usr/local/lib/python3.12/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'
py312: exit 1 (0.05 seconds) /home/brandon/remotes/pytest-mock> coverage run --append --source=/home/brandon/remotes/pytest-mock/.tox/py312/lib/python3.12/site-packages/pytest_mock -m pytest tests --color=yes pid=2829753
.pkg: _exit> python /home/brandon/remotes/pytest-mock/.env/lib/python3.12/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
py312: FAIL code 1 (6.26=setup[6.21]+cmd[0.05] seconds)
evaluation failed :( (6.32 seconds)
```
I'm running this in Debian 12 inside a `venv` with `python 3.12.1`, and, by the way, I also get a similar failure when running `pre-commit install`:
```
Traceback (most recent call last):
File "/home/brandon/remotes/pytest-mock/.env/bin/pre-commit", line 5, in <module>
from pre_commit.main import main
File "/home/brandon/remotes/pytest-mock/.env/lib/python3.12/site-packages/pre_commit/main.py", line 14, in <module>
from pre_commit.commands.clean import clean
File "/home/brandon/remotes/pytest-mock/.env/lib/python3.12/site-packages/pre_commit/commands/clean.py", line 6, in <module>
from pre_commit.store import Store
File "/home/brandon/remotes/pytest-mock/.env/lib/python3.12/site-packages/pre_commit/store.py", line 6, in <module>
import sqlite3
File "/usr/local/lib/python3.12/sqlite3/__init__.py", line 57, in <module>
from sqlite3.dbapi2 import *
File "/usr/local/lib/python3.12/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'
```
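Both tracebacks fail inside `import sqlite3`, which makes me suspect the interpreter itself was built without the `_sqlite3` C extension (just a guess, not confirmed); a quick generic check:

```python
import importlib.util

# If this prints False, the Python build lacks the _sqlite3 C extension
# (typically because the SQLite development headers were missing at build time).
spec = importlib.util.find_spec("_sqlite3")
print("sqlite3 available:", spec is not None)
```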
Is there some other tool that needs to be installed on the machine (and is therefore missing from the contributing docs), or is this a bug? | closed | 2024-01-25T23:04:41Z | 2024-01-25T23:29:35Z | https://github.com/pytest-dev/pytest-mock/issues/405 | [] | blotero | 1 |
fastapi/sqlmodel | sqlalchemy | 150 | How to create computed columns ? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class Album(SQLModel):
title: str
slug: str
description: str
# what i've tried but doesn't work e.g :
# slug: str = column_property(slugify(title))
# E NameError: name 'title' is not defined
class Album(SQLModel):
title: str
slug: str = column_property(slugify(title))
description: str
```
### Description
I'm trying to generate a column named "slug" that is a slugified version of "title": it should be persisted in the database and updated whenever the title changes.
But so far no luck: I've looked at `column_property` but didn't manage to make it work with SQLModel. I saw that there are "before update" events... but I don't think this is the way to go.
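For context, the kind of `slugify` I mean is roughly this (an illustrative pure-Python version, not the one from any particular package):

```python
import re

def slugify(title: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics to '-', trim the edges."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

print(slugify("My First Album!"))  # -> my-first-album
```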
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.2
### Additional Context
_No response_ | open | 2021-10-30T11:35:51Z | 2024-02-14T13:28:18Z | https://github.com/fastapi/sqlmodel/issues/150 | [
"question"
] | sorasful | 12 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,903 | [Fix Guide] ImportError: cannot import name 'packaging' from 'pkg_resources' | A recent update to `setuptools==70.0.0` prevents some new installs of the Web UI from launching
Error:
```py
ImportError: cannot import name 'packaging' from 'pkg_resources' (venv\lib\site-packages\pkg_resources\__init__.py)
```
initial issue post
- https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15863
A fix PR has already been proposed:
- https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15882
- https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15883
but it might take some time for this fix to be pushed to the `master` branch, as AUTOMATIC1111 is currently not active
# Simple Fix Guide
If you're experiencing this issue, you can apply a simple fix:
1. Temporarily add this line of text to `stable-diffusion-webui/requirements_versions.txt` and save the file
```
setuptools==69.5.1
```
[the end result should look like this](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15882/files) (ignore the comment after `#`)
2. Launch the webui; it should now start normally
3. After this you should be able to remove the modification (`Ctrl + Z` to undo in most text editors), and the webui should continue to work (until you reinstall)
> If the modification is not removed, you might have trouble updating the webui in the future
---
Other methods that involve manually downgrading to `setuptools==69.5.1` will also work; see
- https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15863

but they are harder to perform for the average user
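Before editing anything, you can check whether your environment is even affected; `setuptools>=70.0.0` is the version range that removed `pkg_resources.packaging`. A small hypothetical helper (run it with the webui venv's own Python, e.g. `venv\Scripts\python.exe`):

```python
import importlib.metadata

def setuptools_verdict() -> str:
    """Report whether the installed setuptools is new enough to break pkg_resources.packaging."""
    try:
        version = importlib.metadata.version("setuptools")
    except importlib.metadata.PackageNotFoundError:
        return "setuptools not installed"
    major = int(version.split(".")[0])
    # setuptools >= 70.0.0 no longer ships pkg_resources.packaging
    if major >= 70:
        return f"{version}: affected, pin setuptools==69.5.1"
    return f"{version}: not affected"

print(setuptools_verdict())
```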
| closed | 2024-05-28T16:57:51Z | 2024-05-28T16:58:54Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15903 | ["announcement"] | w-e-w | 0 |
influxdata/influxdb-client-python | jupyter | 403 | create_bucket triggers IndexError: list index out of range | <!--
Thank you for reporting a bug.
* Please add a :+1: or comment on a similar existing bug report instead of opening a new one.
* https://github.com/influxdata/influxdb-client-python/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+is%3Aclosed+sort%3Aupdated-desc+label%3Abug+
* Please check whether the bug can be reproduced with the latest release.
* The fastest way to fix a bug is to open a Pull Request.
* https://github.com/influxdata/influxdb-client-python/pulls
-->
__Steps to reproduce:__
Run any of the following code and the error is triggered:
```client.buckets_api().create_bucket(bucket_name='my-bycket')```
```client.buckets_api().create_bucket(bucket_name='my-bycket', org='my-org')```
It only works if:
```client.buckets_api().create_bucket(bucket_name='my-bycket', org='the org id here')```
```client.buckets_api().create_bucket(bucket_name='my-bycket', org_id='the org id here')```
But it seems there is no way from this Python client to get the org id.
__Actual behavior:__
The error is triggered as below:
```
Traceback (most recent call last):
File "/Users/mr.banana/Bugazelle/export-csv-to-influx/src/ExportCsvToInflux/influx_object.py", line 161, in <module>
influxdb.create_influx_db_if_not_exists(bucket='ken')
File "/Users/mr.banana/Bugazelle/export-csv-to-influx/src/ExportCsvToInflux/influx_object.py", line 119, in create_influx_db_if_not_exists
client.buckets_api().create_bucket(bucket_name=bucket)
File "/Users/mr.banana/PythonVM3/lib/python3.8/site-packages/influxdb_client-1.25.0-py3.8.egg/influxdb_client/client/bucket_api.py", line 55, in create_bucket
org_id=get_org_query_param(org=(org_id if org is None else org),
File "/Users/mr.banana/PythonVM3/lib/python3.8/site-packages/influxdb_client-1.25.0-py3.8.egg/influxdb_client/client/util/helpers.py", line 35, in get_org_query_param
return client.organizations_api().find_organizations(org=_org)[0].id
IndexError: list index out of range
```
__Specifications:__
- Client Version: 1.25.0
- InfluxDB Version: 2.1.1
- Platform: MacOS
| closed | 2022-02-13T15:51:22Z | 2022-03-18T07:27:23Z | https://github.com/influxdata/influxdb-client-python/issues/403 | ["state: confirmed"] | Bugazelle | 3 |
sinaptik-ai/pandas-ai | data-visualization | 1360 | Unable to analyze the DataFrame when it contains data in list format. | ### System Info
OS version: MacOS Sonoma
Python version: 3.12.5
The current version of `pandasai` being used: 2.2.14
### 🐛 Describe the bug
I tried using the pandasai `Agent` to analyze my data in DataFrame format, but I found that if the DataFrame contains data in list format, the analysis fails, and there are no error logs in `pandasai.log`. The following is a simple code example:
```python
import os

import pandas as pd
from pandasai import Agent
from pandasai.llm import OpenAI
# NOTE: the import path for Config varies between pandasai releases

data = {
'Employee_ID': [101, 102, 103, 104],
'Employee_Name': ['Alice', 'Bob', 'Charlie', 'Diana'],
'Projects': [['Project A', 'Project B'], ['Project C'], ['Project D', 'Project E', 'Project F'], ['Project G']],
'Salary': [70000, 80000, 75000, 90000]
}
df = pd.DataFrame(data)
agent = Agent(
dfs=df,
config=Config(llm=OpenAI(api_token=os.getenv("OAI_API_KEY"), model="gpt-4o"))
)
print(agent.chat('Tell me the average salary of the employees'))
```
Here is the output:
```python
"Unfortunately, I was not able to get your answers, because of the following error:\n\nunhashable type: 'list'\n"
```
| closed | 2024-09-08T06:30:09Z | 2024-12-15T16:08:08Z | https://github.com/sinaptik-ai/pandas-ai/issues/1360 | ["bug"] | ReeveWu | 1 |
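Until list-valued columns are handled, one workaround sketch (plain pandas, not touching pandasai itself) is to flatten the list column into a hashable string before handing the frame to the `Agent`:

```python
import pandas as pd

data = {
    "Employee_ID": [101, 102, 103, 104],
    "Employee_Name": ["Alice", "Bob", "Charlie", "Diana"],
    "Projects": [["Project A", "Project B"], ["Project C"],
                 ["Project D", "Project E", "Project F"], ["Project G"]],
    "Salary": [70000, 80000, 75000, 90000],
}
df = pd.DataFrame(data)

# Join each list cell into a single string so every value is hashable
df["Projects"] = df["Projects"].apply(", ".join)

print(df["Projects"].tolist())
print(df["Salary"].mean())  # 78750.0
```

With the column flattened, the same `agent.chat(...)` call should no longer trip over unhashable cells, and the original lists can be recovered later with `str.split(", ")` if needed.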