Dataset schema: `repo_name` string (9–75 chars) · `topic` string (30 classes) · `issue_number` int64 (1–203k) · `title` string (1–976 chars) · `body` string (0–254k chars) · `state` string (2 classes) · `created_at` string (20 chars) · `updated_at` string (20 chars) · `url` string (38–105 chars) · `labels` list (0–9 items) · `user_login` string (1–39 chars) · `comments_count` int64 (0–452)

| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
graphistry/pygraphistry | jupyter | 336 | [FEA] Generalize collapse matchers to accept kv-dict and col list | **Is your feature request related to a problem? Please describe.**
Typical cases of `.collapse(` are acting on a dict (`{'type': 'account', 'risk': 'high'}`) or a col list (`cols=['type']` or `cols=['type', 'risk']`). Currently, a manual outer loop is required: `for v in g._nodes[cols].drop_duplicates()...`.
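The manual outer loop described above amounts to roughly the following (a hypothetical, pure-Python illustration; `nodes` stands in for `g._nodes`, and each deduplicated value combination would drive one `.collapse()` call):

```python
# Hypothetical sketch of the manual outer loop the request wants to avoid.
# `nodes` stands in for g._nodes; each unique value combination over `cols`
# would drive one g.collapse(...) call.
def unique_match_dicts(nodes, cols):
    seen, out = set(), []
    for row in nodes:                      # iterate rows of g._nodes
        key = tuple(row[c] for c in cols)  # g._nodes[cols]
        if key not in seen:                # .drop_duplicates()
            seen.add(key)
            out.append(dict(zip(cols, key)))
    return out

nodes = [
    {"type": "account", "risk": "high"},
    {"type": "account", "risk": "high"},
    {"type": "device", "risk": "low"},
]
matches = unique_match_dicts(nodes, ["type", "risk"])
```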
**Describe the solution you'd like**
```python
g2 = g.collapse('root', match=['type', 'risk'])
g2 = g.collapse('root', match={'type': 'account', 'risk': 'high'})
```
It should also handle the case of the root id changing.
**Describe alternatives you've considered**
We can also support a general UDF `f(n1, n2)`, but we should wait until remote mode is clearer (e.g., vectorization).
**Additional context**
It may make sense to wait until we've added collapse reductions and optimized those | open | 2022-04-23T15:30:28Z | 2022-04-23T15:30:28Z | https://github.com/graphistry/pygraphistry/issues/336 | [
"enhancement"
] | lmeyerov | 0 |
gunthercox/ChatterBot | machine-learning | 2,159 | error for AttributeError: module 'time' has no attribute 'clock' in chatterbot | ```
C:\Users\User\Downloads\TSC>main.py
Traceback (most recent call last):
File "C:\Users\User\Downloads\TSC\main.py", line 8, in <module>
chatbot = ChatBot("Ninja")
File "C:\Python39\lib\site-packages\chatterbot\chatterbot.py", line 41, in __init__
self.storage = utils.initialize_class(storage_adapter, **kwargs)
File "C:\Python39\lib\site-packages\chatterbot\utils.py", line 54, in initialize_class
return Class(*args, **kwargs)
File "C:\Python39\lib\site-packages\chatterbot\storage\sql_storage.py", line 22, in __init__
from sqlalchemy import create_engine
File "C:\Python39\lib\site-packages\sqlalchemy\__init__.py", line 8, in <module>
from . import util as _util # noqa
File "C:\Python39\lib\site-packages\sqlalchemy\util\__init__.py", line 14, in <module>
from ._collections import coerce_generator_arg # noqa
File "C:\Python39\lib\site-packages\sqlalchemy\util\_collections.py", line 16, in <module>
from .compat import binary_types
File "C:\Python39\lib\site-packages\sqlalchemy\util\compat.py", line 264, in <module>
time_func = time.clock
AttributeError: module 'time' has no attribute 'clock'
``` | closed | 2021-05-12T15:19:45Z | 2025-02-17T21:38:19Z | https://github.com/gunthercox/ChatterBot/issues/2159 | [] | dev-adalz | 5 |
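For context on the traceback above: `time.clock` was removed in Python 3.8, and SQLAlchemy releases older than 1.4 still reference it at import time. Upgrading SQLAlchemy is the proper fix; a hedged sketch of the stopgap shim some users apply before importing anything that pulls in SQLAlchemy:

```python
import time

# time.clock was removed in Python 3.8; old SQLAlchemy builds call it at
# import time. Shim it to perf_counter as a stopgap only (upgrading
# SQLAlchemy >= 1.4 is the real fix).
if not hasattr(time, "clock"):
    time.clock = time.perf_counter
```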
proplot-dev/proplot | data-visualization | 189 | ProPlot fails to automatically detect legend entries for "outer" axes legends | ## Description
Copied from @ssssOO's comment: https://github.com/lukelbd/proplot/issues/188#issuecomment-643797637
Is it possible to call `legend` on one axes of a subplot grid without the `handles` argument? Whenever I try this I get an error:
`ax[1].legend(loc='b', ncols=4, center=True)`
Error:
`line 3362, in legend_wrapper
interval = 1 / len(pairs) # split up axes
ZeroDivisionError: division by zero`
It would be nice if the legend could directly access the default handles without relying on their generation when plotting...
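A minimal sketch of the requested behavior (illustrative names, not ProPlot's actual code): collect default handles from labeled artists, mirroring matplotlib's `get_legend_handles_labels()` convention that labels starting with `_` are skipped, and guard the division that raises the `ZeroDivisionError` above:

```python
# Sketch of the requested fallback (illustrative, not ProPlot's API):
# gather default legend entries from already-plotted artists, then guard
# the `1 / len(pairs)` split that crashes when nothing was found.
def default_pairs(artists):
    # matplotlib convention: labels starting with "_" are excluded
    return [(a, a["label"]) for a in artists
            if a.get("label") and not a["label"].startswith("_")]

def split_interval(pairs):
    if not pairs:
        raise ValueError("no legend entries found; pass handles= explicitly")
    return 1 / len(pairs)

artists = [{"label": "line1"}, {"label": "_nolegend_"}, {"label": "line2"}]
pairs = default_pairs(artists)
```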
| closed | 2020-06-14T21:56:56Z | 2020-06-14T22:03:08Z | https://github.com/proplot-dev/proplot/issues/189 | [
"bug"
] | lukelbd | 0 |
pennersr/django-allauth | django | 3,362 | Need help with google auth in django using mongodb cloud database | Hey guys. So I am working on a project which uses a MongoDB cloud database. I implemented django-allauth with Google as shown in tutorial videos, but when I try to migrate I end up with the error given below. I have been stuck on this for a while and would appreciate any help. I tried everything I could find on Stack Overflow and in other tutorials, like managing the library versions, but am still facing the same problem.
```
Operations to perform:
Apply all migrations: account, admin, auth, contenttypes, sessions, sites, socialaccount
Running migrations:
Applying socialaccount.0004_alter_socialaccount_id_alter_socialapp_id_and_more...Not implemented alter command for SQL ALTER TABLE "socialaccount_socialaccount" ALTER COLUMN "id" TYPE long
Traceback (most recent call last):
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\cursor.py", line 51, in execute
self.result = Query(
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\sql2mongo\query.py", line 784, in __init__
self._query = self.parse()
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\sql2mongo\query.py", line 876, in parse
raise e
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\sql2mongo\query.py", line 857, in parse
return handler(self, statement)
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\sql2mongo\query.py", line 889, in _alter
query = AlterQuery(self.db, self.connection_properties, sm, self._params)
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\sql2mongo\query.py", line 425, in __init__
super().__init__(*args)
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\sql2mongo\query.py", line 84, in __init__
super().__init__(*args)
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\sql2mongo\query.py", line 62, in __init__
self.parse()
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\sql2mongo\query.py", line 441, in parse
self._alter(statement)
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\sql2mongo\query.py", line 500, in _alter
raise SQLDecodeError(f'Unknown token: {tok}')
djongo.exceptions.SQLDecodeError:
Keyword: Unknown token: TYPE
Sub SQL: None
FAILED SQL: ('ALTER TABLE "socialaccount_socialaccount" ALTER COLUMN "id" TYPE long',)
Params: ([],)
Version: 1.3.6
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\backends\utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\cursor.py", line 59, in execute
raise db_exe from e
djongo.database.DatabaseError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\mlOpsProj\machinelearning_development\manage.py", line 22, in <module>
main()
File "D:\mlOpsProj\machinelearning_development\manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "D:\mlOpsProj\mlops\lib\site-packages\django\core\management\__init__.py", line 446, in execute_from_command_line
utility.execute()
File "D:\mlOpsProj\mlops\lib\site-packages\django\core\management\__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "D:\mlOpsProj\mlops\lib\site-packages\django\core\management\base.py", line 414, in run_from_argv
self.execute(*args, **cmd_options)
File "D:\mlOpsProj\mlops\lib\site-packages\django\core\management\base.py", line 460, in execute
output = self.handle(*args, **options)
File "D:\mlOpsProj\mlops\lib\site-packages\django\core\management\base.py", line 98, in wrapped
res = handle_func(*args, **kwargs)
File "D:\mlOpsProj\mlops\lib\site-packages\django\core\management\commands\migrate.py", line 290, in handle
post_migrate_state = executor.migrate(
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\migrations\executor.py", line 131, in migrate
state = self._migrate_all_forwards(
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\migrations\executor.py", line 163, in _migrate_all_forwards
state = self.apply_migration(
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\migrations\executor.py", line 248, in apply_migration
state = migration.apply(state, schema_editor)
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\migrations\migration.py", line 131, in apply
operation.database_forwards(
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\migrations\operations\fields.py", line 235, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\backends\base\schema.py", line 747, in alter_field
self._alter_field(
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\backends\base\schema.py", line 963, in _alter_field
self.execute(
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\backends\base\schema.py", line 192, in execute
cursor.execute(sql, params)
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\backends\utils.py", line 103, in execute
return super().execute(sql, params)
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\backends\utils.py", line 67, in execute
return self._execute_with_wrappers(
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\backends\utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\backends\utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "D:\mlOpsProj\mlops\lib\site-packages\django\db\backends\utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "D:\mlOpsProj\mlops\lib\site-packages\djongo\cursor.py", line 59, in execute
raise db_exe from e
django.db.utils.DatabaseError
``` | closed | 2023-07-26T07:37:49Z | 2023-08-01T13:30:54Z | https://github.com/pennersr/django-allauth/issues/3362 | [] | PrasannaKumaran | 1 |
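For context on the failure above: the migration issues `ALTER COLUMN "id" TYPE`, which djongo cannot translate, and it comes from allauth's id columns moving to `BigAutoField`. A hedged sketch of a commonly suggested workaround (an assumption, not verified against this setup): keep Django on the 32-bit auto field in `settings.py`, and fake the incompatible migration if it has already been generated:

```python
# settings.py (sketch; assumption, not from the report): keep AutoField so
# no ALTER COLUMN ... TYPE bigint migration is generated for djongo.
DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
```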
apache/airflow | automation | 47,905 | Fix mypy-boto3-appflow version | ### Body
We set a TODO to handle the version limitation:
https://github.com/apache/airflow/blob/9811f1d6d0fe557ab204b20ad5cdf7423926bd22/providers/src/airflow/providers/amazon/provider.yaml#L146-L148
I am opening a separate issue for visibility, as it is small in scope and a good task for new contributors.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | closed | 2025-03-18T11:28:58Z | 2025-03-19T13:33:43Z | https://github.com/apache/airflow/issues/47905 | [
"provider:amazon",
"area:providers",
"good first issue",
"kind:task"
] | eladkal | 2 |
NullArray/AutoSploit | automation | 1,005 | Ekultek, you are correct. | Kek | closed | 2019-04-19T16:46:45Z | 2019-04-19T16:57:49Z | https://github.com/NullArray/AutoSploit/issues/1005 | [] | AutosploitReporter | 0 |
ivy-llc/ivy | pytorch | 28,544 | Fix Frontend Failing Test: jax - math.paddle.stanh | To-do List: https://github.com/unifyai/ivy/issues/27496 | closed | 2024-03-11T11:16:50Z | 2024-05-02T08:41:37Z | https://github.com/ivy-llc/ivy/issues/28544 | [
"Sub Task"
] | ZJay07 | 0 |
fastapi/sqlmodel | sqlalchemy | 475 | How to join tables across multiple schemas | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
Not applicable
```
### Description
Hi there, is it possible to join tables that are *within* the same database but *in different* schemas?
Lets say I have two schemas: `A` and `B`
For schema `A` I have full control and populate it with tables with SQLModel; e.g.
```python
from typing import Optional

from sqlmodel import Field, SQLModel

class Sample(SQLModel, table=True):
    __table_args__ = {"schema": "A"}
    id: Optional[int] = Field(default=None, primary_key=True)
    key: int
```
For schema `B` I only have read rights. The table of interest named `Order` within schema `B` looks like this:
```
Order
id | key |
=============
1 | 435
... | ....
```
Now I would like to join my `Sample` table within schema `A` with the `Order` table within schema `B`.
From my understanding I should implement the `Order` table as pydantic model which I can use then in my SQL Statement powerd by SQLModel
```python
class Order(SQLModel):
    __table_args__ = {"schema": "B"}
    id: Optional[int] = Field(default=None, primary_key=True)
    key: int
```
```python
statement = select(Sample).join(Order, Sample.key == Order.key)
```
However, this seems not to work. Any help would be highly appreciated
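For what it's worth, cross-schema joins are plain SQL once both schemas are visible on one connection; the usual SQLModel-side requirement (an assumption here, not confirmed by the thread) is that `Order` be declared with `table=True` even if you only read from it, since a non-table model has no SQL identity to join on. A minimal stdlib `sqlite3` illustration of the join itself, with `ATTACH` playing the role of the second schema:

```python
import sqlite3

# Minimal cross-schema join sketch: an attached database plays schema "B".
conn = sqlite3.connect(":memory:")                 # schema A lives here
conn.execute("ATTACH DATABASE ':memory:' AS B")    # read-only schema B stand-in
conn.execute("CREATE TABLE sample (id INTEGER PRIMARY KEY, key INTEGER)")
conn.execute('CREATE TABLE B."order" (id INTEGER PRIMARY KEY, key INTEGER)')
conn.execute("INSERT INTO sample VALUES (1, 435)")
conn.execute('INSERT INTO B."order" VALUES (1, 435)')
rows = conn.execute(
    'SELECT s.id, o.id FROM sample AS s JOIN B."order" AS o ON s.key = o.key'
).fetchall()
```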
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.7
### Python Version
3.8.1
### Additional Context
_No response_ | closed | 2022-10-21T11:28:47Z | 2022-11-26T11:04:32Z | https://github.com/fastapi/sqlmodel/issues/475 | [
"question",
"investigate"
] | christianholland | 3 |
deepfakes/faceswap | machine-learning | 707 | Improving or simplifying the face recognition step | The preprocessing, or **extract** stage consists of 3 different steps:
1. Face detection finds all the faces in the photo
2. Face recognition filters a particular person from all detected faces
3. Face alignment matches the landmarks
The recognition step is currently the bottleneck. Of 3000 frames it only manages to find 700 examples. It is very poor at detecting profile poses or faces with closed eyes, and that's why people say that face swaps don't blink. Different examples for a reference photo only make things worse.
I'd like to implement a simpler filtering technique by sorting all detected faces based on positional distance from target person in the previous frame and just picking the first one. I managed to achieve this by modifying the [`handleImage`](https://github.com/deepfakes/faceswap/blob/20753a64b76a156aea17724348269d60dd525f87/scripts/extract.py#L64) function in `extract.py` script from one year ago, but the current version is cluttered with GUI methods and multithreading techniques and is difficult to wrap my head around. Can you pinpoint the function which I need to modify in order to sort the detected faces based on a custom metric?
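The positional-distance filter described above can be sketched like this (hypothetical bounding-box format and function names, not faceswap's actual API):

```python
import math

# Sketch of the proposed filter: rank detected faces by distance from the
# target face's position in the previous frame and keep the nearest one.
# Boxes are (left, top, right, bottom); names are illustrative only.
def pick_nearest_face(detections, prev_center):
    def dist(box):
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        return math.hypot(cx - prev_center[0], cy - prev_center[1])
    return min(detections, key=dist)

faces = [(0, 0, 10, 10), (100, 100, 120, 120)]
nearest = pick_nearest_face(faces, (105, 105))
```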
Also, I figured out the recognition is poor because it fails to extract face encoding in `face_filter.py` script. Is there any way to compare faces based on the landmarks only and refrain from generating the encoding? I think it would be useful since most of the time you only need a single face extracted from an image and picking the highest scoring one seems like a natural way to go.
Thanks. | closed | 2019-04-16T18:02:57Z | 2019-04-17T22:01:42Z | https://github.com/deepfakes/faceswap/issues/707 | [] | 6o6o | 3 |
nltk/nltk | nlp | 2,684 | Wordnet tree() is not directly compatible with tree.Tree() | The trees built by the tree() function in the _wordnet_ module have a slightly different structure than those from NLTK's Tree class from the _tree_ module. This is regrettable, since the Tree class provides additional functionality that would be nice to use for Wordnet relation trees also, such as the ability do draw trees in Postscript or Latex formats. | closed | 2021-04-04T06:18:59Z | 2021-04-04T10:17:35Z | https://github.com/nltk/nltk/issues/2684 | [] | ekaf | 0 |
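A small recursive adapter could bridge the two structures described above (a sketch under the assumption that wordnet's `tree()` returns nested lists of the form `[node, [child, ...], ...]`; building an actual `nltk.tree.Tree` would just replace the tuple constructor with `Tree(label, children)`):

```python
# Sketch: convert wordnet's nested-list trees ([node, [child, ...], ...])
# into (label, [children]) pairs; swapping the tuple for
# nltk.tree.Tree(label, children) would yield a drawable Tree.
def to_pairs(wn_tree):
    label, *children = wn_tree
    return (str(label), [to_pairs(c) for c in children])

wn = ["entity", ["object", ["artifact"]], ["thing"]]
converted = to_pairs(wn)
```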
pydantic/pydantic-settings | pydantic | 407 | context not passed to field validators | Moving here from pydantic/pydantic#10418.
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Hi there! It appears that the `context` arg that can be passed to `model_validate` does not work when using Pydantic Settings, when it does work using a regular Pydantic model.
### Example Code
```Python
Python 3.12.6 (main, Sep 13 2024, 19:07:08) [GCC 11.4.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.27.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: from pydantic_settings import *
In [2]: from pydantic import *
In [3]: class Settings(BaseSettings):
...: text: str
...: @field_validator('text')
...: @classmethod
...: def test_validator(cls, v: str, info: ValidationInfo):
...: context = info.context
...: print(f'{context=}')
...: if context:
...: print('have context')
...: return v
...:
In [4]: Settings.model_validate({'text': 'foo bar'})
context=None
Out[4]: Settings(text='foo bar')
In [5]: Settings.model_validate({'text': 'foo bar'}, context={'biz': 'baz'}) # can't see context
context=None
Out[5]: Settings(text='foo bar')
In [6]: class Model(BaseModel):
...: text: str
...: @field_validator('text')
...: @classmethod
...: def test_validator(cls, v: str, info: ValidationInfo):
...: context = info.context
...: print(f'{context=}')
...: if context:
...: print('have context')
...: return v
...:
In [7]: Model.model_validate({'text': 'foo bar'})
context=None
Out[7]: Model(text='foo bar')
In [8]: Model.model_validate({'text': 'foo bar'}, context={'biz': 'baz'}) # but this one does
context={'biz': 'baz'}
have context
Out[8]: Model(text='foo bar')
In [9]:
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.1
pydantic-core version: 2.23.3
pydantic-core build: profile=release pgo=false
install path: /home/aam7/.pyenv/versions/3.12.6/envs/bug/lib/python3.12/site-packages/pydantic
python version: 3.12.6 (main, Sep 13 2024, 19:07:08) [GCC 11.4.0]
platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
related packages: typing_extensions-4.12.2 pydantic-settings-2.5.2
commit: unknown
```
| closed | 2024-09-16T20:26:05Z | 2024-11-01T10:44:20Z | https://github.com/pydantic/pydantic-settings/issues/407 | [
"bug"
] | aidanmontare-fed | 5 |
adbar/trafilatura | web-scraping | 733 | Downloads: fully use information from both `config` and `options` variables | - [x] `fetch_url()`: `config = config or options.config`
- [x] `buffered_downloads()`: propagate options to `fetch_response()` via `options.config`
Related to #703 and #732. | closed | 2024-10-30T12:44:47Z | 2024-11-01T13:09:16Z | https://github.com/adbar/trafilatura/issues/733 | [
"maintenance"
] | adbar | 0 |
tox-dev/tox | automation | 2,662 | Tox 4 interprets quoted hash in commands as comment start | ## Issue
Using `'foo#bar' other` as part of a command seems to truncate it as soon as the first hash is encountered (i.e. `'foo`).
## Environment
Provide at least:
- OS: Ubuntu 20.04
- `pip list` of the host Python where `tox` is installed: irrelevant
## Output of running tox
```console
...
py: commands[0]> sed -i ''"'"'s'
/bin/sed: -e expression #1, char 1: unknown command: `''
py: exit 1 (0.00 seconds) /usr/users/ga002/soranzon/software/nsoranzo_ephemeris> sed -i ''"'"'' pid=720035
.pkg: _exit> python /usr/users/ga002/soranzon/software/nsoranzo_ephemeris/.venv/lib/python3.8/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
py: FAIL code 1 (1.23=setup[1.23]+cmd[0.00] seconds)
evaluation failed :( (1.28 seconds)
```
## Minimal example
Minimal `tox.ini` to reproduce:
```ini
[testenv]
allowlist_externals = sed
commands = sed -i 's#/path/to#/newpath/to#' test.txt
```
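For reference, tox's ini format treats `#` as a comment marker regardless of quoting; the documented escape is a backslash. A hedged sketch of the workaround (assuming each literal hash is escaped as `\#`):

```ini
[testenv]
allowlist_externals = sed
commands = sed -i 's\#/path/to\#/newpath/to\#' test.txt
```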
| closed | 2022-12-09T12:37:41Z | 2022-12-09T15:48:31Z | https://github.com/tox-dev/tox/issues/2662 | [] | nsoranzo | 4 |
deepinsight/insightface | pytorch | 2,704 | C++ SDK fails to load models | Using the cross-compiled C++ SDK on an RV1106 ARMv7 device, models cannot be loaded (none of Gundam_RV1109, Megatron, or Pikachu load), and the compiled test program fails the same way: the HFLaunchInspireFace function returns 1361 with the error messages below. The model files were extracted from the officially provided archive test_res.zip (https://drive.google.com/drive/folders/1krmv9Pj0XEZXR1GRPHjW_Sl7t4l0dNSS?usp=sharing).
```
[root@luckfox home]# ./FaceRecognitionSample test_res/pack/Pikachu
opencv-mobile HW JPG encoder with rk mpp
opencv-mobile HW JPG encoder with rk mpp
[simple_archive.h][47]: Invalid archive file: -7
Load Resource error: 1361
[root@luckfox home]# ./FaceRecognitionSample test_res/pack/Gundam_RV1109
opencv-mobile HW JPG encoder with rk mpp
opencv-mobile HW JPG encoder with rk mpp
[simple_archive.h][47]: Invalid archive file: -7
Load Resource error: 1361
[root@luckfox home]# ./FaceRecognitionSample test_res/pack/Megatron
opencv-mobile HW JPG encoder with rk mpp
opencv-mobile HW JPG encoder with rk mpp
[simple_archive.h][47]: Invalid archive file: -7
Load Resource error: 1361
```
Hardware environment: LuckFox Pico Max, RV1106, ARMv7, buildroot
Build environment: WSL2 Ubuntu 20.04.6 LTS, toolchain arm-rockchip830-linux-uclibcgnueabihf, OpenCV-mobile 4.8.1 | closed | 2024-12-03T03:11:53Z | 2024-12-26T02:02:41Z | https://github.com/deepinsight/insightface/issues/2704 | [] | junanxia | 11 |
sinaptik-ai/pandas-ai | pandas | 1,160 | (cx_Oracle.DatabaseError) ORA-00904: "RAND": invalid identifier | ### System Info
OS version: Microsoft Windows 11 Pro, 10.0.22631 Build 22631
Python version: 3.10.4
pandas-ai version: 2.0.42
### 🐛 Describe the bug
Unfortunately, I was not able to get your answers, because of the following error:
```
(cx_Oracle.DatabaseError) ORA-00904: "RAND": invalid identifier
[SQL: SELECT *
FROM TABLE ORDER BY RAND() ASC
FETCH FIRST 3 ROWS ONLY]
(Background on this error at: https://sqlalche.me/e/20/4xp6)
``` | closed | 2024-05-17T02:50:05Z | 2024-05-18T23:37:06Z | https://github.com/sinaptik-ai/pandas-ai/issues/1160 | [] | eaandersen | 0 |
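The failing SQL above uses MySQL's `RAND()`, which Oracle does not have; Oracle's equivalent random ordering is `ORDER BY DBMS_RANDOM.VALUE`. A hedged Python sketch of a dialect-aware rewrite of the sampling query (illustrative only, not pandas-ai's actual code):

```python
# Sketch of a dialect-aware random-sample query builder (illustrative).
# Oracle lacks RAND(); DBMS_RANDOM.VALUE is its random sort key.
def random_sample_sql(table: str, n: int, dialect: str) -> str:
    order = "DBMS_RANDOM.VALUE" if dialect == "oracle" else "RAND()"
    return f"SELECT * FROM {table} ORDER BY {order} FETCH FIRST {n} ROWS ONLY"
```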
huggingface/transformers | pytorch | 36,495 | `_load_state_dict_into_meta_model` - `'NoneType' object has no attribute 'load_state_dict'` | https://github.com/huggingface/diffusers/actions/runs/13615360562/job/38057746315?pr=10898
```
model = StableDiffusionSafetyChecker(
(vision_model): CLIPVisionModel(
(vision_model): CLIPVisionTransformer(
(emb...=1e-05, elementwise_affine=True)
)
)
(visual_projection): Linear(in_features=32, out_features=64, bias=False)
)
state_dict = {'concept_embeds': tensor([[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1...., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]), 'special_care_embeds_weights': tensor([1., 1., 1.]), ...}
start_prefix = ''
expected_keys = ['concept_embeds', 'special_care_embeds', 'concept_embeds_weights', 'special_care_embeds_weights', 'vision_model.vision_model.embeddings.class_embedding', 'vision_model.vision_model.embeddings.patch_embedding.weight', ...]
device_map = None, offload_folder = None, offload_index = None
state_dict_folder = None, state_dict_index = None, dtype = torch.float16
hf_quantizer = None, is_safetensors = False, keep_in_fp32_modules = None
unexpected_keys = [], device_mesh = None
shard_file = '/github/home/.cache/huggingface/hub/models--hf-internal-testing--tiny-stable-diffusion-pipe/snapshots/3ee6c9f225f088ad5d35b624b6514b091e6a4849/safety_checker/pytorch_model.bin'
@torch.no_grad()
def _load_state_dict_into_meta_model(
model: torch.nn.Module,
state_dict: Dict[str, torch.Tensor],
start_prefix,
expected_keys,
device_map=None,
offload_folder=None,
offload_index=None,
state_dict_folder=None,
state_dict_index=None,
dtype=None,
hf_quantizer=None,
is_safetensors=False,
keep_in_fp32_modules=None,
unexpected_keys=None, # passing `unexpected` for cleanup from quantization items
device_mesh=None,
shard_file=None,
):
"""
This is somewhat similar to `_load_state_dict_into_model`, but deals with a model that has some or all of its
params on a `meta` device. It replaces the model params with the data from the `state_dict`, while moving the
params back to the normal device, but only for `loaded_state_dict_keys`.
`start_prefix` is used for models which insert their name into model keys, e.g. `bert` in
`bert.pooler.dense.weight`
It also initialize tensor parallelism for each module if needed.
"""
tensor_device = None
if device_map is not None and device_map.get("", None) is not None:
tensor_device = device_map[""].index if isinstance(device_map[""], torch.device) else device_map[""]
if device_map is not None:
device_map_regex = "|".join(sorted(device_map.keys(), reverse=True))
# we need this later to initialize tensor parallelism
if device_mesh is not None:
full_tp_plan = model.config.base_model_tp_plan
for submodule in model.modules():
full_tp_plan.update(getattr(submodule, "_tp_plan", {}))
file_pointer = None
bin_state_dict = None
if shard_file.endswith(".safetensors"):
file_pointer = safe_open(shard_file, framework="pt", device=tensor_device)
else:
bin_state_dict = load_state_dict(shard_file, map_location="cpu")
error_msgs = []
is_quantized = hf_quantizer is not None
is_torch_e4m3fn_available = hasattr(torch, "float8_e4m3fn")
for serialized_param_name, empty_param in state_dict.items():
# serialized_param_name is the raw, serialized name
# fixed_param_name is the model's equivalent
fixed_param_name, _ = model.rename_key(serialized_param_name)
if fixed_param_name not in expected_keys:
continue
# we need to use serialized_param_name as file pointer is untouched
param = (
file_pointer.get_slice(serialized_param_name)
if shard_file.endswith(".safetensors")
else bin_state_dict[serialized_param_name]
)
# We convert floating dtypes to the `dtype` passed except for float8_e4m3fn type. We also want to keep the buffers/params
# in int/uint/bool and not cast them.
param_casting_dtype = None
is_param_float8_e4m3fn = is_torch_e4m3fn_available and empty_param.dtype == torch.float8_e4m3fn
if dtype is not None and empty_param.dtype.is_floating_point and not is_param_float8_e4m3fn:
if (
keep_in_fp32_modules is not None
and keep_in_fp32_modules.search(fixed_param_name)
and dtype == torch.float16
):
param_casting_dtype = torch.float32
else:
param_casting_dtype = dtype
if device_mesh is not None: # In this case, the param is already on the correct device!
module_to_tp, param_type = find_submodule_and_param_name(model, fixed_param_name)
current_module_plan = None
full_tp_plan_ = "|".join(full_tp_plan.keys()).replace("*", "[0-9]+")
if plan := re.search(full_tp_plan_, fixed_param_name):
match = re.sub("[0-9]+", "*", plan[0])
current_module_plan = full_tp_plan[match]
if current_module_plan is not None:
tp_layer = translate_to_torch_parallel_style(current_module_plan)
rank = tensor_device
row, col = empty_param.shape
if "rowwise" == current_module_plan:
param = param[:, rank * (col // device_mesh.size()) : (rank + 1) * (col // device_mesh.size())]
shard = Shard(1)
tp_layer.desired_input_layouts = (Shard(-1),)
elif "colwise" == current_module_plan:
param = param[rank * (row // device_mesh.size()) : (rank + 1) * (row // device_mesh.size()), :]
shard = Shard(0)
else:
param = param[rank * (row // device_mesh.size()) : (rank + 1) * (row // device_mesh.size()), :]
shard = Shard(0)
if param_casting_dtype is not None and param_casting_dtype != empty_param.dtype:
param = param.to(param_casting_dtype)
local_parameter = DTensor.from_local(
param,
device_mesh=device_mesh,
placements=[shard] * device_mesh.ndim,
)
if isinstance(module_to_tp.weight, nn.Parameter):
local_parameter = torch.nn.Parameter(local_parameter)
module_to_tp.weight = local_parameter
input_fn = partial(tp_layer._prepare_input_fn, tp_layer.input_layouts, tp_layer.desired_input_layouts)
output_fn = partial(tp_layer._prepare_output_fn, tp_layer.output_layouts, tp_layer.use_local_output)
distribute_module(module_to_tp, device_mesh, None, input_fn, output_fn)
else:
module_to_tp.load_state_dict({param_type: param[:]}, strict=False, assign=True)
else:
if device_map is None:
param_device = "cpu"
else:
module_layer = re.search(device_map_regex, fixed_param_name)
if not module_layer:
raise ValueError(f"{fixed_param_name} doesn't have any device set.")
else:
param_device = device_map[module_layer.group()]
if param_device == "disk":
if not is_safetensors:
offload_index = offload_weight(param[:], fixed_param_name, offload_folder, offload_index)
elif param_device == "cpu" and state_dict_index is not None:
state_dict_index = offload_weight(param[:], fixed_param_name, state_dict_folder, state_dict_index)
elif (
not is_quantized
or (not hf_quantizer.requires_parameters_quantization)
or (
not hf_quantizer.check_quantized_param(
model,
param,
fixed_param_name,
state_dict,
param_device=param_device,
device_map=device_map,
)
)
):
if is_fsdp_enabled():
param_device = "cpu" if is_local_dist_rank_0() else "meta"
module, param_type = find_submodule_and_param_name(model, fixed_param_name)
if param_casting_dtype is not None and param_casting_dtype != empty_param.dtype:
param = param[:].to(param_casting_dtype)
> module.load_state_dict(
{param_type: param[:].to(param_device)},
strict=False,
assign=True,
)
E AttributeError: 'NoneType' object has no attribute 'load_state_dict'
``` | closed | 2025-03-02T12:34:36Z | 2025-03-03T17:53:31Z | https://github.com/huggingface/transformers/issues/36495 | [] | hlky | 5 |
replicate/cog | tensorflow | 1,274 | New feature: "cold" and "warm" | Hello there,
I would like to suggest a new feature to be implemented.
In the settings page `https://replicate.com/<account>/<model>/edit`, alongside the options to change "Visibility" and "Hardware", it would be good to have a setting where the user chooses between "cold" or "warm" responses. For "warm" responses, the user would be charged for the resources required to keep the model up, and "cold" would remain the current default.
ipython/ipython | jupyter | 14,491 | UltraTB module recommends a traceback mode that is not valid | `ultratb.py`'s module docstring includes the section below, which recommends using `Verbose_novars` if you have large data structures that you would prefer not to be printed. This mode does not seem to exist, as the valid modes in `FormattedTB` are 'Plain', 'Context', 'Verbose', and 'Minimal'.
If you provide `Verbose_novars` you get this error message:
```
ValueError: Unrecognized mode in FormattedTB: <Verbose_novars>
Valid modes: ['Plain', 'Context', 'Verbose', 'Minimal']
```
Docstring snippet:
```
If you encounter this kind of situation often, you may want to use the
Verbose_novars mode instead of the regular Verbose, which avoids formatting
variables (but otherwise includes the information and context given by
Verbose).
``` | open | 2024-07-26T21:42:37Z | 2024-07-31T15:11:15Z | https://github.com/ipython/ipython/issues/14491 | [] | NodeJSmith | 1 |
ageitgey/face_recognition | python | 1,235 | Does this tool detect normal person face too? I mean not bill gates or ronaldo 😄 | open | 2020-10-27T16:41:04Z | 2020-11-06T12:34:55Z | https://github.com/ageitgey/face_recognition/issues/1235 | [] | mrhydra-np | 3 | |
janosh/pymatviz | plotly | 136 | Legacy `eslint` config file deprecated and caused `pre-commit` to fail | ### Pre-commit `eslint` config file outdated
Failed both on my local machine and [CI](https://results.pre-commit.ci/run/github/340898532/1713960833.Pl3yBHPYQgiDaBd4dfbpTA).
https://github.com/eslint/eslint/issues/18350 points out that the legacy `.eslintrc.yml` config file is deprecated from v9.0.0 onwards. I added the environment variable `ESLINT_USE_FLAT_CONFIG=false` in e8669fba0b7bc740c48cf821aaddf097b6376b5f to allow usage of the legacy config file for now, but it is certainly better to migrate to the new config file ([legacy config file support will be removed completely in v10](https://github.com/eslint/eslint/issues/18350#issuecomment-2058524220)).
"linting",
"ci"
] | DanielYang59 | 1 |
slackapi/python-slack-sdk | asyncio | 729 | Add Support for admin.conversations.restrictAccess methods | ### Description
I was hoping to have the new [admin.conversations.restrictAccess](https://api.slack.com/methods#admin.conversations.restrictAccess) methods be added. I'm currently just using `WebClient.api_call` for the methods.
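The `WebClient.api_call` workaround mentioned above looks roughly like this (a sketch; the endpoint name comes from Slack's API docs, while the wrapper and its arguments are illustrative):

```python
# Sketch of calling the not-yet-wrapped Admin API method via the generic
# api_call escape hatch (wrapper name/arguments are illustrative).
def restrict_access_add_group(client, channel_id, group_id, team_id):
    return client.api_call(
        "admin.conversations.restrictAccess.addGroup",
        params={"channel_id": channel_id, "group_id": group_id, "team_id": team_id},
    )

class _EchoClient:  # stand-in for slack.WebClient in this sketch
    def api_call(self, api_method, params=None):
        return {"method": api_method, "params": params}

resp = restrict_access_add_group(_EchoClient(), "C1", "G1", "T1")
```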
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [ ] bug
- [X] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements (place an `x` in each of the `[ ]`)
* [X] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slackclient/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [X] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [X] I've searched for any related issues and avoided creating a duplicate issue.
| closed | 2020-06-22T22:02:23Z | 2020-07-17T00:38:41Z | https://github.com/slackapi/python-slack-sdk/issues/729 | [
"Version: 2x",
"enhancement",
"web-client"
] | ruberVulpes | 2 |
ahmedfgad/GeneticAlgorithmPython | numpy | 8 | For some reason fitness never exceeds 1.0 | I use pygad to train my neural network. The code below is a test of pygad, and it worked. Then I wrote a simple NN implementation and tried to train it with pygad, but for some reason fitness never exceeds 1.0. First I thought that my code didn't work properly, but I ran my first test of pygad (the code below) again and it has the same issue.
```
import math
import pygad
def calculate_neuron(input, weight, nonlinear=None, bias=False):
"""
Calculate value of neuron.
:param input: Input for neuron
:param weight: Weight for each input
:param nonlinear: Nonlinear function for neuron. If == None then neuron is linear
:param bias: If true bias exist in previous layer
:return: value of neuron
"""
value = 0
for i in range(len(input)):
value += input[i] * weight[i]
if bias:
value += 1 * weight[len(weight) - 1]
if nonlinear is not None:
value = nonlinear(value)
return value
def sigmoid(x):
return math.exp(x) / (math.exp(x) + 1)
def xor_neural_network(input, weight):
"""
This is neural network that must implement xor function. (I didn't read about objects yet)
:param input: Input for neural network. For this is 2
:param weight: Weight for neural. Length is 9
:return:
"""
hid1 = calculate_neuron(input, weight[:3], sigmoid, True)
hid2 = calculate_neuron(input, weight[3:6], sigmoid, True)
output = calculate_neuron([hid1, hid2], weight[6:9], sigmoid, bias=True)
return output
function_inputs = [[0, 0],
[0, 1],
[1, 0],
[1, 1]]
des_outputs = [0, 1, 1, 0]
def fitness_func(solution):
outputs = []
for input in function_inputs:
outputs.append(xor_neural_network(input, solution))
error = 0
for output, des_output in zip(outputs, des_outputs):
error += abs(output - des_output)
fitness = 1 / error
return fitness
if __name__ == "__main__":
num_generations = 1000
sol_per_pop = 800
num_parents_mating = 4
mutation_percent_genes = 10
parent_selection_type = "sss"
crossover_type = "single_point"
mutation_type = "random"
keep_parents = 1
num_genes = 9
ga_instance = pygad.GA(num_generations=num_generations,
sol_per_pop=sol_per_pop,
num_parents_mating=num_parents_mating,
num_genes=num_genes,
fitness_func=fitness_func,
mutation_percent_genes=mutation_percent_genes,
parent_selection_type=parent_selection_type,
crossover_type=crossover_type,
mutation_type=mutation_type,
keep_parents=keep_parents,
)
while True:
ga_instance.run()
print(ga_instance.best_solution())
print(xor_neural_network(function_inputs[0], ga_instance.best_solution()[0]))
print(xor_neural_network(function_inputs[1], ga_instance.best_solution()[0]))
print(xor_neural_network(function_inputs[2], ga_instance.best_solution()[0]))
print(xor_neural_network(function_inputs[3], ga_instance.best_solution()[0]))
ga_instance.plot_result()
```
| closed | 2020-05-08T14:00:52Z | 2020-09-19T01:48:06Z | https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/8 | [
"question"
] | CheshireCat26 | 6 |
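One thing worth double-checking in the fitness function quoted above is the unguarded division: a perfect solution makes `error` zero, so `1 / error` raises `ZeroDivisionError`. Adding a small epsilon (a pattern I believe the PyGAD docs also use) avoids that, and makes the plateau easier to interpret: a fitness stuck at 1.0 simply means the summed absolute error is stuck near 1.0, not that PyGAD caps fitness. A minimal sketch; `safe_fitness` is my name, not a PyGAD API:

```python
def safe_fitness(outputs, targets, eps=1e-6):
    """1 / (sum of absolute errors), guarded against division by zero."""
    error = sum(abs(o - t) for o, t in zip(outputs, targets))
    return 1.0 / (error + eps)

# A constant 0.5 guess on XOR gives total error 2.0, so fitness ~0.5:
print(safe_fitness([0.5, 0.5, 0.5, 0.5], [0, 1, 1, 0]))
```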
PokeAPI/pokeapi | graphql | 1,039 | GraphQL api is down | Steps to Reproduce:
1. Attempt to make any request using the graphql console here https://beta.pokeapi.co/graphql/console/
2. Server sends back a Cloudflare 504 Gateway Timeout error
| closed | 2024-02-08T20:14:24Z | 2024-02-09T14:56:02Z | https://github.com/PokeAPI/pokeapi/issues/1039 | [] | blevy115 | 3 |
httpie/cli | python | 1,023 | System for meta information output options (timing, SSL info, …) | Meta issue for designing a system for displaying additional information about requests and responses. The current philosophy is to only show the actual HTTP exchanges so that they can be easily copied to API docs, sent elsewhere using `nc`, etc. The only non-HTTP output is error messages & the download progress bar. This new system will provide a solution for showing additional metadata in a systematic and backward-compatible way.
Relevant issues:
#1011
#1007
#243
#329
#685
| open | 2021-01-20T14:43:00Z | 2021-12-28T10:41:49Z | https://github.com/httpie/cli/issues/1023 | [
"needs product design"
] | jkbrzt | 1 |
paperless-ngx/paperless-ngx | django | 8,637 | [BUG] Missing or incorrect Content-Length header results in inconsistent error | ### Description
I am building a tool that uses Spring WebClient to upload documents to paperless-ngx using the HTTP API.
It took me a moment to figure out that a missing `Content-Length` header caused `Error response body: {"document":["No file was submitted."]}` errors.
There seems to be no easy way to set the correct content length with WebClient because the file is not buffered, leading to small (few-byte) deviations from the correct length.
setting too small => `Error response body: Invalid HTTP request received.`
setting too large => it hangs waiting for data after flush (I interrupted before timeout)
Requiring a correct length may be a good idea, but it should then fail with a more consistent error message (cURL makes it more obvious). Maybe just a note in the docs might be helpful for others having the same issue.
### Steps to reproduce
...start docker compose
`export pathToPdf=...`
`export creds=...`
✅`curl -v -H "Authorization: Basic ${creds}" -H "Content-Length: 15955" -F "document=@/${pathToPdf}/invoice_Zuschuss_Donatelli_18672.pdf" http://localhost:8000/api/documents/post_document/`
❌ `curl -v -H "Authorization: Basic ${creds}" -H "Content-Length: 15945" -F "document=@/${pathToPdf}/invoice_Zuschuss_Donatelli_18672.pdf" http://localhost:8000/api/documents/post_document/` => Invalid HTTP request received.
❌ `curl -v -H "Authorization: Basic ${creds}" -H "Content-Length: 15965" -F "document=@/${pathToPdf}/invoice_Zuschuss_Donatelli_18672.pdf" http://localhost:8000/api/documents/post_document/` => "hangs"; upload completely sent off: 15955 bytes
[invoice_Zuschuss_Donatelli_18672.pdf](https://github.com/user-attachments/files/18334328/invoice_Zuschuss_Donatelli_18672.pdf)
### Webserver logs
```bash
didn't check 😬
```
### Browser logs
_No response_
### Paperless-ngx version
2.13.5
### Host OS
Docker scripts from paperless repo
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2025-01-07T15:24:04Z | 2025-02-08T03:05:13Z | https://github.com/paperless-ngx/paperless-ngx/issues/8637 | [
"not a bug"
] | fabapp2 | 3 |
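The exact-length requirement this report runs into can be made concrete by building the multipart body by hand, where `Content-Length` is simply `len(body)`. A plain-Python sketch (the helper name and header choices are mine, not part of the paperless-ngx API); a client that streams the file without buffering cannot know this number up front, which matches the WebClient behaviour described above:

```python
import uuid

def build_multipart(field, filename, payload):
    """Build a multipart/form-data body by hand so Content-Length is exact."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n"
        "\r\n"
    ).encode("utf-8")
    tail = f"\r\n--{boundary}--\r\n".encode("utf-8")
    body = head + payload + tail
    headers = {
        "Content-Type": f"multipart/form-data; boundary={boundary}",
        # Must match the body byte-for-byte, which is why a streamed,
        # unbuffered upload cannot guess it in advance.
        "Content-Length": str(len(body)),
    }
    return body, headers

body, headers = build_multipart("document", "invoice.pdf", b"%PDF-1.4 ...")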
sunscrapers/djoser | rest-api | 160 | Possible to set email field to unique ? | Hi guys,
Possible to turn the email field unique ?
| closed | 2016-08-16T14:06:16Z | 2016-09-02T14:42:46Z | https://github.com/sunscrapers/djoser/issues/160 | [] | briva | 1 |
pallets-eco/flask-wtf | flask | 413 | FieldList fails validation because raw_data was not set | My full form structure is:
`myForm: FieldList: FormField: mySubForm: FieldList: BooleanField`
The first FieldList fails validation (`InputRequired`, which checks `.raw_data`) because its `.raw_data` attribute was not set, even though its `.data` was set!
I don't fiddle with the form data after submission, so I suppose this is a bug? | closed | 2020-06-19T13:27:19Z | 2021-05-26T00:54:54Z | https://github.com/pallets-eco/flask-wtf/issues/413 | [] | PDiracDelta | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 934 | SIBR viewer does not work on ubuntu when specifying pretrained model path | Hi, thank you for your awesome work.
After training a model, I used SIBR viewer to render the trained model, but it does not show anything.

Here is the output of the command line.

There are point cloud folder, cameras.json, cfg_args and input.ply which are needed to render trained model.

I think the viewer is using CPU to render the trained model. Is this the problem?

| open | 2024-08-17T05:25:11Z | 2024-12-12T14:07:12Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/934 | [] | Masaya1109 | 2 |
nvbn/thefuck | python | 1,021 | Git checkout should provide multiple corrections | When correcting git checkout, the default is to use the 'closest branch'. We have a lot of branches with similar names, but quite often, what I actually meant to do was supply the '-b' flag.
Can the git checkout rule be updated to return all of the possible options, rather than trying to guess, based on some arbitrary priority?
| closed | 2019-12-11T20:36:11Z | 2020-01-15T13:37:48Z | https://github.com/nvbn/thefuck/issues/1021 | [] | djh82 | 3 |
unit8co/darts | data-science | 2,381 | How to mask or ignore target features during model training | I am having a difficulty to train TSMixer model in my current project where target values wouldn't be available at the time of prediction.
Is there any option to mask or ignore past target values during model traing?
| closed | 2024-05-12T14:24:58Z | 2024-09-17T11:38:50Z | https://github.com/unit8co/darts/issues/2381 | [
"question"
] | yunakkano | 14 |
StackStorm/st2 | automation | 5,317 | st2stream doesn't return errors to st2client | ## SUMMARY
This is specific to a pack install and I'm not sure if anything else is affected by st2stream not returning errors to st2client.
Carrying part of the conversion over from https://github.com/StackStorm/st2/issues/5303#issuecomment-892134776
The scenario here is when installing a pack without having the global stream_view permissions st2stream errors out, but never sends that error back to st2client to inform the user. The error is however logged in the st2stream log file.
### STACKSTORM VERSION
Known versions this happens on
st2 3.4.1, on Python 3.6.8
st2 3.5.0, on Python 3.6.8
##### OS, environment, install method
Post what OS you are running this on, along with any other relevant information/
* OL7
* HA install with a controller node running web, redis, rabbitmq, and mongo and 2 cluster nodes running everything else.
## Steps to reproduce the problem
See the setup is in the other thread as its specific to RBAC and nothing else.
## Expected Results
st2stream should return an error to st2client so it can be reported to the user as to what the failure is.
The log does record this error
```
2021-08-03 19:02:29,809 139735081647144 DEBUG error_handling [-] API call failed: User "packuser" doesn't have required permission "stream_view" (exception_class='ResourceTypeAccessDeniedError',exception_message='User "packuser" doesn\'t have required permission "stream_view"',exception_data={'permission_type': 'stream_view', 'user_db': <UserDB: UserDB(id=60f6a1e06a82029b060389a6, is_service=False, name="packuser", nicknames={})>})
```
## Actual Results
Without the st2stream returning an error st2client fails out with an AttributeError exception. The process(s) still continues in the background. The full debug output is also in the other thread.
```shell
Traceback (most recent call last):
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2client/shell.py", line 408, in run
func(args)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2client/commands/resource.py", line 48, in decorate
return func(*args, **kwargs)
File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2client/commands/pack.py", line 274, in run_and_print
packs = instance.result['output']['packs_list']
AttributeError: 'Execution' object has no attribute 'result'
```
Thanks!
| open | 2021-08-03T21:53:18Z | 2021-10-11T20:18:17Z | https://github.com/StackStorm/st2/issues/5317 | [
"bug"
] | minsis | 0 |
mobarski/ask-my-pdf | streamlit | 58 | ai_bricks.api | I have tried everything and I cannot install this module. Any pointers/tips?
from ai_bricks.api import openai
This line of code doesn't work at all....
<img width="1574" alt="Screenshot 2023-05-23 at 15 29 54" src="https://github.com/mobarski/ask-my-pdf/assets/32650771/5602f7c9-296e-46ed-b090-67747a437b7c">
| open | 2023-05-23T14:31:20Z | 2023-06-01T08:49:38Z | https://github.com/mobarski/ask-my-pdf/issues/58 | [] | q-n-t-m | 2 |
tqdm/tqdm | jupyter | 1,322 | UnicodeDecodeError when using subprocess.getstatusoutput | I've made this script for finding corrupt images using Imagemagick. Full code:
```
from pathlib import Path
import time
import subprocess
import concurrent.futures
from tqdm import tqdm
_err_set = set()
def _imgerr(_img):
global _err_set
output = subprocess.getstatusoutput("magick identify -regard-warnings \"" + str(_img) + "\"")
if(int(output[0]) == 1):
_err_set.add(str(_img))
_root = input("Input directory path: ")
file_set = set(Path(_root).rglob("*.jpg"))
print("Scanning...")
start1 = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as executor:
list(tqdm(executor.map(_imgerr, file_set),total=int(len(file_set))))
finish1 = time.perf_counter()
with open('bad_img.txt', 'w', encoding="utf-8") as f:
for item in sorted(_err_set):
f.write('"' + item + '"' + "\n")
f.close()
print(f'Total execution time [mt] = {round(finish1 - start1, 3)}s')
print(f'Average time per image = {round((finish1 - start1)/len(file_set), 10)}s')
print(f'Corrupt images = {len(_err_set)}')
```
I'm using tqdm for progress tracking. The problem is with this line:
```
list(tqdm(executor.map(_imgerr, file_set),total=int(len(file_set))))
```
If there are no non ascii characters in the image path then everything works fine, but if any unicode character appears I get
>Exception has occurred: UnicodeDecodeError 'charmap' codec can't decode byte 0x81 in position 37: character maps to /<undefined/>
If I instead just use
```
executor.map(_imgerr, file_set)
```
everyting works just fine, regardless if there are unicode characters present or not. Been scratching my head for couple of hours now but still can't figure out what causes the error. Any suggestions are welcome! Btw maybe it's relevant but when debugging the error pops out in the function at the following line:
```
output = subprocess.getstatusoutput("magick identify -regard-warnings \"" + str(_img) + "\"")
```
| closed | 2022-04-25T19:23:04Z | 2022-04-25T21:07:32Z | https://github.com/tqdm/tqdm/issues/1322 | [] | gotr3k | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 938 | Query regarding Unaligned dataset mode | Hi Junyanz,
I was wondering if I give aligned dataset (aligned trainA and trainB) to cycle gan model (with default dataset mode as unaligned), will it unalign the paired dataset itself, or do I need to shuffle my paired dataset to make it unaligned before loading it in the script?? | open | 2020-02-28T06:00:27Z | 2020-02-28T06:00:27Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/938 | [] | BismaAmjad | 0 |
huggingface/datasets | numpy | 6,759 | Persistent multi-process Pool | ### Feature request
Running .map and filter functions with `num_procs` consecutively instantiates several multiprocessing pools iteratively.
As instantiating a Pool is very resource intensive it can be a bottleneck to performing iteratively filtering.
My ideas:
1. There should be an option to declare `persistent_workers` similar to pytorch DataLoader. Downside would be that would be complex to determine the correct resource allocation and deallocation of the pool. i.e. the dataset can outlive the utility of the pool.
2. Provide a pool as an argument. Downside would be the expertise required by the user. Upside, is that there is better resource management.
### Motivation
Is really slow to iteratively perform map and filter operations on a dataset.
### Your contribution
If approved I could integrate it. I would need to know what method would be most suitable to implement from the two options above. | open | 2024-03-26T17:35:25Z | 2024-03-26T17:35:25Z | https://github.com/huggingface/datasets/issues/6759 | [
"enhancement"
] | fostiropoulos | 0 |
onnx/onnx | tensorflow | 6,328 | Model split failure using onnx.utils.extract_model | # Bug Report
### Describe the bug
I am trying to split [Phi-3 INT4 ONNX](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx/tree/main/cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4) for profiling experiments.
### System information
OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): Windows
ONNX version (*e.g. 1.13*):1.16.2
Python version:3.12
### Reproduction instructions
import onnx
input_path = "E:\models\phi3-mini-4k-instruct-cpu-int4-rtn-block-32-acc-level-4.onnx"
output_path = "E:\tmp.onnx"
num_layers=1
input_names = ["input_ids"]
output_names=["/model/embed_tokens/Gather/output_0"]
onnx.utils.extract_model(input_path, output_path, input_names, output_names)
I see the following error:
onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for SimplifiedLayerNormalization with domain_version of 14
==> Context: Bad node spec for node. Name: /model/layers.0/input_layernorm/LayerNorm OpType: SimplifiedLayerNormalization
ONNX checker seems to be failing, due to contrib_ops in the models? Is there another way to split models that have contrib ops? | closed | 2024-08-29T19:17:11Z | 2024-09-17T17:36:12Z | https://github.com/onnx/onnx/issues/6328 | [
"bug"
] | bkaruman | 3 |
Asabeneh/30-Days-Of-Python | pandas | 189 | programacion | closed | 2022-02-14T18:27:35Z | 2023-07-08T22:21:37Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/189 | [] | Carlos188125 | 0 | |
microsoft/JARVIS | pytorch | 24 | Add a contributing.md file | Why isn't there a contributing.md file to help give guidance on how or even if we can contribute to this project?
I think adding one would help give guidance on how or if we can contribute to this project. | closed | 2023-04-04T14:20:02Z | 2023-04-04T22:37:46Z | https://github.com/microsoft/JARVIS/issues/24 | [] | rcallaby | 1 |
miguelgrinberg/python-socketio | asyncio | 277 | I got this printing all over the terminal:"GET /socket.io/?EIO=4&transport=websocket HTTP/1.1" 200 0 95.515740 and can't run my code | closed | 2019-03-24T22:42:07Z | 2019-08-24T09:25:13Z | https://github.com/miguelgrinberg/python-socketio/issues/277 | [
"question"
] | AbdulrahmanMossad | 9 | |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 57 | tiktok 的API 访问都不返回了。 | https://api.tiktokv.com/aweme/v1/multi/aweme/detail/?aweme_ids=%5B7108906863097367851%5D | closed | 2022-07-26T09:08:47Z | 2022-08-01T05:34:12Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/57 | [
"API Down",
"Fixed"
] | OhGui | 6 |
ultralytics/yolov5 | deep-learning | 12,547 | Incorrect logging level | I found a small bug! Please have a check! The bug is located here:
https://github.com/ultralytics/yolov5/blob/63555c8e2230328585d09fdc50a6601822a70ded/utils/general.py#L162C1-L165C1
Here fn is a nonlocal value, this will cause fn points to `LOGGER.warning`,so that `LOGGER.info` and `LOGGER.warning` calls the same `fn`,which points to the original `LOGGER.warning` at the end of the for loop. | closed | 2023-12-23T14:26:12Z | 2024-10-20T19:35:21Z | https://github.com/ultralytics/yolov5/issues/12547 | [
"Stale"
] | invoker-bot | 2 |
PaddlePaddle/models | nlp | 5,333 | 图像分类是否有热力图可视化呢? | closed | 2021-07-23T11:01:38Z | 2021-07-26T03:35:27Z | https://github.com/PaddlePaddle/models/issues/5333 | [] | Bobo-y | 2 | |
tiangolo/uwsgi-nginx-flask-docker | flask | 183 | Cannot make CORS works | So I'm building an app using you image `tiangolo/uwsgi-nginx-flask:python3.8` which looks like this (I know the `if` is useless here, I removed all the irrelevant code for simplicity sake) :
```python
app = Flask(__name__)
CORS(app)
@app.route('/', methods=['POST', 'GET', 'OPTIONS'])
def index():
if request.method == 'POST':
return render_template('index.html')
else:
return render_template('index.html')
```
And the terminal output is :
```
172.17.0.1 - - [23/May/2020:13:04:43 +0000] "GET / HTTP/1.1" 200 505 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36" "-"
[pid: 15|app: 0|req: 1/1] 172.17.0.1 () {54 vars in 900 bytes} [Sat May 23 13:04:43 2020] GET / => generated 505 bytes in 20 msecs (HTTP/1.1 200) 3 headers in 112 bytes (2 switches on core 0)
172.17.0.1 - - [23/May/2020:13:04:45 +0000] "OPTIONS / HTTP/1.1" 400 658 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36" "-"
```
What seems weird to me is the preflight request never goes to uwsgi (like the `GET /` did).
I tried :
* with and without listing `OPTIONS` in `methods=[]`
* with and without handling the `OPTIONS` in a conditional block (`elif request.method == 'OPTIONS'`) with appropriate response headers
* responding to the preflight request directly in nginx (without forwarding it to uwsgi)
Nothing worked 😞
I usually don't have issues with CORS because flask_cors does all the magic, but this time, I can't figure out what I'm missing. | closed | 2020-05-23T13:15:43Z | 2020-06-20T00:12:53Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/183 | [
"answered"
] | jeromepin | 6 |
yt-dlp/yt-dlp | python | 11,961 | Video unavailable. This video is not available | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
I am not able to download this playlist:
https://music.youtube.com/playlist?list=OLAK5uy_nTX_UcyURUCsI2KNerL9nZi8mpxfshIAA
and some other yt music too, but I cant tell the pattern.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU https://music.youtube.com/watch?v=iGjbuiZHk5k
[debug] Command-line config: ['-vU', 'https://music.youtube.com/watch?v=iGjbuiZHk5k']
[debug] Encodings: locale cp950, fs utf-8, pref cp950, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.23 from yt-dlp/yt-dlp [65cf46cdd] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 2023-11-09-git-acf63d5350-full_build-www.gyan.dev (setts), ffprobe 2023-11-09-git-acf63d5350-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.23 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.23 from yt-dlp/yt-dlp)
[youtube] Extracting URL: https://music.youtube.com/watch?v=iGjbuiZHk5k
[youtube] iGjbuiZHk5k: Downloading webpage
[youtube] iGjbuiZHk5k: Downloading ios player API JSON
[youtube] iGjbuiZHk5k: Downloading mweb player API JSON
[youtube] iGjbuiZHk5k: Downloading web music client config
[youtube] iGjbuiZHk5k: Downloading player 03dbdfab
[youtube] iGjbuiZHk5k: Downloading web music player API JSON
ERROR: [youtube] iGjbuiZHk5k: Video unavailable. This video is not available
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\youtube.py", line 4541, in _real_extract
File "yt_dlp\extractor\common.py", line 1276, in raise_no_formats
```
| closed | 2024-12-31T15:52:10Z | 2025-03-04T07:45:20Z | https://github.com/yt-dlp/yt-dlp/issues/11961 | [
"incomplete",
"geo-blocked",
"site-bug",
"site:youtube"
] | KameronJohn | 13 |
pytest-dev/pytest-html | pytest | 26 | Add how to add screenshots to html report to documentation | I love this plug in, is this included in pytest by default now?
I've incorporated in jenkins, and it works well, save for the screenshot bit.
Any help?
| closed | 2016-01-16T00:57:41Z | 2016-01-18T13:15:39Z | https://github.com/pytest-dev/pytest-html/issues/26 | [] | frimmy | 2 |
browser-use/browser-use | python | 745 | Why does the code try to locate elements using other attributes than browser-user-highlight-id? | ### Bug Description
I feel that browser-user-highlight-id attribute is injected into the elements in [buildDomTree.js#L124](https://github.com/browser-use/browser-use/blob/2d0f95f80150bffe788041c1811d14bc394481e3/browser_use/dom/buildDomTree.js#L124) and could later be used in [context.py#L788](https://github.com/browser-use/browser-use/blob/2d0f95f80150bffe788041c1811d14bc394481e3/browser_use/browser/context.py#L788) instead of creating css_selector.
### Reproduction Steps
1. Install browser use
2. Run any prompt
### Code Sample
```python
@classmethod
def _enhanced_css_selector_for_element(cls, element: DOMElementNode, include_dynamic_attributes: bool = True) -> str:
"""
Creates a CSS selector for a DOM element, handling various edge cases and special characters.
Args:
element: The DOM element to create a selector for
Returns:
A valid CSS selector string
"""
try:
# Get base selector from XPath
css_selector = cls._convert_simple_xpath_to_css_selector(element.xpath)
# Handle class attributes
if 'class' in element.attributes and element.attributes['class'] and include_dynamic_attributes:
# Define a regex pattern for valid class names in CSS
valid_class_name_pattern = re.compile(r'^[a-zA-Z_][a-zA-Z0-9_-]*$')
# Iterate through the class attribute values
classes = element.attributes['class'].split()
for class_name in classes:
# Skip empty class names
if not class_name.strip():
continue
# Check if the class name is valid
if valid_class_name_pattern.match(class_name):
# Append the valid class name to the CSS selector
css_selector += f'.{class_name}'
else:
# Skip invalid class names
continue
# Expanded set of safe attributes that are stable and useful for selection
SAFE_ATTRIBUTES = {
# Data attributes (if they're stable in your application)
'id',
# Standard HTML attributes
'name',
'type',
'placeholder',
# Accessibility attributes
'aria-label',
'aria-labelledby',
'aria-describedby',
'role',
# Common form attributes
'for',
'autocomplete',
'required',
'readonly',
# Media attributes
'alt',
'title',
'src',
# Custom stable attributes (add any application-specific ones)
'href',
'target',
}
if include_dynamic_attributes:
dynamic_attributes = {
'data-id',
'data-qa',
'data-cy',
'data-testid',
}
SAFE_ATTRIBUTES.update(dynamic_attributes)
# Handle other attributes
for attribute, value in element.attributes.items():
if attribute == 'class':
continue
# Skip invalid attribute names
if not attribute.strip():
continue
if attribute not in SAFE_ATTRIBUTES:
continue
# Escape special characters in attribute names
safe_attribute = attribute.replace(':', r'\:')
# Handle different value cases
if value == '':
css_selector += f'[{safe_attribute}]'
elif any(char in value for char in '"\'<>`\n\r\t'):
# Use contains for values with special characters
# Regex-substitute *any* whitespace with a single space, then strip.
collapsed_value = re.sub(r'\s+', ' ', value).strip()
# Escape embedded double-quotes.
safe_value = collapsed_value.replace('"', '\\"')
css_selector += f'[{safe_attribute}*="{safe_value}"]'
else:
css_selector += f'[{safe_attribute}="{value}"]'
return css_selector
except Exception:
# Fallback to a more basic selector if something goes wrong
tag_name = element.tag_name or '*'
return f"{tag_name}[highlight_index='{element.highlight_index}']"
```
### Version
git main branch
### LLM Model
GPT-4o
### Operating System
windows 11
### Relevant Log Output
```shell
``` | open | 2025-02-17T08:44:33Z | 2025-02-17T08:44:33Z | https://github.com/browser-use/browser-use/issues/745 | [
"bug"
] | shivamkhatri | 0 |
modelscope/modelscope | nlp | 305 | 使用ChatPLUG报错g++错误Exception: Run cmd failed: "/usr/bin/g++" | 环境如下:

报错如下:
/root/anaconda3/envs/chatglm/bin/python /usr/local/zx/ChatPLUG-3.7B/add_data_chatplug.py
g++: error: unrecognized command line option ‘-std=c++14’
Traceback (most recent call last):
File "/usr/local/zx/ChatPLUG-3.7B/add_data_chatplug.py", line 1, in <module>
from modelscope.pipelines import pipeline
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/modelscope/pipelines/__init__.py", line 4, in <module>
from modelscope.utils.import_utils import LazyImportModule
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/modelscope/utils/import_utils.py", line 18, in <module>
from modelscope.utils.ast_utils import (INDEX_KEY, MODULE_KEY, REQUIREMENT_KEY,
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/modelscope/utils/ast_utils.py", line 25, in <module>
from modelscope.utils.registry import default_group
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/modelscope/utils/registry.py", line 11, in <module>
logger = get_logger()
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/modelscope/utils/logger.py", line 48, in get_logger
from modelscope.utils.torch_utils import is_master
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/modelscope/utils/torch_utils.py", line 13, in <module>
import torch
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/torch/__init__.py", line 4, in <module>
import jittor as jt
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/__init__.py", line 18, in <module>
from . import compiler
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/compiler.py", line 1189, in <module>
check_cache_compile()
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/compiler.py", line 884, in check_cache_compile
recompile = compile(cc_path, cc_flags+f" {opt_flags} ", files, jit_utils.cache_path+'/jit_utils_core'+extension_suffix, True)
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/compiler.py", line 126, in compile
return do_compile(fix_cl_flags(cmd))
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/compiler.py", line 91, in do_compile
run_cmd(cmd)
File "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor_utils/__init__.py", line 188, in run_cmd
raise Exception(err_msg)
Exception: Run cmd failed: "/usr/bin/g++" "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/src/utils/cache_compile.cc" "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/src/utils/log.cc" "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/src/utils/tracer.cc" "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/src/utils/jit_utils.cc" "/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/src/utils/str_utils.cc" -Wall -Wno-unknown-pragmas -std=c++14 -fPIC -march=native -fdiagnostics-color=always -lstdc++ -ldl -shared -I"/root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor/src" -I/root/anaconda3/envs/chatglm/include/python3.9 -I/root/anaconda3/envs/chatglm/include/python3.9 -O2 -o "/root/.cache/jittor/jt1.3.7/g++4.8.5/py3.9.16/Linux-3.10.0-1xc5/IntelCoreProcex4a/default/jit_utils_core.cpython-39-x86_64-linux-gnu.so"
[i 0518 14:28:49.382392 40 compiler.py:955] Jittor(1.3.7.13) src: /root/anaconda3/envs/chatglm/lib/python3.9/site-packages/jittor
[i 0518 14:28:49.390292 40 compiler.py:956] g++ at /usr/bin/g++(4.8.5)
[i 0518 14:28:49.390444 40 compiler.py:957] cache_path: /root/.cache/jittor/jt1.3.7/g++4.8.5/py3.9.16/Linux-3.10.0-1xc5/IntelCoreProcex4a/default
[i 0518 14:28:49.398650 40 __init__.py:411] Found nvcc(10.2.89) at /usr/bin/nvcc.
[i 0518 14:28:49.438185 40 __init__.py:411] Found gdb(7.6.1) at /usr/bin/gdb.
[i 0518 14:28:49.446655 40 __init__.py:411] Found addr2line(2.27) at /usr/bin/addr2line.
[i 0518 14:28:49.603421 40 compiler.py:1010] cuda key:cu10.2.89_sm_37
Process finished with exit code 1
The code is as follows; it is the official knowledge-enhancement example code:
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
from modelscope.models import Model

model_id = 'damo/ChatPLUG-3.7B'
pipeline_ins = pipeline(Tasks.fid_dialogue, model=model_id, model_revision='v1.0.1')
# data preprocessing settings
preprocess_params = {
    'max_encoder_length': 380,  # maximum encoder input length
    'context_turn': 5           # maximum number of context turns
}
# decoding strategy, beam search by default
forward_params = {
    'min_length': 10,
    'max_length': 512,
    'num_beams': 3,
    'temperature': 0.8,
    'do_sample': False,
    'early_stopping': True,
    'top_k': 50,
    'top_p': 0.8,
    'repetition_penalty': 1.2,
    'length_penalty': 1.2,
    'no_repeat_ngram_size': 6
}
kwargs = {
    'preprocess_params': preprocess_params,
    'forward_params': forward_params
}
# several passages of external knowledge text can be supplied for knowledge enhancement
know_list = [
    "《狂飙》由徐纪周执导的。《狂飙》的导演徐纪周也是编剧之一,代表作品有《永不磨灭的番号》《特战荣耀》《心理罪之城市之光》《杀虎口》《胭脂》等",
    "《狂飙》(The Knockout)是一部由 张译、张颂文、李一桐、张志坚 领衔主演,韩童生 特邀主演,吴健、郝平 友情出演,高叶、贾冰、李健 主演,徐纪周 执导,朱俊懿、徐纪周 担任总编剧的 刑侦",
    "狂飙是由中央政法委宣传教育局,中央政法委政法信息中心指导,爱奇艺,留白影视出品,徐纪周执导,张译,李一桐,张志坚领衔主演的刑侦剧。不是。是徐纪周,1976年12月19日出生,毕业于中央戏剧",
]
input = {
    "history": "你好[SEP]你好!很高兴与你交流![SEP]狂飙的导演是谁呀",
    "bot_profile": "我是达摩院的语言模型ChatPLUG, 是基于海量数据训练得到。",
    # at inference time the text is split on [SEP] into segments, each merged with the context and fed to the model separately, to save compute
    "knowledge": "[SEP]".join(know_list),
}
result = pipeline_ins(input, **kwargs)
print(result)
```
| closed | 2023-05-18T06:42:24Z | 2023-09-18T11:14:34Z | https://github.com/modelscope/modelscope/issues/305 | [
"Stale"
] | jiweizhangxu | 3 |
tiangolo/uvicorn-gunicorn-fastapi-docker | fastapi | 246 | apt-get update not working on specific version | The "tiangolo/uvicorn-gunicorn-fastapi:python3.9-slim-2023-05-22" image can run this command after "apt-get update":
```
RUN apt-get install netcat -y
```
but the "tiangolo/uvicorn-gunicorn-fastapi:python3.9-slim-2023-07-03" image cannot run that command; it fails with these errors:
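For reference, the log that follows shows the cause: in Debian bookworm (the base of the newer tag) `netcat` is a virtual package, so a concrete provider has to be named. A hedged sketch of a corrected Dockerfile line (choosing `netcat-openbsd` is an assumption; `netcat-traditional` works as well):

```shell
# Either provider satisfies the virtual "netcat" package on bookworm:
#   netcat-openbsd       (BSD variant)
#   netcat-traditional   (classic variant)
FIX='RUN apt-get update && apt-get install -y netcat-openbsd && rm -rf /var/lib/apt/lists/*'
echo "$FIX"
```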
```
#0 0.388 Get:1 http://deb.debian.org/debian bookworm InRelease [147 kB]
#0 0.513 Get:2 http://deb.debian.org/debian bookworm-updates InRelease [52.1 kB]
#0 0.559 Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB]
#0 0.606 Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8904 kB]
#0 0.896 Get:5 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [47.3 kB]
#0 1.418 Fetched 9199 kB in 1s (7743 kB/s)
#0 1.418 Reading package lists...
#0 1.665 Reading package lists...
#0 1.915 Building dependency tree...
#0 1.972 Reading state information...
#0 1.974 Package netcat is a virtual package provided by:
#0 1.974 netcat-openbsd 1.219-1
#0 1.974 netcat-traditional 1.10-47
#0 1.974
#0 1.976 E: Package 'netcat' has no installation candidate
```
| closed | 2023-07-07T07:29:09Z | 2024-08-25T04:05:15Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/246 | [] | minyoung90 | 0 |
slackapi/python-slack-sdk | asyncio | 1,315 | Using the new channel parameter with a single str | I wrote a script to upload a file, and my call produces the warning below, but I can't find anything in the docs about it. The [examples](https://api.slack.com/methods/files.upload/code) don't even show `client.files_upload_v2()`, which is the recommended method.
```
/Users/spyros.m/Developer/Scripts/Python/audit_macos/venv/lib/python3.11/site-packages/slack_sdk/web/client.py:3032: UserWarning: Although the channels parameter is still supported for smooth migration from legacy files.upload, we recommend using the new channel parameter with a single str value instead for more clarity.
```
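For reference, the migration the warning points at boils down to replacing the legacy `channels` list with a single `channel` string. A stdlib-only sketch of that rewrite (`migrate_upload_kwargs` is a hypothetical helper, not part of slack_sdk):

```python
# Hedged sketch: files_upload_v2 takes one channel ID via `channel`
# instead of the legacy `channels` list.
def migrate_upload_kwargs(kwargs):
    """Rewrite legacy files.upload kwargs for files_upload_v2."""
    kwargs = dict(kwargs)
    channels = kwargs.pop("channels", None)
    if channels:
        kwargs["channel"] = channels[0]  # v2 wants a single channel as a str
    return kwargs

legacy = {"channels": ["C0123456789"], "file": "report.txt"}
print(migrate_upload_kwargs(legacy))  # → {'file': 'report.txt', 'channel': 'C0123456789'}
```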
How can I resolve this warning? | closed | 2022-12-29T00:18:32Z | 2023-02-08T22:39:37Z | https://github.com/slackapi/python-slack-sdk/issues/1315 | [
"question",
"web-client",
"Version: 3x",
"auto-triage-stale"
] | GreekGreenGeek | 3 |
modelscope/data-juicer | data-visualization | 47 | Is Auto-HPO fully automated end to end, or does it require manual intervention? | ### Before Asking
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) (also available as [README_ZH](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md)) carefully.
- [X] I have pulled the latest code of main branch to run again and the problem still existed.
### Search before asking
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions.
### Question
I read the paper and noticed the Auto-HPO feature, but I could not find the corresponding code. Is this feature fully automated end to end at the moment? Or is the intended workflow to adjust the yaml file to generate different data, evaluate it with https://modelscope.cn/studios/Data-Juicer/auto_evaluation_helm/summary, inspect the results, adjust the data-generation strategy, and iterate, or is it some other approach?
### Additional
_No response_ | closed | 2023-10-26T15:36:28Z | 2023-11-01T05:00:16Z | https://github.com/modelscope/data-juicer/issues/47 | [
"enhancement",
"question"
] | Steven-Luo | 2 |
python-restx/flask-restx | api | 521 | Oauth-redirect.html not found | *** redirect url does not exist ***
I'm just wondering if anyone here has seen this working. Currently the authorize option is configured and appears, but the return URL /oauth-redirect.html does not exist. There seem to be some issues on this repo where people would like this to be configurable, but it currently isn't. My issue is that the HTML file is not served at all, so once my token server verifies, it does not redirect correctly. Or should I be pointing back to the root Swagger URL? | open | 2023-02-17T18:19:29Z | 2023-02-17T18:19:29Z | https://github.com/python-restx/flask-restx/issues/521 | [
"question"
] | dmeads89 | 0 |
noirbizarre/flask-restplus | api | 54 | No way to configure .../swagger.json to go over HTTPS? | Hello,
I have an application deployed using Flask-RESTPlus. Everything works nicely when it goes over http, but as soon as I switch to https, it can't load the swagger.json, because it makes requests to http://host/swagger.json instead of https://host/swagger.json.
The endpoints are registered properly with blueprints, because if I manually go to https://host/swagger.json, I get the correct JSON. However, the generated swagger page tries to load the JSON from an http endpoint instead of the correct https endpoint.
Is there any way to fix this?
Thanks
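For context, this mismatch usually appears when TLS terminates at a reverse proxy, so Flask itself only ever sees plain http. Two commonly suggested workarounds are wrapping the WSGI app in werkzeug's ProxyFix middleware, or overriding `Api.specs_url` to force the https scheme. A minimal stdlib-only sketch of the underlying scheme detection (`effective_scheme` is a hypothetical helper, not Flask-RESTPlus API):

```python
def effective_scheme(environ):
    """Which scheme should generated URLs use for this request?"""
    # Behind an HTTPS-terminating proxy, wsgi.url_scheme stays "http";
    # honouring X-Forwarded-Proto (what werkzeug's ProxyFix does) fixes it.
    return environ.get("HTTP_X_FORWARDED_PROTO",
                       environ.get("wsgi.url_scheme", "http"))

print(effective_scheme({"wsgi.url_scheme": "http"}))  # → http
print(effective_scheme({"wsgi.url_scheme": "http",
                        "HTTP_X_FORWARDED_PROTO": "https"}))  # → https
```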
| closed | 2015-06-25T17:54:24Z | 2022-01-05T01:52:28Z | https://github.com/noirbizarre/flask-restplus/issues/54 | [] | Iulian7 | 11 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,286 | Is this project abandoned? | Hi, it's been a long time since the last update to the source code. Is this project abandoned?
Does it still work well? Does it work with the latest version of PyTorch?
Or do you recommend another Python project? | closed | 2024-01-16T14:36:01Z | 2024-02-05T11:21:03Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1286 | [] | StepHaze | 2 |
newpanjing/simpleui | django | 122 | Save button on the edit page is not shown by default; it only appears after changing the page zoom | **Bug description**
A brief description of the bug:
The save button at the bottom of the edit page is not shown; it appears only after the page zoom is changed. I am using the Chrome browser.
**Steps to reproduce**
1. In Chrome at 100% zoom, click the ID link on any list page to open an edit page.
2. The save button at the bottom of the edit page is not visible. Change the browser zoom, for example shrink to 90% or enlarge to 110%, and the save button appears.
**Environment**
1. OS: macOS Mojave 10.14.5
2. Python version: 3.7.3
3. Django version: 2.2.1
4. simpleui version: 2.5
| closed | 2019-07-21T07:20:02Z | 2019-07-22T10:08:05Z | https://github.com/newpanjing/simpleui/issues/122 | [
"bug"
] | victor-zhang | 3 |
huggingface/diffusers | pytorch | 10,172 | Raise an error when `len(gligen_images)` is not equal to `len(gligen_phrases)` in `StableDiffusionGLIGENTextImagePipeline` | To whom it may concern,
I found that when using `StableDiffusionGLIGENTextImagePipeline`, no error is raised when `len(gligen_images)` is not equal to `len(gligen_phrases)`. When I dug into the source code, it seems these two inputs are zipped together in a for loop during preprocessing. I guess this causes the longer one to be clipped unintentionally. (If my understanding is wrong, feel free to correct me.) Would it be possible to raise an error, or at least a warning? Thanks in advance.
Source Code: https://github.com/huggingface/diffusers/blob/v0.31.0/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L689
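The clipping is easy to demonstrate in isolation: Python's `zip` stops at the shorter iterable without any signal. A toy sketch (not the diffusers pipeline itself), plus the kind of length guard the issue asks for (`check_lengths` is a hypothetical name):

```python
phrases = ["a cat", "a dog", "a bird"]
images = ["cat.png", "dog.png"]        # one entry short

pairs = list(zip(phrases, images))     # zip truncates silently
print(pairs)  # → [('a cat', 'cat.png'), ('a dog', 'dog.png')]; "a bird" is gone

def check_lengths(gligen_phrases, gligen_images):
    """The guard the issue requests: fail loudly instead of clipping."""
    if len(gligen_phrases) != len(gligen_images):
        raise ValueError(
            f"`gligen_phrases` has {len(gligen_phrases)} entries but "
            f"`gligen_images` has {len(gligen_images)}; they must match."
        )
```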
| closed | 2024-12-10T14:25:48Z | 2024-12-11T08:59:44Z | https://github.com/huggingface/diffusers/issues/10172 | [] | abcdefg133hi | 1 |
donnemartin/system-design-primer | python | 850 | hi | As a second-year computer science student, which area of computer science should I choose? | closed | 2024-04-05T22:21:18Z | 2024-04-27T09:06:13Z | https://github.com/donnemartin/system-design-primer/issues/850 | [] | Hakim-CS | 2 |
keras-team/keras | deep-learning | 20,073 | Roadmap for RaggedTensor? | Keras v3 does not work with ragged inputs. What is the roadmap for including this feature?
Per: #18414 | open | 2024-07-31T18:45:01Z | 2024-08-19T17:21:53Z | https://github.com/keras-team/keras/issues/20073 | [
"type:feature",
"stat:awaiting keras-eng"
] | swamidass | 9 |
yihong0618/running_page | data-visualization | 561 | Can KEEP export other types of workout data? | Hello author, I used keep to record three runs as a test. The first two were normal outdoor runs; for the third I started a running workout and then did an indoor cardio session instead. I had hoped to obtain heart-rate data for other workout types this way, but then found that the data for the third run could not be retrieved. Is there another way to get data for other sports? (Oppo, Xiaomi, Vivo and similar sports watches)

| closed | 2023-12-04T09:23:48Z | 2024-01-29T08:45:24Z | https://github.com/yihong0618/running_page/issues/561 | [] | wuSaberMaster | 9 |
mckinsey/vizro | data-visualization | 901 | Custom DateTimePicker component | ### Question
Hello,
I am trying to create a custom DateTimePicker component. Although dmc has [one](https://www.dash-mantine-components.com/components/datetimepicker), it is only available in dmc v0.15, which is not (yet?) available in vizro. May I first ask whether anyone is aware of an open-source version of such a vizro component? It seems like something that would be commonly used, but I could not find anything.
My idea was to take a [DatePicker](https://dmc-docs-0-12.onrender.com/components/datepicker) and a [TimeInput](https://dmc-docs-0-12.onrender.com/components/timeinput) both from dmc and put them next to each other in a group. However, I am running into the following issue:
- One of the parameters of the component would be `value`, which is a `datetime.datetime`. At build time, I pass `value.date()` to DatePicker and `value.time()` to TimeInput. However, when either is modified, I would like to recompute `value` from `date_picker.value` and `time_input.value`. Is this possible?
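On the recombination point: turning the two component values back into one `datetime.datetime` is just `datetime.combine`; the open question is only the callback wiring around it. A minimal sketch (the sample values are made up):

```python
from datetime import date, time, datetime

d = date(2024, 11, 27)   # what the DatePicker's `value` would carry
t = time(15, 45)         # what the TimeInput's `value` would carry

value = datetime.combine(d, t)
print(value)  # → 2024-11-27 15:45:00
```

In a custom component this would live in a callback that listens to both inner components' `value` properties and writes the combined `datetime` to the outer component's `value`.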
Thank you for your help!
### Code/Examples
This is what I came up with so far on [PyCafe](https://app.py.cafe/snippet/vizro/v1?pycafe-app-view=false&pycafe-edit-enable=true#c=H4sIAEI-R2cEA71ZfXPaOBr_Kjryx5FZcAJJmpYb7qbbbrs709vrbHu3fwBDhC1AF9vySTKEdvLd7_dItrEdSNrui5gBW3re9LxLfO6EKhKdUeeE_Ud-0opJw3jKVCbSvlG5DgWzSsW30rKl0izUgluZrliiojzmmkXccraRJuex_IQllTKeZbEM3bMJpukJC9civGUqt2xtbWZGZ2cradf5IghVcpaEtzI1Yne2ceyJSaK0YDJdKsYXhOUEI0I8jTyxOi2HF0CuyK5FpEITSHUm0jNj-SIWZ44ipvNEpNYJBZmmqUwypS0kJ-QsVjbeBeIu08Jg_4Zld9N0qVXiAVgBXQjSwIUiROxwNkmBY3cZqagAeyet0DzusX9lxJ2e_p3ioSZFxM16nvAUmhVzKCVTKYR1RKMkLKgSUEnzVSwJQEbiTZ6GRLbHfkqz3IJLbt3vB2xW9FhYQc5DHscLHt72WBSC6DS1ejeapgzDMch2EYkQBptByeeNFHHUY--13IDaS2t1j21gaVhd6Wkq7kKRWfaTA_5Ba6VHjJ2wTPNVwkcsVSxUmwMcvpB8tXErrEzgEh6L3h9Zovceo28YxvK2mhdKWWM1z9qKXjidOLINyxbILws1Ox_4nhvxT1o-gBDMuQMtf808XHOZlnSK2Xm1z_kSM0rvDpLayxjAj5NgTt_z3Mq9YAUhAQeC09InjDm8-DXmPkIL72V4K3S3KfdpYfhphz4fBZHiMTPw21gg-klEZkQsSDR206R14yKI0F8hVSwEDK428LAIqYJNbt7IGB5_M5s0tuJnZwzkJjfvueaJOABVLczAgziQxBGDMAhulqeR0PGOgmtyg8gISK5Cplm3TAjb7TYgU_eLiOrXlIjHs_3rGektc_innh1HhvGkabsupL6NMvmfJPTTSlkv9coUaqeBLCFYt8gOk2nH2dALM-3MTkfstVjyPIZ3Qqs3rXUygaOCkch0TousW2aYCb0SCSQBeIhbdGmQHjyJoEn-Z4jdoMnvjtH8ASpyS19HcYuUPzciVGlkWBdRCB9kv64F7KoJwazVFh7n1xEuZG9S4mHqb3hsGuQRBfkhaQssJ6gzt_lKua20iIguUgaofXQvAIPTR9JkMd-JqEWAAqpOoMgCrBtLYyc-jTjbCAFX8-_tMCigWpQnM6LrKfvALd_Il0ZlpSl9ibRXOcyMjYHzcN7jly40qoqUVyCQXI7uklZQN4QJtXQQ42nngHPVDTbtFDFV-tJXEX_gZQdJ151qxMipKqLOQ9pUv8LfKh7Os1qyE-Dj8heGO7qHirzzrxGDf1X0yLJteoc9r6JSONmI1Z0MBCez0kXmLhvNkaozoe2u5FirveDr9loRhWZtWcQAerRyNRA9aiSWbJFLbAZFZFlWGxoylZZIIFrH0H-8DPwLVOTeSlfcY6DZg-SAnuznaPhOxzEIJNqIvRS9g4DLaedzAXxfaMNAfCAQLnjyJipU534xHDTpoCWCKxBfIIFrxr5FgP3LgUauu1-l8bAp7KYopibjoYAHkVKrsO-xZQEzJxgs55kDqEF5K5mmTDS8Qcb-p7XmNTX2P7W1wqVo4EBQScrKvdC5Q4v_5VKj3KMd42yr9C3XCjWfbdfCByhJ1_fSEYLaCL2M1dZ127BPxjXooulEp4_yjBbSMcQ4YXJJFOhgYdCYWiZSla_WzGknQCOTU5-x2KEDUC5F1NhT-No18KQxSAV1qmVnUDvQmFSsZbz5L6rNwU7hzFExZ8PBiz2lvUb-NOtmykgCOWrfh_EVIXlEapu-P4pamP9BXKRz7DC8bTlTlS5onLA2eZbkxlLCQx5yqU9Zq5I-DnaoO20fISP9cfYhnxXoTXGibEvlCs2f
Ic6wTglF9TWMWQkjUzqaFz4MTITKrufa2RhgmjpolA2SPyI8UyHSLiB8Se87NiDUOqvjW_tN-7k43zOp-SWSbLOzb8WBjMaNPEqo8PgmEIoIERg7qLKiUApoTfhUQPWboe0S7qlFyiVB5LOyauEAgFRe92IahfoKhmXz_J3rKNDRWd6FUs14cLoXogQ6xhhl2qCYixTx_VHnjyzPqf1DkBthjIvKFmg7rgBaD6UH8JWrj30T1VxtuNHvuWF3bP3Z5ysCrLLZHqyRMIhV02mqM9sTPkOYDbo0agr9jfqmrvSD7yy9fuptagu2vmkSyyXQBsXGnrWwuUYoY7NvEY1Za6MhQi9CDRy3-hQauOAI3vGFiLsVlBPONaA9hHIS0-2Cn5TR3nD-BHTMajTIWoUlDqzW7HRgFXdRwQfqgrptKzlV7DskeuCrA8on9rwWoe3YnLXeqWSiYwCNJHqo6Wm6SXANsRIBjyLHDO0tdRRaxShfvda9isOJlnDA7I5SAw-klqbrpjNQwUJBrzCV0yZ4u_sYSsvvd6_4cu-OaOxRf8dAeueeuisNxUwm5z02mPXYZNhjw_rvRY9dVL940Go7pwS3FnK1tmA0uDzHbWZVd_eZuO4kYPeKaxw9qhkaVtwRBX_SdDO1cXJygvM7t5RF3Wb-8RDmZfp73iQ7krXxcOYXnsko3jm_7RuhN-CJthEhJpIF5tWShah7KpGfUCmpQi0Utu0OgFBaToWx7-IXmneoqQAgycujDUfs4wJaRSQ2pnBCk6uU4dpYaDRn2GCfGgK3Mxz6YnEnkShdETbo6ihr4rJzB-JpX6QuiT6965b2SzPS-EajvcX28gU5qJuqjVfl3bqz518Ngf6YL9BXw5FJCctcQyko_7ic10khLrancVOI5oOqqTDopilc5ALXlHRyRFPOeLxFOWBbEcP_xF-8eLWt0FhrsYTgB3qK5n8EDcSWPt7iYndNmQQJAipE50O3r67GsaVc5VqMEafFUjda9tidyyUZj-exSFd2TZA7zGXCYm4r8a8CTYUqRm4EaCZC6Q5FxznjqGAPsaV5tcIFZ4vxFzApc1iZiFqx6y9Xu8DPEyrvewJovh8D9Lssd_4UdEPcUjQazZzYdsgi431w18kO1jkNIWDXTeCyNxvTF1WOIFXbQ-3WIahD_UcL07dzTbQWyBaOV1brdidQwuJo7jN_mUJ8nn9dvnYpZGAl-pkB0gVU9zTwdyIV1mmg85SKRafX0f70m1Bqxv9xztX_Pj4PBsHw0jM60E2PCWAYDIBPparEw2vm8gwmsh0SViT6m_Ng-MxB-hLTGX3u4PxMRbQzGoI__hj5RW1puiCFUgLoslvojCbViuULnBuwuHVVpjO6GASX19c4Mr24uro6vzx_cYE18pPOCPWn14FZf_WvF_7txxIRrzICyaWMxffgiKxN6cOd349wJ9D-wsMCJONEt9O5n933KpgHEg4GV8HF5eBqeH15_vzi-fD5s8PEgbiniVQcZLtOk25tGXUFJ5ol30h4S4Cvp0BlQn5xFqsV_nVMV0fB684Q2Dt7FLCgGxr_ZwY5tW-1Akw1VeLNWenj6iIYPB8Mr4bPLp4NBhfn10f1AQ5AfNqEhUmeMl9puYZwFRvft0C-66sv4Yn8hGkeP8W0hCOu9LnvOTdCpAGGLkkA87guQRc8YBBAMoauuGjWGU42PiwpobnjR98vfPa5gi5zVu7ojMBFgh8hCeluv7-ky96yETj9WwGsNP7jov9OU9GY6mseyRw3vOfV9F3frDkOdiVB5Ffc5SKe--cluQZD6gzQFVFi43pXgiyxq75BQwTSwfPrK3hdseJNMWLD_VRM-yznB_t56LaaroFn6KFlugJl0K4o39OXA3CHaa_jvRqpQPhC0tSjyyfgel7S2RN63BqhiOOSyAP93pNH_B9dzUUiBiEAAA).
### Which package?
vizro
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-11-27T15:45:16Z | 2025-02-03T09:33:07Z | https://github.com/mckinsey/vizro/issues/901 | [
"Custom Components :rocket:"
] | gtauzin | 18 |
modelscope/modelscope | nlp | 303 | How can GPU memory be released in modelscope? | Following [读光-文字识别-CRNN模型-中英-通用领域](https://modelscope.cn/models/damo/cv_crnn_ocr-recognition-general_damo/summary) and [读光-文字检测-行检测模型-中英-通用领域](https://modelscope.cn/models/damo/cv_resnet18_ocr-detection-line-level_damo/summary), I wrapped them into an `ocr` method and call it from a Django-based web application. I found that GPU usage keeps growing with the number of calls to the `ocr` method. Is there any way to release the GPU memory?
```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
import cv2
import numpy as np
from PIL import Image
import time

class kpocr(object):
    def __init__(self):
        self.ocr_detection = pipeline(Tasks.ocr_detection, model='damo/cv_resnet18_ocr-detection-db-line-level_damo')
        self.ocr_recognition = pipeline(Tasks.ocr_recognition, model='damo/cv_crnn_ocr-recognition-general_damo')

    def ocr(self, img, path=False):  # accepts a path or a PIL image
        if path:
            img = Image.open(img)
        det_result = self.ocr_detection(img)
        boxs = det_result.get('polygons')
        boxs = boxs[::-1]
        text_all = ''
        for box in boxs:
            xs = [box[0], box[2], box[4], box[6]]
            ys = [box[1], box[3], box[5], box[7]]
            box_i = (min(xs), min(ys), max(xs), max(ys))
            img_i = img.crop(box_i)
            img_i_cv = cv2.cvtColor(np.asarray(img_i), cv2.COLOR_RGB2BGR)
            reg_result = self.ocr_recognition(img_i_cv)
            print(reg_result)
            text_i = reg_result.get('text')
            if isinstance(text_i, list):
                text_i = text_i[0]
            text_all += text_i
        return text_all

if __name__ == '__main__':
    ocr = kpocr()
    print('>>>', ocr.ocr("1.png"))
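    # Hedged editor's note (not a documented modelscope API): the GPU memory
    # stays allocated for as long as the pipelines above are referenced. To
    # release it, drop every reference, force a GC pass, and clear PyTorch's
    # CUDA cache, e.g.:
    #
    #     import gc, torch
    #     del ocr                    # drops kpocr and both pipelines
    #     gc.collect()
    #     torch.cuda.empty_cache()   # returns cached CUDA blocks to the driver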
```
| closed | 2023-05-16T05:23:17Z | 2023-09-18T11:14:14Z | https://github.com/modelscope/modelscope/issues/303 | [
"Stale"
] | yfq512 | 10 |
s3rius/FastAPI-template | graphql | 165 | Unable to start project using docker on an Ubuntu machine | I am using Ubuntu 22.04 with poetry==1.4.2.
When I run the docker build command, I get the following error:
```
=> ERROR [prod 2/10] RUN apt-get update && apt-get install -y default-libmysqlclient-dev gcc && rm -rf /var/lib/apt/lists/* 23.4s
------
> [prod 2/10] RUN apt-get update && apt-get install -y default-libmysqlclient-dev gcc && rm -rf /var/lib/apt/lists/*:
#0 20.78 Err:1 http://deb.debian.org/debian buster InRelease
#0 20.78 Temporary failure resolving 'debian.map.fastlydns.net' Temporary failure resolving 'deb.debian.org'
#0 20.78 Err:2 http://security.debian.org/debian-security buster/updates InRelease
#0 20.78 Temporary failure resolving 'debian.map.fastlydns.net' Temporary failure resolving 'security.debian.org'
#0 23.03 Get:3 http://deb.debian.org/debian buster-updates InRelease [56.6 kB]
#0 23.28 Get:4 http://deb.debian.org/debian buster-updates/main amd64 Packages [8788 B]
#0 23.32 Fetched 65.4 kB in 23s (2889 B/s)
#0 23.32 Reading package lists...
#0 23.34 W: Failed to fetch http://deb.debian.org/debian/dists/buster/InRelease Temporary failure resolving 'debian.map.fastlydns.net' Temporary failure resolving 'deb.debian.org'
#0 23.34 W: Failed to fetch http://security.debian.org/debian-security/dists/buster/updates/InRelease Temporary failure resolving 'debian.map.fastlydns.net' Temporary failure resolving 'security.debian.org'
#0 23.34 W: Some index files failed to download. They have been ignored, or old ones used instead.
#0 23.35 Reading package lists...
#0 23.36 Building dependency tree...
#0 23.37 Reading state information...
#0 23.37 E: Unable to locate package default-libmysqlclient-dev
#0 23.37 E: Unable to locate package gcc
------
failed to solve: executor failed running [/bin/sh -c apt-get update && apt-get install -y default-libmysqlclient-dev gcc && rm -rf /var/lib/apt/lists/*]: exit code: 100
```
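For what it's worth, the repeated "Temporary failure resolving ..." lines above mean the build container cannot resolve DNS at all; `default-libmysqlclient-dev` and `gcc` only appear missing because the package index never downloaded. A hedged first step (the resolver choice is an assumption) is to give the Docker daemon explicit DNS servers and rebuild:

```shell
# Candidate contents for /etc/docker/daemon.json; afterwards run:
#   sudo systemctl restart docker && docker build .
DAEMON_JSON='{ "dns": ["8.8.8.8", "1.1.1.1"] }'
echo "$DAEMON_JSON"
```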
Any ideas on how to fix this? | closed | 2023-04-30T06:41:47Z | 2023-04-30T15:18:32Z | https://github.com/s3rius/FastAPI-template/issues/165 | [] | 1995YogeshSharma | 4 |
Kludex/mangum | fastapi | 21 | Correcting naming/descriptions, Middleware -> Adapter | These are adapters, not middleware, because they are not ASGI-in/ASGI-out. We will need to rename things to use `Adapter`.
| closed | 2019-01-28T09:13:46Z | 2019-01-28T09:19:00Z | https://github.com/Kludex/mangum/issues/21 | [
"improvement"
] | jordaneremieff | 1 |
horovod/horovod | tensorflow | 3,587 | Installation fails with PyTorch on Mac M1 | **Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) PyTorch
2. Framework version: 1.11.0
3. Horovod version: 0.25.0
4. MPI version: 4.1.4
5. CUDA version: N/A
6. NCCL version: N/A
7. Python version: 3.9
8. Spark / PySpark version: N/A
9. Ray version: N/A
10. OS and version: OS X 12.4, Mac M1 Pro
11. GCC version: clang 13.1.6
12. CMake version: 3.23.1
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Hi, when trying to install Horovod via Pip using the command
```
HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITHOUT_MXNET=1 pip install horovod --no-cache-dir
```
the installation fails with a linker error due to duplicate symbols:
```
Collecting horovod
Downloading horovod-0.25.0.tar.gz (3.4 MB)
|████████████████████████████████| 3.4 MB 3.1 MB/s
Requirement already satisfied: cloudpickle in ./anaconda3/lib/python3.9/site-packages (from horovod) (2.0.0)
Requirement already satisfied: psutil in ./anaconda3/lib/python3.9/site-packages (from horovod) (5.8.0)
Requirement already satisfied: pyyaml in ./anaconda3/lib/python3.9/site-packages (from horovod) (6.0)
Requirement already satisfied: cffi>=1.4.0 in ./anaconda3/lib/python3.9/site-packages (from horovod) (1.15.0)
Requirement already satisfied: pycparser in ./anaconda3/lib/python3.9/site-packages (from cffi>=1.4.0->horovod) (2.21)
Building wheels for collected packages: horovod
Building wheel for horovod (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /Users/mleimeister/anaconda3/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-install-2qg__ipb/horovod_cd0910fb8cf942ebbc6c7e3e0a0a9b78/setup.py'"'"'; __file__='"'"'/private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-install-2qg__ipb/horovod_cd0910fb8cf942ebbc6c7e3e0a0a9b78/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-wheel-q1skcjy3
cwd: /private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-install-2qg__ipb/horovod_cd0910fb8cf942ebbc6c7e3e0a0a9b78/
[...]
Running CMake in build/temp.macosx-11.1-arm64-3.9/RelWithDebInfo:
cmake /private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-install-2qg__ipb/horovod_cd0910fb8cf942ebbc6c7e3e0a0a9b78 -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-install-2qg__ipb/horovod_cd0910fb8cf942ebbc6c7e3e0a0a9b78/build/lib.macosx-11.1-arm64-3.9 -DPYTHON_EXECUTABLE:FILEPATH=/Users/mleimeister/anaconda3/bin/python
cmake --build . --config RelWithDebInfo -- -j8 VERBOSE=1
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is AppleClang 13.1.6.13160021
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags:
-- Using command /Users/mleimeister/anaconda3/bin/python
-- Found MPI_CXX: /opt/homebrew/Cellar/open-mpi/4.1.4/lib/libmpi.dylib (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- Looking for a CUDA host compiler - /Library/Developer/CommandLineTools/usr/bin/c++
-- Could not find nvcc, please set CUDAToolkit_ROOT.
-- Could NOT find NVTX (missing: NVTX_INCLUDE_DIR)
-- The C compiler identification is AppleClang 13.1.6.13160021
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Gloo build as STATIC library
-- Found PkgConfig: /opt/homebrew/bin/pkg-config (found version "0.29.2")
-- Checking for one of the modules 'libuv>=1.26'
-- Found MPI_C: /opt/homebrew/Cellar/open-mpi/4.1.4/lib/libmpi.dylib (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- MPI include path: /opt/homebrew/Cellar/open-mpi/4.1.4/include
-- MPI libraries: /opt/homebrew/Cellar/open-mpi/4.1.4/lib/libmpi.dylib
-- Found Pytorch: 1.11.0 (found suitable version "1.11.0", minimum required is "1.5.0")
-- Gloo build as STATIC library
-- MPI include path: /opt/homebrew/Cellar/open-mpi/4.1.4/include
-- MPI libraries: /opt/homebrew/Cellar/open-mpi/4.1.4/lib/libmpi.dylib
-- Configuring done
-- Generating done
[...]
7 warnings generated.
[100%] Linking CXX shared library ../../../../lib.macosx-11.1-arm64-3.9/horovod/torch/mpi_lib_v2.cpython-39-darwin.so
cd /private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-install-2qg__ipb/horovod_cd0910fb8cf942ebbc6c7e3e0a0a9b78/build/temp.macosx-11.1-arm64-3.9/RelWithDebInfo/horovod/torch && /opt/homebrew/Cellar/cmake/3.23.1_1/bin/cmake -E cmake_link_script CMakeFiles/pytorch.dir/link.txt --verbose=1
/Library/Developer/CommandLineTools/usr/bin/c++ -D_GLIBCXX_USE_CXX11_ABI=0 -pthread -fPIC -Wall -ftree-vectorize -O3 -g -DNDEBUG -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.3.sdk -dynamiclib -Wl,-headerpad_max_install_names -undefined dynamic_lookup -Wl,-exported_symbols_list,/private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-install-2qg__ipb/horovod_cd0910fb8cf942ebbc6c7e3e0a0a9b78/horovod.exp -o ../../../../lib.macosx-11.1-arm64-3.9/horovod/torch/mpi_lib_v2.cpython-39-darwin.so -install_name @rpath/mpi_lib_v2.cpython-39-darwin.so CMakeFiles/pytorch.dir/__/common/common.cc.o CMakeFiles/pytorch.dir/__/common/controller.cc.o CMakeFiles/pytorch.dir/__/common/fusion_buffer_manager.cc.o CMakeFiles/pytorch.dir/__/common/group_table.cc.o CMakeFiles/pytorch.dir/__/common/half.cc.o CMakeFiles/pytorch.dir/__/common/logging.cc.o CMakeFiles/pytorch.dir/__/common/message.cc.o CMakeFiles/pytorch.dir/__/common/operations.cc.o CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o CMakeFiles/pytorch.dir/__/common/process_set.cc.o CMakeFiles/pytorch.dir/__/common/response_cache.cc.o CMakeFiles/pytorch.dir/__/common/stall_inspector.cc.o CMakeFiles/pytorch.dir/__/common/thread_pool.cc.o CMakeFiles/pytorch.dir/__/common/timeline.cc.o CMakeFiles/pytorch.dir/__/common/tensor_queue.cc.o CMakeFiles/pytorch.dir/__/common/ops/collective_operations.cc.o CMakeFiles/pytorch.dir/__/common/ops/operation_manager.cc.o CMakeFiles/pytorch.dir/__/common/optim/bayesian_optimization.cc.o CMakeFiles/pytorch.dir/__/common/optim/gaussian_process.cc.o CMakeFiles/pytorch.dir/__/common/utils/env_parser.cc.o CMakeFiles/pytorch.dir/__/common/mpi/mpi_context.cc.o CMakeFiles/pytorch.dir/__/common/mpi/mpi_controller.cc.o CMakeFiles/pytorch.dir/__/common/ops/mpi_operations.cc.o CMakeFiles/pytorch.dir/__/common/ops/adasum/adasum_mpi.cc.o CMakeFiles/pytorch.dir/__/common/ops/adasum_mpi_operations.cc.o CMakeFiles/pytorch.dir/__/common/gloo/gloo_context.cc.o 
CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o CMakeFiles/pytorch.dir/__/common/gloo/http_store.cc.o CMakeFiles/pytorch.dir/__/common/gloo/memory_store.cc.o CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o CMakeFiles/pytorch.dir/handle_manager.cc.o CMakeFiles/pytorch.dir/ready_event.cc.o CMakeFiles/pytorch.dir/cuda_util.cc.o CMakeFiles/pytorch.dir/mpi_ops_v2.cc.o CMakeFiles/pytorch.dir/adapter_v2.cc.o -Wl,-rpath,/Users/mleimeister/anaconda3/lib/python3.9/site-packages/torch/lib /opt/homebrew/Cellar/open-mpi/4.1.4/lib/libmpi.dylib /Users/mleimeister/anaconda3/lib/python3.9/site-packages/torch/lib/libc10.dylib /Users/mleimeister/anaconda3/lib/python3.9/site-packages/torch/lib/libtorch.dylib /Users/mleimeister/anaconda3/lib/python3.9/site-packages/torch/lib/libtorch_cpu.dylib /Users/mleimeister/anaconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.dylib ../../third_party/compatible_gloo/gloo/libcompatible_gloo.a /opt/homebrew/Cellar/open-mpi/4.1.4/lib/libmpi.dylib /opt/homebrew/Cellar/libuv/1.44.1_1/lib/libuv.a -lpthread
ld: warning: cannot export hidden symbol typeinfo for std::__1::enable_shared_from_this<horovod::common::Controller> from CMakeFiles/pytorch.dir/__/common/controller.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::Controller from CMakeFiles/pytorch.dir/__/common/controller.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::Controller from CMakeFiles/pytorch.dir/__/common/controller.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::enable_shared_from_this<horovod::common::Controller> from CMakeFiles/pytorch.dir/__/common/controller.cc.o
ld: warning: cannot export hidden symbol std::__1::thread::thread<void (&)(horovod::common::HorovodGlobalState&), std::__1::reference_wrapper<horovod::common::HorovodGlobalState>, void>(void (&)(horovod::common::HorovodGlobalState&), std::__1::reference_wrapper<horovod::common::HorovodGlobalState>&&) from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::ParameterManager::TunableParameter<Eigen::Matrix<double, -1, 1, 0, -1, 1> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::ParameterManager::ITunableParameter from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::ParameterManager::CategoricalParameter<bool> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::ParameterManager::TunableParameter<bool> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::MPIController*, std::__1::shared_ptr<horovod::common::Controller>::__shared_ptr_default_delete<horovod::common::Controller, horovod::common::MPIController>, std::__1::allocator<horovod::common::MPIController> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::Controller>::__shared_ptr_default_delete<horovod::common::Controller, horovod::common::MPIController> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::Controller from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::enable_shared_from_this<horovod::common::Controller> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::GlooController*, std::__1::shared_ptr<horovod::common::Controller>::__shared_ptr_default_delete<horovod::common::Controller, horovod::common::GlooController>, std::__1::allocator<horovod::common::GlooController> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::Controller>::__shared_ptr_default_delete<horovod::common::Controller, horovod::common::GlooController> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::GlooAllreduce*, std::__1::shared_ptr<horovod::common::AllreduceOp>::__shared_ptr_default_delete<horovod::common::AllreduceOp, horovod::common::GlooAllreduce>, std::__1::allocator<horovod::common::GlooAllreduce> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::AllreduceOp>::__shared_ptr_default_delete<horovod::common::AllreduceOp, horovod::common::GlooAllreduce> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::GlooAllgather*, std::__1::shared_ptr<horovod::common::AllgatherOp>::__shared_ptr_default_delete<horovod::common::AllgatherOp, horovod::common::GlooAllgather>, std::__1::allocator<horovod::common::GlooAllgather> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::AllgatherOp>::__shared_ptr_default_delete<horovod::common::AllgatherOp, horovod::common::GlooAllgather> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::GlooBroadcast*, std::__1::shared_ptr<horovod::common::BroadcastOp>::__shared_ptr_default_delete<horovod::common::BroadcastOp, horovod::common::GlooBroadcast>, std::__1::allocator<horovod::common::GlooBroadcast> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::BroadcastOp>::__shared_ptr_default_delete<horovod::common::BroadcastOp, horovod::common::GlooBroadcast> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::GlooAlltoall*, std::__1::shared_ptr<horovod::common::AlltoallOp>::__shared_ptr_default_delete<horovod::common::AlltoallOp, horovod::common::GlooAlltoall>, std::__1::allocator<horovod::common::GlooAlltoall> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::AlltoallOp>::__shared_ptr_default_delete<horovod::common::AlltoallOp, horovod::common::GlooAlltoall> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::GlooReducescatter*, std::__1::shared_ptr<horovod::common::ReducescatterOp>::__shared_ptr_default_delete<horovod::common::ReducescatterOp, horovod::common::GlooReducescatter>, std::__1::allocator<horovod::common::GlooReducescatter> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::ReducescatterOp>::__shared_ptr_default_delete<horovod::common::ReducescatterOp, horovod::common::GlooReducescatter> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::AdasumMPIAllreduceOp*, std::__1::shared_ptr<horovod::common::AllreduceOp>::__shared_ptr_default_delete<horovod::common::AllreduceOp, horovod::common::AdasumMPIAllreduceOp>, std::__1::allocator<horovod::common::AdasumMPIAllreduceOp> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::AllreduceOp>::__shared_ptr_default_delete<horovod::common::AllreduceOp, horovod::common::AdasumMPIAllreduceOp> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::MPIAllreduce*, std::__1::shared_ptr<horovod::common::AllreduceOp>::__shared_ptr_default_delete<horovod::common::AllreduceOp, horovod::common::MPIAllreduce>, std::__1::allocator<horovod::common::MPIAllreduce> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::AllreduceOp>::__shared_ptr_default_delete<horovod::common::AllreduceOp, horovod::common::MPIAllreduce> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::MPIAllgather*, std::__1::shared_ptr<horovod::common::AllgatherOp>::__shared_ptr_default_delete<horovod::common::AllgatherOp, horovod::common::MPIAllgather>, std::__1::allocator<horovod::common::MPIAllgather> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::AllgatherOp>::__shared_ptr_default_delete<horovod::common::AllgatherOp, horovod::common::MPIAllgather> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::MPIBroadcast*, std::__1::shared_ptr<horovod::common::BroadcastOp>::__shared_ptr_default_delete<horovod::common::BroadcastOp, horovod::common::MPIBroadcast>, std::__1::allocator<horovod::common::MPIBroadcast> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::BroadcastOp>::__shared_ptr_default_delete<horovod::common::BroadcastOp, horovod::common::MPIBroadcast> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::MPIAlltoall*, std::__1::shared_ptr<horovod::common::AlltoallOp>::__shared_ptr_default_delete<horovod::common::AlltoallOp, horovod::common::MPIAlltoall>, std::__1::allocator<horovod::common::MPIAlltoall> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::AlltoallOp>::__shared_ptr_default_delete<horovod::common::AlltoallOp, horovod::common::MPIAlltoall> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::MPIReducescatter*, std::__1::shared_ptr<horovod::common::ReducescatterOp>::__shared_ptr_default_delete<horovod::common::ReducescatterOp, horovod::common::MPIReducescatter>, std::__1::allocator<horovod::common::MPIReducescatter> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::ReducescatterOp>::__shared_ptr_default_delete<horovod::common::ReducescatterOp, horovod::common::MPIReducescatter> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::JoinOp*, std::__1::shared_ptr<horovod::common::JoinOp>::__shared_ptr_default_delete<horovod::common::JoinOp, horovod::common::JoinOp>, std::__1::allocator<horovod::common::JoinOp> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::JoinOp>::__shared_ptr_default_delete<horovod::common::JoinOp, horovod::common::JoinOp> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::BarrierOp*, std::__1::shared_ptr<horovod::common::BarrierOp>::__shared_ptr_default_delete<horovod::common::BarrierOp, horovod::common::BarrierOp>, std::__1::allocator<horovod::common::BarrierOp> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::BarrierOp>::__shared_ptr_default_delete<horovod::common::BarrierOp, horovod::common::BarrierOp> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_pointer<horovod::common::ErrorOp*, std::__1::shared_ptr<horovod::common::ErrorOp>::__shared_ptr_default_delete<horovod::common::ErrorOp, horovod::common::ErrorOp>, std::__1::allocator<horovod::common::ErrorOp> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::shared_ptr<horovod::common::ErrorOp>::__shared_ptr_default_delete<horovod::common::ErrorOp, horovod::common::ErrorOp> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::ParameterManager::ITunableParameter from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::ParameterManager::TunableParameter<Eigen::Matrix<double, -1, 1, 0, -1, 1> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::ParameterManager::TunableParameter<bool> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::ParameterManager::CategoricalParameter<bool> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::MPIController*, std::__1::shared_ptr<horovod::common::Controller>::__shared_ptr_default_delete<horovod::common::Controller, horovod::common::MPIController>, std::__1::allocator<horovod::common::MPIController> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::enable_shared_from_this<horovod::common::Controller> from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::Controller from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::GlooController*, std::__1::shared_ptr<horovod::common::Controller>::__shared_ptr_default_delete<horovod::common::Controller, horovod::common::GlooController>, std::__1::allocator<horovod::common::GlooController> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::GlooAllreduce*, std::__1::shared_ptr<horovod::common::AllreduceOp>::__shared_ptr_default_delete<horovod::common::AllreduceOp, horovod::common::GlooAllreduce>, std::__1::allocator<horovod::common::GlooAllreduce> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::GlooAllgather*, std::__1::shared_ptr<horovod::common::AllgatherOp>::__shared_ptr_default_delete<horovod::common::AllgatherOp, horovod::common::GlooAllgather>, std::__1::allocator<horovod::common::GlooAllgather> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::GlooBroadcast*, std::__1::shared_ptr<horovod::common::BroadcastOp>::__shared_ptr_default_delete<horovod::common::BroadcastOp, horovod::common::GlooBroadcast>, std::__1::allocator<horovod::common::GlooBroadcast> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::GlooAlltoall*, std::__1::shared_ptr<horovod::common::AlltoallOp>::__shared_ptr_default_delete<horovod::common::AlltoallOp, horovod::common::GlooAlltoall>, std::__1::allocator<horovod::common::GlooAlltoall> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::GlooReducescatter*, std::__1::shared_ptr<horovod::common::ReducescatterOp>::__shared_ptr_default_delete<horovod::common::ReducescatterOp, horovod::common::GlooReducescatter>, std::__1::allocator<horovod::common::GlooReducescatter> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::AdasumMPIAllreduceOp*, std::__1::shared_ptr<horovod::common::AllreduceOp>::__shared_ptr_default_delete<horovod::common::AllreduceOp, horovod::common::AdasumMPIAllreduceOp>, std::__1::allocator<horovod::common::AdasumMPIAllreduceOp> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::MPIAllreduce*, std::__1::shared_ptr<horovod::common::AllreduceOp>::__shared_ptr_default_delete<horovod::common::AllreduceOp, horovod::common::MPIAllreduce>, std::__1::allocator<horovod::common::MPIAllreduce> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::MPIAllgather*, std::__1::shared_ptr<horovod::common::AllgatherOp>::__shared_ptr_default_delete<horovod::common::AllgatherOp, horovod::common::MPIAllgather>, std::__1::allocator<horovod::common::MPIAllgather> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::MPIBroadcast*, std::__1::shared_ptr<horovod::common::BroadcastOp>::__shared_ptr_default_delete<horovod::common::BroadcastOp, horovod::common::MPIBroadcast>, std::__1::allocator<horovod::common::MPIBroadcast> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::MPIAlltoall*, std::__1::shared_ptr<horovod::common::AlltoallOp>::__shared_ptr_default_delete<horovod::common::AlltoallOp, horovod::common::MPIAlltoall>, std::__1::allocator<horovod::common::MPIAlltoall> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::MPIReducescatter*, std::__1::shared_ptr<horovod::common::ReducescatterOp>::__shared_ptr_default_delete<horovod::common::ReducescatterOp, horovod::common::MPIReducescatter>, std::__1::allocator<horovod::common::MPIReducescatter> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::JoinOp*, std::__1::shared_ptr<horovod::common::JoinOp>::__shared_ptr_default_delete<horovod::common::JoinOp, horovod::common::JoinOp>, std::__1::allocator<horovod::common::JoinOp> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::BarrierOp*, std::__1::shared_ptr<horovod::common::BarrierOp>::__shared_ptr_default_delete<horovod::common::BarrierOp, horovod::common::BarrierOp>, std::__1::allocator<horovod::common::BarrierOp> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_pointer<horovod::common::ErrorOp*, std::__1::shared_ptr<horovod::common::ErrorOp>::__shared_ptr_default_delete<horovod::common::ErrorOp, horovod::common::ErrorOp>, std::__1::allocator<horovod::common::ErrorOp> > from CMakeFiles/pytorch.dir/__/common/operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::ParameterManager::ITunableParameter from CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::ParameterManager::TunableParameter<Eigen::Matrix<double, -1, 1, 0, -1, 1> > from CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::ParameterManager::TunableParameter<bool> from CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::ParameterManager::CategoricalParameter<bool> from CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::ParameterManager::TunableParameter<Eigen::Matrix<double, -1, 1, 0, -1, 1> > from CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::ParameterManager::ITunableParameter from CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::ParameterManager::CategoricalParameter<bool> from CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::ParameterManager::TunableParameter<bool> from CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o
ld: warning: cannot export hidden symbol std::__1::thread::thread<void (horovod::common::ThreadPool::*)(), horovod::common::ThreadPool*, void>(void (horovod::common::ThreadPool::*&&)(), horovod::common::ThreadPool*&&) from CMakeFiles/pytorch.dir/__/common/thread_pool.cc.o
ld: warning: cannot export hidden symbol std::__1::thread::thread<void (horovod::common::TimelineWriter::*)(), horovod::common::TimelineWriter*, void>(void (horovod::common::TimelineWriter::*&&)(), horovod::common::TimelineWriter*&&) from CMakeFiles/pytorch.dir/__/common/timeline.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::BroadcastOp from CMakeFiles/pytorch.dir/__/common/ops/collective_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::AlltoallOp from CMakeFiles/pytorch.dir/__/common/ops/collective_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::BroadcastOp from CMakeFiles/pytorch.dir/__/common/ops/collective_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::AlltoallOp from CMakeFiles/pytorch.dir/__/common/ops/collective_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::OperationManager from CMakeFiles/pytorch.dir/__/common/ops/operation_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::OperationManager from CMakeFiles/pytorch.dir/__/common/ops/operation_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::enable_shared_from_this<horovod::common::Controller> from CMakeFiles/pytorch.dir/__/common/mpi/mpi_controller.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::Controller from CMakeFiles/pytorch.dir/__/common/mpi/mpi_controller.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::Controller from CMakeFiles/pytorch.dir/__/common/mpi/mpi_controller.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::enable_shared_from_this<horovod::common::Controller> from CMakeFiles/pytorch.dir/__/common/mpi/mpi_controller.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::BroadcastOp from CMakeFiles/pytorch.dir/__/common/ops/mpi_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::AlltoallOp from CMakeFiles/pytorch.dir/__/common/ops/mpi_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::BroadcastOp from CMakeFiles/pytorch.dir/__/common/ops/mpi_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::AlltoallOp from CMakeFiles/pytorch.dir/__/common/ops/mpi_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::Adasum<ompi_communicator_t*> from CMakeFiles/pytorch.dir/__/common/ops/adasum/adasum_mpi.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::Adasum<ompi_communicator_t*> from CMakeFiles/pytorch.dir/__/common/ops/adasum/adasum_mpi.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::enable_shared_from_this<horovod::common::Controller> from CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::Controller from CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::Controller from CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::enable_shared_from_this<horovod::common::Controller> from CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooStore from CMakeFiles/pytorch.dir/__/common/gloo/http_store.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooStore from CMakeFiles/pytorch.dir/__/common/gloo/http_store.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooStore from CMakeFiles/pytorch.dir/__/common/gloo/memory_store.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooStore from CMakeFiles/pytorch.dir/__/common/gloo/memory_store.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::BroadcastOp from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::AlltoallOp from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<unsigned char> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::IGlooAlgorithms from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<signed char> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<unsigned short> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<short> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<int> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<long long> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<gloo::float16> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<float> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<double> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::GlooAlgorithms<bool> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::BroadcastOp from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::AlltoallOp from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::IGlooAlgorithms from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<unsigned char> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<signed char> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<unsigned short> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<short> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<int> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<long long> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<gloo::float16> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<float> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<double> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::GlooAlgorithms<bool> from CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_emplace<horovod::common::Status, std::__1::allocator<horovod::common::Status> > from CMakeFiles/pytorch.dir/handle_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_emplace<horovod::common::Status, std::__1::allocator<horovod::common::Status> > from CMakeFiles/pytorch.dir/handle_manager.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_emplace<horovod::torch::TorchTensor, std::__1::allocator<horovod::torch::TorchTensor> > from CMakeFiles/pytorch.dir/mpi_ops_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_emplace<horovod::torch::TorchOpContext, std::__1::allocator<horovod::torch::TorchOpContext> > from CMakeFiles/pytorch.dir/mpi_ops_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__function::__base<void (horovod::common::Status const&)> from CMakeFiles/pytorch.dir/mpi_ops_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_emplace<horovod::torch::TorchTensor, std::__1::allocator<horovod::torch::TorchTensor> > from CMakeFiles/pytorch.dir/mpi_ops_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_emplace<horovod::torch::TorchOpContext, std::__1::allocator<horovod::torch::TorchOpContext> > from CMakeFiles/pytorch.dir/mpi_ops_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__function::__base<void (horovod::common::Status const&)> from CMakeFiles/pytorch.dir/mpi_ops_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::PersistentBuffer from CMakeFiles/pytorch.dir/adapter_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::Tensor from CMakeFiles/pytorch.dir/adapter_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo name for horovod::common::OpContext from CMakeFiles/pytorch.dir/adapter_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_emplace<horovod::torch::TorchPersistentBuffer, std::__1::allocator<horovod::torch::TorchPersistentBuffer> > from CMakeFiles/pytorch.dir/adapter_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo name for std::__1::__shared_ptr_emplace<horovod::torch::TorchTensor, std::__1::allocator<horovod::torch::TorchTensor> > from CMakeFiles/pytorch.dir/adapter_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::PersistentBuffer from CMakeFiles/pytorch.dir/adapter_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::Tensor from CMakeFiles/pytorch.dir/adapter_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo for horovod::common::OpContext from CMakeFiles/pytorch.dir/adapter_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_emplace<horovod::torch::TorchPersistentBuffer, std::__1::allocator<horovod::torch::TorchPersistentBuffer> > from CMakeFiles/pytorch.dir/adapter_v2.cc.o
ld: warning: cannot export hidden symbol typeinfo for std::__1::__shared_ptr_emplace<horovod::torch::TorchTensor, std::__1::allocator<horovod::torch::TorchTensor> > from CMakeFiles/pytorch.dir/adapter_v2.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/operations.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/process_set.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/parameter_manager.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/response_cache.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/ops/collective_operations.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/ops/operation_manager.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/optim/bayesian_optimization.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/ops/mpi_operations.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/mpi/mpi_controller.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/ops/adasum/adasum_mpi.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/optim/gaussian_process.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/ops/adasum_mpi_operations.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o
duplicate symbol 'Eigen::internal::conditional<((unpacket_traits<__simd128_float16_t>::size) % (8)) == (0), Eigen::internal::unpacket_traits<__simd128_float16_t>::half, __simd128_float16_t>::type Eigen::internal::predux_half_dowto4<__simd128_float16_t>(__simd128_float16_t const&)' in:
CMakeFiles/pytorch.dir/__/common/controller.cc.o
CMakeFiles/pytorch.dir/__/common/ops/gloo_operations.cc.o
ld: 14 duplicate symbols for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [../../lib.macosx-11.1-arm64-3.9/horovod/torch/mpi_lib_v2.cpython-39-darwin.so] Error 1
make[1]: *** [horovod/torch/CMakeFiles/pytorch.dir/all] Error 2
make: *** [all] Error 2
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-install-2qg__ipb/horovod_cd0910fb8cf942ebbc6c7e3e0a0a9b78/setup.py", line 210, in <module>
setup(name='horovod',
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/dist.py", line 1214, in run_command
super().run_command(command)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/Users/mleimeister/anaconda3/lib/python3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/private/var/folders/cg/8x25dwnn2xx6b7bbw8kw973w0000gq/T/pip-install-2qg__ipb/horovod_cd0910fb8cf942ebbc6c7e3e0a0a9b78/setup.py", line 144, in build_extensions
subprocess.check_call(command, cwd=cmake_build_dir)
File "/Users/mleimeister/anaconda3/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'RelWithDebInfo', '--', '-j8', 'VERBOSE=1']' returned non-zero exit status 2.
----------------------------------------
ERROR: Failed building wheel for horovod
Running setup.py clean for horovod
Failed to build horovod
```
Full output logs:
[install_logs.txt](https://github.com/horovod/horovod/files/9000446/install_logs.txt)
The same error happens when trying to build from source. Is there any CMake setting or other configuration change that could prevent this?
Thanks for any hints!
| closed | 2022-06-28T11:26:22Z | 2022-07-28T15:25:05Z | https://github.com/horovod/horovod/issues/3587 | [
"bug"
] | MatthiasLeimeisterSonos | 2 |
ranaroussi/yfinance | pandas | 1,590 | Exception when running yf.download for some tickers | Hi there,
I updated to the latest version 0.2.22 and am now getting this error for quite a few tickers. These tickers worked before; now I get N/A for the entire time series.
['MDGSW']: Exception('%ticker%: 1d data not available for startTime=-2208994789 and endTime=1689079312. Only 100 years worth of day granularity data are allowed to be fetched per request.')
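For context, decoding the two timestamps from the error message shows the requested window starts in 1899, which suggests the default start date (rather than the ticker data) is what trips the 100-year limit:

```python
from datetime import datetime, timezone

# timestamps taken verbatim from the error message above
start = datetime.fromtimestamp(-2208994789, tz=timezone.utc)
end = datetime.fromtimestamp(1689079312, tz=timezone.utc)
print(start.year, end.year)  # 1899 2023, i.e. a window of well over 100 years
```

If that is the cause, passing an explicit `start=` (e.g. `yf.download('MDGSW', start='2000-01-01')`) might be a workaround, though I have not confirmed this.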
Many thanks for your help
Tom | closed | 2023-07-11T12:54:18Z | 2024-03-16T11:34:57Z | https://github.com/ranaroussi/yfinance/issues/1590 | [] | tomnewg | 2 |
pytorch/vision | machine-learning | 8,732 | datasets phototour.py gives errors | ### 🐛 Describe the bug
datasets phototour.py gives errors.
https://github.com/pytorch/vision/blob/main/torchvision/datasets/phototour.py
```
import torch
from torchvision.datasets import PhotoTour
from torchvision.transforms import ToTensor
def test_phototour_torchvision():
# Define the root directory where datasets will be stored
root = "./datasets"
# List of datasets to test
datasets = ["notredame", "yosemite", "liberty"]
for dataset_name in datasets:
print(f"\nTesting dataset: {dataset_name}")
# Initialize the dataset
dataset = PhotoTour(
root=root,
name=dataset_name,
train=True,
transform=ToTensor(),
download=True, # Download the datasets if not already present
)
# Check if the dataset has been loaded successfully
assert len(dataset) > 0, f"Dataset {dataset_name} is empty!"
print(f"Number of patches in {dataset_name}: {len(dataset)}")
# Retrieve a sample
sample = dataset[0]
print(f"Sample type for {dataset_name}: {type(sample)}")
if isinstance(sample, torch.Tensor):
print(f"Sample shape: {sample.shape}")
elif isinstance(sample, tuple):
print(f"Sample components shapes: {[s.shape if isinstance(s, torch.Tensor) else type(s) for s in sample]}")
# Print the mean and standard deviation of the dataset
print(f"Mean: {dataset.mean}, Std: {dataset.std}")
# Access a few samples to verify functionality
for i in range(min(5, len(dataset))): # Test first 5 samples
try:
data = dataset[i]
print(f"Sample {i}: {data.shape if isinstance(data, torch.Tensor) else type(data)}")
except Exception as e:
print(f"Error accessing sample {i}: {e}")
if __name__ == "__main__":
test_phototour_torchvision()
```
Testing dataset: notredame
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 1344, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1336, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1382, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1331, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1091, in _send_output
self.send(msg)
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1035, in send
self.connect()
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1001, in connect
self.sock = self._create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/socket.py", line 865, in create_connection
raise exceptions[0]
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/socket.py", line 850, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/users/pho.py", line 48, in <module>
test_phototour_torchvision()
File "/users/pho.py", line 16, in test_phototour_torchvision
dataset = PhotoTour(
^^^^^^^^^^
File "/users/venv/lib/python3.12/site-packages/torchvision/datasets/phototour.py", line 109, in __init__
self.download()
File "/users/venv/lib/python3.12/site-packages/torchvision/datasets/phototour.py", line 158, in download
download_url(url, self.root, filename, md5)
File "/users/venv/lib/python3.12/site-packages/torchvision/datasets/utils.py", line 122, in download_url
url = _get_redirect_url(url, max_hops=max_redirect_hops)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/users/venv/lib/python3.12/site-packages/torchvision/datasets/utils.py", line 66, in _get_redirect_url
with urllib.request.urlopen(urllib.request.Request(url, headers=headers)) as response:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 215, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 515, in open
response = self._open(req, data)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 532, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 492, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 1373, in http_open
return self.do_open(http.client.HTTPConnection, req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.7/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 1347, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 60] Operation timed out>
### Versions
Name: torch
Version: 2.5.1
Name: torchvision
Version: 0.20.1
| open | 2024-11-15T20:16:18Z | 2024-11-27T14:59:47Z | https://github.com/pytorch/vision/issues/8732 | [] | venkatram-dev | 2 |
horovod/horovod | pytorch | 3,825 | CI for tf-head: package `tf-nightly-gpu` must be replaced by `tf-nightly` | Example: https://github.com/horovod/horovod/actions/runs/4007634282/jobs/6882833168
```
2023-01-25T18:03:55.7944804Z #39 1.522 =========================================================
2023-01-25T18:03:55.7945146Z #39 1.522 The "tf-nightly-gpu" package has been removed!
2023-01-25T18:03:55.7945386Z #39 1.522
2023-01-25T18:03:55.7945680Z #39 1.522 Please install "tf-nightly" instead.
2023-01-25T18:03:55.7945896Z #39 1.522
2023-01-25T18:03:55.7946153Z #39 1.522 Other than the name, the two packages have been identical
2023-01-25T18:03:55.7946550Z #39 1.522 since tf-nightly 2.1, or roughly since Sep 2019. For more
2023-01-25T18:03:55.7947051Z #39 1.522 information, see: pypi.org/project/tf-nightly-gpu
2023-01-25T18:03:55.7947364Z #39 1.522 =========================================================
```
| closed | 2023-01-25T18:08:12Z | 2023-01-26T10:04:22Z | https://github.com/horovod/horovod/issues/3825 | [
"bug"
] | maxhgerlach | 0 |
holoviz/panel | matplotlib | 6,950 | Links in tutorial pages are broken | Clicking on the links in tutorial pages don't see to find a resource and instead just render the existing page - it looks like it's linking to internal anchors ```#``` rather than the page.
E.g. the link to _trends reference guide _ documentation in this page https://panel.holoviz.org/tutorials/basic/indicators_performance.html links to the following
https://panel.holoviz.org/tutorials/basic/indicators_performance.html#../../reference/indicators/Trend.html
If using the actual relative resolution we get https://panel.holoviz.org/tutorials/reference/indicators/Trend.html but it should actually be https://panel.holoviz.org/reference/indicators/Trend.html
The above relative link should be https://panel.holoviz.org/tutorials/basic/indicators_performance.html/../../../reference/indicators/Trend.html
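The `#` prefix alone explains the behaviour: under standard URL resolution, a fragment-only href never leaves the current page. This can be checked with the stdlib:

```python
from urllib.parse import urljoin

page = "https://panel.holoviz.org/tutorials/basic/indicators_performance.html"
broken_href = "#../../reference/indicators/Trend.html"

# a fragment-only href resolves to the same page plus the fragment
resolved = urljoin(page, broken_href)
print(resolved)
```

So the browser stays on `indicators_performance.html` no matter what follows the `#`.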
This problem exists throughout all reference links I have been trying in these pages. | closed | 2024-06-30T19:08:00Z | 2024-07-14T15:33:56Z | https://github.com/holoviz/panel/issues/6950 | [] | matthalstead | 0 |
flairNLP/flair | nlp | 3,460 | flair correct shows FileNotFoundError: | ### Question
When I use `flair correct`, it shows: No such file or directory: '/home/ef3ebff9-6175-4b23-8bdf-e85042d809b8/1_inconsistent.bed'
| closed | 2024-05-22T15:34:39Z | 2024-05-22T15:45:12Z | https://github.com/flairNLP/flair/issues/3460 | [
"question"
] | guo0814 | 0 |
noirbizarre/flask-restplus | api | 324 | Curl in swagger documentation | Hello,
I am trying to use a curl generated in the automatic documentation, however it does not work. Generated curl is:
```
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{ \
"username": "string", \
"surname": "string", \
"password": "string", \
"name": "string" \
}' 'http://localhost:5000/users'
```
I found out that this results in a 400 error on `json = request.get_json()`.
When I remove backslashes it works.
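To illustrate why: inside single quotes the shell passes the `\` characters through literally, so the server receives invalid JSON, which is exactly what `request.get_json()` rejects with a 400:

```python
import json

# payload as the generated curl actually sends it (literal backslash + newline)
broken = '{ \\\n"username": "string" \\\n}'
clean = '{ "username": "string" }'

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print("rejected:", e.msg)

print(json.loads(clean))  # {'username': 'string'}
```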
Is there a way to tell the automatic documentation to generate the curl command on one line, or without backslashes?
sherlock-project/sherlock | python | 1,774 | ا | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [ ] I'm reporting a feature request
- [ ] I've checked for similar feature requests including closed ones
## Description
<!--
Provide a detailed description of the feature you would like Sherlock to have
-->
WRITE DESCRIPTION HERE | closed | 2023-04-17T04:50:01Z | 2023-04-21T09:19:10Z | https://github.com/sherlock-project/sherlock/issues/1774 | [
"enhancement"
] | lh1b | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,422 | G_GAN,G_L1,D_real,D_fake are all 'NaN' after a while | When I train with pix2pix, I always get "NaN" as return value after some epochs. I have tested it with my own datasets as well as with the facades dataset. However, I always got the same error.
After searching for it, I found that I should adjust the learning rate. After that the test ran longer, but after a while the same error occurred.
I really don't know what the losses of G_GAN, G_L1, D_real and D_fake are due to during training.
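In case it helps with debugging, here is a small stdlib sketch of the check I would add in the training loop to catch the exact iteration where a loss goes non-finite (the names mirror the logged losses):

```python
import math

# example of a loss snapshot as printed in the training log
losses = {"G_GAN": float("nan"), "G_L1": 1.2, "D_real": 0.7, "D_fake": 0.6}

bad = [name for name, value in losses.items() if not math.isfinite(value)]
if bad:
    print("non-finite losses detected:", bad)  # non-finite losses detected: ['G_GAN']
```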


| closed | 2022-05-16T17:35:48Z | 2022-07-04T18:22:38Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1422 | [] | Narua2010 | 7 |
csurfer/pyheat | matplotlib | 24 | Developing a web UI update, stuck at updating the project to python3 | Hi,
I intended to contribute a web UI feature to your amazing project, but I got stuck running the project on Python 3. I made some package and script fixes to move forward, but the `heatmap` module the project uses is too outdated and does not support Python 3 at all. Since you've stated this project also runs on Python 3, I made fixing these bugs my goal as well. Do you have any workaround for this version problem with the `heatmap` module? The heatmap class is the main feature here and I don't want to interfere with the core algorithm.
You can check my updates from [here](https://github.com/MelihCelik00/pyheat) and I appreciate any feedback to help my goal to contribute!
My sole dream is to make #17 real. | open | 2024-04-14T21:19:13Z | 2024-04-14T21:21:12Z | https://github.com/csurfer/pyheat/issues/24 | [] | MelihCelik00 | 0 |
allenai/allennlp | pytorch | 4,670 | Add memory pinning option to new data loader | See
- https://pytorch.org/docs/stable/data.html#memory-pinning
- https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/pin_memory.py#L45
- https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L410 | closed | 2020-09-24T19:43:55Z | 2021-02-18T23:58:00Z | https://github.com/allenai/allennlp/issues/4670 | [] | epwalsh | 1 |
ipython/ipython | jupyter | 14,568 | Exception on rendering empty pipeline | Hi guys,
this issue is very very simple.
`ipython` simply fails with an exception when an empty `Pipeline` is rendered.

This runs perfectly fine in plain `python`; it fails only in `ipython`.
## Reproduce
Very simple: put this code into new notebook and run:
```python
# This code fails - EMPTY pipeline is rendered
from sklearn.pipeline import Pipeline
# Create pipeline
pipeline = Pipeline(steps=[])
pipeline
```
It is important to note that the problem is in the rendering, not the code itself, because this code runs fine (without rendering the pipeline on the last line):
```python
# This code works - pipeline is NOT rendered
from sklearn.pipeline import Pipeline
# Create pipeline
pipeline = Pipeline(steps=[])
```
As soon as the pipeline is filled with some content (a transformer), rendering works fine again. So the problem is specifically rendering an empty pipeline.
```python
# This code works - pipeline is not empty
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
def remove_columns(x):
return x.drop(columns=['a', 'b'])
# Create pipeline
pipeline = Pipeline(steps=[
('remove_columns', FunctionTransformer(remove_columns)),
])
pipeline
```
## Expected behavior
No exception should be thrown and the empty Pipeline should be displayed.
| closed | 2024-11-01T07:34:02Z | 2024-11-08T14:56:30Z | https://github.com/ipython/ipython/issues/14568 | [] | stefansimik | 3 |
developmentseed/lonboard | data-visualization | 202 | Add support for Visual Studio Code | It seems lonboard does not support VS Code yet. It would be great to support VS Code in addition to JupyterLab and Google Colab. | closed | 2023-11-06T05:00:05Z | 2023-11-06T11:23:28Z | https://github.com/developmentseed/lonboard/issues/202 | [] | giswqs | 1 |
graphql-python/graphene-sqlalchemy | graphql | 185 | I have wrote some extensions for graphene-sqlalchemy | Here they are: [graphene-sqlalchemy-ext](https://github.com/yanghg-basefx/graphene-sqlalchemy-ext)
I think some of these methods could be merged into graphene-sqlalchemy as basic functionality, such as `connection_from_query`, which provides a way to page results within the SQL query itself (LIMIT ... OFFSET ...). It's much faster than converting the whole query to a list.
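For illustration, here is a rough sketch of the LIMIT/OFFSET idea (the signature is hypothetical and may differ from the real method in the extension; it assumes a SQLAlchemy-style query object):

```python
def connection_from_query(query, offset=0, page_size=50):
    """Page a query in SQL (LIMIT ... OFFSET ...) instead of listing every row."""
    total = query.count()
    items = query.offset(offset).limit(page_size).all()
    return {"total": total, "items": items}
```

With SQLAlchemy, `query.offset(m).limit(n)` compiles to `LIMIT n OFFSET m`, so only one page of rows is ever fetched.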
If you'd like to, feel free to use any of the code. I'm very happy to contribute.
Cheers. | open | 2019-03-01T08:06:57Z | 2019-04-04T07:53:19Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/185 | [
"enhancement"
] | yanghg-basefx | 2 |
huggingface/transformers | python | 36,854 | Facing RunTime Attribute error while running different Flax models for RoFormer | when running FlaxRoFormerForMaskedLM model, I have encountered an issue as
> AttributeError: 'jaxlib.xla_extension.ArrayImpl' object has no attribute 'split'.
This error is reported in the file `transformers/models/roformer/modeling_flax_roformer.py:265`
The function responsible for this error in that file is as below
```
def apply_rotary_position_embeddings(sinusoidal_pos, query_layer, key_layer, value_layer=None):
sin, cos = sinusoidal_pos.split(2, axis=-1)
```
After changing this particular line from `sinusoidal_pos.split(2, axis=-1)` to `sinusoidal_pos._split(2, axis=-1)`, I no longer get that error.
In short: replacing `split()` with `_split()` resolves my issue.
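A minimal sketch of what I believe the proper fix is: use the module-level split function instead of the removed array method. Shown here with NumPy, whose `split` has the same semantics as `jnp.split`:

```python
import numpy as np

# stand-in for the sinusoidal position tensor
sinusoidal_pos = np.arange(12.0).reshape(2, 6)

# jnp.split(sinusoidal_pos, 2, axis=-1) would be the JAX equivalent of
# the failing sinusoidal_pos.split(2, axis=-1) method call
sin, cos = np.split(sinusoidal_pos, 2, axis=-1)
print(sin.shape, cos.shape)  # (2, 3) (2, 3)
```

So in `modeling_flax_roformer.py` the line would become `sin, cos = jnp.split(sinusoidal_pos, 2, axis=-1)`.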
### System Info
My environment details are as below :
> - `transformers` version: 4.49.0
> - Platform: Linux-5.4.0-208-generic-x86_64-with-glibc2.35
> - Python version: 3.10.12
> - Huggingface_hub version: 0.29.3
> - Safetensors version: 0.5.3
> - Accelerate version: not installed
> - Accelerate config: not found
> - DeepSpeed version: not installed
> - PyTorch version (GPU?): 2.6.0+cu124 (False)
> - Tensorflow version (GPU?): not installed (NA)
> - Flax version (CPU?/GPU?/TPU?): 0.10.2 (cpu)
> - Jax version: 0.4.36
> - JaxLib version: 0.4.36
I am attaching a screenshot for reference
<img width="1642" alt="Image" src="https://github.com/user-attachments/assets/a488444c-6095-4fc5-a5a0-bc400409d8ba" />
### Who can help?
@gante @Rocketknight1
I am facing this issue for Models like
> FlaxRoFormerForMultipleChoice
> FlaxRoFormerForSequenceClassification
> FlaxRoFormerForTokenClassification
> FlaxRoFormerForQuestionAnswering
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to recreate the error:
Run the below code in any python editor
```
from transformers import AutoTokenizer, FlaxRoFormerForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("junnyu/roformer_chinese_base")
model = FlaxRoFormerForMaskedLM.from_pretrained("junnyu/roformer_chinese_base")
inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax")
outputs = model(**inputs)
logits = outputs.logits
```
### Expected behavior
The model should run and produce error-free output
"Flax",
"bug"
] | ctr-pmuruganTT | 0 |
strawberry-graphql/strawberry | asyncio | 3,636 | `RecursionError` hit when defining nested generics | ## Describe the Bug
Hello! First of all, thank you for this library!
I have a bug probably related to #3466 - consider creating a boolean algebra of `AND`, `OR` and `NOT` and trying to encode it within GraphQL:
```python
import dataclasses
import typing as t
import strawberry
_T = t.TypeVar("_T")
@strawberry.input(one_of=True)
class BoolOp(t.Generic[_T]):
and_: t.Optional[t.List["BoolOp[_T]"]] = strawberry.UNSET
or_: t.Optional[t.List["BoolOp[_T]"]] = strawberry.UNSET
not_: t.Optional["BoolOp[_T]"] = strawberry.UNSET
val: t.Optional[_T] = strawberry.UNSET
@strawberry.type
class Obj:
a: t.Optional[int] = strawberry.UNSET
b: t.Optional[str] = strawberry.UNSET
@strawberry.type
class Query:
@strawberry.field
def objs(self, where: BoolOp[Obj]) -> list:
return []
schema = strawberry.Schema(query=Query)
```
I would like to create a query akin to:
```graphql
query Q {
  objs(where: {or_: [{val: {a: 1, b: "abc"}}, {and_: [{val: {a: 2, b: ""}}]}]}) {
...
}
}
```
however, `RecursionError` is raised - here is a snippet of the traceback:
```
$ ipython dev/strawberry_nested_generics.py
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
File ~/Desktop/alembic/root/projects/aerosani/dev/strawberry_nested_generics.py:24
19 a: t.Optional[int] = strawberry.UNSET
20 b: t.Optional[str] = strawberry.UNSET
23 @strawberry.type
---> 24 class Query:
25 @strawberry.field
26 def objs(self, where: BoolOp[Obj]) -> list:
27 return []
File ~/Desktop/alembic/root/projects/aerosani/dev/strawberry_nested_generics.py:26, in Query()
23 @strawberry.type
24 class Query:
25 @strawberry.field
---> 26 def objs(self, where: BoolOp[Obj]) -> list:
27 return []
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/field.py:596, in field(resolver, name, is_subscription, description, permission_classes, deprecation_reason, default, default_factory, metadata, directives, extensions, graphql_type, init)
594 if resolver:
595 assert init is not True, "Can't set init as True when passing a resolver."
--> 596 return field_(resolver)
597 return field_
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/field.py:199, in StrawberryField.__call__(self, resolver)
197 if isinstance(argument.type_annotation.annotation, str):
198 continue
--> 199 elif isinstance(argument.type, StrawberryUnion):
200 raise InvalidArgumentTypeError(
201 resolver,
202 argument,
203 )
204 elif has_object_definition(argument.type):
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/arguments.py:131, in StrawberryArgument.type(self)
129 @property
130 def type(self) -> Union[StrawberryType, type]:
--> 131 return self.type_annotation.resolve()
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:133, in StrawberryAnnotation.resolve(self)
131 """Return resolved (transformed) annotation."""
132 if self.__resolve_cache__ is None:
--> 133 self.__resolve_cache__ = self._resolve()
135 return self.__resolve_cache__
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:152, in StrawberryAnnotation._resolve(self)
149 if self._is_list(evaled_type):
150 return self.create_list(evaled_type)
--> 152 if self._is_graphql_generic(evaled_type):
153 if any(is_type_var(type_) for type_ in get_args(evaled_type)):
154 return evaled_type
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:281, in StrawberryAnnotation._is_graphql_generic(cls, annotation)
279 if hasattr(annotation, "__origin__"):
280 if definition := get_object_definition(annotation.__origin__):
--> 281 return definition.is_graphql_generic
283 return is_generic(annotation.__origin__)
285 return False
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/base.py:347, in StrawberryObjectDefinition.is_graphql_generic(self)
342 return False
344 # here we are checking if any exposed field is generic
345 # a Strawberry class can be "generic", but not expose any
346 # generic field to GraphQL
--> 347 return any(field.is_graphql_generic for field in self.fields)
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/base.py:347, in <genexpr>(.0)
342 return False
344 # here we are checking if any exposed field is generic
345 # a Strawberry class can be "generic", but not expose any
346 # generic field to GraphQL
--> 347 return any(field.is_graphql_generic for field in self.fields)
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/field.py:256, in StrawberryField.is_graphql_generic(self)
251 @property
252 def is_graphql_generic(self) -> bool:
253 return (
254 self.base_resolver.is_graphql_generic
255 if self.base_resolver
--> 256 else _is_generic(self.type)
257 )
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/field.py:305, in StrawberryField.type(self)
297 @property # type: ignore
298 def type(
299 self,
(...)
303 Literal[UNRESOLVED],
304 ]:
--> 305 return self.resolve_type()
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/field.py:352, in StrawberryField.resolve_type(self, type_definition)
349 with contextlib.suppress(NameError):
350 # Prioritise the field type over the resolver return type
351 if self.type_annotation is not None:
--> 352 resolved = self.type_annotation.resolve()
353 elif self.base_resolver is not None and self.base_resolver.type is not None:
354 # Handle unannotated functions (such as lambdas)
355 # Generics will raise MissingTypesForGenericError later
356 # on if we let it be returned. So use `type_annotation` instead
357 # which is the same behaviour as having no type information.
358 resolved = self.base_resolver.type
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:133, in StrawberryAnnotation.resolve(self)
131 """Return resolved (transformed) annotation."""
132 if self.__resolve_cache__ is None:
--> 133 self.__resolve_cache__ = self._resolve()
135 return self.__resolve_cache__
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:162, in StrawberryAnnotation._resolve(self)
160 return self.create_enum(evaled_type)
161 elif self._is_optional(evaled_type, args):
--> 162 return self.create_optional(evaled_type)
163 elif self._is_union(evaled_type, args):
164 return self.create_union(evaled_type, args)
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:220, in StrawberryAnnotation.create_optional(self, evaled_type)
210 # Note that passing a single type to `Union` is equivalent to not using `Union`
211 # at all. This allows us to not di any checks for how many types have been
212 # passed as we can safely use `Union` for both optional types
213 # (e.g. `Optional[str]`) and optional unions (e.g.
214 # `Optional[Union[TypeA, TypeB]]`)
215 child_type = Union[non_optional_types] # type: ignore
217 of_type = StrawberryAnnotation(
218 annotation=child_type,
219 namespace=self.namespace,
--> 220 ).resolve()
222 return StrawberryOptional(of_type)
[... skipping similar frames: StrawberryAnnotation.resolve at line 133 (1 times)]
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:150, in StrawberryAnnotation._resolve(self)
148 return evaled_type
149 if self._is_list(evaled_type):
--> 150 return self.create_list(evaled_type)
152 if self._is_graphql_generic(evaled_type):
153 if any(is_type_var(type_) for type_ in get_args(evaled_type)):
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:197, in StrawberryAnnotation.create_list(self, evaled_type)
192 def create_list(self, evaled_type: Any) -> StrawberryList:
193 item_type, *_ = get_args(evaled_type)
194 of_type = StrawberryAnnotation(
195 annotation=item_type,
196 namespace=self.namespace,
--> 197 ).resolve()
199 return StrawberryList(of_type)
[... skipping similar frames: StrawberryAnnotation.resolve at line 133 (1 times)]
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:152, in StrawberryAnnotation._resolve(self)
149 if self._is_list(evaled_type):
150 return self.create_list(evaled_type)
--> 152 if self._is_graphql_generic(evaled_type):
153 if any(is_type_var(type_) for type_ in get_args(evaled_type)):
154 return evaled_type
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:281, in StrawberryAnnotation._is_graphql_generic(cls, annotation)
279 if hasattr(annotation, "__origin__"):
280 if definition := get_object_definition(annotation.__origin__):
--> 281 return definition.is_graphql_generic
283 return is_generic(annotation.__origin__)
285 return False
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/base.py:347, in StrawberryObjectDefinition.is_graphql_generic(self)
342 return False
344 # here we are checking if any exposed field is generic
345 # a Strawberry class can be "generic", but not expose any
346 # generic field to GraphQL
--> 347 return any(field.is_graphql_generic for field in self.fields)
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/base.py:347, in <genexpr>(.0)
342 return False
344 # here we are checking if any exposed field is generic
345 # a Strawberry class can be "generic", but not expose any
346 # generic field to GraphQL
--> 347 return any(field.is_graphql_generic for field in self.fields)
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/field.py:256, in StrawberryField.is_graphql_generic(self)
251 @property
252 def is_graphql_generic(self) -> bool:
253 return (
254 self.base_resolver.is_graphql_generic
255 if self.base_resolver
--> 256 else _is_generic(self.type)
257 )
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/field.py:305, in StrawberryField.type(self)
297 @property # type: ignore
298 def type(
299 self,
(...)
303 Literal[UNRESOLVED],
304 ]:
--> 305 return self.resolve_type()
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/types/field.py:352, in StrawberryField.resolve_type(self, type_definition)
349 with contextlib.suppress(NameError):
350 # Prioritise the field type over the resolver return type
351 if self.type_annotation is not None:
--> 352 resolved = self.type_annotation.resolve()
353 elif self.base_resolver is not None and self.base_resolver.type is not None:
354 # Handle unannotated functions (such as lambdas)
355 # Generics will raise MissingTypesForGenericError later
356 # on if we let it be returned. So use `type_annotation` instead
357 # which is the same behaviour as having no type information.
358 resolved = self.base_resolver.type
[... skipping similar frames: StrawberryAnnotation.resolve at line 133 (1 times)]
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:162, in StrawberryAnnotation._resolve(self)
160 return self.create_enum(evaled_type)
161 elif self._is_optional(evaled_type, args):
--> 162 return self.create_optional(evaled_type)
163 elif self._is_union(evaled_type, args):
164 return self.create_union(evaled_type, args)
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:220, in StrawberryAnnotation.create_optional(self, evaled_type)
210 # Note that passing a single type to `Union` is equivalent to not using `Union`
211 # at all. This allows us to not di any checks for how many types have been
212 # passed as we can safely use `Union` for both optional types
213 # (e.g. `Optional[str]`) and optional unions (e.g.
214 # `Optional[Union[TypeA, TypeB]]`)
215 child_type = Union[non_optional_types] # type: ignore
217 of_type = StrawberryAnnotation(
218 annotation=child_type,
219 namespace=self.namespace,
--> 220 ).resolve()
222 return StrawberryOptional(of_type)
[... skipping similar frames: StrawberryAnnotation.resolve at line 133 (1 times)]
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:150, in StrawberryAnnotation._resolve(self)
148 return evaled_type
149 if self._is_list(evaled_type):
--> 150 return self.create_list(evaled_type)
152 if self._is_graphql_generic(evaled_type):
153 if any(is_type_var(type_) for type_ in get_args(evaled_type)):
File ~/.virtualenvs/alembic/lib/python3.10/site-packages/strawberry/annotation.py:197, in StrawberryAnnotation.create_list(self, evaled_type)
192 def create_list(self, evaled_type: Any) -> StrawberryList:
193 item_type, *_ = get_args(evaled_type)
194 of_type = StrawberryAnnotation(
195 annotation=item_type,
196 namespace=self.namespace,
--> 197 ).resolve()
199 return StrawberryList(of_type)
[... skipping similar frames: StrawberryAnnotation.resolve at line 133 (584 times), <genexpr> at line 347 (195 times), StrawberryAnnotation._is_graphql_generic at line 281 (195 times), StrawberryAnnotation._resolve at line 152 (195 times), StrawberryObjectDefinition.is_graphql_generic at line 347 (195 times), StrawberryField.is_graphql_generic at line 256 (195 times), StrawberryField.resolve_type at line 352 (195 times), StrawberryField.type at line 305 (195 times), StrawberryAnnotation._resolve at line 162 (194 times), StrawberryAnnotation._resolve at line 150 (194 times), StrawberryAnnotation.create_list at line 197 (194 times), StrawberryAnnotation.create_optional at line 220 (194 times)]
... SNIPPED ...
```
## System Information
- Operating system: macOS
- Strawberry version: 0.237.3 and 0.242
---
If I wanted to contribute this fix (seems like a major feature though 😄 ), where should I start?
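For orientation (hedged: none of the names below are strawberry internals), the usual fix for this class of bug is to memoize in-progress resolutions so that a self-referential type gets its placeholder back instead of recursing forever. A stdlib-only sketch of the pattern:

```python
# Stdlib-only sketch of a recursion guard for cyclic type resolution.
# Nothing here is strawberry API; it only illustrates the caching idea.
_in_progress = {}

def resolve(name, graph):
    if name in _in_progress:
        return _in_progress[name]        # return the placeholder, break the cycle
    node = {"name": name, "children": []}
    _in_progress[name] = node            # register before descending
    for child in graph.get(name, []):
        node["children"].append(resolve(child, graph))
    return node

graph = {"A": ["B"], "B": ["A"]}         # two mutually recursive "types"
tree = resolve("A", graph)
assert tree["children"][0]["children"][0] is tree   # cycle handled, no RecursionError
```

Presumably the analogous cache would live around `StrawberryAnnotation.resolve()`, keyed on the annotation being resolved.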
Thank you for your time!
Libor | open | 2024-09-20T08:04:05Z | 2025-03-20T15:56:52Z | https://github.com/strawberry-graphql/strawberry/issues/3636 | [
"bug"
] | libor-saq | 2 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 505 | Can't receive signals when register handlers by `connect_via` | `@models_committed.connect_via(app)` does not work:
```python
@models_committed.connect_via(app)
def on_models_committed(sender, changes):
print(changes)
```
but `@models_committed.connect` works.
I spent a few hours tracking down the reason. To reproduce:
```python
# app.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy, models_committed
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True
app.config['DEBUG'] = True
print('create_app', type(app), id(app))
db = SQLAlchemy(app)
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
@models_committed.connect_via(app)
def on_models_committed(sender, changes):
print(changes)
@app.route('/')
def index():
user = User()
db.session.add(user)
db.session.commit()
return 'User id=' + str(user.id)
if __name__ == '__main__':
db.create_all()
with app.test_client() as client:
print(client.get('/').data)
```
Then add `print('flask_sqlalchemy', type(session.app), id(session.app))` to `flask_sqlalchemy/__init__.py` for debugging:
```
@staticmethod
def after_commit(session):
try:
d = session._model_changes
except AttributeError:
return
if d:
print('flask_sqlalchemy', type(session.app), id(session.app))
models_committed.send(session.app, changes=list(d.values()))
d.clear()
```
Run the scripts:
```bash
$ python app.py
create_app <class 'flask.app.Flask'> 140350331860528
flask_sqlalchemy <class 'werkzeug.local.LocalProxy'> 140350227515936
b'User id=1'
```
`session.app` is not `app` (it is a `LocalProxy`), so `connect_via` does not work.
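To make the identity mismatch concrete, here is a stdlib-only sketch that mimics (not uses) blinker's `connect_via` sender filtering; `FakeProxy` stands in for werkzeug's `LocalProxy`, and matching senders by identity is an illustrative assumption:

```python
# Minimal mimic of blinker's sender-filtered dispatch (illustrative only).
class Signal:
    def __init__(self):
        self._receivers = []  # list of (sender_filter, func)

    def connect_via(self, sender):
        def decorator(func):
            self._receivers.append((sender, func))
            return func
        return decorator

    def send(self, sender, **kwargs):
        for flt, func in self._receivers:
            if flt is sender:  # identity check: a proxy is not the real app
                func(sender, **kwargs)

class FakeProxy:  # stands in for werkzeug's LocalProxy
    def __init__(self, obj):
        self._obj = obj

sig = Signal()
app = object()
proxy = FakeProxy(app)
received = []

@sig.connect_via(app)
def handler(sender, **kw):
    received.append(sender)

sig.send(proxy)            # what flask_sqlalchemy effectively does
assert received == []      # the connect_via handler never fires
sig.send(app)
assert received == [app]   # it only fires for the exact app object
```

Until this is fixed, a workaround is to register with plain `@models_committed.connect` and filter on the sender inside the handler.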
| closed | 2017-06-10T09:28:48Z | 2020-12-05T19:58:30Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/505 | [
"bug"
] | guyskk | 3 |
praw-dev/praw | api | 1,057 | Which REST endpoints are being called by the subreddit.stream.comments()? | I would like to replicate the subreddit.stream.comments() method via directly calling the Reddit REST API. Which endpoints are you using to synthesize the subreddit.stream.comments() stream? What about the subreddit.stream.submissions? Thanks! | closed | 2019-04-30T02:43:15Z | 2019-04-30T03:29:39Z | https://github.com/praw-dev/praw/issues/1057 | [] | vgoklani | 1 |
pywinauto/pywinauto | automation | 759 | pywinauto on windows 7 can't get controls from window | I'm running a script that start skype on windows 7 and try to reach search button.
The script isn't working. I wonder if pywinauto support windows 7 x64?
## Expected Behavior
All the controls for the visible buttons should be reachable via pywinauto
## Actual Behavior
None of the controls, except the maximize/minimize/close buttons, can be reached
## Steps to Reproduce the Problem
1. Run script below
## Short Example of Code to Demonstrate the Problem
```
from time import sleep
from pywinauto import Application

app0 = Application(backend="uia").start(r'"C:\Program Files (x86)\Microsoft\Skype for Desktop\Skype.exe"')
app = None
while True:
try:
app = Application(backend='uia').connect(title_re='Skype.*')
except:
sleep(1)
continue
break
skp = app.window(title_re='Skype.*')
skp.wait('ready', timeout=20)
try:
search = skp["Search for people, groups & messages"]
search.wait('ready', timeout=20)
search.click()
except Exception as e:
print("Failed to find 'Search for people, groups & messages' button. Maybe it was already pressed")
```
## Specifications
- Pywinauto version: 0.6.6
- Python version and bitness: 3.6.1
- Platform and OS: win 7 x64 home/pro (tested on both)
| open | 2019-06-26T12:02:57Z | 2020-10-24T07:06:46Z | https://github.com/pywinauto/pywinauto/issues/759 | [
"question"
] | dstepanenko | 5 |
Farama-Foundation/Gymnasium | api | 605 | [Question] Ground-truth dynamics model in mujoco | ### Question
Hi,
Does anyone know where we can access the ground-truth dynamics model for the MuJoCo environments? For the cart-pole, for example, the dynamics equations don't seem to be given in any file (the .xml file has some parameters, but not the standard cart-pole parameters, e.g. cart mass, pole mass, pole length, and so on).
Thanks! | closed | 2023-07-14T01:18:17Z | 2023-11-09T16:20:09Z | https://github.com/Farama-Foundation/Gymnasium/issues/605 | [
"question"
] | KehanLong | 18 |
run-llama/rags | streamlit | 7 | How to upload files? | I went to the RAG Config page, but it shows "File/URL paths (not editable)". So where can I upload PDFs? | open | 2023-11-22T08:55:13Z | 2023-11-26T23:24:27Z | https://github.com/run-llama/rags/issues/7 | [] | Lauorie | 8 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 74 | Refactor SQLAlchemyConnectionField to use SQLAlchemyObjectType.get_query() | Right now, `SQLAlchemyConnectionField` uses the `get_query()` implementation in `graphene_sqlalchemy.utils`. The same code is in `SQLAlchemyObjectType`.
https://github.com/graphql-python/graphene-sqlalchemy/blob/1d353f71f4ff256dcf69a7a13a27e4865282b044/graphene_sqlalchemy/fields.py#L18-L20
https://github.com/graphql-python/graphene-sqlalchemy/blob/1d353f71f4ff256dcf69a7a13a27e4865282b044/graphene_sqlalchemy/types.py#L146-L149
This means that if someone wants to update the query for an `SQLAlchemyObjectType`, e.g. to implement permissions restrictions, they have to subclass not only `SQLAlchemyObjectType` but also `SQLAlchemyConnectionField`.
I suggest refactoring `SQLAlchemyConnectionField` to re-use the `get_query()` implementation of the `SQLAlchemyObjectType` it wraps. I'm willing to look into creating a PR if there is interest. | open | 2017-08-30T10:03:41Z | 2018-01-24T20:17:57Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/74 | [] | lyschoening | 1 |
sinaptik-ai/pandas-ai | data-visualization | 1,152 | Add Firebase database as connector | ### 🚀 The feature
Add Firebase database as a connector
### Motivation, pitch
Add Firebase database as connector
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-05-13T06:57:59Z | 2024-08-22T17:39:33Z | https://github.com/sinaptik-ai/pandas-ai/issues/1152 | [] | shivatmax | 1 |
thomaxxl/safrs | rest-api | 78 | Unable to delete entries | I'm currently developing an API, which all seems to work OK using safrs. However, deleting entries somehow doesn't seem possible; the following stack trace is printed:
```
[2020-09-29 11:20:06,951] ERROR: 'dict' object is not callable
The view function did not return a valid response. The return type must be a string, tuple, Response instance, or WSGI callable, but it was a dict.
Traceback (most recent call last):
File "/venv/lib/python3.8/site-packages/flask/app.py", line 1974, in make_response
rv = self.response_class.force_type(rv, request.environ)
File "/venv/lib/python3.8/site-packages/werkzeug/wrappers.py", line 921, in force_type
response = BaseResponse(*_run_wsgi_app(response, environ))
File "/venv/lib/python3.8/site-packages/werkzeug/test.py", line 923, in run_wsgi_app
app_rv = app(environ, start_response)
TypeError: 'dict' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/venv/lib/python3.8/site-packages/safrs/safrs_api.py", line 538, in method_wrapper
result = fun(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/safrs/jsonapi.py", line 495, in delete
return make_response({}, HTTPStatus.NO_CONTENT)
File "/venv/lib/python3.8/site-packages/safrs/jsonapi.py", line 38, in make_response
response = flask_make_response(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/flask/helpers.py", line 213, in make_response
return current_app.make_response(args)
File "/venv/lib/python3.8/site-packages/flask/app.py", line 1982, in make_response
reraise(TypeError, new_error, sys.exc_info()[2])
File "/venv/lib/python3.8/site-packages/flask/_compat.py", line 34, in reraise
raise value.with_traceback(tb)
File "/venv/lib/python3.8/site-packages/flask/app.py", line 1974, in make_response
rv = self.response_class.force_type(rv, request.environ)
File "/venv/lib/python3.8/site-packages/werkzeug/wrappers.py", line 921, in force_type
response = BaseResponse(*_run_wsgi_app(response, environ))
File "/venv/lib/python3.8/site-packages/werkzeug/test.py", line 923, in run_wsgi_app
app_rv = app(environ, start_response)
TypeError: 'dict' object is not callable
The view function did not return a valid response. The return type must be a string, tuple, Response instance, or WSGI callable, but it was a dict.
[2020-09-29 11:20:06,952] ERROR: 'dict' object is not callable
The view function did not return a valid response. The return type must be a string, tuple, Response instance, or WSGI callable, but it was a dict.
[2020-09-29 11:20:06,952] ERROR in app: Exception on /user/1/ [DELETE]
Traceback (most recent call last):
File "/venv/lib/python3.8/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/venv/lib/python3.8/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/venv/lib/python3.8/site-packages/flask_restful/__init__.py", line 468, in wrapper
resp = resource(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/flask_restful_swagger_2/__init__.py", line 39, in decorator
return f(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/flask/views.py", line 88, in view
return self.dispatch_request(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/flask_restful/__init__.py", line 583, in dispatch_request
resp = meth(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/flask_restful_swagger_2/swagger.py", line 219, in inner
return f(self, *args, **kwargs)
File "/venv/lib/python3.8/site-packages/safrs/safrs_api.py", line 566, in method_wrapper
abort(status_code, errors=[errors])
File "/venv/lib/python3.8/site-packages/flask_restful/__init__.py", line 32, in abort
original_flask_abort(http_status_code)
File "/venv/lib/python3.8/site-packages/werkzeug/exceptions.py", line 707, in abort
return _aborter(status, *args, **kwargs)
File "/venv/lib/python3.8/site-packages/werkzeug/exceptions.py", line 687, in __call__
raise self.mapping[code](*args, **kwargs)
werkzeug.exceptions.InternalServerError: 500 Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
127.0.0.1 - - [29/Sep/2020 11:20:06] "DELETE /user/1/ HTTP/1.1" 500
```
I am using a MySQL database:
```
mysql> describe user;
+-------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+----------------+
| id | int | NO | PRI | NULL | auto_increment |
| name | varchar(45) | YES | UNI | NULL | |
+-------+-------------+------+-----+---------+----------------+
mysql> select * from user;
+----+------+
| id | ip |
+----+------+
| 1 | test |
+----+------+
```
and the following model:
```
class BaseModel(SAFRSBase, db.Model):
__abstract__ = True
class User(BaseModel):
"""
description: User description
"""
__tablename__ = "user"
id = db.Column(db.Integer, primary_key=True, unique=True, nullable=False)
name = db.Column(db.String(45), unique=True)
```
Hope you can help me with this, as safrs is a really nice way of producing a self-documenting API.
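Not a fix, but possibly the cause: the failing call is `make_response({}, HTTPStatus.NO_CONTENT)`, and the traceback's `flask/app.py` line numbers look like Flask 1.0.x, before Flask 1.1 added dict support to `make_response`. A stdlib-only sketch of the two behaviours (the function bodies are illustrative, not Flask source):

```python
# Why "'dict' object is not callable": older Flask falls through to
# "treat the return value as a WSGI callable" and calls it.
def make_response_old(rv):
    if isinstance(rv, str):
        return ("200 OK", rv)
    return rv({}, lambda *a: None)   # run_wsgi_app(): app(environ, start_response)

def make_response_new(rv):
    if isinstance(rv, dict):         # dict conversion added in Flask 1.1
        return ("200 OK", str(rv))
    return make_response_old(rv)

try:
    make_response_old({})
except TypeError as exc:
    err = str(exc)
assert "not callable" in err                        # matches the traceback
assert make_response_new({}) == ("200 OK", "{}")    # newer behaviour succeeds
```

If that's right, upgrading Flask to >= 1.1 should make the delete path work.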
desec-io/desec-stack | rest-api | 283 | Block registration of names under certain reserved TLDs | most notably, `.internal`, see Section 5.1 bullet 7 of https://tools.ietf.org/html/draft-wkumari-dnsop-internal-00 and wkumari/draft-wkumari-dnsop-internal#6 | closed | 2019-12-21T06:16:03Z | 2020-02-03T14:52:26Z | https://github.com/desec-io/desec-stack/issues/283 | [] | peterthomassen | 0 |
BayesWitnesses/m2cgen | scikit-learn | 177 | Move Dart language to a different bucket of E2E tests on Travis | Although Dart has been configured in `.travis.yaml`, I don't see it being executed, e.g. in this recent master build: https://travis-ci.org/BayesWitnesses/m2cgen/jobs/660185796.
Error in the output:
```
Unknown pytest.mark.dart - is this a typo? You can register custom marks to avoid this warning
```
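As an aside, the `Unknown pytest.mark.dart` warning can be silenced by registering the marker. A hedged sketch (the repo may keep this in `setup.cfg` rather than `pytest.ini`, and the description text is made up):

```
# pytest.ini
[pytest]
markers =
    dart: end-to-end tests for generated Dart code
```

Registering it won't by itself make the E2E job run the tests, but it removes the warning and, with `--strict-markers` enabled, turns marker typos into errors.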
CC: @StrikerRUS | closed | 2020-03-09T17:11:29Z | 2020-03-16T22:58:06Z | https://github.com/BayesWitnesses/m2cgen/issues/177 | [] | izeigerman | 7 |
laughingman7743/PyAthena | sqlalchemy | 345 | Installing pyathena[pandas]==2.9.6 | Possibly not the right place to ask this question, but I couldn't find an answer anywhere else.
In a fresh virtual environment, `pip install pyathena[pandas]==2.9.6` has to check a huge number of package versions (especially `boto*`). Is there a reason why the constraints are not stricter?
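As a hedged workaround (not an answer to the design question): pre-pinning the heavy transitive dependencies in a constraints file spares pip the backtracking. The pins below are just the first candidates from the log, shown for illustration only; pin whatever versions you actually need:

```
# constraints.txt
boto3==1.24.29
botocore==1.27.29
```

Then install with `pip install -c constraints.txt "pyathena[pandas]==2.9.6"`.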
$ pip install pyathena[pandas]
Collecting pyathena[pandas]
Using cached PyAthena-2.9.6-py3-none-any.whl (56 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.29-py3-none-any.whl (9.0 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.29-py3-none-any.whl (132 kB)
Collecting tenacity>=4.1.0
Using cached tenacity-8.0.1-py3-none-any.whl (24 kB)
Collecting pandas>=1.3.0
Using cached pandas-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl (11.5 MB)
Collecting pyarrow>=7.0.0
Using cached pyarrow-8.0.0-cp39-cp39-macosx_10_13_x86_64.whl (22.4 MB)
Collecting s3fs>=2021.09.0
Using cached s3fs-2022.5.0-py3-none-any.whl (27 kB)
Collecting jmespath<2.0.0,>=0.7.1
Using cached jmespath-1.0.1-py3-none-any.whl (20 kB)
Collecting s3transfer<0.7.0,>=0.6.0
Using cached s3transfer-0.6.0-py3-none-any.whl (79 kB)
Collecting python-dateutil<3.0.0,>=2.1
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting urllib3<1.27,>=1.25.4
Using cached urllib3-1.26.10-py2.py3-none-any.whl (139 kB)
Collecting pytz>=2020.1
Using cached pytz-2022.1-py2.py3-none-any.whl (503 kB)
Collecting numpy>=1.18.5
Using cached numpy-1.23.1-cp39-cp39-macosx_10_9_x86_64.whl (18.1 MB)
Collecting aiobotocore~=2.3.0
Using cached aiobotocore-2.3.4-py3-none-any.whl (64 kB)
Collecting aiohttp<=4
Using cached aiohttp-3.8.1-cp39-cp39-macosx_10_9_x86_64.whl (574 kB)
Collecting fsspec==2022.5.0
Using cached fsspec-2022.5.0-py3-none-any.whl (140 kB)
Collecting aioitertools>=0.5.1
Using cached aioitertools-0.10.0-py3-none-any.whl (23 kB)
Collecting aiobotocore~=2.3.0
Using cached aiobotocore-2.3.3.tar.gz (65 kB)
Preparing metadata (setup.py) ... done
Using cached aiobotocore-2.3.2.tar.gz (104 kB)
Preparing metadata (setup.py) ... done
Using cached aiobotocore-2.3.1.tar.gz (65 kB)
Preparing metadata (setup.py) ... done
Using cached aiobotocore-2.3.0.tar.gz (65 kB)
Preparing metadata (setup.py) ... done
INFO: pip is looking at multiple versions of pyathena to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of tenacity to determine which version is compatible with other requirements. This could take a while.
Collecting tenacity>=4.1.0
Using cached tenacity-8.0.0-py3-none-any.whl (22 kB)
INFO: pip is looking at multiple versions of fsspec to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of s3fs to determine which version is compatible with other requirements. This could take a while.
Collecting s3fs>=2021.09.0
Using cached s3fs-2022.3.0-py3-none-any.whl (26 kB)
Collecting fsspec==2022.3.0
Using cached fsspec-2022.3.0-py3-none-any.whl (136 kB)
Collecting aiobotocore~=2.2.0
Using cached aiobotocore-2.2.0-py3-none-any.whl
Collecting s3fs>=2021.09.0
Using cached s3fs-2022.2.0-py3-none-any.whl (26 kB)
Collecting aiobotocore~=2.1.0
Using cached aiobotocore-2.1.2.tar.gz (58 kB)
Preparing metadata (setup.py) ... done
Collecting fsspec==2022.02.0
Using cached fsspec-2022.2.0-py3-none-any.whl (134 kB)
Collecting aiobotocore~=2.1.0
Using cached aiobotocore-2.1.1.tar.gz (57 kB)
Preparing metadata (setup.py) ... done
Using cached aiobotocore-2.1.0.tar.gz (54 kB)
Preparing metadata (setup.py) ... done
Collecting s3fs>=2021.09.0
Using cached s3fs-2022.1.0-py3-none-any.whl (25 kB)
Collecting fsspec==2022.01.0
Using cached fsspec-2022.1.0-py3-none-any.whl (133 kB)
Collecting s3fs>=2021.09.0
Using cached s3fs-2021.11.1-py3-none-any.whl (25 kB)
Collecting fsspec==2021.11.1
Using cached fsspec-2021.11.1-py3-none-any.whl (132 kB)
Collecting aiobotocore~=2.0.1
Using cached aiobotocore-2.0.1.tar.gz (54 kB)
Preparing metadata (setup.py) ... done
Collecting s3fs>=2021.09.0
Using cached s3fs-2021.11.0-py3-none-any.whl (25 kB)
Collecting fsspec==2021.11.0
Using cached fsspec-2021.11.0-py3-none-any.whl (132 kB)
Collecting aiobotocore~=1.4.1
Using cached aiobotocore-1.4.2.tar.gz (52 kB)
Preparing metadata (setup.py) ... done
Using cached aiobotocore-1.4.1.tar.gz (52 kB)
Preparing metadata (setup.py) ... done
Collecting s3fs>=2021.09.0
Using cached s3fs-2021.10.1-py3-none-any.whl (26 kB)
Collecting fsspec==2021.10.1
Using cached fsspec-2021.10.1-py3-none-any.whl (125 kB)
Collecting s3fs>=2021.09.0
Using cached s3fs-2021.10.0-py3-none-any.whl (26 kB)
Collecting fsspec==2021.10.0
Using cached fsspec-2021.10.0-py3-none-any.whl (125 kB)
INFO: pip is looking at multiple versions of fsspec to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of s3fs to determine which version is compatible with other requirements. This could take a while.
Collecting s3fs>=2021.09.0
Using cached s3fs-2021.9.0-py3-none-any.whl (26 kB)
Collecting fsspec==2021.09.0
Using cached fsspec-2021.9.0-py3-none-any.whl (123 kB)
INFO: pip is looking at multiple versions of pyarrow to determine which version is compatible with other requirements. This could take a while.
Collecting pyarrow>=7.0.0
Using cached pyarrow-7.0.0-cp39-cp39-macosx_10_13_x86_64.whl (20.2 MB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: pip is looking at multiple versions of pandas to determine which version is compatible with other requirements. This could take a while.
Collecting pandas>=1.3.0
Using cached pandas-1.4.2-cp39-cp39-macosx_10_9_x86_64.whl (11.1 MB)
Using cached pandas-1.4.1-cp39-cp39-macosx_10_9_x86_64.whl (11.5 MB)
Using cached pandas-1.4.0-cp39-cp39-macosx_10_9_x86_64.whl (11.5 MB)
INFO: pip is looking at multiple versions of pyarrow to determine which version is compatible with other requirements. This could take a while.
Using cached pandas-1.3.5-cp39-cp39-macosx_10_9_x86_64.whl (11.3 MB)
Using cached pandas-1.3.4-cp39-cp39-macosx_10_9_x86_64.whl (11.6 MB)
Using cached pandas-1.3.3-cp39-cp39-macosx_10_9_x86_64.whl (11.6 MB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Using cached pandas-1.3.2-cp39-cp39-macosx_10_9_x86_64.whl (11.6 MB)
INFO: pip is looking at multiple versions of pandas to determine which version is compatible with other requirements. This could take a while.
Using cached pandas-1.3.1-cp39-cp39-macosx_10_9_x86_64.whl (11.3 MB)
Using cached pandas-1.3.0-cp39-cp39-macosx_10_9_x86_64.whl (11.6 MB)
INFO: pip is looking at multiple versions of botocore to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of <Python from Requires-Python> to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of boto3 to determine which version is compatible with other requirements. This could take a while.
Collecting boto3>=1.13.20
Using cached boto3-1.24.28-py3-none-any.whl (132 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Collecting botocore>=1.16.20
Using cached botocore-1.27.28-py3-none-any.whl (9.0 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.27-py3-none-any.whl (132 kB)
INFO: pip is looking at multiple versions of pyathena to determine which version is compatible with other requirements. This could take a while.
Collecting botocore>=1.16.20
Using cached botocore-1.27.27-py3-none-any.whl (9.0 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.26-py3-none-any.whl (132 kB)
INFO: pip is looking at multiple versions of botocore to determine which version is compatible with other requirements. This could take a while.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Collecting botocore>=1.16.20
Using cached botocore-1.27.26-py3-none-any.whl (9.0 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.25-py3-none-any.whl (132 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Collecting botocore>=1.16.20
Using cached botocore-1.27.25-py3-none-any.whl (9.0 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.24-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.24-py3-none-any.whl (9.0 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.23-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.23-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.22-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.22-py3-none-any.whl (8.9 MB)
INFO: pip is looking at multiple versions of <Python from Requires-Python> to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of boto3 to determine which version is compatible with other requirements. This could take a while.
Collecting boto3>=1.13.20
Using cached boto3-1.24.21-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.21-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.20-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.20-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.19-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.19-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.18-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.18-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.17-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.17-py3-none-any.whl (8.9 MB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Collecting boto3>=1.13.20
Using cached boto3-1.24.16-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.16-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.15-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.15-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.14-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.14-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.13-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.13-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.12-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.12-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.11-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.11-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.10-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.10-py3-none-any.whl (8.9 MB)
Collecting boto3>=1.13.20
Using cached boto3-1.24.9-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.27.9-py3-none-any.whl (8.9 MB)
[... pip backtracks in the same pattern through every intervening boto3/botocore release pair (the rest of 1.24.x/1.27.x, then 1.23.x/1.26.x, also fetching s3transfer 0.5.2, then 1.22.x/1.25.x and 1.21.x/1.24.x), ending at ...]
Collecting boto3>=1.13.20
Using cached boto3-1.21.21-py3-none-any.whl (132 kB)
Collecting botocore>=1.16.20
Using cached botocore-1.24.21-py3-none-any.whl (8.6 MB)
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(54, 'Connection reset by peer'))': /simple/wrapt/
Collecting wrapt>=1.10.10
Using cached wrapt-1.14.1-cp39-cp39-macosx_10_9_x86_64.whl (35 kB)
Collecting yarl<2.0,>=1.0
Using cached yarl-1.7.2-cp39-cp39-macosx_10_9_x86_64.whl (121 kB)
Collecting aiosignal>=1.1.2
Using cached aiosignal-1.2.0-py3-none-any.whl (8.2 kB)
Collecting frozenlist>=1.1.1
Using cached frozenlist-1.3.0-cp39-cp39-macosx_10_9_x86_64.whl (36 kB)
Collecting attrs>=17.3.0
Using cached attrs-21.4.0-py2.py3-none-any.whl (60 kB)
Collecting async-timeout<5.0,>=4.0.0a3
Using cached async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting charset-normalizer<3.0,>=2.0
Using cached charset_normalizer-2.1.0-py3-none-any.whl (39 kB)
Collecting multidict<7.0,>=4.5
Using cached multidict-6.0.2-cp39-cp39-macosx_10_9_x86_64.whl (28 kB)
Collecting six>=1.5
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting typing_extensions>=4.0
Using cached typing_extensions-4.3.0-py3-none-any.whl (25 kB)
Collecting idna>=2.0
Using cached idna-3.3-py3-none-any.whl (61 kB)
Installing collected packages: pytz, wrapt, urllib3, typing_extensions, tenacity, six, numpy, multidict, jmespath, idna, fsspec, frozenlist, charset-normalizer, attrs, async-timeout, yarl, python-dateutil, pyarrow, aiosignal, aioitertools, pandas, botocore, aiohttp, s3transfer, aiobotocore, s3fs, boto3, pyathena
Successfully installed aiobotocore-2.3.4 aiohttp-3.8.1 aioitertools-0.10.0 aiosignal-1.2.0 async-timeout-4.0.2 attrs-21.4.0 boto3-1.24.29 botocore-1.27.29 charset-normalizer-2.1.0 frozenlist-1.3.0 fsspec-2022.5.0 idna-3.3 jmespath-1.0.1 multidict-6.0.2 numpy-1.23.1 pandas-1.4.3 pyarrow-8.0.0 pyathena-2.9.6 python-dateutil-2.8.2 pytz-2022.1 s3fs-2022.5.0 s3transfer-0.6.0 six-1.16.0 tenacity-8.0.1 typing_extensions-4.3.0 urllib3-1.26.10 wrapt-1.14.1 yarl-1.7.2
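A generic way to avoid this kind of resolver backtracking (an aside, not from the report) is to pin the package pair the resolver is thrashing over in a constraints file; the versions below are simply the pair pip finally installed above:

```python
# Pinning boto3 and botocore gives pip's resolver a single candidate pair,
# so it has nothing to backtrack over. Versions are just the pair that
# ultimately got installed in the log above.
pins = {"boto3": "1.24.29", "botocore": "1.27.29"}
with open("constraints.txt", "w") as f:
    for pkg, version in pins.items():
        f.write(f"{pkg}=={version}\n")
print(open("constraints.txt").read())
# then install with: pip install -c constraints.txt pyathena
```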
| closed | 2022-07-13T23:57:26Z | 2022-07-16T07:33:24Z | https://github.com/laughingman7743/PyAthena/issues/345 | [] | dblado | 1 |
scikit-tda/kepler-mapper | data-visualization | 50 | Is there a way to use strings for the custom tooltip labels? | It would be really helpful for interactive analysis if the tooltip labels could be strings instead of just integer labels. Otherwise one has to reference their original label mapping to understand which integers correspond to what labels/categories. If this functionality is already implemented please demonstrate. Thank you. | closed | 2018-01-11T18:10:04Z | 2018-04-13T22:13:36Z | https://github.com/scikit-tda/kepler-mapper/issues/50 | [] | BlackArbsCEO | 1 |
google-research/bert | tensorflow | 1,130 | WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 25 vs previous value: 25. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize. | I am doing NER using BERT for past months on google colab GPU and everything was working fine but now when I am doing same on CPUs I am getting this warning.
When I am using the Colab GPU for training there is no issue and no warnings of this kind. But when I train with the same data and the same parameters, I get this type of warning when running on the Colab CPU and on my local system's CPU, and training never finishes.
`WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 25 vs previous value: 25. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.`
Can someone please tell me why the problem occurs only on CPU and not on GPU, with the configuration kept the same?
And how can I resolve it?
By the way, BERT uses AdamOptimizer and I have never modified BERT's optimizer.py. | open | 2020-08-04T07:05:00Z | 2020-08-04T07:05:00Z | https://github.com/google-research/bert/issues/1130 | [] | agarwalishan | 0
alteryx/featuretools | scikit-learn | 2,080 | Featuretools can generate features with comparison primitives that fail on calculation | There are four binary comparison primitives that can generate features that fail on feature calculation in some cases. The failures primarily appear to be in situations where a `Datetime` input is being compared to a scalar value.
The four primitives that have been found to cause this error are:
- `greater_than_equal_to_scalar`
- `greater_than_scalar`
- `less_than_scalar`
- `less_than_equal_to_scalar`
#### Code Sample, a copy-pastable example to reproduce your bug.
```python
import pandas as pd
import featuretools as ft
df = pd.DataFrame({
"id": [0, 1, 2],
"dates": pd.to_datetime(["2020-01-01", "2020-02-01" ,"2020-03-01"]),
"ints": [100, 200, 300]
})
es = ft.EntitySet()
es.add_dataframe(dataframe_name="df", dataframe=df, index="id")
fm, features = ft.dfs(entityset=es, target_dataframe_name="df", trans_primitives=["greater_than_scalar"], max_depth=1)
```
```
TypeError: Invalid comparison between dtype=datetime64[ns] and int
```
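For context, the failure can be reproduced with pandas alone: comparing a datetime series to a plain int raises, while a `pd.Timestamp` scalar compares fine (illustrative snippet, not part of the original report):

```python
import pandas as pd

dates = pd.Series(pd.to_datetime(["2020-01-01", "2020-02-01"]))

try:
    dates > 100  # what the generated feature effectively does
except TypeError as err:
    print(err)  # Invalid comparison between dtype=datetime64[ns] and int

# a scalar of a comparable type works
print(dates > pd.Timestamp("2020-01-15"))
```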
These primitives should be updated to allow the calculation to proceed without error. Alternatively, the `Datetime` input could be removed from the list of supported inputs, or the input types list could perhaps be updated dynamically on instantiation based on the type of the scalar value provided. | closed | 2022-05-17T18:36:04Z | 2023-01-05T16:15:15Z | https://github.com/alteryx/featuretools/issues/2080 | [
"bug"
] | thehomebrewnerd | 1 |
huggingface/datasets | tensorflow | 6,823 | Loading problems of Datasets with a single shard | ### Describe the bug
When a dataset is saved to disk with a single shard, it is not loaded back correctly, unlike when it is saved in multiple shards. I installed the latest version of datasets via pip.
### Steps to reproduce the bug
The code below reproduces the behavior. All works well when the range of the loop is 10000 but it fails when it is 1000.
```
from PIL import Image
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset

def load_image():
    # Generate random noise image
    noise = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    return Image.fromarray(noise)

def create_dataset():
    input_images = []
    output_images = []
    text_prompts = []

    for _ in range(10000):  # this is the problematic parameter
        input_images.append(load_image())
        output_images.append(load_image())
        text_prompts.append('test prompt')

    data = {'input_image': input_images, 'output_image': output_images, 'text_prompt': text_prompts}
    dataset = Dataset.from_dict(data)
    return DatasetDict({'train': dataset})

dataset = create_dataset()
print('dataset before saving')
print(dataset)
print(dataset['train'].column_names)

dataset.save_to_disk('test_ds')

print('dataset after loading')
dataset_loaded = load_dataset('test_ds')
print(dataset_loaded)
print(dataset_loaded['train'].column_names)
```
The output for 1000 iterations is:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 1000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (1/1 shards): 100%|█| 1000/1000 [00:00<00:00, 5156.00 example
dataset after loading
Generating train split: 1 examples [00:00, 230.52 examples/s]
DatasetDict({
train: Dataset({
features: ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'],
num_rows: 1
})
})
['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split']
```
For 10000 iteration (8 shards) it is correct:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (8/8 shards): 100%|█| 10000/10000 [00:01<00:00, 6237.68 examp
dataset after loading
Generating train split: 10000 examples [00:00, 10773.16 examples/s]
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
```
### Expected behavior
The procedure should work for a dataset with one shard the same as for one with multiple shards.
### Environment info
- `datasets` version: 2.18.0
- Platform: macOS-14.1-arm64-arm-64bit
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
Edit: I looked in the source code of load.py in datasets. I should have used `load_from_disk`, and it indeed works that way. But ideally `load_dataset` would have raised an error, the same way this check does:
```
if Path(path, config.DATASET_STATE_JSON_FILENAME).exists():
    raise ValueError(
        "You are trying to load a dataset that was saved using `save_to_disk`. "
        "Please use `load_from_disk` instead."
    )
```
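The guard quoted above keys off a marker file that `save_to_disk` writes next to the Arrow shards; a simplified stand-alone sketch of the same check (assuming the marker is named `state.json`, the value of `datasets.config.DATASET_STATE_JSON_FILENAME` in recent versions):

```python
from pathlib import Path

DATASET_STATE_JSON_FILENAME = "state.json"  # assumed value of the config constant

def was_saved_with_save_to_disk(path: str) -> bool:
    # save_to_disk drops a state.json into the directory; its presence means
    # the directory should be opened with load_from_disk, not load_dataset
    return Path(path, DATASET_STATE_JSON_FILENAME).exists()
```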
Nevertheless, I find it interesting that it works just fine and without a warning if there are multiple shards. | open | 2024-04-18T13:59:00Z | 2024-11-25T05:40:09Z | https://github.com/huggingface/datasets/issues/6823 | [] | andjoer | 2
autogluon/autogluon | scikit-learn | 4,825 | [timeseries] Add clone_for_deployment to TimeSeriesPredictor | ## Description
- Add the equivalent of [TabularPredictor.clone_for_deployment](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.clone_for_deployment.html). Currently, the trained predictor folder can be quite large and contains a lot of redundant information (e.g., training data copy), which makes it hard to use this artifact for deployment.
## References
- https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.clone_for_deployment.html
| open | 2025-01-22T15:26:07Z | 2025-01-22T15:26:07Z | https://github.com/autogluon/autogluon/issues/4825 | [
"enhancement",
"module: timeseries"
] | shchur | 0 |
sinaptik-ai/pandas-ai | data-science | 815 | 'Unfortunately, I was not able to answer your question, because of the following error:\n\nAll objects passed were None\n' | ### System Info
OS version: Debian GNU/Linux 11 (bullseye)
python version: 3.10
pandasai version : 1.5.6
### 🐛 Describe the bug
```python
# Import your dependencies
from hdbcli import dbapi
import pandas as pd
from pandasai import SmartDataframe
from langchain.llms import bedrock

# Initialize your connection
conn = dbapi.connect(
    address='<tenant>.hana.trial-us10.hanacloud.ondemand.com',
    port='443',
    user='USER',
    password='Password',
    encrypt=True,
    sslValidateCertificate=True
)

schema = "USER1"
tablename = "ALL_RESERVATIONS"
data = pd.read_sql(f'select * from {schema}.{tablename}', conn)
print(data)
```
******* Output : *******
RESNO ARRIVAL Nights HOTEL TITLE FIRST NAME LAST NAME
0 7 2020-04-12 3 Airport Mrs Susan Baker
1 5 2019-03-14 10 Airport Mrs Sally Smith
2 6 2020-04-12 18 Atlantic Company None TOOLware
3 3 2020-11-14 4 Eighth Avenue Company None Datasoft
4 9 2020-12-23 16 Indian Horse Mr Peter Brown
5 4 2019-02-01 2 Long Beach Mr George Howe
6 8 2020-09-01 2 Ocean Star Mr Antony Jenkins
7 1 2020-12-24 3 Regency Mrs Jenny Porter
8 2 2020-12-24 10 Regency Mr Peter Brown
9 10 2020-11-14 3 River Boat Company None TOOLware
*******
```python
# Instantiate a LLM
model_parameter = {"temperature": 0, "max_tokens_to_sample": 600}
langchain_llm = bedrock.Bedrock(model_id="anthropic.claude-v2", model_kwargs=model_parameter, region_name="us-east-1")

df = SmartDataframe(data, config={"llm": langchain_llm})
df.chat("Which Hotels have the most number of nights reserved?")
```
******* Output : 'Unfortunately, I was not able to answer your question, because of the following error:\n\nAll objects passed were None\n'
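The error text itself comes straight from pandas: `pd.concat` raises it when every object in the list is `None`, which suggests the `dfs` list held no real frames inside the execution sandbox (illustrative repro, independent of PandasAI):

```python
import pandas as pd

try:
    pd.concat([None, None])  # what the generated `pd.concat(dfs)` effectively hit
except ValueError as err:
    print(err)  # All objects passed were None
```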
******* pandasai.log *******
2023-12-13 10:11:50 [INFO] Question: Which Hotels have the most number of nights reserved?
2023-12-13 10:11:50 [INFO] Running PandasAI with langchain_amazon_bedrock LLM...
2023-12-13 10:11:50 [INFO] Prompt ID: 513f35f8-b5f4-4d8d-985e-8401b4782924
2023-12-13 10:11:50 [INFO] Executing Step 0: CacheLookup
2023-12-13 10:11:50 [INFO] Executing Step 1: PromptGeneration
2023-12-13 10:11:50 [INFO] Using prompt: <dataframe>
dfs[0]:10x7
RESNO,ARRIVAL,Nights,HOTEL,TITLE,FIRST NAME,LAST NAME
5,2019-03-14,10,Airport,Mrs,Sally,Smith
6,2020-04-12,18,Atlantic,Company,,TOOLware
7,2020-04-12,3,Airport,Mrs,Susan,Baker
</dataframe>
Update this initial code:
```python
# TODO: import the required dependencies
import pandas as pd
# Write code here
# Declare result var: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" }
```
Q: Which Hotels have the most number of nights reserved?
Variable `dfs: list[pd.DataFrame]` is already declared.
At the end, declare "result" var dict: type (possible values "string", "number", "dataframe", "plot"). Examples: { "type": "string", "value": f"The highest salary is {highest_salary}." } or { "type": "number", "value": 125 } or { "type": "dataframe", "value": pd.DataFrame({...}) } or { "type": "plot", "value": "temp_chart.png" }
Generate python code and return full updated code:
2023-12-13 10:11:50 [INFO] Executing Step 2: CodeGenerator
2023-12-13 10:12:00 [INFO] Code generated:
```
import pandas as pd
# Write code here
df = pd.concat(dfs)
hotel_nights = df.groupby('HOTEL')['Nights'].sum().reset_index()
max_nights = hotel_nights['Nights'].max()
top_hotels = hotel_nights[hotel_nights['Nights'] == max_nights]['HOTEL'].tolist()
# Declare result
result = {
"type": "string",
"value": f"The hotels with the most nights reserved are: {', '.join(top_hotels)}"
}
```
2023-12-13 10:12:00 [INFO] Executing Step 3: CachePopulation
2023-12-13 10:12:00 [INFO] Executing Step 4: CodeExecution
2023-12-13 10:12:00 [INFO] Saving charts to /root/exports/charts/temp_chart.png
2023-12-13 10:12:00 [INFO]
Code running:
```
df = pd.concat(dfs)
hotel_nights = df.groupby('HOTEL')['Nights'].sum().reset_index()
max_nights = hotel_nights['Nights'].max()
top_hotels = hotel_nights[hotel_nights['Nights'] == max_nights]['HOTEL'].tolist()
result = {'type': 'string', 'value': f"The hotels with the most nights reserved are: {', '.join(top_hotels)}"}
```
2023-12-13 10:12:00 [WARNING] Failed to execute code with a correction framework [retry number: 1]
2023-12-13 10:12:00 [ERROR] Failed with error: Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/pandasai/pipelines/smart_datalake_chat/code_execution.py", line 46, in execute
result = pipeline_context.query_exec_tracker.execute_func(
File "/opt/conda/lib/python3.10/site-packages/pandasai/helpers/query_exec_tracker.py", line 128, in execute_func
result = function(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/pandasai/helpers/code_manager.py", line 203, in execute_code
exec(code_to_run, environment)
File "<string>", line 1, in <module>
File "/opt/conda/lib/python3.10/site-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/pandas/core/reshape/concat.py", line 368, in concat
op = _Concatenator(
File "/opt/conda/lib/python3.10/site-packages/pandas/core/reshape/concat.py", line 448, in __init__
raise ValueError("All objects passed were None")
ValueError: All objects passed were None
. Retrying | closed | 2023-12-13T13:09:18Z | 2023-12-15T21:44:07Z | https://github.com/sinaptik-ai/pandas-ai/issues/815 | [] | ferrymul7 | 2 |
littlecodersh/ItChat | api | 800 | 无法获取全部联系人 | 将201个微信群添加到了通讯录,需求是获取这201个微信群的信息。但是调用itchat.get_chatrooms(update=True, contactOnly=False)方法获取联系人接口webwxgetcontact只返回了200个联系人(包括人和微信群),其中群只有154个。但是用另一个微信号,添加了2个微信群到通讯录,可以返回368个联系人(包括2个微信群)。求解
Endpoint (modifying Seq has no effect):
https://wx.qq.com/cgi-bin/mmwebwx-bin/webwxgetcontact?r=1552358035000&seq=0&skey=@crypt_4a56c2e8_97be898d2e70a268e3a47f33eaf08765
Response:
{'BaseResponse': {'Ret': 0, 'ErrMsg': ''}, 'MemberCount': 200, 'MemberList': [{'Uin': 0, 'UserName': 'weixin', 'NickName': '微信团队', ........}], 'Seq': 0} | closed | 2019-03-12T02:45:00Z | 2019-03-12T03:39:53Z | https://github.com/littlecodersh/ItChat/issues/800 | [] | zombie9080 | 1 |
deepset-ai/haystack | nlp | 8,583 | Migration of experimental `ChatMessage` to Haystack | ## [Summary and motivation](https://github.com/deepset-ai/haystack/issues/8583#issuecomment-2500469582)
## Plan
```[tasklist]
### Tasks
- [x] **Haystack 2.8.0**
- [ ] https://github.com/deepset-ai/haystack/issues/8587
- [x] Replace direct instantiation of `ChatMessage` with specific class methods - https://github.com/deepset-ai/haystack/pull/8581
- [x] core-integrations: Replace direct instantiation of `ChatMessage` with specific class methods -https://github.com/deepset-ai/haystack-core-integrations/pull/1222
- [x] **Between Haystack 2.8.0 and 2.9.0**
- [x] core-integrations: Update all components to use `text` instead of `content` - https://github.com/deepset-ai/haystack-core-integrations/issues/1236
- [x] materials: adapt all code examples to use `text` (instead of `content`) in docs, tutorials, cookbook, integration pages, blogposts? - https://github.com/deepset-ai/haystack/issues/8621
- [x] **Haystack 2.9.0**
- [x] Replace the old `ChatMessage` with the new version: clear release notes + explicit error message if the instance is created using old params
- [x] `function` role/`from_function` class method: deprecate them; `from_function` should produce a `tool` message (with clear explanation for users)
- [ ] https://github.com/deepset-ai/haystack/issues/8654
- [ ] https://github.com/deepset-ai/haystack/issues/8623
- [x] **After Haystack 2.9.0 release**
- [x] Adapt components in experimental that use `ChatMessage`
- [x] update notebooks in cookbook; adapt tutorials
- [x] Remove migrated dataclass from experimental
- [x] **Haystack 2.10.0**
- [ ] https://github.com/deepset-ai/haystack/issues/8653
```
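The `from_*` constructor and `content` → `text` changes in the plan can be sketched with a toy stand-in (this is not the real Haystack class, just the shape of the migration):

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    _role: str
    _text: str

    @classmethod
    def from_user(cls, text: str) -> "ChatMessage":
        # preferred over direct instantiation after the migration
        return cls(_role="user", _text=text)

    @property
    def text(self) -> str:
        # read this instead of the old `content` attribute
        return self._text

msg = ChatMessage.from_user("Hello")
print(msg.text)  # Hello
```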
| closed | 2024-11-26T11:44:17Z | 2025-01-22T09:33:33Z | https://github.com/deepset-ai/haystack/issues/8583 | [
"P1"
] | anakin87 | 4 |
InstaPy/InstaPy | automation | 6,321 | like_by_feed: Total of links feched for analysis: 0 | I am only receiving "Total links fetched for analysis: 0" when using like_by_feed.
Is anybody else seeing this?
kizniche/Mycodo | automation | 746 | ow-shell should not be a dependency for 1wire devices | - Mycodo Version: 8.25
- Raspberry Pi Version: Pi Zero W
- Raspbian OS Version: Buster Lite
Having run my setup for a couple of years (on version 6.45) without issue, I am now building a new system, so naturally I installed the latest version, 8.25....
BUT previously, accessing the 1-wire bus did not depend on owfs or ow-shell, and I cannot see why it should be needed now, nor can I identify in the changelog when this dependency was added... clearly I have been away too long and I missed something?
Especially on a Pi Zero I don't want to be installing stuff that is not necessary; ow-shell just adds unnecessary bloat.
So to avoid installing it, I manually installed w1thermsensor, then
I commented out the:
` ('apt', 'ow-shell', 'ow-shell')`
line in ds18b20.py and restarted the daemon, I can then select my ds18b20s without any need for ow-shell. | closed | 2020-02-14T01:17:22Z | 2020-02-26T21:10:55Z | https://github.com/kizniche/Mycodo/issues/746 | [
"Sensor"
] | drgrumpy | 5 |
schemathesis/schemathesis | graphql | 2,088 | [BUG] Ascii can not be valid when generate test case with schemathesis | ### Checklist
- [ ] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [ ] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [ ] I am using the latest version of Schemathesis
### Describe the bug
When I use Schemathesis to generate test cases with the setting below, which should avoid generating test cases containing unrecognizable symbols from UTF-8 string generation:
generation_config=GenerationConfig(allow_x00=False, codec='ascii'),
But after I run it with my own schema, the screenshot of the generated test cases still contains unreadable characters, such as the text highlighted in the circle below.

### To Reproduce
🚨 **Mandatory** 🚨: Steps to reproduce the behavior:
1. Run the 'run' command in a PyCharm environment
2. See result above screenshot.
Clearly describe your expected outcome.
I would like the generated test case to not contain characters that are difficult to understand.
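Worth noting (an aside, not from the report): the ASCII range itself includes non-printable control characters, so `codec='ascii'` alone does not guarantee human-readable strings. A quick check:

```python
def is_ascii(s: str) -> bool:
    try:
        s.encode("ascii")
        return True
    except UnicodeEncodeError:
        return False

print(is_ascii("EA0504"))    # True: printable ASCII
print(is_ascii("\x1d\x07"))  # True: control characters are ASCII too
print(is_ascii("文"))        # False: outside the ASCII range
```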
### Environment
```
- OS: [e.g. Linux or Windows]
- Python version: [3.9.18]
- Schemathesis version: [3.25.4]
- Spec version: [e.g. Open API 3.0.2]
```
### Additional context
Below information is excerpted from help documentation.
Generating strings (https://schemathesis.readthedocs.io/en/stable/data-generation.html#generating-strings)
In Schemathesis, you can control how strings are generated:
allow_x00 (default True): Determines whether to allow the generation of \x00 bytes within strings. It is useful to avoid rejecting tests as invalid by some web servers.
codec (default utf-8): Specifies the codec used for generating strings. It helps if you need to restrict the inputs to, for example, the ASCII range.
Global configuration (https://schemathesis.readthedocs.io/en/stable/data-generation.html#global-configuration)
CLI:
$ st run --generation-allow-x00=false ...
$ st run --generation-codec=ascii ...
Python:
import schemathesis
from schemathesis import GenerationConfig
schema = schemathesis.from_uri(
"https://example.schemathesis.io/openapi.json",
generation_config=GenerationConfig(allow_x00=False, codec='ascii'),
)
This configuration sets the string generation to disallow \x00 bytes and use the ASCII codec for all strings.
| closed | 2024-03-05T07:43:04Z | 2024-03-05T08:03:13Z | https://github.com/schemathesis/schemathesis/issues/2088 | [
"Type: Bug",
"Status: Needs Triage"
] | jiejunsailor | 1 |