| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
Zeyi-Lin/HivisionIDPhotos | fastapi | 181 | UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 21: illegal multibyte sequence. It ran fine before; today it suddenly started failing. | Traceback (most recent call last):
File "app.py", line 16, in <module>
size_list_dict_CN = csv_to_size_list(os.path.join(root_dir, "size_list_CN.csv"))
File "D:\ModeCode\HivisionIDPhotos\HivisionIDPhotos\data_utils.py", line 12, in csv_to_size_list
next(reader)
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 21: illegal multibyte sequence | closed | 2024-10-03T16:53:07Z | 2024-11-16T12:30:22Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/181 | [] | xlxxlup | 1 |
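The traceback above comes from `open()` falling back to the platform default codec (`gbk` on a Chinese-locale Windows) while the CSV file is UTF-8. A hedged sketch of the fix is to pass the encoding explicitly; only the helper's name comes from the traceback, its body below is a guess at its shape:

```python
import csv

def csv_to_size_list(csv_path):
    # Open with an explicit encoding rather than the locale default.
    # "utf-8-sig" also swallows a leading BOM (0xEF 0xBB 0xBF), a common
    # trigger for "can't decode byte ..." errors on Windows.
    size_dict = {}
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row, as in the original code
        for row in reader:
            if row:
                size_dict[row[0]] = row[1:]
    return size_dict
```

If some machines really do have a GBK-encoded copy of the file, a `try/except UnicodeDecodeError` fallback from `"utf-8"` to `"gbk"` is a more forgiving variant.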
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 486 | I have finished deploying; how do I call the API to batch-download videos? The current code seems to only fetch video info | I'm a beginner and didn't understand the code when I read it today. I have the deployment working and can fetch video info, but how do I call the API to batch-download videos? Which API is the video-download code? Thanks, and looking forward to your reply!
| open | 2024-10-09T08:46:02Z | 2024-10-09T08:48:24Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/486 | [
"enhancement"
] | wcpcp | 1 |
mwaskom/seaborn | matplotlib | 3,375 | `PlotSpecError` after setting color parameter on so.Plot.scale | Setting the color param with an integer series on so.Plot.add and then setting the color param on so.Plot.scale to a qualitative palette raises `PlotSpecError: Scaling operation failed for the color variable`. If the color palette is sequential, no error is raised. I don't believe this is intended, given that it works when the series is cast to a str, category, or float.
Example:
```python
import seaborn as sns
import seaborn.objects as so
# loading example dataset and splitting subject column into its number
fmri = sns.load_dataset('fmri').assign(subject=lambda df: df.subject.str.split('s', expand=True).iloc[:,1].astype(int))
(
so.Plot(fmri, x='timepoint', y='signal')
.add(so.Lines(alpha=.7), so.Agg(), color='subject')
.scale(color='deep')
.add(so.Line(linewidth=3), so.Agg())
)
```
<details><summary>Traceback</summary>
```
IndexError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/base.py:179, in Mark._resolve(self, data, name, scales)
178 try:
--> 179 feature = scale(value)
180 except Exception as err:
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/scales.py:129, in Scale.__call__(self, data)
128 if func is not None:
--> 129 trans_data = func(trans_data)
131 if scalar_data:
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/properties.py:682, in Color._get_nominal_mapping.<locals>.mapping(x)
681 out = np.full((len(ixs), colors.shape[1]), np.nan)
--> 682 out[use] = np.take(colors, ixs[use], axis=0)
683 return out
File <__array_function__ internals>:180, in take(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/numpy/core/fromnumeric.py:190, in take(a, indices, axis, out, mode)
95 """
96 Take elements from an array along an axis.
97
(...)
188 [5, 7]])
189 """
--> 190 return _wrapfunc(a, 'take', indices, axis=axis, out=out, mode=mode)
File /opt/conda/lib/python3.10/site-packages/numpy/core/fromnumeric.py:57, in _wrapfunc(obj, method, *args, **kwds)
56 try:
---> 57 return bound(*args, **kwds)
58 except TypeError:
59 # A TypeError occurs if the object does have such a method in its
60 # class, but its signature is not identical to that of NumPy's. This
(...)
64 # Call _wrapit from within the except clause to ensure a potential
65 # exception has a traceback chain.
IndexError: index 14 is out of bounds for axis 0 with size 14
The above exception was the direct cause of the following exception:
PlotSpecError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/IPython/core/formatters.py:344, in BaseFormatter.__call__(self, obj)
342 method = get_real_method(obj, self.print_method)
343 if method is not None:
--> 344 return method()
345 return None
346 else:
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/plot.py:279, in Plot._repr_png_(self)
277 def _repr_png_(self) -> tuple[bytes, dict[str, float]]:
--> 279 return self.plot()._repr_png_()
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/plot.py:821, in Plot.plot(self, pyplot)
817 """
818 Compile the plot spec and return the Plotter object.
819 """
820 with theme_context(self._theme_with_defaults()):
--> 821 return self._plot(pyplot)
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/plot.py:851, in Plot._plot(self, pyplot)
849 # Process the data for each layer and add matplotlib artists
850 for layer in layers:
--> 851 plotter._plot_layer(self, layer)
853 # Add various figure decorations
854 plotter._make_legend(self)
File /opt/conda/lib/python3.10/site-packages/seaborn/_core/plot.py:1366, in Plotter._plot_layer(self, p, layer)
1363 grouping_vars = mark._grouping_props + default_grouping_vars
1364 split_generator = self._setup_split_generator(grouping_vars, df, subplots)
-> 1366 mark._plot(split_generator, scales, orient)
1368 # TODO is this the right place for this?
1369 for view in self._subplots:
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/line.py:186, in Paths._plot(self, split_gen, scales, orient)
183 line_data[ax]["segments"].extend(segments)
184 n = len(segments)
--> 186 vals = resolve_properties(self, keys, scales)
187 vals["color"] = resolve_color(self, keys, scales=scales)
189 line_data[ax]["colors"].extend([vals["color"]] * n)
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/base.py:235, in resolve_properties(mark, data, scales)
231 def resolve_properties(
232 mark: Mark, data: DataFrame, scales: dict[str, Scale]
233 ) -> dict[str, Any]:
--> 235 props = {
236 name: mark._resolve(data, name, scales) for name in mark._mappable_props
237 }
238 return props
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/base.py:236, in <dictcomp>(.0)
231 def resolve_properties(
232 mark: Mark, data: DataFrame, scales: dict[str, Scale]
233 ) -> dict[str, Any]:
235 props = {
--> 236 name: mark._resolve(data, name, scales) for name in mark._mappable_props
237 }
238 return props
File /opt/conda/lib/python3.10/site-packages/seaborn/_marks/base.py:181, in Mark._resolve(self, data, name, scales)
179 feature = scale(value)
180 except Exception as err:
--> 181 raise PlotSpecError._during("Scaling operation", name) from err
183 if return_array:
184 feature = np.asarray(feature)
PlotSpecError: Scaling operation failed for the `color` variable. See the traceback above for more information.
```
</details>
| open | 2023-05-28T21:42:57Z | 2024-07-10T02:31:22Z | https://github.com/mwaskom/seaborn/issues/3375 | [
"bug",
"objects-scale"
] | joaofauvel | 3 |
Miserlou/Zappa | django | 1,922 | IsADirectoryError |
## Context
I am unable to deploy my Django 2 app with Python 3.7.3. I put all variables into the source code. With
`slim_handler` set to `false` I got:
```bash
Traceback (most recent call last):
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 748, in deploy
function_name=self.lambda_name)
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/core.py", line 1287, in get_lambda_function
FunctionName=function_name)
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the GetFunction operation: Function not found: arn:aws:lambda:ap-southeast-1:530403937392:function:futuready-th-li-sarit
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 2779, in handle
sys.exit(cli.handle())
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 546, in dispatch_command
self.deploy(self.vargs['zip'])
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 779, in deploy
self.lambda_arn = self.zappa.create_lambda_function(**kwargs)
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/core.py", line 1089, in create_lambda_function
response = self.lambda_client.create_function(**kwargs)
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.InvalidParameterValueException: An error occurred (InvalidParameterValueException) when calling the CreateFunction operation: Unzipped size must be smaller than 262144000 bytes
```
With `slim_handler` set to `true` I got:
```bash
$ zappa deploy sarit
Calling deploy for stage sarit..
Downloading and installing dependencies..
- typed-ast==1.4.0: Using locally cached manylinux wheel
- psycopg2-binary==2.8.3: Using locally cached manylinux wheel
- pillow==6.1.0: Using locally cached manylinux wheel
- mypy==0.720: Using locally cached manylinux wheel
- markupsafe==1.1.1: Using locally cached manylinux wheel
- lazy-object-proxy==1.4.2: Using locally cached manylinux wheel
- coverage==4.5.4: Using locally cached manylinux wheel
- cffi==1.12.3: Using locally cached manylinux wheel
- sqlite==python3: Using precompiled lambda package
'python3.7'
Packaging project as gzipped tarball.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/durationpy already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/python_dateutil-2.6.1.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/troposphere-2.5.1.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/chardet-3.0.4.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/toml already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/chardet already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/placebo-0.9.0.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/pip-19.2.3.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/Unidecode-1.1.1.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/easy_install.py already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/s3transfer already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/cfn_flip-1.2.1.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/docutils already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/wsgi_request_logger-0.4.6.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/past already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/lambda_packages-0.20.0.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/unidecode already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/placebo already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/argcomplete already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/Click-7.0.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/cfn_flip already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/troposphere already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/PyYAML-5.1.2.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/idna already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/werkzeug already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/wheel already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/click already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/slugify already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/lambda_packages already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/cfn_clean already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/__pycache__ already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/certifi-2019.6.16.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/future-0.16.0.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/botocore already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/requests already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/pip already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/argcomplete-1.9.3.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/docutils-0.15.2.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/certifi already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/zappa already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/wheel-0.33.6.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/idna-2.8.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/boto3-1.9.215.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/tqdm already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/six.py already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/toml-0.10.0.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/durationpy-0.5.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/kappa already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/urllib3-1.25.3.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/yaml already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/tqdm-4.19.1.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/jmespath already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/six-1.12.0.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/kappa-0.6.0.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/urllib3 already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/Werkzeug-0.15.5.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/setuptools already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/pkg_resources already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/zappa-0.48.2.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/hjson already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/requests-2.22.0.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/python_slugify-1.2.4.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/requestlogger already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/libfuturize already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/s3transfer-0.2.1.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/future already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/botocore-1.12.215.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/libpasteurize already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/jmespath-0.9.3.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/dateutil already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/toml.py already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/hjson-3.0.1.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/setuptools-41.2.0.dist-info already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/cfn_tools already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/boto3 already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/man already exists. Specify --upgrade to force replacement.
WARNING: Target directory /Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/bin already exists. Specify --upgrade to force replacement.
Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 2779, in handle
sys.exit(cli.handle())
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 546, in dispatch_command
self.deploy(self.vargs['zip'])
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 718, in deploy
self.create_package()
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/cli.py", line 2230, in create_package
disable_progress=self.disable_progress
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/core.py", line 627, in create_lambda_zip
self.copy_editable_packages(egg_links, temp_package_path)
File "/Users/sarit/.pyenv/versions/3.7.3/envs/life/lib/python3.7/site-packages/zappa/core.py", line 362, in copy_editable_packages
with open(egg_link, 'rb') as df:
IsADirectoryError: [Errno 21] Is a directory: '/Users/sarit/mein-codes/futuready_th_life_insurance/handler_venv/lib/python3.7/site-packages/zappa.egg-link'
==============
```
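The final frame shows `copy_editable_packages` calling `open(egg_link, 'rb')` on `zappa.egg-link`, which on this machine is a directory rather than a file, hence the `IsADirectoryError`. A defensive guard (an illustrative sketch, not Zappa's actual code) would skip such entries:

```python
import os

def iter_real_egg_links(egg_links):
    # Yield only paths that are actual files; a stray *directory* named
    # "zappa.egg-link" in site-packages is what open(..., "rb") chokes on.
    for egg_link in egg_links:
        if os.path.isfile(egg_link):
            yield egg_link
```

In practice the quickest unblock is probably deleting the stray `zappa.egg-link` directory from site-packages (or rebuilding the virtualenv) rather than patching Zappa.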
## Expected Behavior
`zappa` should replace all the packages and deploy the Django application
## Actual Behavior
It raises an error and stops the deployment
## Possible Fix
No idea
## Steps to Reproduce
1. Django 2 with Python 3.7.3, installed through `pyenv`
2. `zappa deploy`
3. Put all configuration into `settings.py` file
## Your Environment
* Zappa version used: `0.48.2`
* Operating System and Python version: macOS `10.14.5`, Python `3.7.3`
* The output of `pip freeze`:
```bash
alabaster==0.7.12
appdirs==1.4.3
appnope==0.1.0
argcomplete==1.9.3
argon2-cffi==19.1.0
asn1crypto==0.24.0
astroid==2.2.5
atomicwrites==1.3.0
attrs==19.1.0
autoflake==1.3.1
Babel==2.7.0
backcall==0.1.0
backports.csv==1.0.7
black==19.3b0
boto3==1.9.215
botocore==1.12.215
certifi==2019.6.16
cffi==1.12.3
cfn-flip==1.2.1
chardet==3.0.4
Click==7.0
Collectfast==1.0.0
coreapi==2.3.3
coreschema==0.0.4
coverage==4.5.4
cryptography==2.7
decorator==4.4.0
defusedxml==0.6.0
diff-match-patch==20181111
Django==2.2.4
django-allauth==0.39.1
django-anymail==6.1.0
django-coverage-plugin==1.6.0
django-crispy-forms==1.7.2
django-debug-toolbar==2.0
django-environ==0.4.5
django-extensions==2.2.1
django-filter==2.2.0
django-import-export==1.2.0
django-model-utils==3.2.0
django-modeltranslation==0.13.3
django-money==0.15
django-redis==4.10.0
django-storages==1.7.1
djangorestframework==3.10.2
djangorestframework-simplejwt==4.3.0
docutils==0.15.2
durationpy==0.5
entrypoints==0.3
et-xmlfile==1.0.1
factory-boy==2.12.0
Faker==2.0.1
flake8==3.7.8
future==0.16.0
gunicorn==19.9.0
hjson==3.0.1
idna==2.8
imagesize==1.1.0
importlib-metadata==0.19
ipdb==0.12.2
ipython==7.7.0
ipython-genutils==0.2.0
isort==4.3.21
itypes==1.1.0
jdcal==1.4.1
jedi==0.15.1
Jinja2==2.10.1
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
lazy-object-proxy==1.4.2
MarkupSafe==1.1.1
mccabe==0.6.1
model-mommy==1.6.0
more-itertools==7.2.0
mypy==0.720
mypy-extensions==0.4.1
oauthlib==3.1.0
odfpy==1.4.0
openpyxl==2.6.3
packaging==19.1
parso==0.5.1
pexpect==4.7.0
pickleshare==0.7.5
Pillow==6.1.0
pip-tools==4.1.0
placebo==0.9.0
pluggy==0.12.0
prompt-toolkit==2.0.9
psycopg2==2.8.3
psycopg2-binary==2.8.3
ptyprocess==0.6.0
py==1.8.0
py-moneyed==0.8.0
pycodestyle==2.5.0
pycparser==2.19
pyflakes==2.1.1
Pygments==2.4.2
PyJWT==1.7.1
pylint==2.3.1
pylint-django==2.0.11
pylint-plugin-utils==0.5
pyOpenSSL==19.0.0
pyparsing==2.4.2
pytest==5.1.1
pytest-django==3.5.1
pytest-sugar==0.9.2
python-dateutil==2.6.1
python-slugify==1.2.4
python3-openid==3.1.0
pytz==2019.2
PyYAML==5.1.2
redis==3.3.8
requests==2.22.0
requests-oauthlib==1.2.0
s3transfer==0.2.1
sentry-sdk==0.11.1
six==1.12.0
snowballstemmer==1.9.0
Sphinx==2.2.0
sphinxcontrib-applehelp==1.0.1
sphinxcontrib-devhelp==1.0.1
sphinxcontrib-htmlhelp==1.0.2
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.2
sphinxcontrib-serializinghtml==1.1.3
sqlparse==0.3.0
tablib==0.13.0
termcolor==1.1.0
text-unidecode==1.2
toml==0.10.0
tqdm==4.19.1
traitlets==4.3.2
troposphere==2.5.1
typed-ast==1.4.0
typing==3.7.4.1
typing-extensions==3.7.4
Unidecode==1.1.1
uritemplate==3.0.0
urllib3==1.25.3
wcwidth==0.1.7
Werkzeug==0.15.5
wrapt==1.11.2
wsgi-request-logger==0.4.6
xlrd==1.2.0
xlwt==1.3.0
yapf==0.28.0
zappa==0.48.2
zipp==0.6.0
```
* Link to your project (optional):
* Your `zappa_settings.py`:
I don't use this
* `zappa_settings.json`
```json
"sarit": {
"profile_name": "xxx-dev",
"project_name": "yyyy-th-li",
"runtime": "python3.7",
"s3_bucket": "futh-life-zappa-dev-sarit",
"aws_region": "ap-southeast-1",
"slim_handler": true,
"django_settings": "dev",
"memory_size": 3008
}
``` | closed | 2019-08-27T02:48:26Z | 2019-08-27T03:10:48Z | https://github.com/Miserlou/Zappa/issues/1922 | [] | elcolie | 1 |
biolab/orange3 | scikit-learn | 5,979 | There is no disk in the drive error |
**What's wrong?**
After installing the add-ons, I can't open Orange anymore and the following error appears:
There is no disk in the drive. Please insert a disk into drive
**What's your environment?**
- Operating system: Windows 7 Enterprise, 64-bit, 32 GB RAM
- Orange version: 3.3.32
- How you installed Orange: webpage

Any idea?? | closed | 2022-05-19T16:08:33Z | 2022-05-20T11:47:45Z | https://github.com/biolab/orange3/issues/5979 | [
"bug report"
] | fcocespedes | 3 |
Miserlou/Zappa | flask | 1,983 | General question about large uploads. | I am running DRF.
```
django==3.0.1
django-storages==1.8
django-filter==2.2.0
djangorestframework==3.11.0
djangorestframework-jwt==1.11.0
psycopg2-binary==2.8.4
zappa==0.48.2
django-anymail[mailgun]==7.0.0
botocore==1.14.5
boto3==1.11.5
python-magic==0.4.15
```
And I am trying to upload a large file over 10MB through an endpoint I created. Small files work. I am not seeing any errors in the Django/Lambda logs. The only response I get is:
```
{
"message": "Internal server error"
}
```
I maxed out the lambda memory, and timeout. Is this a problem with API gateway? | closed | 2020-01-20T09:52:50Z | 2020-01-21T22:13:24Z | https://github.com/Miserlou/Zappa/issues/1983 | [] | corpulent | 2 |
Lightning-AI/pytorch-lightning | data-science | 20,527 | Encourage NumPy dependent projects to upgrade to version `>=2.5.0` | ### 📚 Documentation
As mentioned in the [Changelog](https://github.com/Lightning-AI/pytorch-lightning/pull/20369), version `2.5.0` introduces a robust scheme for seeding NumPy dependent dataloader workers.
As described in the related PR #20369, until this version Lightning was not handling correctly the seeding for NumPy, meaning that all projects using NumPy for controlling randomness (e.g. random transformations for augmentation) are affected (see [here](https://www.reddit.com/r/MachineLearning/comments/mocpgj/p_using_pytorch_numpy_a_bug_that_plagues/) and also [here](https://tanelp.github.io/posts/a-bug-that-plagues-thousands-of-open-source-ml-projects/)).
Since NumPy is commonly used for augmentations in deep learning projects (see also the example from [PyTorch docs](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#transforms)) and proper control of seeding/randomness is essential for any project, I think it would be beneficial to add a note/warning in the docs to use version `2.5.0` or higher if randomness depends on NumPy.
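To make the underlying pitfall concrete, each dataloader worker needs its own NumPy seed. A minimal sketch of a `worker_init_fn` (kept torch-free for brevity: `base_seed` is an assumption standing in for `torch.initial_seed() % 2**32`):

```python
import random

import numpy as np


def seed_worker(worker_id: int, base_seed: int = 42) -> None:
    # in a real DataLoader worker_init_fn, derive this from
    # torch.initial_seed() % 2**32 instead of a fixed base_seed
    worker_seed = (base_seed + worker_id) % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)


seed_worker(0)
a = np.random.rand()
seed_worker(0)
b = np.random.rand()
print(a == b)  # True: re-seeding worker 0 reproduces the same draw
```

Without a scheme like this, forked workers inherit identical NumPy state and produce identical "random" augmentations.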
cc @borda | open | 2025-01-05T15:39:10Z | 2025-01-06T22:50:58Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20527 | [
"docs",
"needs triage"
] | adosar | 0 |
mwouts/itables | jupyter | 294 | Changing the color of buttons in searchBuilder | Hello, currently the color of searchBuilder buttons is dark grey which makes it difficult to read the text inside them. Also adding searchBuilder to the table makes the search box and number of entries per page dark-coloured and the text is completely invisible. I would like to make all buttons and input fields light but I am not sure how to do it. I would be grateful for help.
Here is my example file:
<img width="563" alt="Screenshot 2024-06-21 at 13 10 07" src="https://github.com/mwouts/itables/assets/58361053/03d27cef-0bc4-4bfd-8126-4418ab60c8a2">
~~~
---
title: Test
standalone: true
format:
html:
echo: false
css: styles.css
---
``` {python}
from itables import init_notebook_mode, show
from sklearn.datasets import load_diabetes
init_notebook_mode(all_interactive=True)
diabetes = load_diabetes(as_frame=True)
show(
diabetes.data,
layout={"top1": "searchBuilder"},
searchBuilder={
"preDefined": {"criteria": [{"data": "age", "condition": ">", "value": [0]}]}
},
)
```
~~~
I started with adding the following styles.css but it didn't work:
```
html.dark div.dtsb-searchBuilder button.dtsb-button,
html.dark div.dtsb-searchBuilder select.dtsb-dropDown,
html.dark div.dtsb-searchBuilder input.dtsb-input {
background-color: #F1F1F1 !important;
}
```
I don't have experience with html/css and not sure how to proceed. | closed | 2024-06-21T11:50:35Z | 2024-09-07T23:11:36Z | https://github.com/mwouts/itables/issues/294 | [] | mshqn | 7 |
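One thing worth trying (an assumption, since the computed styles aren't visible here): dark themes often set a light text color that becomes invisible on a light background, so the `color` property may need overriding together with the background, reusing the same selectors:

```css
/* assumption: the dark theme's light text color needs overriding too */
html.dark div.dtsb-searchBuilder button.dtsb-button,
html.dark div.dtsb-searchBuilder select.dtsb-dropDown,
html.dark div.dtsb-searchBuilder input.dtsb-input {
    background-color: #f1f1f1 !important;
    color: #212529 !important;
}
```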
CorentinJ/Real-Time-Voice-Cloning | pytorch | 810 | utterances_per_speaker affect | Hi, guys! I have a question:
What does `utterances_per_speaker` affect? Does it only accelerate training because we use a bigger batch?
Thanks! | closed | 2021-08-05T19:14:32Z | 2021-08-25T08:56:33Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/810 | [] | swel4ik | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 204 | I merged and quantized the 7B and 13B models for everyone to download, and wrote usage instructions | Merging and quantization are actually both simple and fast, but nobody has written documentation on how to use the results 😂
Download repository: https://huggingface.co/johnlui/chinese-alpaca-7b-and-13b-quantized
Move the `llama-7b-hf` and `llama-13b-hf` folders from that repository into your project's `./models` directory. These folders work with both `llama.cpp` and `text-generation-webui`. | closed | 2023-04-24T05:41:38Z | 2023-05-23T22:02:44Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/204 | [
"stale"
] | johnlui | 24 |
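The move step described in the issue above, as a runnable shell sketch (the two checkpoint folders are stand-ins created on the spot so the commands run anywhere):

```shell
# stand-ins for the downloaded checkpoint folders from the repo
mkdir -p llama-7b-hf llama-13b-hf

# move both folders into the project's ./models directory, where
# llama.cpp and text-generation-webui expect them
mkdir -p models
mv llama-7b-hf llama-13b-hf models/
ls models
```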
JoeanAmier/TikTokDownloader | api | 393 | How can I call "batch download account works" directly from a program, without interaction? | Hello author and everyone: how can I skip the interactive flow (browser cookies are already loaded) — that is, skip launching main.py and entering 1 or 2 to choose a feature —
and call the two features below directly, to fully automate scheduled downloads of a given creator's updates?
1. Batch download account works (Douyin)
2. Batch download link works (Douyin)
Thank you very much for your guidance. | closed | 2025-01-26T01:51:51Z | 2025-01-26T03:49:42Z | https://github.com/JoeanAmier/TikTokDownloader/issues/393 | [] | 9ihbd2DZSMjtsf7vecXjz | 2 |
FlareSolverr/FlareSolverr | api | 1,277 | Changed the Flaresolverr_service.py but still can't make it work. | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version:3.3.21
- Last working FlareSolverr version:3.3.21
- Operating system:unraid
- Are you using Docker: yes
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: yes (with jackett)
- Are you using a Proxy: no
- Are you using Captcha Solver: ?
- If using captcha solver, which one:
- URL to test this issue:
```
### Description
Good morning,
I followed the procedure mentioned by other users to resolve a problem with the YGG tracker.
It involved modifying the file flaresolverr_service.py.
I logged into my Docker container as root and modified the file with vim, using the latest version available on GitHub.
Despite this, I cannot add my indexer to JackettVPN.
Any idea why?
Thanks in advance
### Logged Error Messages
```text
FlareSolverrSharp.Exceptions.FlareSolverrException: Exception: System.Threading.Tasks.TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 60 seconds elapsing.
---> System.TimeoutException: A task was canceled.
---> System.Threading.Tasks.TaskCanceledException: A task was canceled.
at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
at System.Net.Http.HttpClient.HandleFailure(Exception e, Boolean telemetryStarted, HttpResponseMessage response, CancellationTokenSource cts, CancellationToken cancellationToken, CancellationTokenSource pendingRequestsCts)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at FlareSolverrSharp.Solvers.FlareSolverr.<>c__DisplayClass12_0.<<SendFlareSolverrRequest>b__0>d.MoveNext()
[v0.22.353.0] FlareSolverrSharp.Exceptions.FlareSolverrException: Exception: System.Threading.Tasks.TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 60 seconds elapsing.
---> System.TimeoutException: A task was canceled.
---> System.Threading.Tasks.TaskCanceledException: A task was canceled.
at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsyncCore(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
at System.Net.Http.HttpClient.HandleFailure(Exception e, Boolean telemetryStarted, HttpResponseMessage response, CancellationTokenSource cts, CancellationToken cancellationToken, CancellationTokenSource pendingRequestsCts)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at FlareSolverrSharp.Solvers.FlareSolverr.<>c__DisplayClass12_0.<<SendFlareSolverrRequest>b__0>d.MoveNext()
at FlareSolverrSharp.Solvers.FlareSolverr.<>c__DisplayClass12_0.<<SendFlareSolverrRequest>b__0>d.MoveNext()
--- End of stack trace from previous location ---
at FlareSolverrSharp.Utilities.SemaphoreLocker.LockAsync[T](Func`1 worker)
at FlareSolverrSharp.Solvers.FlareSolverr.SendFlareSolverrRequest(HttpContent flareSolverrRequest)
at FlareSolverrSharp.Solvers.FlareSolverr.Solve(HttpRequestMessage request, String sessionId)
at FlareSolverrSharp.ClearanceHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
at Jackett.Common.Utils.Clients.HttpWebClient2.Run(WebRequest webRequest) in ./Jackett.Common/Utils/Clients/HttpWebClient2.cs:line 180
at Jackett.Common.Utils.Clients.WebClient.GetResultAsync(WebRequest request) in ./Jackett.Common/Utils/Clients/WebClient.cs:line 186
at Jackett.Common.Indexers.BaseWebIndexer.RequestWithCookiesAsync(String url, String cookieOverride, RequestType method, String referer, IEnumerable`1 data, Dictionary`2 headers, String rawbody, Nullable`1 emulateBrowser) in ./Jackett.Common/Indexers/BaseIndexer.cs:line 603
at Jackett.Common.Indexers.Definitions.CardigannIndexer.GetConfigurationForSetup(Boolean automaticlogin, String cookies) in ./Jackett.Common/Indexers/Definitions/CardigannIndexer.cs:line 990
at Jackett.Common.Indexers.Definitions.CardigannIndexer.DoLogin(String cookies) in ./Jackett.Common/Indexers/Definitions/CardigannIndexer.cs:line 607
at Jackett.Common.Indexers.Definitions.CardigannIndexer.ApplyConfiguration(JToken configJson) in ./Jackett.Common/Indexers/Definitions/CardigannIndexer.cs:line 1066
at Jackett.Server.Controllers.IndexerApiController.UpdateConfig(ConfigItem[] config) in ./Jackett.Server/Controllers/IndexerApiController.cs:line 97
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfIActionResultExecutor.Execute(ActionContext actionContext, IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Awaited|12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeInnerFilterAsync>g__Awaited|13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Jackett.Server.Middleware.CustomExceptionHandler.Invoke(HttpContext httpContext) in ./Jackett.Server/Middleware/CustomExceptionHandler.cs:line 26
2024-07-20 12:03:25 Error
System.UriFormatException: Invalid URI: The format of the URI could not be determined.
[v0.22.353.0] System.UriFormatException: Invalid URI: The format of the URI could not be determined.
at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind, UriCreationOptions& creationOptions)
at System.Uri..ctor(String uriString)
at Jackett.Common.Indexers.BaseIndexer.LoadValuesFromJson(JToken jsonConfig, Boolean useProtectionService) in ./Jackett.Common/Indexers/BaseIndexer.cs:line 138
at Jackett.Common.Indexers.Definitions.CardigannIndexer.LoadValuesFromJson(JToken jsonConfig, Boolean useProtectionService) in ./Jackett.Common/Indexers/Definitions/CardigannIndexer.cs:line 240
at Jackett.Common.Indexers.Definitions.CardigannIndexer.ApplyConfiguration(JToken configJson) in ./Jackett.Common/Indexers/Definitions/CardigannIndexer.cs:line 1064
at Jackett.Server.Controllers.IndexerApiController.UpdateConfig(ConfigItem[] config) in ./Jackett.Server/Controllers/IndexerApiController.cs:line 97
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfIActionResultExecutor.Execute(ActionContext actionContext, IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Awaited|12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|20_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Jackett.Server.Middleware.CustomExceptionHandler.Invoke(HttpContext httpContext) in ./Jackett.Server/Middleware/CustomExceptionHandler.cs:line 26
```
### Screenshots
<img width="1503" alt="Capture d’écran 2024-07-20 113808" src="https://github.com/user-attachments/assets/17648964-fa45-489e-9a34-f12e8a681272">
<img width="1504" alt="Capture d’écran 2024-07-20 113822" src="https://github.com/user-attachments/assets/38a4bd38-11ea-4cbd-b5bd-dc296dc5f488">
| closed | 2024-07-20T10:14:07Z | 2024-07-20T12:19:54Z | https://github.com/FlareSolverr/FlareSolverr/issues/1277 | [
"duplicate"
] | concerty | 1 |
allenai/allennlp | data-science | 5700 | Params Class Does Not Implement __repr__ | `allennlp.common.params.Params` only implements `__str__`; I think implementing `__repr__` as well would be better. | closed | 2022-08-11T14:09:15Z | 2022-08-25T16:09:55Z | https://github.com/allenai/allennlp/issues/5700 | [
"Feature request",
"stale"
] | tianliuxin | 1 |
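A sketch of the requested change, using a stand-in class (not AllenNLP's real `Params`):

```python
class Params:
    """Stand-in for allennlp.common.params.Params, illustration only."""

    def __init__(self, params: dict):
        self.params = params

    def __str__(self) -> str:
        return f"Params({self.params})"

    # reuse __str__ so repr(p), debuggers, and containers of Params
    # all show the parameters instead of the default <object at 0x...>
    __repr__ = __str__


print(repr(Params({"lr": 0.001})))  # Params({'lr': 0.001})
```

Without `__repr__`, `print([params_a, params_b])` falls back to the opaque default object representation, which is the inconvenience the issue describes.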
dynaconf/dynaconf | fastapi | 1,133 | [RFC]typed:Add support for parametrized tuple type | ```python
field: tuple[int, str, float] = (1, "hello", 4.2)
field: tuple[int, ...] = (1, 2, 3, 4, 5, ...)
```
Add support for validating these types in the `is_type_of` function.
| open | 2024-07-06T15:15:45Z | 2024-07-08T18:38:18Z | https://github.com/dynaconf/dynaconf/issues/1133 | [
"Not a Bug",
"RFC",
"typed_dynaconf"
] | rochacbruno | 0 |
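A minimal stdlib sketch of what such a check could look like — not dynaconf's actual `is_type_of`, just an illustration of the two tuple forms from the RFC using `typing.get_origin`/`get_args`:

```python
from typing import get_args, get_origin


def is_type_of(value, annotation) -> bool:
    # illustrative only: handles parametrized tuple annotations
    origin = get_origin(annotation)
    if origin is tuple:
        if not isinstance(value, tuple):
            return False
        args = get_args(annotation)
        if len(args) == 2 and args[1] is Ellipsis:
            # homogeneous, variable-length: tuple[int, ...]
            return all(isinstance(v, args[0]) for v in value)
        # fixed-length, heterogeneous: tuple[int, str, float]
        return len(value) == len(args) and all(
            isinstance(v, t) for v, t in zip(value, args)
        )
    return isinstance(value, annotation)


print(is_type_of((1, "hello", 4.2), tuple[int, str, float]))  # True
print(is_type_of((1, 2, 3, 4, 5), tuple[int, ...]))           # True
```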
StackStorm/st2 | automation | 5,334 | st2actionrunner doesn't respect system user when doing private repo pack installs | ## SUMMARY
I have a special user on my st2 machines that has permissions for installing packs. This is also the same user listed in the st2.conf file as the system user (replacing stanley). When doing an install from a private repo, the st2actionrunner continues to run as root, and therefore none of the SSH keys are set up for this.
### STACKSTORM VERSION
Paste the output of ``st2 --version``:
st2 3.5.0, on Python 3.6.8
##### OS, environment, install method
HA cluster (non-k8s): 2 app nodes and 1 controller
## Steps to reproduce the problem
As a non-root user try to install a package from a private git repo
## Expected Results
The package to install using the system user specified in the st2.conf file so it would use that user's keys.
## Actual Results
st2actionrunner runs as the root user instead of the system user specified in st2.conf.
Looking at the source, the install only happens as the current user running the process. In this case st2actionrunner runs as the root user, while all the other services run as the system user specified in st2.conf.
Thinking of ways to work around this one without adding keys to the root user.
Thanks!
| open | 2021-08-18T14:18:20Z | 2022-04-16T05:52:32Z | https://github.com/StackStorm/st2/issues/5334 | [
"stale"
] | minsis | 1 |
tableau/server-client-python | rest-api | 809 | Update group license_mode | Hi,
updating from 'onSync' to 'onLogin' works, but updating from 'onLogin' to 'onSync' doesn't (no error, everything looks fine, but the field does not change) | closed | 2021-02-28T12:27:04Z | 2021-04-23T23:24:17Z | https://github.com/tableau/server-client-python/issues/809 | [] | idanml | 3 |
pydantic/pydantic | pydantic | 10,852 | Support for custom Pydantic classes to use with to_strict_json_schema | ### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing
### Description
In Pydantic V2, the OpenAI library provides a function to transform a Pydantic class into a Json schema. This function is called `to_strict_json_schema` and is found in `openai.lib._pydantic`. However, this method is private and does not support custom-made classes. For custom classes, it is necessary to use the `_ensure_strict_json_schema()` function.
Here is the implementation of `to_strict_json_schema`:
```python
def to_strict_json_schema(model: type[pydantic.BaseModel] | pydantic.TypeAdapter[Any]) -> dict[str, Any]:
if inspect.isclass(model) and is_basemodel_type(model):
schema = model_json_schema(model)
elif PYDANTIC_V2 and isinstance(model, pydantic.TypeAdapter):
schema = model.json_schema()
else:
raise TypeError(f"Non BaseModel types are only supported with Pydantic v2 - {model}")
return _ensure_strict_json_schema(schema, path=(), root=schema)
```
If I attempt to use `to_strict_json_schema()` with a custom class, I receive the following error:
```python
class CustomTopicClassification(BaseModel):
custom_topics: List[str] = Field(default_factory=list)
schema = CustomTopicClassification()
to_strict_json_schema(schema)
```
**Error:**
```
TypeError: Non BaseModel types are only supported with Pydantic v2 - topics=[]
```
However, if I use `_ensure_strict_json_schema(schema.model_json_schema(), path=(), root=schema.model_json_schema())`, the schema is transformed, but it does not match the response format expected by OpenAI.
### Transformation Issue and Custom Solution
If I apply the following transformation to the Pydantic class:
```python
_ensure_strict_json_schema(
schema.model_json_schema(),
path=(),
root=schema.model_json_schema()
)
```
It returns:
```json
{
"properties": {
"custom_topics": {
"items": {
"type": "string"
},
"title": "Custom Topics",
"type": "array"
}
},
"title": "CustomTopicClassification",
"type": "object",
"additionalProperties": false,
"required": ["custom_topics"]
}
```
However, the OpenAI Batch API requires the following format:
```json
{
"type": "json_schema",
"json_schema": {
"name": "CustomTopicClassification",
"schema": {
"type": "object",
"properties": {
"custom_topics": {
"type": "array",
"items": {
"type": "string",
"enum": []
}
}
},
"required": ["custom_topics"],
"additionalProperties": false
},
"strict": true
}
}
```
### Custom Transforming Function
To address this discrepancy, I created a custom transforming function:
```python
def transform_pydantic_to_json_schema(pydantic_schema):
"""
Transform a Pydantic JSON schema dictionary into the required JSON schema format.
:param pydantic_schema: The original Pydantic JSON schema.
:return: Transformed JSON schema.
"""
transformed_schema = {
"type": "json_schema",
"json_schema": {
"name": pydantic_schema.get("title", "UnknownSchema"),
"schema": {
"type": pydantic_schema["type"],
"properties": {},
"required": pydantic_schema.get("required", []),
"additionalProperties": pydantic_schema.get("additionalProperties", True),
},
"strict": True
}
}
# Process properties
for prop, details in pydantic_schema.get("properties", {}).items():
transformed_property = {"type": details["type"]}
# Handle nested "items" with "enum"
if "items" in details:
transformed_property["items"] = {
"type": details["items"].get("type"),
"enum": details["items"].get("enum", [])
}
transformed_schema["json_schema"]["schema"]["properties"][prop] = transformed_property
return transformed_schema
```
### Explanation
- **Function Purpose**: The `transform_pydantic_to_json_schema()` function converts a Pydantic JSON schema dictionary into the format required by the OpenAI Batch API.
- **Parameters**:
- `pydantic_schema`: The original Pydantic JSON schema.
- **Transformation Steps**:
- A dictionary named `transformed_schema` is created to meet the OpenAI Batch API format.
- The **name** is set from the `title` of the Pydantic schema (or defaults to `"UnknownSchema"`).
- The **type**, **properties**, **required**, and **additionalProperties** are set from the Pydantic schema.
- **Properties** are processed individually, and nested structures such as `"items"` are handled accordingly.
This function ensures that the output schema matches the required response format for the OpenAI Batch API. Let me know if this approach aligns with your requirements or if further adjustments are needed.
Pydantic should provide a public method to achieve this conversion, enabling proper use of Pydantic models with the OpenAI Batch API.
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [ ] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [X] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [X] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [ ] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc.
| closed | 2024-11-15T09:45:55Z | 2024-11-18T08:44:45Z | https://github.com/pydantic/pydantic/issues/10852 | [
"feature request"
] | tomascufaro | 4 |
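A quick self-contained check of the `transform_pydantic_to_json_schema` helper from the issue body, run on the plain-dict schema shown earlier (no pydantic or openai imports needed):

```python
def transform_pydantic_to_json_schema(pydantic_schema):
    # copied from the issue body above
    transformed_schema = {
        "type": "json_schema",
        "json_schema": {
            "name": pydantic_schema.get("title", "UnknownSchema"),
            "schema": {
                "type": pydantic_schema["type"],
                "properties": {},
                "required": pydantic_schema.get("required", []),
                "additionalProperties": pydantic_schema.get("additionalProperties", True),
            },
            "strict": True,
        },
    }
    for prop, details in pydantic_schema.get("properties", {}).items():
        transformed_property = {"type": details["type"]}
        if "items" in details:
            transformed_property["items"] = {
                "type": details["items"].get("type"),
                "enum": details["items"].get("enum", []),
            }
        transformed_schema["json_schema"]["schema"]["properties"][prop] = transformed_property
    return transformed_schema


# the strict schema produced earlier in the issue, as a plain dict
pydantic_schema = {
    "properties": {
        "custom_topics": {
            "items": {"type": "string"},
            "title": "Custom Topics",
            "type": "array",
        }
    },
    "title": "CustomTopicClassification",
    "type": "object",
    "additionalProperties": False,
    "required": ["custom_topics"],
}

out = transform_pydantic_to_json_schema(pydantic_schema)
print(out["json_schema"]["name"])  # CustomTopicClassification
```

On this input the helper reproduces the Batch API shape requested in the issue, including the injected empty `enum` list.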
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,342 | Pretrained model about day to night | Hi Thanks for your repository!
Is there any pre-trained model which convert day time images to night time images available?
I saw the video your team uploaded using bdd100k datasets.
Thanks
| open | 2021-11-23T22:18:10Z | 2023-03-09T09:10:50Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1342 | [] | oliver0922 | 2 |
ets-labs/python-dependency-injector | flask | 546 | Builtin method 'open' not working on callable. | Hi, I'm using this library very well, thank you.
I tried callable injection, but not working on:
```python3
class Container(containers.DeclarativeContainer):
buffered_byte_reader_factory=providers.Callable(
open,
mode="rb",
)
```
Exception: `TypeError: open() missing required argument 'file' (pos 1)` | closed | 2022-01-14T04:32:11Z | 2022-01-27T03:59:54Z | https://github.com/ets-labs/python-dependency-injector/issues/546 | [
"question"
] | minkichoe | 4 |
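The error above is expected behaviour: `providers.Callable(open, mode="rb")` binds only the keyword argument, and — if I recall the dependency-injector API correctly — the remaining positional `file` argument is forwarded from the call site, e.g. `container.buffered_byte_reader_factory("data.bin")`. The same pattern can be illustrated with stdlib `functools.partial` (a temp file is used so the sketch runs):

```python
import os
import tempfile
from functools import partial

# providers.Callable(open, mode="rb") behaves like this partial: mode is
# bound at definition time, the positional `file` argument at call time
reader_factory = partial(open, mode="rb")

fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

with reader_factory(path) as f:  # supplying `file` here avoids the TypeError
    data = f.read()
os.remove(path)
print(data)  # b'hello'
```

Calling `reader_factory()` with no arguments raises exactly the `TypeError: open() missing required argument 'file'` reported above.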
Kanaries/pygwalker | plotly | 69 | Force table format and crosstab to excel | Thank you so much for this package, I've been looking for something like this for ages. Some features from Tableau I would like to see:
1 - Force table format, as I very much like it for basic data exploration.
2 - Crosstab to excel, to download the data being shown in the viz to Excel, CSV or something.
Also, reinforcing #11, support for streamlit would very much be appreciated and would increase the potential of this package by a lot. | closed | 2023-03-07T18:09:49Z | 2023-09-20T19:32:54Z | https://github.com/Kanaries/pygwalker/issues/69 | [
"enhancement",
"graphic-walker"
] | danilo-css | 1 |
wkentaro/labelme | deep-learning | 662 | [BUG] AttributeError: Module 'labelme.utils' has no attribute 'draw_label' | #646 the issue is still present in the latest conda install,
```
>>> import labelme
>>> labelme.utils.draw_label
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'labelme.utils' has no attribute 'draw_label'
>>> labelme.__version__
'3.21.1'
>>> exit()
```
```
2020-05-20 17:33:35.357 python[9253:124484] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to (null)
Traceback (most recent call last):
File "/Users/naman/miniconda3/envs/py3/bin/labelme_draw_label_png", line 11, in <module>
sys.exit(main())
File "/Users/naman/miniconda3/envs/py3/lib/python3.7/site-packages/labelme/cli/draw_label_png.py", line 17, in main
lbl = np.asarray(PIL.Image.open(args.label_png))
File "/Users/naman/miniconda3/envs/py3/lib/python3.7/site-packages/PIL/Image.py", line 2843, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'out/img.png'
```
Please release a new fix version to conda-forge | closed | 2020-05-20T12:08:07Z | 2020-05-25T23:44:33Z | https://github.com/wkentaro/labelme/issues/662 | [
"issue::bug"
] | thenamangoyal | 2 |
jina-ai/serve | machine-learning | 5838 | Change documentation (review especially compatibility/interoperability and Hub vision) | closed | 2023-04-27T22:31:40Z | 2023-07-10T08:49:25Z | https://github.com/jina-ai/serve/issues/5838 | [
"docarrayv2"
] | JoanFM | 0 | |
coqui-ai/TTS | deep-learning | 2,378 | [Bug] unable to run tts-server on windows using miniconda | ### Describe the bug
Hello. So I created a new conda env called coquitts using:
conda create -n coquitts python=3.10
then, I activated it
conda activate coquitts
pip install tts
after it was done, I tried tts-server --model_name d:\downloads\best_model_1000865.pth
But I got this error.
Traceback (most recent call last):
File "D:\python3\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "D:\python3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "D:\python3\Scripts\tts-server.exe\__main__.py", line 4, in <module>
File "D:\python3\lib\site-packages\TTS\server\server.py", line 103, in <module>
synthesizer = Synthesizer(
File "D:\python3\lib\site-packages\TTS\utils\synthesizer.py", line 75, in __init__
self._load_tts(tts_checkpoint, tts_config_path, use_cuda)
File "D:\python3\lib\site-packages\TTS\utils\synthesizer.py", line 108, in _load_tts
self.tts_config = load_config(tts_config_path)
File "D:\python3\lib\site-packages\TTS\config\__init__.py", line 79, in load_config
ext = os.path.splitext(config_path)[1]
File "D:\python3\lib\ntpath.py", line 204, in splitext
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
### To Reproduce
In the description above.
### Expected behavior
It should have run the TTS server and told me which port it's listening on, as usual.
### Logs
_No response_
### Environment
```shell
tts 0.11.1, miniconda.python 3.10, no GPU, automatic torch installation that happened automatically when I did pip install tts, windows 10x64
```
### Additional context
_No response_ | closed | 2023-03-04T10:06:12Z | 2023-03-06T08:20:50Z | https://github.com/coqui-ai/TTS/issues/2378 | [
"bug"
] | king-dahmanus | 3 |
cvat-ai/cvat | pytorch | 8,722 | How can I create a task from remote sources? | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
_No response_
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
I want to create a new task from remote sources whose URL is on an internal network, but I got this error:

Does it need internet access? I don't have internet on my virtual machine, but I can ping the machine that this URL belongs to.
Thanks in advance.
### Environment
_No response_ | closed | 2024-11-20T05:18:47Z | 2024-11-21T09:46:34Z | https://github.com/cvat-ai/cvat/issues/8722 | [
"question"
] | azadehashouri | 3 |
hpcaitech/ColossalAI | deep-learning | 5,805 | [BUG]: FileNotFoundError: [Errno 2] No such file or directory: '/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/pybind/inference/inference.cpp' | ### Is there an existing issue for this bug?
- [X] I have searched the existing issues
### 🐛 Describe the bug
```
$ ls ~/llama8b/TensorRT-LLM/Meta-Llama-3-8B-Instruct
config.json LICENSE model-00002-of-00004.safetensors model-00004-of-00004.safetensors original special_tokens_map.json tokenizer.json
generation_config.json model-00001-of-00004.safetensors model-00003-of-00004.safetensors model.safetensors.index.json README.md tokenizer_config.json USE_POLICY.md
```
`~/ColossalAI/examples/inference/llama$ colossalai run --nproc_per_node 1 llama_generation.py -m "~/llama8b/TensorRT-LLM/Meta-Llama-3-8B-Instruct" --max_length 80`
Output:
```
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/pipeline/schedule/_utils.py:19: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_register_pytree_node(OrderedDict, _odict_flatten, _odict_unflatten)
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/_pytree.py:300: UserWarning: <class 'collections.OrderedDict'> is already registered as pytree node. Overwriting the previous registration.
warnings.warn(
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/shardformer/layer/normalization.py:45: UserWarning: Please install apex from source (https://github.com/NVIDIA/apex) to use the fused layernorm kernel
warnings.warn("Please install apex from source (https://github.com/NVIDIA/apex) to use the fused layernorm kernel")
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/utils.py:96: UserWarning: [extension] The CUDA version on the system (12.4) does not match with the version (12.1) torch was compiled with. The mismatch is found in the minor version. As the APIs are compatible, we will allow compilation to proceed. If you encounter any issue when using the built kernel, please try to build it again with fully matched CUDA versions
warnings.warn(
[extension] Compiling the JIT inference_ops_cuda kernel during runtime now
Traceback (most recent call last):
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/cpp_extension.py", line 132, in load
op_kernel = self.import_op()
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/cpp_extension.py", line 61, in import_op
return importlib.import_module(self.prebuilt_import_path)
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'colossalai._C.inference_ops_cuda'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sw/ColossalAI/examples/inference/llama/llama_generation.py", line 8, in <module>
from colossalai.inference.config import InferenceConfig
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/__init__.py", line 2, in <module>
from .core import InferenceEngine
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/core/__init__.py", line 1, in <module>
from .engine import InferenceEngine
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/core/engine.py", line 23, in <module>
from colossalai.inference.modeling.policy import model_policy_map
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/modeling/policy/__init__.py", line 2, in <module>
from .nopadding_baichuan import NoPaddingBaichuanModelInferPolicy
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/modeling/policy/nopadding_baichuan.py", line 6, in <module>
from colossalai.inference.modeling.models.nopadding_baichuan import (
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/modeling/models/nopadding_baichuan.py", line 11, in <module>
from colossalai.inference.modeling.models.nopadding_llama import NopadLlamaMLP
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/modeling/models/nopadding_llama.py", line 35, in <module>
inference_ops = InferenceOpsLoader().load()
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/kernel_loader.py", line 83, in load
return usable_exts[0].load()
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/cpp_extension.py", line 136, in load
op_kernel = self.build_jit()
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/cuda_extension.py", line 86, in build_jit
op_kernel = load(
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1309, in load
return _jit_compile(
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1678, in _jit_compile
version = JIT_EXTENSION_VERSIONER.bump_version_if_changed(
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/_cpp_extension_versioner.py", line 45, in bump_version_if_changed
hash_value = hash_source_files(hash_value, source_files)
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/_cpp_extension_versioner.py", line 15, in hash_source_files
with open(filename) as file:
FileNotFoundError: [Errno 2] No such file or directory: '/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/pybind/inference/inference.cpp'
E0612 11:56:53.465000 140063917000512 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 1114939) of binary: /home/sw/anaconda3/envs/colossalai/bin/python
Traceback (most recent call last):
File "/home/sw/anaconda3/envs/colossalai/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
run(args)
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
elastic_launch(
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
llama_generation.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-06-12_11:56:53
host : sw-black
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 1114939)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Error: failed to run torchrun --nproc_per_node=1 --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 --master_port=29500 llama_generation.py -m ~/llama8b/TensorRT-LLM/Meta-Llama-3-8B-Instruct --max_length 80 on 127.0.0.1, is localhost: True, exception: Encountered a bad command exit code!
Command: 'cd /home/sw/ColossalAI/examples/inference/llama && export SHELL="/bin/bash" CONDA_EXE="/home/sw/anaconda3/bin/conda" LC_ADDRESS="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" PWD="/home/sw/ColossalAI/examples/inference/llama" LOGNAME="sw" XDG_SESSION_TYPE="tty" CONDA_PREFIX="/home/sw/anaconda3/envs/colossalai" MOTD_SHOWN="pam" HOME="/home/sw" LC_PAPER="en_US.UTF-8" LANG="en_US.UTF-8" LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:" CONDA_PROMPT_MODIFIER="(colossalai) " 
HF_HUB_ENABLE_HF_TRANSFER="1" HF_TOKEN="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" SSH_CONNECTION="192.168.178.223 56678 192.168.178.249 22" PYTORCH_CUDA_ALLOC_CONF="expandable_segments:True" LESSCLOSE="/usr/bin/lesspipe %s %s" XDG_SESSION_CLASS="user" LC_IDENTIFICATION="en_US.UTF-8" TERM="xterm-256color" LESSOPEN="| /usr/bin/lesspipe %s" USER="sw" CONDA_SHLVL="2" DISPLAY="localhost:10.0" SHLVL="1" LC_TELEPHONE="en_US.UTF-8" SYSTEMD_PAGER="less" LC_MEASUREMENT="en_US.UTF-8" XDG_SESSION_ID="49" CONDA_PYTHON_EXE="/home/sw/anaconda3/bin/python" XDG_RUNTIME_DIR="/run/user/1000" SSH_CLIENT="192.168.178.223 56678 22" CONDA_DEFAULT_ENV="colossalai" LC_TIME="en_US.UTF-8" GCC_COLORS="error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01" XDG_DATA_DIRS="/home/sw/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share" PATH="/home/sw/.local/bin:/home/sw/anaconda3/envs/colossalai/bin:/home/sw/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin" DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus" SSH_TTY="/dev/pts/5" CONDA_PREFIX_1="/home/sw/anaconda3" LC_NUMERIC="en_US.UTF-8" OLDPWD="/home/sw/ColossalAI/examples/inference" _="/home/sw/anaconda3/envs/colossalai/bin/colossalai" && torchrun --nproc_per_node=1 --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 --master_port=29500 llama_generation.py -m ~/llama8b/TensorRT-LLM/Meta-Llama-3-8B-Instruct --max_length 80'
Exit code: 1
Stdout: already printed
Stderr: already printed
====== Training on All Nodes =====
127.0.0.1: failure
====== Stopping All Nodes =====
127.0.0.1: finish
```
Than after doing this:
`cd ColossalAI`
`pip install .`
This is the error during same run:
```
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/pipeline/schedule/_utils.py:19: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_register_pytree_node(OrderedDict, _odict_flatten, _odict_unflatten)
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/_pytree.py:254: UserWarning: <class 'collections.OrderedDict'> is already registered as pytree node. Overwriting the previous registration.
warnings.warn(
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/shardformer/layer/normalization.py:45: UserWarning: Please install apex from source (https://github.com/NVIDIA/apex) to use the fused layernorm kernel
warnings.warn("Please install apex from source (https://github.com/NVIDIA/apex) to use the fused layernorm kernel")
/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/utils.py:96: UserWarning: [extension] The CUDA version on the system (12.4) does not match with the version (12.1) torch was compiled with. The mismatch is found in the minor version. As the APIs are compatible, we will allow compilation to proceed. If you encounter any issue when using the built kernel, please try to build it again with fully matched CUDA versions
warnings.warn(
[extension] Compiling the JIT inference_ops_cuda kernel during runtime now
Traceback (most recent call last):
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/cpp_extension.py", line 132, in load
op_kernel = self.import_op()
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/cpp_extension.py", line 61, in import_op
return importlib.import_module(self.prebuilt_import_path)
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'colossalai._C.inference_ops_cuda'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sw/ColossalAI/examples/inference/llama/llama_generation.py", line 8, in <module>
from colossalai.inference.config import InferenceConfig
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/__init__.py", line 2, in <module>
from .core import InferenceEngine
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/core/__init__.py", line 1, in <module>
from .engine import InferenceEngine
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/core/engine.py", line 23, in <module>
from colossalai.inference.modeling.policy import model_policy_map
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/modeling/policy/__init__.py", line 2, in <module>
from .nopadding_baichuan import NoPaddingBaichuanModelInferPolicy
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/modeling/policy/nopadding_baichuan.py", line 3, in <module>
from colossalai.inference.modeling.models.nopadding_baichuan import (
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/modeling/models/nopadding_baichuan.py", line 13, in <module>
from colossalai.inference.modeling.models.nopadding_llama import NopadLlamaMLP
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/inference/modeling/models/nopadding_llama.py", line 30, in <module>
inference_ops = InferenceOpsLoader().load()
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/kernel_loader.py", line 83, in load
return usable_exts[0].load()
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/cpp_extension.py", line 136, in load
op_kernel = self.build_jit()
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/colossalai/kernel/extensions/cuda_extension.py", line 88, in build_jit
op_kernel = load(
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1306, in load
return _jit_compile(
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1710, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1797, in _write_ninja_file_and_build_library
get_compiler_abi_compatibility_and_version(compiler)
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 359, in get_compiler_abi_compatibility_and_version
if not check_compiler_ok_for_platform(compiler):
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 312, in check_compiler_ok_for_platform
which = subprocess.check_output(['which', compiler], stderr=subprocess.STDOUT)
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['which', 'c++']' returned non-zero exit status 1.
[2024-06-12 12:05:03,961] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 1202233) of binary: /home/sw/anaconda3/envs/colossalai/bin/python
Traceback (most recent call last):
File "/home/sw/anaconda3/envs/colossalai/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/run.py", line 812, in main
run(args)
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/sw/anaconda3/envs/colossalai/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
llama_generation.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-06-12_12:05:03
host : sw-black
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 1202233)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Error: failed to run torchrun --nproc_per_node=1 --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 --master_port=29500 llama_generation.py -m ~/llama8b/TensorRT-LLM/Meta-Llama-3-8B-Instruct --max_length 80 on 127.0.0.1, is localhost: True, exception: Encountered a bad command exit code!
Command: 'cd /home/sw/ColossalAI/examples/inference/llama && export SHELL="/bin/bash" CONDA_EXE="/home/sw/anaconda3/bin/conda" LC_ADDRESS="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" PWD="/home/sw/ColossalAI/examples/inference/llama" LOGNAME="sw" XDG_SESSION_TYPE="tty" CONDA_PREFIX="/home/sw/anaconda3/envs/colossalai" MOTD_SHOWN="pam" HOME="/home/sw" LC_PAPER="en_US.UTF-8" LANG="en_US.UTF-8" LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:" CONDA_PROMPT_MODIFIER="(colossalai) " 
HF_HUB_ENABLE_HF_TRANSFER="1" HF_TOKEN="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" SSH_CONNECTION="192.168.178.223 50527 192.168.178.249 22" PYTORCH_CUDA_ALLOC_CONF="expandable_segments:True" LESSCLOSE="/usr/bin/lesspipe %s %s" XDG_SESSION_CLASS="user" LC_IDENTIFICATION="en_US.UTF-8" TERM="xterm-256color" LESSOPEN="| /usr/bin/lesspipe %s" USER="sw" CONDA_SHLVL="2" SHLVL="1" LC_TELEPHONE="en_US.UTF-8" SYSTEMD_PAGER="less" LC_MEASUREMENT="en_US.UTF-8" XDG_SESSION_ID="52" CONDA_PYTHON_EXE="/home/sw/anaconda3/bin/python" XDG_RUNTIME_DIR="/run/user/1000" SSH_CLIENT="192.168.178.223 50527 22" CONDA_DEFAULT_ENV="colossalai" LC_TIME="en_US.UTF-8" GCC_COLORS="error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01" XDG_DATA_DIRS="/home/sw/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share" PATH="/home/sw/.local/bin:/home/sw/anaconda3/envs/colossalai/bin:/home/sw/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin" DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus" SSH_TTY="/dev/pts/5" CONDA_PREFIX_1="/home/sw/anaconda3" LC_NUMERIC="en_US.UTF-8" _="/home/sw/anaconda3/envs/colossalai/bin/colossalai" OLDPWD="/home/sw/ColossalAI/examples/inference" && torchrun --nproc_per_node=1 --nnodes=1 --node_rank=0 --master_addr=127.0.0.1 --master_port=29500 llama_generation.py -m ~/llama8b/TensorRT-LLM/Meta-Llama-3-8B-Instruct --max_length 80'
Exit code: 1
Stdout: already printed
Stderr: already printed
====== Training on All Nodes =====
127.0.0.1: failure
====== Stopping All Nodes =====
127.0.0.1: finish
```
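The second traceback bottoms out in `Command '['which', 'c++']' returned non-zero exit status 1`: torch's JIT extension builder shells out to `which c++` before compiling, so the build dies when no C++ compiler is on `PATH`. A quick pre-flight check from Python (illustrative snippet, not part of ColossalAI; the package names are assumptions for Debian/RHEL-style systems):

```python
import shutil

# torch.utils.cpp_extension effectively runs `which c++` before a JIT build
compiler = shutil.which("c++") or shutil.which("g++")
if compiler is None:
    print("No C++ compiler on PATH - install build-essential (Debian/Ubuntu) "
          "or gcc-c++ (RHEL/Alma) and retry the JIT build.")
else:
    print(f"C++ compiler found at {compiler}")
```

The earlier `FileNotFoundError` for `inference.cpp` is a separate problem (the kernel sources were missing from the installed package), which is presumably why reinstalling with `pip install .` changed the error.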
### Environment
Python 3.10.14
Torch 2.3.1+cu121
CUDA Version: 12.4
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0 | closed | 2024-06-12T10:02:09Z | 2024-06-19T07:06:49Z | https://github.com/hpcaitech/ColossalAI/issues/5805 | [
"bug"
] | teis-e | 2 |
Sanster/IOPaint | pytorch | 187 | ImportError: cannot import name 'CompVisVDenoiser' from 'k_diffusion.external' | Getting this import error after installing with pip
`ImportError: cannot import name 'CompVisVDenoiser' from 'k_diffusion.external' (/Users/devdesign/.asdf/installs/python/3.10.6/lib/python3.10/site-packages/k_diffusion/external.py)`
Running on an M1 Mac. I installed lama a few months back and it worked fine on the same machine, but when reinstalling it now I'm getting this import error. Any fix?
unit8co/darts | data-science | 2,154 | [BUG] TFT forecast negative values although there are no negative target in training data | **Describe the bug**
TFT forecasts negative values although there are no negative targets in the training data.
**To Reproduce**
Deterministic forecast, with MAE as metric.
Targets are first scaled with `darts.dataprocessing.transformers.Scaler`.
Best params: {'input_chunk_length': 22, 'output_chunk_length': 10, 'hidden_size': 256, 'lstm_layers': 3, 'num_attention_heads': 32, 'full_attention': False, 'dropout': 0.07764449837435405, 'lr': 0.0005731156347991463}
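Not an answer to the tuning question, but a common workaround when forecasts must stay non-negative: model the targets in log space, so the inverse transform maps any real-valued network output back to a (near-)positive value. A minimal numpy sketch of the transform pair; the `eps` offset is an assumption to handle zero targets, and in darts this pair could be wrapped in a `darts.dataprocessing.transformers.InvertibleMapper` chained before the `Scaler` (check the exact API against your darts version):

```python
import numpy as np

eps = 1e-6                             # assumed offset to handle zero targets
y = np.array([0.0, 3.0, 12.5, 7.2])   # non-negative training targets

y_log = np.log(y + eps)                # forward transform, applied before scaling
preds = np.exp(y_log) - eps            # inverse transform, applied to forecasts

# exp() of any real-valued model output is > 0, so inverse-transformed
# forecasts are bounded below by -eps, i.e. effectively non-negative
assert np.allclose(preds, y)
```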
**System (please complete the following information):**
- Python version: [3.9]
- darts version [0.27.1]
| closed | 2024-01-11T06:41:34Z | 2024-01-11T07:35:05Z | https://github.com/unit8co/darts/issues/2154 | [
"bug",
"triage"
] | xlsi | 0 |
pandas-dev/pandas | python | 60,905 | DOC: Should `pandas.api.types.is_dtype_equal()` be documented? | ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
None: It's not there!
### Documentation problem
`pandas.api.types.is_dtype_equal()` is not documented, but it is used in `tests/extension/decimal/array.py`, which serves as an example `ExtensionArray` implementation.
### Suggested fix for documentation
Unsure if we want this in the public API or not, but if so, we ought to fix it.
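For context, a quick illustration of what the undocumented helper does (behaviour sketched against pandas 2.x; it may warn or change in later versions, which is part of why the public-API question matters):

```python
import pandas as pd
from pandas.api.types import is_dtype_equal

# compares dtypes, accepting dtype objects or their string aliases
print(is_dtype_equal("int64", "int64"))               # True
print(is_dtype_equal("int64", "Int64"))               # False: numpy int64 vs nullable Int64
print(is_dtype_equal(pd.Series([1.5]).dtype, float))  # True
```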
See https://github.com/pandas-dev/pandas-stubs/pull/1112 for some discussion | open | 2025-02-10T16:57:57Z | 2025-02-27T14:52:56Z | https://github.com/pandas-dev/pandas/issues/60905 | [
"Docs",
"Dtype Conversions"
] | Dr-Irv | 5 |
plotly/dash-table | plotly | 313 | Ability to export data as excel or csv | - For Excel files, only XLSX (not XLS) will be supported
- Only the data will be exported, formatting will not be exported
- The export will include the data in the current view. For example, if columns are hidden, sorted, or filtered, then the exported file will display the current view.
- Export will not protect users from “CSV Injection” attacks (https://www.owasp.org/index.php/CSV_Injection)
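For readers unfamiliar with the attack mentioned above: a cell whose text begins with `=`, `+`, `-`, or `@` can be executed as a formula by spreadsheet software when the CSV is opened. A common client-side mitigation, which per this issue dash-table will deliberately not apply, is sketched below (hypothetical helper, not dash-table API):

```python
RISKY_PREFIXES = ("=", "+", "-", "@", "\t", "\r")

def defuse_cell(value):
    """Prefix formula-like cells with a quote so spreadsheets treat them as text."""
    text = str(value)
    return "'" + text if text.startswith(RISKY_PREFIXES) else text

print(defuse_cell('=HYPERLINK("http://evil", "click")'))  # prefixed with a quote
print(defuse_cell("plain text"))                          # unchanged
```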
Exporting Excel files may require a large 3rd party open source library. This library is too large to include in the default version of dash-table, so we will need to engineer dynamic JavaScript module loading as part of this requirement.
We will consider the UI for this feature in a separate issue (we need to design a UI needs encompasses all of the new features that we're adding). | closed | 2018-12-19T22:02:52Z | 2019-08-08T20:28:38Z | https://github.com/plotly/dash-table/issues/313 | [
"dash-type-enhancement",
"dash-meta-sponsored",
"size: 8"
] | chriddyp | 8 |
pytest-dev/pytest-cov | pytest | 457 | Syntax error when setting up pytest-cov | # Summary
The latest version of pytest-cov (here: https://pypi.org/project/pytest-cov/) mentions that it supports Python 3.5 and up.
However, when running setup.py in a bare minimum environment, I get a syntax error.
## Expected vs actual result
Here's an example : https://github.com/martinlanton/tox_template_project/runs/2141979832?check_suite_focus=true
Considering the simplicity of this package, I would expect it to run smoothly; however, it only seems to run on Python 3.6 and up.
What am I missing?
## Code
https://github.com/martinlanton/tox_template_project
Thanks :) | closed | 2021-03-18T17:27:58Z | 2021-03-18T19:29:55Z | https://github.com/pytest-dev/pytest-cov/issues/457 | [] | martinlanton | 5 |
flasgger/flasgger | api | 60 | Add support for method based path declarations in docstring. | REF: https://github.com/gangverk/flask-swagger/pull/32/files
If `get, post, put, patch, delete, options, head` keys are found in the docstring, swaggify based on those method sections.
```python
@app.route('/user', methods=["GET", "PUT"])
def user():
"""
get:
Get a user
Use this endpoint to get a user object
---
tags:
- User API
definitions:
- schema:
id: User
properties:
name:
type: string
description: the user's name
age:
type: integer
description: the user's age
responses:
200:
description: user obtained
schema:
$ref: "#/definitions/User"
put:
Update a user
Use this endpoint to update a user
---
tags:
- User API
parameters:
- name: name
in: formData
type: string
required: true
- name: age
in: formData
type: integer
required: true
responses:
202:
description: user updated
schema:
$ref: "#/definitions/User"
"""
pass
```
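A minimal sketch of the parsing this feature implies, splitting a view docstring into per-method sections keyed by the HTTP verbs (hypothetical helper, not flasgger's actual implementation):

```python
HTTP_METHODS = ("get", "post", "put", "patch", "delete", "options", "head")

def split_by_method(docstring):
    """Return {method: section lines} for docstrings with per-method blocks."""
    sections, current = {}, None
    for line in docstring.splitlines():
        key = line.strip().rstrip(":").lower()
        if key in HTTP_METHODS:
            current = key
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return sections

doc = """
get:
    Get a user
put:
    Update a user
"""
print(sorted(split_by_method(doc)))  # -> ['get', 'put']
```

Each per-method section could then go through the existing `---` summary/YAML split unchanged, which is what keeping compatibility with the current examples would require.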
IMPORTANT: Keep compatibility with current examples/* | open | 2017-03-28T00:36:12Z | 2017-10-02T16:31:54Z | https://github.com/flasgger/flasgger/issues/60 | [
"hacktoberfest"
] | rochacbruno | 0 |
quantmind/pulsar | asyncio | 87 | Embedded lua | Embedded `lua` is needed by the pulsar data store application.
Lua 5.2.3 source has been added to the `extensions` module, but compilation is not working well across Windows, Mac, and Linux.
In addition, the `lua_cjson` module has also been added.
Tests and documentation are still to do.
| closed | 2014-01-09T10:11:43Z | 2014-08-13T20:52:58Z | https://github.com/quantmind/pulsar/issues/87 | [] | lsbardel | 1 |
amisadmin/fastapi-amis-admin | sqlalchemy | 93 | Need help | Hello,
I'm a front-end developer who recently started learning FastAPI. I'm just getting started and haven't learned amis yet, but I plan to. My main goal is to quickly build a full front-end/back-end application.
Over the last couple of days I noticed that there are very few admin frameworks written with FastAPI, and your fastapi-amis-admin looks great, but unfortunately there are no video tutorials. I've read the documentation, but at my current level I still can't get a demo running from it, so I'm hoping for your guidance. Of course, I'd pay you something to compensate you for your time.
My phone number is 13009476592 (also my WeChat).
Looking forward to your reply.
Thank you!!! | closed | 2023-04-20T03:32:53Z | 2023-06-13T06:49:26Z | https://github.com/amisadmin/fastapi-amis-admin/issues/93 | [] | STM32King | 1 |
explosion/spaCy | machine-learning | 13,392 | Unable to finetune transformer based ner model after initial tuning | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
1. Create a transformer NER model.
2. Train it on data using the cfg and CLI, which auto-saves the model.
3. Create a new cfg file that points to your existing model.
4. Try triggering the training using the CLI.
5. You will get a missing `config.json` error.
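For reference, the usual way a second config points at an already-trained pipeline is by sourcing its components rather than re-initializing them; a sketch with hypothetical paths (verify the key names against your spaCy version):

```ini
[components.transformer]
source = "training/model-best"

[components.ner]
source = "training/model-best"
```

If `[components.transformer.model]` is instead left pointing at a Hugging Face model name or path, spaCy will look for a Hugging Face-style `config.json` there, which matches the error described above.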
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
- **spaCy version:** 3.7.2
- **Platform:** Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- **Python version:** 3.10.13
| closed | 2024-03-23T15:22:01Z | 2024-03-25T12:43:33Z | https://github.com/explosion/spaCy/issues/13392 | [
"training",
"feat / ner",
"feat / transformer"
] | jlustgarten | 0 |
mlflow/mlflow | machine-learning | 14,809 | [BUG] mlflow gc fails to delete artifacts with MinIO due to double trailing slashes in prefix | ### Issues Policy acknowledgement
- [x] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Other
### MLflow version
- Client: 2.20.3
- Tracking server: 2.20.3
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: AlmaLinux 9.4
- **Python version**: 3.12.9
- **yarn version, if running the dev UI**: -
### Describe the problem
The `mlflow gc` command fails to delete artifacts when MinIO is used as the artifacts store and artifact proxying is enabled in the MLflow tracking server.
It fails while trying to list objects in the artifacts store and throws a `botocore.exceptions.ClientError: An error occurred (XMinioInvalidObjectName) when calling the ListObjectsV2 operation: Object name contains unsupported characters.`.
This happens because MLflow tries to list artifacts using a prefix with two trailing slashes (see tracking server logs and the HTTP request below). This is considered to be an invalid request by MinIO since [it is built upon a filesystem-based storage backend](https://github.com/minio/minio/issues/5874) which cannot use the `/` character for anything but the file path separator.
The following change to the `mlflow/store/artifact/s3_artifact_repo.py` file from MLflow `2.20.3` fixes this problem by simply stripping the extra slashes from the destination path, which then gets used to construct the prefix:
```
diff --git a/mlflow/store/artifact/s3_artifact_repo.py b/mlflow/store/artifact/s3_artifact_repo.py
index c6ae89007..c805726d5 100644
--- a/mlflow/store/artifact/s3_artifact_repo.py
+++ b/mlflow/store/artifact/s3_artifact_repo.py
@@ -203,6 +203,7 @@ class S3ArtifactRepository(ArtifactRepository, MultipartUploadMixin):
dest_path = artifact_path
if path:
dest_path = posixpath.join(dest_path, path)
+ dest_path = dest_path.rstrip("/") if dest_path else ""
infos = []
prefix = dest_path + "/" if dest_path else ""
s3_client = self._get_s3_client()
@@ -249,7 +250,7 @@ class S3ArtifactRepository(ArtifactRepository, MultipartUploadMixin):
(bucket, dest_path) = self.parse_s3_compliant_uri(self.artifact_uri)
if artifact_path:
dest_path = posixpath.join(dest_path, artifact_path)
-
+ dest_path = dest_path.rstrip("/") if dest_path else ""
prefix = dest_path + "/" if dest_path else ""
s3_client = self._get_s3_client()
paginator = s3_client.get_paginator("list_objects_v2")
```
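For illustration, the double slash can be reproduced with plain `posixpath` string handling (a sketch of the prefix construction above, not MLflow code; the artifact path matches the DELETE request in the tracking-server logs):

```python
import posixpath

# artifact_path as received by the handler -- note the trailing slash:
artifact_path = "2/f3afe5af593c4a7d983af85cf01b59a8/artifacts/"

# Root dest_path assumed empty here for brevity:
dest_path = posixpath.join("", artifact_path)
prefix = dest_path + "/" if dest_path else ""
print(prefix)  # '2/f3afe5af593c4a7d983af85cf01b59a8/artifacts//' -- rejected by MinIO

# With the patch's rstrip("/"), the prefix is well-formed:
fixed = dest_path.rstrip("/") + "/" if dest_path else ""
print(fixed)   # '2/f3afe5af593c4a7d983af85cf01b59a8/artifacts/'
```

Since `posixpath.join` preserves a trailing slash on its last component, any artifact path ending in `/` produces the invalid `//` prefix unless it is stripped first.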
### Tracking information
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```shell
REPLACE_ME
```
### Code to reproduce issue
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Stack trace
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Other info / logs
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
# Tracking server logs
2025/03/03 10:14:02 ERROR mlflow.server: Exception on /api/2.0/mlflow-artifacts/artifacts/2/f3afe5af593c4a7d983af85cf01b59a8/artifacts/ [DELETE]
Traceback (most recent call last):
File "/opt/bitnami/python/lib/python3.12/site-packages/flask/app.py", line 1511, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/flask/app.py", line 919, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/flask/app.py", line 917, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/flask/app.py", line 902, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/mlflow/server/handlers.py", line 573, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/mlflow/server/handlers.py", line 595, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/mlflow/server/handlers.py", line 2223, in _delete_artifact_mlflow_artifacts
artifact_repo.delete_artifacts(artifact_path)
File "/opt/bitnami/python/lib/python3.12/site-packages/mlflow/store/artifact/s3_artifact_repo.py", line 257, in delete_artifacts
for result in results:
^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/botocore/paginate.py", line 269, in __iter__
response = self._make_request(current_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/botocore/paginate.py", line 357, in _make_request
return self._method(**current_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/botocore/client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/bitnami/python/lib/python3.12/site-packages/botocore/client.py", line 1023, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (XMinioInvalidObjectName) when calling the ListObjectsV2 operation: Object name contains unsupported characters.
```
```
# Actual request issued by MLflow tracking server to MinIO
GET /mlflow?list-type=2&prefix=2%2Ff3afe5af593c4a7d983af85cf01b59a8%2Fartifacts%2F%2F&encoding-type=url HTTP/1.1
Host: minio.minio.svc.cluster.local
Accept-Encoding: identity
User-Agent: Boto3/1.37.1 md/Botocore#1.37.1 ua/2.0 os/linux#5.14.0-427.42.1.el9_4.x86_64 md/arch#x86_64 lang/python#3.12.9 md/pyimpl#CPython cfg/retry-mode#legacy Botocore/1.37.1
X-Amz-Date: 20250303T101402Z
X-Amz-Content-SHA256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Authorization: ...
amz-sdk-invocation-id: 9e4acff3-392a-4718-866f-ef841d87ce08
amz-sdk-request: attempt=1
HTTP/1.1 400 Bad Request
Accept-Ranges: bytes
Content-Length: 332
Content-Type: application/xml
Server: MinIO
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
Vary: Accept-Encoding
X-Amz-Id-2: 28314b1bfe2eff11603ecd4e0aa18484bd21af7e399acd2054bf1284c18135a2
X-Amz-Request-Id: 18294367618E608A
X-Content-Type-Options: nosniff
X-Ratelimit-Limit: 2157
X-Ratelimit-Remaining: 2157
X-Xss-Protection: 1; mode=block
Date: Mon, 03 Mar 2025 10:14:02 GMT
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>XMinioInvalidObjectName</Code><Message>Object name contains unsupported characters.</Message><BucketName>mlflow</BucketName><Resource>/mlflow</Resource><RequestId>18294367618E608A</RequestId><HostId>28314b1bfe2eff11603ecd4e0aa18484bd21af7e399acd2054bf1284c18135a2</HostId></Error>
```
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [x] `area/server-infra`: MLflow Tracking server backend
- [x] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations | open | 2025-03-03T13:57:08Z | 2025-03-18T10:35:45Z | https://github.com/mlflow/mlflow/issues/14809 | [
"bug",
"good first issue",
"help wanted",
"area/tracking",
"area/server-infra"
] | NeonSludge | 5 |
napari/napari | numpy | 7,708 | [test-bot] pip install --pre is failing | The --pre Test workflow failed on 2025-03-16 01:12 UTC
The most recent failing test was on windows-latest py3.13 pyqt6
with commit: cb6f6e6157990806a53f1c58e31e9e7aa4a4966e
Full run: https://github.com/napari/napari/actions/runs/13878041347
(This post will be updated if another test fails, as long as this issue remains open.)
| closed | 2025-03-16T00:34:16Z | 2025-03-16T01:38:15Z | https://github.com/napari/napari/issues/7708 | [
"bug"
] | github-actions[bot] | 2 |
AirtestProject/Airtest | automation | 304 | Is there a way to run chrome on mobile? |
**Describe the bug**
I know how to run the Chrome browser with Selenium on a PC.
Do you also provide a driver to run Android's Chrome browser on a mobile device?
PC: driver = WebChrome() | closed | 2019-03-11T01:59:48Z | 2019-03-19T05:32:29Z | https://github.com/AirtestProject/Airtest/issues/304 | [] | JJunM | 0 |
quantumlib/Cirq | api | 6,811 | Handling Extreme Angle Values | **Description of the issue**
When passing extreme values for the angles, e.g.,:
```
\theta = 1.4645918875615231
\phi = 5.187848314319592e+49
\lambda = 1.4645918875615231
```
to `cirq.MatrixGate`, Cirq throws the following error:
```
ValueError: Not a unitary matrix: [[ 0.74364134+0.j -0.07087262-0.66481172j]
[ 0.66467076-0.07218264j 0.73929459-0.08028672j]]
```
Same happens when passing `np.inf` or `-np.inf` as angles.
**Expected Behavior**
The matrix should remain unitary even for large values of \phi and other extreme angles, or Cirq should gracefully handle the instability without throwing a ValueError.
**How to reproduce the issue**
```
import numpy as np
import cirq

def u_gate_matrix(theta, phi, lam):
return np.array([
[np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
[np.exp(1j * phi) * np.sin(theta / 2), np.exp(1j * (phi + lam)) * np.cos(theta / 2)]
])
qubit = cirq.NamedQubit("q0")
circuit = cirq.Circuit()
cirq.MatrixGate(u_gate_matrix(1.4645918875615231, 5.187848314319592e+49, 1.4645918875615231)).on(qubit)
```
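One numerical effect that plausibly explains the failure (an illustration with standard double-precision floats, independent of Cirq internals): at this magnitude, `phi` cannot absorb `lam` at all, so the phase of the bottom-right entry, `exp(1j*(phi + lam))`, silently loses `lam`, while the off-diagonal entries still carry `exp(1j*lam)`, making the four entries mutually inconsistent:

```python
import cmath

phi = 5.187848314319592e+49
lam = 1.4645918875615231

# The spacing between adjacent float64 values near 5e49 is about 1e34,
# so adding lam (~1.46) changes nothing:
print(phi + lam == phi)  # True

# Hence the phase of the (1, 1) matrix entry collapses to exp(1j*phi):
print(cmath.exp(1j * (phi + lam)) == cmath.exp(1j * phi))  # True
```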
**Cirq version**
You can get the cirq version by printing `cirq.__version__`. From the command line:
```
1.4.1
```
| closed | 2024-11-30T19:49:18Z | 2024-12-11T18:58:59Z | https://github.com/quantumlib/Cirq/issues/6811 | [
"kind/bug-report"
] | vili-1 | 1 |
gevent/gevent | asyncio | 1,825 | Broken installation for ubuntu16 (docker) + python3.10 | * gevent version: gevent-21.8.0, from pip
* greenlet version: greenlet-1.1.2 from pip
* Python version: Python 3.10.0a6 from ppa:deadsnakes/ppa
* Operating System:
```
Linux abf2e55ef89e 5.4.0-89-generic #100~18.04.1-Ubuntu SMP Wed Sep 29 10:59:42 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Distributor ID: Ubuntu
Description: Ubuntu 16.04.6 LTS
Release: 16.04
Codename: xenial
```
### Description:
Got errors about wrong greenlet size:
```python-traceback
<frozen importlib._bootstrap>:241: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 152 from C header, got 160 from PyObject
<frozen importlib._bootstrap>:241: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 152 from C header, got 160 from PyObject
<frozen importlib._bootstrap>:241: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 152 from C header, got 160 from PyObject
<frozen importlib._bootstrap>:241: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 152 from C header, got 160 from PyObject
```
All versions are correct and should work together. I run the same setup on Ubuntu 18 (a real machine, not Docker) without any issues.
### What I've run:
```
FROM ubuntu:xenial
ENV ROOT_PATH="/builds/project" \
BUILD_DIR="/build/project" \
VIRTUAL_ENV="/tv"
RUN mkdir -p $BUILD_DIR
WORKDIR $BUILD_DIR
RUN apt-get update \
&& apt-get install -y software-properties-common curl \
&& add-apt-repository ppa:deadsnakes/ppa \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
build-essential \
libssl-dev \
libffi-dev \
python3.10-dev \
python3.10-venv \
&& curl -s -L https://bootstrap.pypa.io/get-pip.py | python3.10
RUN echo "cache buster: 1"
RUN python3.10 -m venv . \
&& . bin/activate \
&& pip install --upgrade pip wheel \
&& pip install greenlet==1.1.2 gevent==21.8.0 \
&& pip freeze \
&& python -c "import gevent; print(gevent.getcurrent())"
RUN python3.10 -V && lsb_release -a && uname -a
```
Full log - https://gist.github.com/bogdandm/97c16bce2803076bdf3be1ba07a757a7
If I disable the use of .whl files with `pip install greenlet==1.1.2 gevent==21.8.0 --no-binary :all:`, everything starts working fine. | closed | 2021-10-27T11:36:24Z | 2021-12-11T21:25:00Z | https://github.com/gevent/gevent/issues/1825 | [] | bogdandm | 1 |
vaexio/vaex | data-science | 2,297 | [BUG-REPORT] Rename twice then concat makes error | **Description**
I have 2 tables as below:

Column name: Field0, Field1, Field2

Column name: Field0, Field1, Field2, Field3
for some logic app reason,
I rename the table 2 columns to Field3, Field4, Field5, and Field6, but Field3 already exists so i can't rename Field0 -> Field3, so first i rename Field0 to 'hField3h' (attach 'h 'at prefix and suffix), do the same thing for the rest, finally then rename again all column by removing 'h' character ('hField3h' => 'Field3')
after that, i do 'concat' 2 tables and get an error when print result data-frame

The code is bellow u can test it by yourself
```python
import vaex
df1 = vaex.open("C:\\Users\\Admin\\Documents\\hoai13\\hoai1.hdf5")
df2 = vaex.open("C:\\Users\\Admin\\Documents\\hoai13\\hoai2.hdf5")
print(df1)
print(df2)
df2.rename("Field0", "hField3h")
df2.rename("Field1", "hField4h")
df2.rename("Field2", "hField5h")
df2.rename("Field3", "hField6h")
df2.rename("hField3h", "Field3")
df2.rename("hField4h", "Field4")
df2.rename("hField5h", "Field5")
df2.rename("hField6h", "Field6")
print(df2)
df3 = vaex.concat([df2, df1])
print(df3)
```
HDF5 files:
[data.zip](https://github.com/vaexio/vaex/files/10172649/data.zip)
| open | 2022-12-07T03:58:35Z | 2023-01-09T02:29:41Z | https://github.com/vaexio/vaex/issues/2297 | [] | bls-lehoai | 1 |
Gozargah/Marzban | api | 1,008 | Shorten the subscription link | Please shorten the subscription link a bit, if possible, so that it is easier for users to copy. For example, generate a dedicated UUID4 for each user just for the subscription link, which would be shorter than the current form.
Current form:
https://sub.XXX.com/subset/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJNaXRyYSIsImFjY2VzcyI6InN1YnNjcmlwdGlvbiIsImlhdCI6MTcxNjYxNDE1M30.T2xx3eFoWaIiZsgrZ4q6Hg7RyVh1OxDk3uaMoiZJuUk
Proposed form:
https://sub.XXX.com/subset/cb8b6214-1b69-4759-a872-0ac0ff06e353 | closed | 2024-05-25T05:44:10Z | 2024-05-25T13:44:31Z | https://github.com/Gozargah/Marzban/issues/1008 | [
"Duplicate"
] | Pezhman5252 | 1 |
iterative/dvc | data-science | 9,778 | Diff staged data similar to `git diff --cached` | With `git diff --cached`, one could check the staged data. This is useful before commiting the data. Currently, `dvc diff` does not support showing staged data in a similar way. It only supports diffing between two git commits, which means you have to git commit first before checking what's actually committed.
It would be useful if dvc could implement something like `git diff --cached`.
Context: https://discordapp.com/channels/485586884165107732/563406153334128681/1134156428077191198 | open | 2023-07-28T13:33:20Z | 2023-07-31T14:12:05Z | https://github.com/iterative/dvc/issues/9778 | [
"feature request",
"p3-nice-to-have",
"A: status"
] | YifuTao | 0 |
dunossauro/fastapi-do-zero | sqlalchemy | 274 | Starting the course, better late than never | https://github.com/Luiz-ROCampos/fast_zero.git | closed | 2025-01-05T18:55:48Z | 2025-02-02T10:03:05Z | https://github.com/dunossauro/fastapi-do-zero/issues/274 | [] | Luiz-ROCampos | 2 |
jacobgil/pytorch-grad-cam | computer-vision | 194 | ScoreCAM empty cam_per_layer | In ScoreCAM implementation, if we set target_layers, we get warning "Warning: You are using ScoreCAM with target layers, however ScoreCAM will ignore them". But when we call it with target_layers=[], the resulting cam_per_layer is of course empty, and this raises error with aggregate_multi_layers(cam_per_layer).
| closed | 2022-01-13T16:01:54Z | 2022-04-14T09:15:28Z | https://github.com/jacobgil/pytorch-grad-cam/issues/194 | [] | ericotjo001 | 3 |
SciTools/cartopy | matplotlib | 2,141 | Dataset names for NaturalEarthFeature() are not discoverable, implicitly excludes raster datasets | ### Description
The `NaturalEarthFeature()` interface takes a name for the dataset field. These names are munged from the filenames on the [Natural Earth Data](https://www.naturalearthdata.com/) website. These names are not listed anywhere, nor is the way they are built into URLs described. The documentation simply gives an example without explanation:
> name:
>
> The name of the dataset, e.g. ‘admin_0_boundary_lines_land’.
>
>
When an incorrect name is requested, `urllib` raises an `HTTPError`. The computed URL is not presented. This leaves the user with no way to diagnose the mistake they have made. The function definition contains the following doc string, which contains useful information that should be considered for inclusion in the user documentation:
"""
Return the path to the requested natural earth shapefile,
downloading and unzipping if necessary.
To identify valid components for this function, either browse
NaturalEarthData.com, or if you know what you are looking for, go to
https://github.com/nvkelso/natural-earth-vector/tree/master/zips to
see the actual files which will be downloaded.
Note
----
Some of the Natural Earth shapefiles have special features which are
described in the name. For example, the 110m resolution
"admin_0_countries" data also has a sibling shapefile called
"admin_0_countries_lakes" which excludes lakes in the country
outlines. For details of what is available refer to the Natural Earth
website, and look at the "download" link target to identify
appropriate names.
"""
This still does not explicitly describe how the URL is built, and so the user must still take a guess at the correct input. The `NEShpDownloader` class contains the following code:
```python
FORMAT_KEYS = ('config', 'resolution', 'category', 'name')
# Define the NaturalEarth URL template. Shapefiles are hosted on AWS since
# 2021: https://github.com/nvkelso/natural-earth-vector/issues/445
_NE_URL_TEMPLATE = ('https://naturalearth.s3.amazonaws.com/'
'{resolution}_{category}/ne_{resolution}_{name}.zip')
```
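Concretely, the template turns a `(resolution, category, name)` triple into a URL like this (plain string formatting mirroring the code quoted above):

```python
NE_URL_TEMPLATE = ("https://naturalearth.s3.amazonaws.com/"
                   "{resolution}_{category}/ne_{resolution}_{name}.zip")

def ne_url(resolution: str, category: str, name: str) -> str:
    # Note the hard-coded "ne_" prefix: any upstream file not named
    # "ne_*.zip" is unreachable through this scheme.
    return NE_URL_TEMPLATE.format(resolution=resolution,
                                  category=category, name=name)

print(ne_url("110m", "cultural", "admin_0_boundary_lines_land"))
# https://naturalearth.s3.amazonaws.com/110m_cultural/ne_110m_admin_0_boundary_lines_land.zip
```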
This scheme implicitly excludes some of the physical and cultural datasets:
```
110m_cultural/110m_cultural.zip
50m_physical/50m_physical.zip
110m_physical/110m_physical.zip
10m_physical/10m_physical.zip
10m_cultural/10m_cultural.zip
```
It also implicitly excludes all of the raster datasets:
```
10m_raster/GRAY_HR_SR.zip
10m_raster/GRAY_HR_SR_OB.zip
10m_raster/GRAY_HR_SR_OB_DR.zip
10m_raster/GRAY_HR_SR_W.zip
10m_raster/GRAY_LR_SR.zip
10m_raster/GRAY_LR_SR_OB.zip
10m_raster/GRAY_LR_SR_OB_DR.zip
10m_raster/GRAY_LR_SR_W.zip
10m_raster/HYP_HR.zip
10m_raster/HYP_HR_SR.zip
10m_raster/HYP_HR_SR_OB_DR.zip
10m_raster/HYP_HR_SR_W.zip
10m_raster/HYP_HR_SR_W_DR.zip
10m_raster/HYP_LR.zip
10m_raster/HYP_LR_SR.zip
10m_raster/HYP_LR_SR_OB_DR.zip
10m_raster/HYP_LR_SR_W.zip
10m_raster/HYP_LR_SR_W_DR.zip
10m_raster/NE1_HR_LC.zip
10m_raster/NE1_HR_LC_SR.zip
10m_raster/NE1_HR_LC_SR_W.zip
10m_raster/NE1_HR_LC_SR_W_DR.zip
10m_raster/NE1_LR_LC.zip
10m_raster/NE1_LR_LC_SR.zip
10m_raster/NE1_LR_LC_SR_W.zip
10m_raster/NE1_LR_LC_SR_W_DR.zip
10m_raster/NE2_HR_LC.zip
10m_raster/NE2_HR_LC_SR.zip
10m_raster/NE2_HR_LC_SR_W.zip
10m_raster/NE2_HR_LC_SR_W_DR.zip
10m_raster/NE2_LR_LC.zip
10m_raster/NE2_LR_LC_SR.zip
10m_raster/NE2_LR_LC_SR_W.zip
10m_raster/NE2_LR_LC_SR_W_DR.zip
10m_raster/OB_LR.zip
10m_raster/SR_HR.zip
10m_raster/SR_LR.zip
50m_raster/BATH_50M.zip
50m_raster/GRAY_50M_SR.zip
50m_raster/GRAY_50M_SR_OB.zip
50m_raster/GRAY_50M_SR_W.zip
50m_raster/HYP_50M_SR.zip
50m_raster/HYP_50M_SR_W.zip
50m_raster/MSR_50M.zip
50m_raster/NE1_50M_LC_SR.zip
50m_raster/NE1_50M_LC_SR_W.zip
50m_raster/NE1_50M_SR.zip
50m_raster/NE1_50M_SR_W.zip
50m_raster/NE2_50M_LC_SR.zip
50m_raster/NE2_50M_LC_SR_W.zip
50m_raster/NE2_50M_SR.zip
50m_raster/NE2_50M_SR_W.zip
50m_raster/OB_50M.zip
50m_raster/PRISMA_SR_50M.zip
50m_raster/SR_50M.zip
```
I'm not sure if this is intentional or not, but it is confusing as heck. I would certainly like to use the raster datasets, but because the 'ne_' prefix is hard-coded into the URL builder, there is no way to get at them. In any event, here is a listing of all the datasets that can be accessed with the current interface:
[ne_names.txt](https://github.com/SciTools/cartopy/files/10856637/ne_names.txt)
Here is a listing of all the Natural Earth datasets that are actually available:
[naturalearth.txt](https://github.com/SciTools/cartopy/files/10856578/naturalearth.txt)
I would recommend including this list somewhere in the user documentation. It would also be nice if the interface were updated to make it possible to access the raster datasets. | open | 2023-03-01T02:46:12Z | 2023-12-23T12:53:37Z | https://github.com/SciTools/cartopy/issues/2141 | [] | ryneches | 8 |
scikit-optimize/scikit-optimize | scikit-learn | 351 | Optimizer should run with `acq_optimizer="sampling"` if base_estimator is a tree-based method. | closed | 2017-04-11T19:20:19Z | 2017-05-02T19:25:05Z | https://github.com/scikit-optimize/scikit-optimize/issues/351 | [
"Easy"
] | MechCoder | 0 | |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,453 | [Feature Request]: Keep original file name for img-img | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
There doesn't seem to be an option to save img-img generations with their original file names. This is pretty annoying when you're working with hundreds of files and need them named just like the originals (e.g. game textures); it takes a lot of extra time to go into the output folder and rename them.
For batch processing you do have this option, which is great. However, sometimes I want to just write individual prompts, though I'm less likely to do so because of the tedium of changing filenames all the time.
### Proposed workflow
checkbox
### Additional information
_No response_ | open | 2024-09-02T01:34:05Z | 2024-09-24T00:32:34Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16453 | [
"enhancement"
] | vurt72 | 1 |
facebookresearch/fairseq | pytorch | 4,637 | fairseq generated translations contain repeated sentences and commas | ## ❓ Questions and Help
I tried to use fairseq-generate to see the performance of the model and the translations in the Hypothesis line contains many repeated lines, not sure if it is a problem with the model or the command I used which is shown below.
`fairseq-generate /srv/scratch3/ltian/data_trans/NMTAdapt \
--path /srv/scratch3/ltian/train_nmt/checkpoint_10_28000pt.pt \
--results-path /srv/scratch3/ltian/nmt_eval_result \
--task translation_from_pretrained_bart \
--gen-subset test \
-t ml_XX -s en_XX \
--bpe 'sentencepiece' --sentencepiece-model /srv/scratch3/ltian/sentence.bpe.model \
--scoring sacrebleu \
--batch-size 32 --langs ar_AR,cs_CZ,de_DE,en_XX,es_XX,et_EE,fi_FI,fr_XX,gu_IN,hi_IN,it_IT,ja_XX,kk_KZ,ko_KR,lt_LT,lv_LV,my_MM,ne_NP,nl_XX,ro_RO,ru_RU,si_LK,ml_XX,no_ML,no_HI`
Some examples of the output:
> S-1399 [' There',' is',' no',' students',' politics','.'][en_XX]
T-1399 [' ई',' विद्या','र्','\u200d','थि',' मु','न्','ने','ऱ','्','ऱ','त्','ति','नु',' राष्ट्रीय','मि','ल्ल','.']
H-1399 -219.1944580078125 '▁छात्र', '▁राजनीति', '▁में', '▁शामिल', '▁हैं', '.'] [hi_IN] ['▁इसमें', '▁कोई', '▁भी', '▁नहीं', '▁है', '।'] [hi_IN] ['▁उन्होंने', '▁कहा', '▁है', '।'] [hi_IN] ['▁इसमें', '▁कोई', '▁भी', '▁नहीं', '▁है', '।'] [hi_IN] ["▁'", [hi_IN] ["▁'", [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁हैं', '।'] [hi_IN] ["▁'", [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁हैं', '।'] [hi_IN] ["▁'", [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁हैं', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁हैं', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁हैं', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁हैं', '।'] [hi_IN] ['▁छात्र', '▁हैं', '।'] [hi_IN] ['▁छात्र', '▁हैं', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁हैं', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁हैं', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁है', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '▁है', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '.'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁है', '।'] [hi_IN] ['▁छात्रों', '▁की', '▁राजनीति', '▁में', '▁भी', '▁नहीं', '।']
> S-1753 [' This',' relationship',' then',' became',' warm','.'][en_XX]
T-1753 [' पि','न्न','ी','ट्',' ई',' ','बन्ध','ं',' कू','ट','ु','त','ल्','\u200d',' ','दृ','ड','मा','य','ि','.']
H-1753 -278.149658203125 ['▁तब', '▁यह', '▁संबंध', '▁गर्म', '▁हो', '▁गया', '।'] [hi_IN] [hi_IN] ['▁फिर', '।'] [hi_IN] ['▁इस', '▁बीच', '▁में', '▁यह', '▁रिश्', '\u200d', 'या', '▁है', '।'] [hi_IN] ['▁फिर', '।'] [hi_IN] ['▁फिर', '।'] [hi_IN] ['▁इस', '▁बीच', '▁में', '▁यह', '▁रिश्', '\u200d', 'या', '▁गया', '।'] [hi_IN] ['▁और', '▁यह', '▁रिश्', '\u200d', 'या', '▁गया', '।'] [hi_IN] ['▁तब', '।'] [hi_IN] ['▁तब', '।'] [hi_IN] ['▁तब', '।'] [hi_IN] ['▁तब', '।'] [hi_IN] ['▁तब', '।'] [hi_IN] ['▁इस', '▁बीच', '▁में', '▁यह', '▁रिश्', '\u200d', 'या', '▁गया', '।'] [hi_IN] ['▁तब', '▁से', '▁हुआ', '।'] [hi_IN] ['▁तब', '▁से', '▁हुआ', '।'] [hi_IN] ['▁इसके', '▁बाद', '▁से', '▁हुआ', '।'] [hi_IN] ['▁इस', '▁तरह', '▁से', '▁हुआ', '।'] [hi_IN] ['▁इसके', '▁बाद', '▁से', '▁हुआ', '।'] [hi_IN] ['▁इसके', '▁बाद', '▁ये', '▁रिश्', '\u200d', 'या', '▁गया', '।'] [hi_IN] ['▁इसके', '▁बाद', '▁से', '▁हुआ', '।'] [hi_IN] ['▁इस', '▁रिश्', '\u200d', 'या', '▁गया', '।'] [hi_IN] ['▁इस', '▁संबंध', '▁गर्म', '▁हो', '▁गया', '।'] [hi_IN] ['▁इसके', '▁बाद', '▁ये', '▁रिश्', '\u200d', 'या', '▁गया', '।'] [hi_IN] ['▁इसके', '▁बाद', '▁ये', '▁रिश्', '\u200d', 'या', '▁गया', '।'] [hi_IN] ['▁इसके', '▁लिए', '▁गए', '▁थे', '।'] [hi_IN] ['▁इसके', '▁लिए', '▁गए', '▁थे', '।'] [hi_IN] ['▁इस', '▁रिश्', '\u200d', 'या', '▁गया', '।'] [hi_IN] ['▁इस', '▁रिश्', '\u200d', 'या', '▁गया', '।']
And clearly, the translation is way too long. If it helps, the model is trained on Hindi-English parallel data and Malayalam-English back-translation data; the target in the example is transliterated Malayalam in Devanagari script.
#### What have you tried?
I tried setting the --lenpen parameter to values below 0, hoping it would reduce the length, but it doesn't work.
#### What's your environment?
- fairseq Version v0.10.2:
- PyTorch Version (e.g., 1.0)
- OS (e.g., Linux):
- How you installed fairseq (`pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
| closed | 2022-08-08T20:31:10Z | 2022-08-10T02:01:45Z | https://github.com/facebookresearch/fairseq/issues/4637 | [
"question",
"needs triage"
] | tianshuailu | 11 |
ultralytics/yolov5 | pytorch | 12,852 | "The labels from detect.py do not give me the prediction results in the same order as the ground truth .txt file." | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello,
The labels from detect.py don't give me the prediction results in the same order as the ground truth .txt file.
I want to calculate the intersection, union, and intersection over union between the prediction bounding box and the ground truth bbox. However, since they are not in the same order, I am unable to traverse the files and perform these calculations. I am attaching two screenshots: the first one is the prediction .txt file, and the second one is the corresponding ground truth .txt file.


For this example image, detect.py first predicted class 5, then class 0, and finally class 6. However, in the ground truth .txt file, we have class 0 first, followed by class 5, and finally class 6.
### Additional
_No response_ | closed | 2024-03-26T09:21:28Z | 2024-10-20T19:42:19Z | https://github.com/ultralytics/yolov5/issues/12852 | [
"question",
"Stale"
] | killich8 | 3 |
geex-arts/django-jet | django | 160 | Google analytics is not working | I have double checked everything and they all seems fine.
However, I have this error on starting server.
Why it cannot find util from oauth2client?
```
python.exe .\manage.py runserver
Performing system checks...
Unhandled exception in thread started by <function wrapper at 0x03777030>
Traceback (most recent call last):
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\utils\autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\core\management\commands\runserver.py", line 121, in inner_run
self.check(display_num_errors=True)
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\core\management\base.py", line 374, in check
include_deployment_checks=include_deployment_checks,
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\core\management\base.py", line 361, in _run_checks
return checks.run_checks(**kwargs)
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\core\checks\registry.py", line 81, in run_checks
new_errors = check(app_configs=app_configs)
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\core\checks\urls.py", line 14, in check_url_config
return check_resolver(resolver)
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\core\checks\urls.py", line 24, in check_resolver
for pattern in resolver.url_patterns:
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\utils\functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\urls\resolvers.py", line 313, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\utils\functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\creep\Code\Python\test\lib\site-packages\django\urls\resolvers.py", line 306, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Python27\Lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "C:\Users\creep\Code\Python\test\test\test\urls.py", line 1, in <module>
from jet.dashboard.dashboard_modules import google_analytics_views
File "C:\Users\creep\Code\Python\test\lib\site-packages\jet\dashboard\dashboard_modules\google_analytics_views.py", line 6, in <module>
from jet.dashboard.dashboard_modules.google_analytics import GoogleAnalyticsClient, ModuleCredentialStorage
File "C:\Users\creep\Code\Python\test\lib\site-packages\jet\dashboard\dashboard_modules\google_analytics.py", line 11, in <module>
from googleapiclient.discovery import build
File "C:\Users\creep\Code\Python\test\lib\site-packages\googleapiclient\discovery.py", line 53, in <module>
from googleapiclient.errors import HttpError
File "C:\Users\creep\Code\Python\test\lib\site-packages\googleapiclient\errors.py", line 26, in <module>
from oauth2client import util
ImportError: cannot import name util``` | closed | 2016-12-24T23:37:41Z | 2018-03-11T10:01:54Z | https://github.com/geex-arts/django-jet/issues/160 | [] | pwndr00t | 4 |
aiortc/aiortc | asyncio | 1,138 | A FastAPI project that receives audio and video | Hi, and thanks for your work.
Would you please give me a code snippet in which a FastAPI app receives audio and video as input for further processing?
"stale"
] | ranch-hands | 3 |
cvat-ai/cvat | tensorflow | 8,749 | Deploy issue: cvat_opa fails to load bundle with message `ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules"` | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Download and unzip v2.22.0
2. `export CVAT_HOST=192.168.0.148`
3. `sudo docker compose up -d`
4. navigate to 192.168.0.148:8080
### Expected Behavior
Within ~5 minutes, the web UI should be reachable. But each time I try to access it via 192.168.0.148:8080 (accessing over LAN on a laptop - server is headless), I get a `404 page not found` error.
### Possible Solution
The OPA logs suggest that it's never able to reach cvat_server and load the bundle. The container log is filled with entries like this:
`2024-11-28T00:12:25Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.15:8080: connect: connection refused name=cvat plugin=bundle`
I searched GitHub for similar errors, but there is no clear solution to this.
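Not part of the original report — to separate an OPA problem from a server problem, a quick stdlib probe of the same URL the OPA log complains about can show whether `/api/auth/rules` ever comes up. The function name and retry policy here are my own sketch, assuming the default port mapping from the compose file.

```python
import time
import urllib.error
import urllib.request


def probe_rules_endpoint(url: str, attempts: int = 3, timeout: float = 3.0):
    """Return (ok, detail) after probing the OPA bundle URL up to `attempts` times."""
    detail = ""
    for i in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200, f"HTTP {resp.status}"
        except urllib.error.URLError as exc:
            # Same failure class as the OPA log: DNS lookup errors or "connection refused".
            detail = str(exc.reason)
            if i < attempts - 1:
                time.sleep(2 ** i)  # crude backoff, like OPA's own retries
    return False, detail


if __name__ == "__main__":
    ok, detail = probe_rules_endpoint("http://localhost:8080/api/auth/rules")
    print("reachable" if ok else f"unreachable: {detail}")
```

Run on the Docker host: if the endpoint stays unreachable long after `docker compose up -d`, the cause is cvat_server not finishing startup (e.g. still applying migrations, as in the server log below), not OPA itself.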
### Context
cvat_opa logs:
```
2024-11-28T00:07:17Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp: lookup cvat-server on 127.0.0.11:53: server misbehaving name=cvat plugin=bundle
2024-11-28T00:07:17Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp: lookup cvat-server on 127.0.0.11:53: server misbehaving name=cvat plugin=bundle
2024-11-28T00:07:18Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp: lookup cvat-server on 127.0.0.11:53: server misbehaving name=cvat plugin=bundle
2024-11-28T00:07:18Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp: lookup cvat-server on 127.0.0.11:53: server misbehaving name=cvat plugin=bundle
2024-11-28T00:07:19Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
2024-11-28T00:07:20Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
2024-11-28T00:07:21Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
2024-11-28T00:07:23Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
2024-11-28T00:07:25Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
2024-11-28T00:07:29Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
2024-11-28T00:07:35Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
2024-11-28T00:07:47Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
2024-11-28T00:08:05Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
2024-11-28T00:08:20Z ERR msg=Bundle load failed: request failed: Get "http://cvat-server:8080/api/auth/rules": dial tcp 172.26.0.13:8080: connect: connection refused name=cvat plugin=bundle
```
cvat_server logs:
```
wait-for-it: waiting for cvat_db:5432 without a timeout
wait-for-it: cvat_db:5432 is available after 8 seconds
Operations to perform:
Apply all migrations: account, admin, analytics_report, auth, authtoken, contenttypes, dataset_repo, db, django_rq, engine, iam, organizations, quality_control, sessions, sites, socialaccount, webhooks
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying account.0001_initial... OK
Applying account.0002_email_max_length... OK
Applying account.0003_alter_emailaddress_create_unique_verified_email... OK
Applying account.0004_alter_emailaddress_drop_unique_email... OK
Applying account.0005_emailaddress_idx_upper_email... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying organizations.0001_initial... OK
Applying engine.0001_release_v0_1_0... OK
Applying engine.0002_labeledpoints_labeledpointsattributeval_labeledpolygon_labeledpolygonattributeval_labeledpolyline_la... OK
Applying engine.0003_objectpath_shapes... OK
Applying engine.0004_task_z_order... OK
Applying engine.0005_auto_20180609_1512... OK
Applying engine.0006_auto_20180629_1501... OK
Applying engine.0007_task_flipped... OK
Applying engine.0008_auto_20180917_1424... OK
Applying engine.0009_auto_20180917_1424... OK
Applying engine.0010_auto_20181011_1517... OK
Applying engine.0011_add_task_source_and_safecharfield... OK
Applying engine.0012_auto_20181025_1618... OK
Applying engine.0013_auth_no_default_permissions... OK
Applying engine.0014_job_max_shape_id... OK
Applying engine.0015_db_redesign_20190217... OK
Applying engine.0016_attribute_spec_20190217... OK
Applying engine.0017_db_redesign_20190221... OK
Applying engine.0018_jobcommit... OK
Applying engine.0019_frame_selection... OK
Applying engine.0020_remove_task_flipped...Getting flipped tasks...
Conversion started...
OK
Applying engine.0021_auto_20190826_1827... OK
Applying engine.0022_auto_20191004_0817... OK
Applying engine.0023_auto_20200113_1323... OK
Applying engine.0024_auto_20191023_1025...
Start schema migration...
[2024-11-28 00:07:46,061] INFO 0024_auto_20191023_1025:
Start schema migration...
Schema migration is finished...
[2024-11-28 00:07:46,062] INFO 0024_auto_20191023_1025:
Schema migration is finished...
Start data migration...
[2024-11-28 00:07:46,063] INFO 0024_auto_20191023_1025:
Start data migration...
OK
Applying engine.0025_auto_20200324_1222... OK
Applying engine.0026_auto_20200719_1511... OK
Applying engine.0027_auto_20200719_1552... OK
Applying engine.0028_labelcolor... OK
Applying engine.0029_data_storage_method... OK
Applying engine.0030_auto_20200914_1331... OK
Applying engine.0031_auto_20201011_0220... OK
Applying engine.0032_remove_task_z_order... OK
Applying engine.0033_projects_adjastment... OK
Applying engine.0034_auto_20201125_1426... OK
Applying engine.0035_data_storage... OK
Applying engine.0036_auto_20201216_0943... OK
Applying engine.0037_task_subset... OK
Applying engine.0038_manifest...The data migration has been started for creating manifest`s files
The data migration has been started for creating manifest`s files
[2024-11-28 00:07:47,695] INFO 0038_manifest: The data migration has been started for creating manifest`s files
Need to update 0 data objects
Need to update 0 data objects
[2024-11-28 00:07:47,699] INFO 0038_manifest: Need to update 0 data objects
OK
Applying engine.0039_auto_training... OK
Applying engine.0040_cloud_storage... OK
Applying engine.0041_auto_20210827_0258... OK
Applying engine.0042_auto_20210830_1056... OK
Applying engine.0043_auto_20211027_0718... OK
Applying engine.0044_auto_20211115_0858... OK
Applying engine.0045_auto_20211123_0824... OK
Applying engine.0046_data_sorting_method... OK
Applying engine.0047_auto_20211110_1938... OK
Applying engine.0048_auto_20211112_1918... OK
Applying engine.0049_auto_20220202_0710... OK
Applying engine.0050_auto_20220211_1425... OK
Applying engine.0051_auto_20220220_1824... OK
Applying engine.0052_alter_cloudstorage_specific_attributes... OK
Applying engine.0053_data_deleted_frames... OK
Applying engine.0054_auto_20220610_1829... OK
Applying engine.0055_jobs_directories...Migration has been started. Need to create 0 directories.
Migration has been started. Need to create 0 directories.
[2024-11-28 00:07:51,190] INFO 0055_jobs_directories: Migration has been started. Need to create 0 directories.
Migration has been finished successfully.
Migration has been finished successfully.
[2024-11-28 00:07:51,193] INFO 0055_jobs_directories: Migration has been finished successfully.
OK
Applying engine.0056_jobs_previews...Migration has been started. Need to create 0 previews.
Migration has been started. Need to create 0 previews.
[2024-11-28 00:07:51,274] INFO 0056_jobs_previews: Migration has been started. Need to create 0 previews.
OK
Applying engine.0057_auto_20220726_0926... OK
Applying engine.0058_auto_20220809_1236... OK
Applying engine.0059_labeledshape_outside... OK
Applying engine.0060_alter_label_parent... OK
Applying engine.0061_auto_20221130_0844... OK
Applying engine.0062_delete_previews...
Deleting Data previews...
[2024-11-28 00:07:54,422] INFO 0062_delete_previews:
Deleting Data previews...
Deleting Job previews...
Deleting CloudStorage previews...
[2024-11-28 00:07:54,435] INFO 0062_delete_previews:
Deleting Job previews...
[2024-11-28 00:07:54,436] INFO 0062_delete_previews:
Deleting CloudStorage previews...
OK
Applying engine.0063_delete_jobcommit... OK
Applying engine.0064_delete_or_rename_wrong_labels...
Deleting skeleton Labels without skeletons...
[2024-11-28 00:07:54,582] INFO 0064_delete_or_rename_wrong_labels:
Deleting skeleton Labels without skeletons...
Deleting duplicate skeleton sublabels and renaming duplicate Labels...
[2024-11-28 00:07:54,585] INFO 0064_delete_or_rename_wrong_labels:
Deleting duplicate skeleton sublabels and renaming duplicate Labels...
OK
Applying engine.0065_auto_20230221_0931... OK
Applying engine.0066_auto_20230319_1252... OK
Applying engine.0067_alter_cloudstorage_credentials_type... OK
Applying engine.0068_auto_20230418_0901... OK
Applying engine.0069_auto_20230608_1915... OK
Applying engine.0070_add_job_type_created_date... OK
Applying engine.0071_annotationguide_asset... OK
Applying engine.0072_alter_issue_updated_date... OK
Applying analytics_report.0001_initial... OK
Applying analytics_report.0002_fix_annotation_speed... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying auth.0010_alter_group_name_max_length... OK
Applying auth.0011_update_proxy_permissions... OK
Applying auth.0012_alter_user_first_name_max_length... OK
Applying authtoken.0001_initial... OK
Applying authtoken.0002_auto_20160226_1747... OK
Applying authtoken.0003_tokenproxy... OK
Applying dataset_repo.0001_initial... OK
Applying dataset_repo.0002_auto_20190123_1305... OK
Applying dataset_repo.0003_gitdata_lfs... OK
Applying dataset_repo.0004_rename... OK
Applying dataset_repo.0005_auto_20201019_1100... OK
Applying dataset_repo.0006_gitdata_format... OK
Applying dataset_repo.0007_delete_gitdata... OK
Applying db.0001_initial... OK
Applying django_rq.0001_initial... OK
Applying engine.0073_alter_attributespec_default_value_and_more... OK
Applying engine.0074_alter_labeledimage_source_alter_labeledshape_source_and_more... OK
Applying engine.0075_annotationguide_is_public... OK
Applying engine.0076_remove_storages_that_refer_to_deleted_cloud_storages... OK
Applying engine.0077_auto_20231121_1952... OK
Applying engine.0078_alter_cloudstorage_credentials... OK
Applying engine.0079_alter_labeledimageattributeval_image_and_more... OK
Applying engine.0080_alter_trackedshape_track... OK
Applying engine.0081_job_assignee_updated_date_and_more... OK
Applying engine.0082_alter_labeledimage_job_and_more... OK
Applying engine.0083_move_to_segment_chunks...Information about migrated tasks is available in the migration log file: /home/django/logs/migrations/0083_move_to_segment_chunks-data_ids.log. You will need to remove data manually for these tasks.
[2024-11-28 00:07:58,694] INFO 0083_move_to_segment_chunks: Information about migrated tasks is available in the migration log file: /home/django/logs/migrations/0083_move_to_segment_chunks-data_ids.log. You will need to remove data manually for these tasks.
OK
Applying engine.0084_honeypot_support... OK
Applying engine.0085_segment_chunks_updated_date... OK
Applying engine.0086_profile_has_analytics_access... OK
Applying iam.0001_remove_business_group... OK
Applying organizations.0002_invitation_sent_date... OK
Applying quality_control.0001_initial... OK
Applying quality_control.0002_qualityreport_assignee... OK
Applying quality_control.0003_qualityreport_assignee_last_updated_and_more... OK
Applying quality_control.0004_qualitysettings_point_size_base... OK
Applying quality_control.0005_qualitysettings_match_empty... OK
Applying sessions.0001_initial... OK
Applying sites.0001_initial... OK
Applying sites.0002_alter_domain_unique... OK
Applying socialaccount.0001_initial... OK
Applying socialaccount.0002_token_max_lengths... OK
Applying socialaccount.0003_extra_data_default_dict... OK
Applying socialaccount.0004_app_provider_id_settings... OK
Applying socialaccount.0005_socialtoken_nullable_app... OK
Applying webhooks.0001_initial... OK
Applying webhooks.0002_alter_webhookdelivery_status_code... OK
Applying webhooks.0003_alter_webhookdelivery_status_code... OK
Applying webhooks.0004_alter_webhook_target_url... OK
wait-for-it: waiting for cvat_redis_inmem:6379 without a timeout
wait-for-it: cvat_redis_inmem:6379 is available after 0 seconds
/opt/venv/lib/python3.10/site-packages/rq_scheduler/utils.py:28: FutureWarning: Version 0.22.0+ of crontab will use datetime.utcnow() and
datetime.utcfromtimestamp() instead of datetime.now() and
datetime.fromtimestamp() as was previous. This had been a bug, which will be
remedied. If you would like to keep the *old* behavior:
`ct.next(..., default_utc=False)` . If you want to use the new behavior *now*:
`ct.next(..., default_utc=True)`. If you pass a datetime object with a tzinfo
attribute that is not None, timezones will *just work* to the best of their
ability. There are tests...
next_time = cron.next(now=now, return_datetime=True)
Processing queue import...
Processing queue export...
Processing queue annotation...
Processing queue webhooks...
Processing queue notifications...
Processing queue quality_reports...
Processing queue analytics_reports...
Processing queue cleaning...
Creating job clean_up_sessions...
163 static files copied to '/home/django/static'.
wait-for-it: waiting for cvat_db:5432 without a timeout
wait-for-it: cvat_db:5432 is available after 0 seconds
waiting for migrations to complete...
2024-11-28 00:08:21,028 INFO Creating socket unix:///tmp/uvicorn.sock
2024-11-28 00:08:21,029 INFO Closing socket unix:///tmp/uvicorn.sock
2024-11-28 00:08:21,034 INFO RPC interface 'supervisor' initialized
2024-11-28 00:08:21,036 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-11-28 00:08:21,038 INFO supervisord started with pid 1
2024-11-28 00:08:22,042 INFO spawned: 'clamav_update' with pid 93
2024-11-28 00:08:22,052 INFO spawned: 'nginx-0' with pid 94
2024-11-28 00:08:22,053 INFO spawned: 'smokescreen' with pid 95
2024-11-28 00:08:22,058 INFO Creating socket unix:///tmp/uvicorn.sock
2024-11-28 00:08:22,062 DEBG fd 20 closed, stopped monitoring <PInputDispatcher at 135063999362576 for <Subprocess at 135063999351200 with name uvicorn-0 in state STARTING> (stdin)>
2024-11-28 00:08:22,067 INFO spawned: 'uvicorn-0' with pid 96
2024-11-28 00:08:22,072 DEBG fd 24 closed, stopped monitoring <PInputDispatcher at 135063999363152 for <Subprocess at 135063999351056 with name uvicorn-1 in state STARTING> (stdin)>
2024-11-28 00:08:22,087 INFO spawned: 'uvicorn-1' with pid 104
2024-11-28 00:08:22,088 DEBG fd 7 closed, stopped monitoring <POutputDispatcher at 135063998903152 for <Subprocess at 135063998903056 with name clamav_update in state STARTING> (stdout)>
2024-11-28 00:08:22,099 DEBG fd 9 closed, stopped monitoring <POutputDispatcher at 135063999360992 for <Subprocess at 135063998903056 with name clamav_update in state STARTING> (stderr)>
2024-11-28 00:08:22,100 INFO exited: clamav_update (exit status 0; expected)
2024-11-28 00:08:22,105 DEBG received SIGCHLD indicating a child quit
2024-11-28 00:08:22,125 DEBG 'nginx-0' stderr output:
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
2024-11-28 00:08:22,125 DEBG 'smokescreen' stderr output:
2024-11-28T00:08:22Z INF msg=starting
2024-11-28 00:08:22,125 DEBG fd 14 closed, stopped monitoring <POutputDispatcher at 135063999361232 for <Subprocess at 135063999350480 with name nginx-0 in state STARTING> (stderr)>
2024-11-28 00:08:22,136 DEBG 'uvicorn-0' stderr output:
wait-for-it: waiting for cvat_db:5432 without a timeout
2024-11-28 00:08:22,163 DEBG 'uvicorn-1' stderr output:
wait-for-it: waiting for cvat_db:5432 without a timeout
2024-11-28 00:08:22,177 DEBG 'uvicorn-0' stderr output:
wait-for-it: cvat_db:5432 is available after 0 seconds
2024-11-28 00:08:22,205 DEBG 'uvicorn-1' stderr output:
wait-for-it: cvat_db:5432 is available after 0 seconds
2024-11-28 00:08:22,229 DEBG 'uvicorn-0' stderr output:
wait-for-it: waiting for cvat_redis_inmem:6379 without a timeout
2024-11-28 00:08:22,240 DEBG 'uvicorn-1' stderr output:
wait-for-it: waiting for cvat_redis_inmem:6379 without a timeout
2024-11-28 00:08:22,243 DEBG 'uvicorn-0' stderr output:
wait-for-it: cvat_redis_inmem:6379 is available after 0 seconds
2024-11-28 00:08:22,259 DEBG 'uvicorn-1' stderr output:
wait-for-it: cvat_redis_inmem:6379 is available after 0 seconds
2024-11-28 00:08:22,281 DEBG 'uvicorn-0' stderr output:
wait-for-it: waiting for cvat_redis_ondisk:6666 without a timeout
2024-11-28 00:08:22,286 DEBG 'uvicorn-1' stderr output:
wait-for-it: waiting for cvat_redis_ondisk:6666 without a timeout
2024-11-28 00:08:22,302 DEBG 'uvicorn-0' stderr output:
wait-for-it: cvat_redis_ondisk:6666 is available after 0 seconds
2024-11-28 00:08:22,315 DEBG 'uvicorn-1' stderr output:
wait-for-it: cvat_redis_ondisk:6666 is available after 0 seconds
2024-11-28 00:08:23,324 INFO success: nginx-0 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-28 00:08:23,324 INFO success: smokescreen entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-28 00:08:23,324 INFO success: uvicorn-0 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-28 00:08:23,324 INFO success: uvicorn-1 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-28 00:08:27,664 DEBG 'uvicorn-0' stderr output:
INFO: Started server process [96]
INFO: Waiting for application startup.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
INFO: Uvicorn running on socket /tmp/uvicorn.sock (Press CTRL+C to quit)
2024-11-28 00:08:27,802 DEBG 'uvicorn-1' stderr output:
INFO: Started server process [104]
INFO: Waiting for application startup.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
2024-11-28 00:08:27,802 DEBG 'uvicorn-1' stderr output:
INFO: Uvicorn running on socket /tmp/uvicorn.sock (Press CTRL+C to quit)
2024-11-28 00:08:38,234 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:08:44,768 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:08:51,867 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:09:04,223 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:09:13,047 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:09:27,771 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:09:40,769 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:09:53,993 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:10:01,106 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:10:11,761 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:10:18,798 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:10:28,911 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:10:36,328 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:10:48,353 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:10:58,812 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:11:07,809 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:11:13,404 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:11:23,155 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:11:24,593 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:11:38,381 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:11:44,408 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.2:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:11:50,414 WARN received SIGTERM indicating exit request
2024-11-28 00:11:50,414 DEBG killing uvicorn-1 (pid 104) with signal SIGTERM
2024-11-28 00:11:50,414 DEBG killing uvicorn-0 (pid 96) with signal SIGTERM
2024-11-28 00:11:50,414 INFO waiting for nginx-0, smokescreen, uvicorn-0, uvicorn-1 to die
2024-11-28 00:11:50,477 DEBG 'uvicorn-1' stderr output:
INFO: Shutting down
2024-11-28 00:11:50,518 DEBG 'uvicorn-0' stderr output:
INFO: Shutting down
2024-11-28 00:11:50,585 DEBG 'uvicorn-1' stderr output:
INFO: Finished server process [104]
2024-11-28 00:11:50,633 DEBG 'uvicorn-0' stderr output:
INFO: Finished server process [96]
2024-11-28 00:11:53,509 DEBG fd 26 closed, stopped monitoring <POutputDispatcher at 135063999362912 for <Subprocess at 135063999351056 with name uvicorn-1 in state STOPPING> (stdout)>
2024-11-28 00:11:53,509 DEBG fd 30 closed, stopped monitoring <POutputDispatcher at 135063999362960 for <Subprocess at 135063999351056 with name uvicorn-1 in state STOPPING> (stderr)>
2024-11-28 00:11:53,509 INFO stopped: uvicorn-1 (exit status 0)
2024-11-28 00:11:53,510 DEBG received SIGCHLD indicating a child quit
2024-11-28 00:11:53,510 INFO waiting for nginx-0, smokescreen, uvicorn-0 to die
2024-11-28 00:11:53,584 DEBG fd 23 closed, stopped monitoring <POutputDispatcher at 135063999362336 for <Subprocess at 135063999351200 with name uvicorn-0 in state STOPPING> (stdout)>
2024-11-28 00:11:53,584 DEBG fd 25 closed, stopped monitoring <POutputDispatcher at 135063999362384 for <Subprocess at 135063999351200 with name uvicorn-0 in state STOPPING> (stderr)>
2024-11-28 00:11:54,585 INFO stopped: uvicorn-0 (exit status 0)
2024-11-28 00:11:54,585 INFO Closing socket unix:///tmp/uvicorn.sock
2024-11-28 00:11:54,585 DEBG received SIGCHLD indicating a child quit
2024-11-28 00:11:54,586 DEBG killing smokescreen (pid 95) with signal SIGTERM
2024-11-28 00:11:54,587 DEBG 'smokescreen' stderr output:
2024-11-28T00:11:54Z INF msg=quitting gracefully
2024-11-28 00:11:54,587 DEBG fd 15 closed, stopped monitoring <POutputDispatcher at 135063999361808 for <Subprocess at 135063999350672 with name smokescreen in state STOPPING> (stdout)>
2024-11-28 00:11:54,587 DEBG 'smokescreen' stderr output:
2024-11-28T00:11:54Z INF msg=Waiting for all connections to become idle...
2024-11-28T00:11:54Z INF msg=All connections are idle. Continuing with shutdown...
2024-11-28T00:11:54Z INF msg=All connections idle: closing all remaining connections.
2024-11-28 00:11:54,587 INFO stopped: smokescreen (exit status 0)
2024-11-28 00:11:54,587 DEBG received SIGCHLD indicating a child quit
2024-11-28 00:11:54,587 DEBG killing nginx-0 (pid 94) with signal SIGTERM
2024-11-28 00:11:54,594 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 135063999361376 for <Subprocess at 135063999350480 with name nginx-0 in state STOPPING> (stdout)>
2024-11-28 00:11:54,594 INFO stopped: nginx-0 (exit status 0)
2024-11-28 00:11:54,594 DEBG received SIGCHLD indicating a child quit
wait-for-it: waiting for cvat_db:5432 without a timeout
wait-for-it: cvat_db:5432 is available after 0 seconds
Operations to perform:
Apply all migrations: account, admin, analytics_report, auth, authtoken, contenttypes, dataset_repo, db, django_rq, engine, iam, organizations, quality_control, sessions, sites, socialaccount, webhooks
Running migrations:
No migrations to apply.
wait-for-it: waiting for cvat_redis_inmem:6379 without a timeout
wait-for-it: cvat_redis_inmem:6379 is available after 0 seconds
Processing queue import...
Processing queue export...
Processing queue annotation...
Processing queue webhooks...
Processing queue notifications...
Processing queue quality_reports...
Processing queue analytics_reports...
Processing queue cleaning...
Job clean_up_sessions is unchanged
0 static files copied to '/home/django/static', 163 unmodified.
wait-for-it: waiting for cvat_db:5432 without a timeout
wait-for-it: cvat_db:5432 is available after 0 seconds
waiting for migrations to complete...
2024-11-28 00:12:32,549 INFO Creating socket unix:///tmp/uvicorn.sock
2024-11-28 00:12:32,549 INFO Closing socket unix:///tmp/uvicorn.sock
2024-11-28 00:12:32,551 INFO RPC interface 'supervisor' initialized
2024-11-28 00:12:32,551 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-11-28 00:12:32,551 INFO supervisord started with pid 1
2024-11-28 00:12:33,556 INFO spawned: 'clamav_update' with pid 70
2024-11-28 00:12:33,561 INFO spawned: 'nginx-0' with pid 71
2024-11-28 00:12:33,565 INFO spawned: 'smokescreen' with pid 72
2024-11-28 00:12:33,566 INFO Creating socket unix:///tmp/uvicorn.sock
2024-11-28 00:12:33,570 DEBG fd 20 closed, stopped monitoring <PInputDispatcher at 127498421794368 for <Subprocess at 127498421782992 with name uvicorn-0 in state STARTING> (stdin)>
2024-11-28 00:12:33,572 INFO spawned: 'uvicorn-0' with pid 73
2024-11-28 00:12:33,574 DEBG fd 24 closed, stopped monitoring <PInputDispatcher at 127498421794944 for <Subprocess at 127498421782848 with name uvicorn-1 in state STARTING> (stdin)>
2024-11-28 00:12:33,576 INFO spawned: 'uvicorn-1' with pid 78
2024-11-28 00:12:33,578 DEBG fd 7 closed, stopped monitoring <POutputDispatcher at 127498421334944 for <Subprocess at 127498421334848 with name clamav_update in state STARTING> (stdout)>
2024-11-28 00:12:33,579 DEBG fd 9 closed, stopped monitoring <POutputDispatcher at 127498421792784 for <Subprocess at 127498421334848 with name clamav_update in state STARTING> (stderr)>
2024-11-28 00:12:33,580 INFO exited: clamav_update (exit status 0; expected)
2024-11-28 00:12:33,582 DEBG received SIGCHLD indicating a child quit
2024-11-28 00:12:33,583 DEBG 'nginx-0' stderr output:
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
2024-11-28 00:12:33,584 DEBG 'smokescreen' stderr output:
2024-11-28T00:12:33Z INF msg=starting
2024-11-28 00:12:33,601 DEBG 'uvicorn-0' stderr output:
wait-for-it: waiting for cvat_db:5432 without a timeout
2024-11-28 00:12:33,610 DEBG 'uvicorn-0' stderr output:
wait-for-it: cvat_db:5432 is available after 0 seconds
2024-11-28 00:12:33,611 DEBG 'uvicorn-1' stderr output:
wait-for-it: waiting for cvat_db:5432 without a timeout
2024-11-28 00:12:33,611 DEBG fd 14 closed, stopped monitoring <POutputDispatcher at 127498421793024 for <Subprocess at 127498421782272 with name nginx-0 in state STARTING> (stderr)>
2024-11-28 00:12:33,619 DEBG 'uvicorn-1' stderr output:
wait-for-it: cvat_db:5432 is available after 0 seconds
2024-11-28 00:12:33,622 DEBG 'uvicorn-0' stderr output:
wait-for-it: waiting for cvat_redis_inmem:6379 without a timeout
2024-11-28 00:12:33,626 DEBG 'uvicorn-0' stderr output:
wait-for-it: cvat_redis_inmem:6379 is available after 0 seconds
2024-11-28 00:12:33,627 DEBG 'uvicorn-1' stderr output:
wait-for-it: waiting for cvat_redis_inmem:6379 without a timeout
2024-11-28 00:12:33,631 DEBG 'uvicorn-1' stderr output:
wait-for-it: cvat_redis_inmem:6379 is available after 0 seconds
2024-11-28 00:12:33,634 DEBG 'uvicorn-0' stderr output:
wait-for-it: waiting for cvat_redis_ondisk:6666 without a timeout
2024-11-28 00:12:33,638 DEBG 'uvicorn-0' stderr output:
wait-for-it: cvat_redis_ondisk:6666 is available after 0 seconds
2024-11-28 00:12:33,639 DEBG 'uvicorn-1' stderr output:
wait-for-it: waiting for cvat_redis_ondisk:6666 without a timeout
2024-11-28 00:12:33,645 DEBG 'uvicorn-1' stderr output:
wait-for-it: cvat_redis_ondisk:6666 is available after 0 seconds
2024-11-28 00:12:34,646 INFO success: nginx-0 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-28 00:12:34,647 INFO success: smokescreen entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-28 00:12:34,648 INFO success: uvicorn-0 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-28 00:12:34,648 INFO success: uvicorn-1 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-11-28 00:12:36,054 DEBG 'uvicorn-1' stderr output:
INFO: Started server process [78]
INFO: Waiting for application startup.
2024-11-28 00:12:36,054 DEBG 'uvicorn-1' stderr output:
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
2024-11-28 00:12:36,054 DEBG 'uvicorn-1' stderr output:
INFO: Uvicorn running on socket /tmp/uvicorn.sock (Press CTRL+C to quit)
2024-11-28 00:12:36,113 DEBG 'uvicorn-0' stderr output:
INFO: Started server process [73]
INFO: Waiting for application startup.
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
2024-11-28 00:12:36,113 DEBG 'uvicorn-0' stderr output:
INFO: Uvicorn running on socket /tmp/uvicorn.sock (Press CTRL+C to quit)
2024-11-28 00:12:39,161 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:12:47,474 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:12:52,593 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:12:58,162 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:13:09,796 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:13:20,473 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:13:26,117 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:13:36,077 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:13:42,394 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:13:50,263 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:13:56,929 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:14:10,057 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:14:20,012 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:14:33,874 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:14:39,518 DEBG 'uvicorn-1' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:14:45,781 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2024-11-28 00:14:58,613 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2024-11-28 00:15:11,901 DEBG 'uvicorn-0' stdout output:
INFO: 172.26.0.5:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
```
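The OPA sidecar in the logs above keeps retrying `GET http://cvat-server:8080/api/auth/rules` until the server finishes its migrations and starts answering. A minimal, stand-alone sketch of that poll-until-ready pattern (the `probe` callable is an assumption — in a real deployment it would issue the HTTP request against the rules endpoint):

```python
import time

def wait_for(probe, timeout=60.0, interval=2.0):
    """Poll probe() until it returns True; give up after timeout seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# Simulated probe: fails twice (server still migrating), then succeeds.
attempts = iter([False, False, True])
print(wait_for(lambda: next(attempts), timeout=5, interval=0.01))  # True
```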
### Environment
```Markdown
cvat release v2.22.0
Ubuntu v24.04 Linux kernel 6.8.8-060808-generic
Portainer 2.21.4
```
| closed | 2024-11-28T00:23:32Z | 2024-12-02T07:56:55Z | https://github.com/cvat-ai/cvat/issues/8749 | [
"bug"
] | aeozyalcin | 3 |
horovod/horovod | tensorflow | 3,743 | Horovod.torch with 1 worker does not reproduce not distributed training | **Environment:**
1. Framework: PyTorch
2. Framework version: 1.12.0a0+git664058f
3. Horovod version: 0.25.0
4. MPI version: mpiexec version 1.1.7
5. CUDA version: Driver Version: 470.103.01 CUDA Version: 11.4
6. NCCL version: 2.10.3
7. Python version: 3.8.13
8. Spark / PySpark version:
9. Ray version:
10. OS and version: SUSE Linux Enterprise Server 15 SP3
11. GCC version:
12. CMake version:
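To make the divergence concrete, here is a hand-rolled Adam step (my own illustration with assumed names — the training script and Horovod actually use `torch.optim`): a single extra `step()` on zero gradients already changes the very next update, because it advances Adam's step counter and bias corrections.

```python
import math

def adam_step(m, v, t, grad, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; the state (m, v, t) is passed and returned explicitly."""
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return -lr * m_hat / (math.sqrt(v_hat) + eps), m, v, t

# Worker A: the first update comes straight from a real gradient.
delta_a, *_ = adam_step(0.0, 0.0, 0, grad=1.0)

# Worker B: a broadcast_optimizer_state-style warm-up -- one step on zero
# gradients -- then the same real gradient.
_, m, v, t = adam_step(0.0, 0.0, 0, grad=0.0)
delta_b, *_ = adam_step(m, v, t, grad=1.0)

print(abs(delta_a - delta_b) > 1e-3)  # True: the warm-up step diverged them
```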
I ran training scripts without distributed training, with PyTorch DDP with 1 worker, and with Horovod with 1 worker. The non-distributed version agreed perfectly with 1-worker PyTorch DDP, while Horovod has a different training curve right after the first gradient update. [I've dug into the sources](https://github.com/horovod/horovod/blob/4281bd7212136d8c201f90f64924ed515a796383/horovod/torch/functions.py#L95) and found that Horovod initializes the optimizer parameters and calls the `step()` method. This behavior changes the optimizer state and makes training more stable, but explicitly breaks reproducibility. | open | 2022-10-13T20:37:25Z | 2022-10-17T18:41:41Z | https://github.com/horovod/horovod/issues/3743 | [
"bug"
] | dboyda | 1 |
apify/crawlee-python | automation | 480 | Create a new guide for session management | - We should create a new documentation guide on how to work with sessions (`SessionPool`).
- Inspiration: https://crawlee.dev/docs/guides/session-management | closed | 2024-08-30T12:04:41Z | 2025-01-06T13:59:14Z | https://github.com/apify/crawlee-python/issues/480 | [
"documentation",
"t-tooling"
] | vdusek | 2 |
quokkaproject/quokka | flask | 240 | Create a QuickPost content type | This should have minimal requirements and a lot of defaults; even the channel should be prefilled. It will need a QuickPost form in the Dashboard.
| closed | 2015-07-14T23:14:39Z | 2018-02-06T13:46:17Z | https://github.com/quokkaproject/quokka/issues/240 | [
"enhancement",
"MEDIUM",
"ready"
] | rochacbruno | 0 |
huggingface/datasets | nlp | 7,022 | There is dead code after we require pyarrow >= 15.0.0 | There are code lines specific to pyarrow versions < 15.0.0.
However, we require pyarrow >= 15.0.0 since the merge of PR:
- #6892
Those code lines are now dead code and should be removed. | closed | 2024-07-03T08:52:57Z | 2024-07-03T09:17:36Z | https://github.com/huggingface/datasets/issues/7022 | [
"maintenance"
] | albertvillanova | 0 |
modin-project/modin | data-science | 7,217 | Update docs as to when Modin operators work best | closed | 2024-04-25T11:33:56Z | 2024-04-26T13:32:37Z | https://github.com/modin-project/modin/issues/7217 | [
"documentation 📜"
] | YarShev | 4 | |
pytest-dev/pytest-cov | pytest | 136 | "NameError: name 'exc' is not defined" in pytest-cov.pth | The name `exc` in the last `format()` call in `src/pytest-cov.pth` is not defined. I think the `except ImportError` should be changed to `except ImportError as exc`?
| closed | 2016-10-09T19:16:00Z | 2016-10-10T19:33:09Z | https://github.com/pytest-dev/pytest-cov/issues/136 | [] | sscherfke | 1 |
pywinauto/pywinauto | automation | 1,071 | libatspi2 for Linux application | Hi!
Now I'm trying to access the attributes of objects in a Qt application using your library. This is a common goal.
In particular, I'm trying to expand interaction with IATSPI. There is atspi_accessible_get_accessible_id method in the libatspi2 library. The question is how to change the LIB variable in the IATSPI class so that it takes the second version of the libatspi library?
The packages libatspi2.0-0, libatspi2.0-dev are installed, entering these names does not help, the script does not see them.
Perhaps there is an easier way to get all attributes, properties of Qt application objects?
I would be grateful for your advice.
## Specifications
- Pywinauto version: 0.6.8 atspi
- Python version and bitness: 2.7 and 3.7
- Platform and OS: Astra 1.6 Linux
| open | 2021-05-17T13:28:20Z | 2021-05-17T16:57:53Z | https://github.com/pywinauto/pywinauto/issues/1071 | [
"enhancement",
"New Feature",
"refactoring_critical",
"atspi"
] | OtorioO | 1 |
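One way to point `ctypes` at a specific shared object is to try versioned SONAMEs before the bare name; a sketch (the candidate names are assumptions and vary by distro):

```python
import ctypes
import ctypes.util

def load_first(*names):
    # Try each candidate (library name or full SONAME) until one loads;
    # the candidate names used below are assumptions, not verified paths.
    for name in names:
        path = ctypes.util.find_library(name) or name
        try:
            return ctypes.CDLL(path)
        except OSError:
            continue
    return None

# e.g. load_first("libatspi.so.0", "atspi") on an AT-SPI2 system
print(load_first("no_such_library_xyz") is None)  # True
```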
KevinMusgrave/pytorch-metric-learning | computer-vision | 289 | Using a custom collate_fn with testers and logging_presets | Hello, it is not clear how I can use an RNN architecture with this package. Basically, my problem is that I have to pad the sequences in each batch, but I don't have access to the DataLoader. How should I approach this problem? | closed | 2021-03-09T17:06:44Z | 2021-03-19T14:14:47Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/289 | [
"documentation"
] | levtelyatnikov | 29 |
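On the padding half of the question, the usual plain-PyTorch answer is a custom `collate_fn`; whether the package's testers expose a hook for it needs checking against the pytorch-metric-learning API, so this is only a sketch:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch):
    # batch is a list of (variable-length 1-D sequence, label) pairs.
    seqs, labels = zip(*batch)
    lengths = torch.tensor([len(s) for s in seqs])
    padded = pad_sequence(seqs, batch_first=True)  # zero-pad to max length
    return padded, lengths, torch.tensor(labels)

# Usage sketch: DataLoader(dataset, batch_size=32, collate_fn=pad_collate)
padded, lengths, labels = pad_collate([(torch.ones(3), 0), (torch.ones(5), 1)])
print(padded.shape)  # torch.Size([2, 5])
```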
FujiwaraChoki/MoneyPrinterV2 | automation | 104 | FileNotFoundError: MoneyPrinterV2/venv/TTS/.models.json | i got an error while selecting this options.

Traceback (most recent call last):
File "D:\dari-github\MoneyPrinterV2\src\main.py", line 436, in <module>
main()
File "D:\dari-github\MoneyPrinterV2\src\main.py", line 151, in main
tts = TTS()
File "D:\dari-github\MoneyPrinterV2\src\classes\Tts.py", line 41, in __init__
self._model_manager = ModelManager(models_json_path)
File "D:\dari-github\MoneyPrinterV2\venv\lib\site-packages\TTS\utils\manage.py", line 56, in __init__
self.read_models_file(models_file)
File "D:\dari-github\MoneyPrinterV2\venv\lib\site-packages\TTS\utils\manage.py", line 68, in read_models_file
self.models_dict = read_json_with_comments(file_path)
File "D:\dari-github\MoneyPrinterV2\venv\lib\site-packages\TTS\config\__init__.py", line 17, in read_json_with_comments
with fsspec.open(json_path, "r", encoding="utf-8") as f:
File "D:\dari-github\MoneyPrinterV2\venv\lib\site-packages\fsspec\core.py", line 105, in __enter__
f = self.fs.open(self.path, mode=mode)
File "D:\dari-github\MoneyPrinterV2\venv\lib\site-packages\fsspec\spec.py", line 1310, in open
f = self._open(
File "D:\dari-github\MoneyPrinterV2\venv\lib\site-packages\fsspec\implementations\local.py", line 200, in _open
return LocalFileOpener(path, mode, fs=self, **kwargs)
File "D:\dari-github\MoneyPrinterV2\venv\lib\site-packages\fsspec\implementations\local.py", line 364, in __init__
self._open()
File "D:\dari-github\MoneyPrinterV2\venv\lib\site-packages\fsspec\implementations\local.py", line 369, in _open
self.f = open(self.path, mode=self.mode)
FileNotFoundError: [Errno 2] No such file or directory: 'D:/dari-github/MoneyPrinterV2/venv/TTS/.models.json'
What should I do?
Please, anyone. | closed | 2025-02-17T09:36:23Z | 2025-02-20T08:17:34Z | https://github.com/FujiwaraChoki/MoneyPrinterV2/issues/104 | [] | omozousha | 10
pytest-dev/pytest-django | pytest | 371 | Request for pytest-django module name warning in docs | I think the docs need a big warning to install `pytest-django` and not `django-pytest`. That would have saved me a bit of time! Thanks!
| closed | 2016-08-14T15:33:13Z | 2016-08-15T17:13:18Z | https://github.com/pytest-dev/pytest-django/issues/371 | [] | irothschild | 2 |
laughingman7743/PyAthena | sqlalchemy | 109 | a describe {table_name} occurred an error | File "pandas/_libs/parsers.pyx", line 881, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 896, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 950, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 937, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2132, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 7, saw 2
===========================
The cursor class is PandasCursor.
If the cursor class is set to Cursor, the error is not raised.
My schema is as below:

| closed | 2019-12-06T07:02:54Z | 2020-01-05T14:45:14Z | https://github.com/laughingman7743/PyAthena/issues/109 | [] | balzaron | 3 |
indico/indico | sqlalchemy | 6,680 | "Apply" button translate | **Describe the bug**
I tried to translate "Apply", but it is not affected by changing the language.
**To Reproduce**
Steps to reproduce the behavior:
0. Run indico.
1. Go to 'Room booking'
2. Click on 'List of Rooms'
3. Scroll down to 'Building'
4. See that "Apply" is still in English and does not change after setting the language to a non-English one.
**Screenshots**

**Additional context**
Translating doesn't affect this button.
| closed | 2024-12-22T16:38:23Z | 2024-12-22T19:05:42Z | https://github.com/indico/indico/issues/6680 | [
"bug"
] | aforouz | 4 |
huggingface/pytorch-image-models | pytorch | 1,740 | Can not find model of 'vit_base_patch16_clip_224.openai_ft_in1k' in my TRANSFORMERS_CACHE | I cannot find the cached model after creating a model, namely 'vit_base_patch16_clip_224.openai_ft_in1k'. I have set TRANSFORMERS_CACHE for huggingface models. Where does timm cache these models?
Thanks!
"bug"
] | Luciennnnnnn | 1 |
deepspeedai/DeepSpeed | deep-learning | 5,708 | [BUG] 1-bit LAMB not compatible with bf16 | **Describe the bug**
When training with 1-bit LAMB, the DeepSpeed ModelEngine complains that loss scaling is not enabled. However bf16 does not require loss scaling so does not support this (hence it is not enabled). The code should be updated to allow an exception for bf16. | open | 2024-06-28T21:41:20Z | 2024-06-28T21:41:20Z | https://github.com/deepspeedai/DeepSpeed/issues/5708 | [
"bug",
"training"
] | catid | 0 |
Johnserf-Seed/TikTokDownload | api | 133 | [BUG] An error is reported but it still runs | 
The console reports an error, but downloads still complete normally. I don't know what impact this has.
| closed | 2022-04-17T04:13:34Z | 2022-04-18T13:28:31Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/133 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | Dongdong0112 | 1 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,450 | [Bug]: How to receive RGBA images in ControlNet? | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
My ControlNet model's hint_channel is 4, but ControlNet here only supports 3-channel input.

I want to know where the image-reading code is in ControlNet. It's too hard to find. Please!
### Steps to reproduce the problem
None
### What should have happened?

### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
webui 1.7.0
### Console logs
```Shell
None
```
### Additional information
_No response_ | closed | 2024-04-07T08:31:24Z | 2024-04-08T02:25:09Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15450 | [
"bug-report"
] | Tramac | 0 |
deepspeedai/DeepSpeed | pytorch | 5,689 | Running out of CPU memory. Dataset is loaded for each created process | **Describe the bug**
I want to pretrain a BERT model on 8 A100 40G GPUs. The problem I have is that I run out of CPU memory (not GPU memory), and I cannot understand why. I am trying to load a 75G dataset, and DeepSpeed will load it once per created process. **Is there a way to load the dataset only once in CPU memory?** My setup has up to 500G of CPU memory.
Also, what would be the optimal parameters for such training? And why can I not use a larger batch size when I am using the ZeRO optimizer stage 2?
**To Reproduce**
deepspeed config file:
{
  "train_batch_size": 2048,
  "gradient_accumulation_steps": 32,
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": 1.0e-04,
      "betas": [0.9, 0.98],
      "eps": 1e-6,
      "weight_decay": 0.01
    }
  },
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    },
    "contiguous_gradients": true,
    "overlap_comm": true,
    "reduce_scatter": true,
    "reduce_bucket_size": 2e8,
    "allgather_bucket_size": 2e8,
    "allgather_partitions": true
  },
  "zero_allow_untested_optimizer": true
}
model = BertForMaskedLM(config=config).to(device)

if self.deepspeed_conf is not None:
    self.model, self.optimizer, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=model.parameters(),
        config_params=self.deepspeed_conf
    )

for b in tqdm(self.dataloader):
    data = {k: v.to(self.device) for k, v in b.items()}
    labels = data.pop("labels")
    output = self.model(**data)
    loss = self.criterion(output["logits"], labels)
    self.model.backward(loss)
    self.model.step()
**Expected behavior**
1) Train without running out of memory
2) Expected to be able to use a larger batch size with the ZeRO stage 2 optimization, but it runs out of memory.
**ds_report output**
8 40G A100 GPUs
64 CPU
500G memory
**Launcher context**
running: deepspeed train.py
Thanks a lot in advance! | closed | 2024-06-21T09:09:45Z | 2024-11-14T18:39:18Z | https://github.com/deepspeedai/DeepSpeed/issues/5689 | [
"bug",
"training"
] | MikeMitsios | 1 |
plotly/dash-table | dash | 608 | style_data_condtional - "row_index" won't accept vector of values | I am using `dashTable` 4.0.2 and below is an example of highlighting multiple `dashDataTable` rows in Dash for R.
```
library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)
# Load data
data(mtcars)
# Create DashTable
carsDashDT <- dashDataTable(
id = "cars-table",
columns = lapply(colnames(mtcars),
function(colName){
list(
id = colName,
name = colName
)
}),
data = df_to_list(mtcars),
row_selectable = "multi",
)
# App Start ------------------------------
# Initate Application
app <- Dash$new()
# Create Layout ------------------------------
app$layout(
htmlDiv(list(
carsDashDT
))
)
# Callback Start ------------------------------
# Highlight dashTable rows
app$callback(
output(id = "cars-table", property = "style_data_conditional"),
params = list(
input(id = "cars-table", property = "selected_rows")
),
function(rows) {
style_data_conditional <- NULL
for (i in 1:length(rows)) {
rowIndex <- rows[[i]]
style_data_conditional_append = list(list(
"if" = list("row_index" = rowIndex),
"backgroundColor" = "#B0BED9"
))
style_data_conditional <- c(style_data_conditional,
style_data_conditional_append)
}
return(style_data_conditional)
}
)
app$run_server()
```
The example provided at https://dashr.plot.ly/datatable/style under _Conditional Formatting - Highlighting Certain Rows_ demonstrates how to highlight a single row with a numeric input.
Based on the example provided in the above link, I experimented with multiple different syntax but couldn't achieve highlighting multiple rows without using a `loop`.
The documentation for _dashDataTable_ > _style_data_conditional_ mentions:
_row_index (numeric | a value equal to: 'odd', 'even'; optional)_
It seems like `row_index` is currently limited to accepting single numeric input. It would be very nice to be able to apply conditional styles to multiple rows by feeding a vector/list of values. Rather than the `loop` in the `callback`, something similar to:
```
style_data_conditional_append = list(list(
"if" = list("row_index" = rows)
))
```
In case I am missing it, I would appreciate if you can provide the correct syntax. Thanks. | closed | 2019-10-01T17:33:28Z | 2020-04-21T02:53:24Z | https://github.com/plotly/dash-table/issues/608 | [
"dash-type-enhancement"
] | CanerIrfanoglu | 1 |
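For reference, the Python flavor of Dash typically builds the same multi-row highlight with a list comprehension instead of a loop; a plain-Python sketch (untested against the R bindings, and the `rows` indices are assumed to be plain integers):

```python
def highlight_rows(rows, color="#B0BED9"):
    # One conditional-style entry per selected row index; the whole
    # list is returned as style_data_conditional.
    return [{"if": {"row_index": i}, "backgroundColor": color} for i in rows]

print(highlight_rows([0, 2, 5]))
```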
microsoft/JARVIS | pytorch | 78 | Plain text token in the repo | There is a token pushed in plain text to repo. Is it some expired one or should be deleted?
https://github.com/microsoft/JARVIS/blame/main/server/huggingface.py#:~:text=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 | closed | 2023-04-06T19:27:00Z | 2023-04-06T19:30:26Z | https://github.com/microsoft/JARVIS/issues/78 | [] | januszwalnik | 1 |
absent1706/sqlalchemy-mixins | sqlalchemy | 21 | Missing attributes in SerializeMixin | It seems that the `SerializeMixin` is only serializing the columns and relationships in its `to_dict` method. This misses attributes defined on the model such as Hybrid Attributes.
A reference implementation that handles it can be found here: https://wakatime.com/blog/32-flask-part-1-sqlalchemy-models-to-json
Unfortunately in order to not miss anything, one has to use `dir()` and remove all the properties that are already accounted for in the Columns and Relationships, as well as dropping everything starting with an underscore.
Other than that, I really like your Mixin library and plan to use it in my upcoming project :)
Thanks! | closed | 2019-05-20T12:25:36Z | 2020-03-31T16:23:10Z | https://github.com/absent1706/sqlalchemy-mixins/issues/21 | [] | omrihar | 4 |
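The `dir()`-based approach described above can be sketched without SQLAlchemy (the model below is a plain-Python stand-in; a real implementation would also walk mapper columns and relationships):

```python
class Base:
    def to_dict(self):
        # Collect plain attributes plus @property values, dropping
        # private names, as the dir()-based approach describes.
        data = {}
        for name in dir(self):
            if name.startswith("_") or name == "to_dict":
                continue
            value = getattr(self, name)
            if callable(value):
                continue
            data[name] = value
        return data

class User(Base):
    def __init__(self):
        self.first = "Ada"
        self.last = "Lovelace"

    @property
    def full_name(self):  # stands in for a SQLAlchemy hybrid attribute
        return f"{self.first} {self.last}"

print(User().to_dict())
# {'first': 'Ada', 'full_name': 'Ada Lovelace', 'last': 'Lovelace'}
```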
xuebinqin/U-2-Net | computer-vision | 153 | Merge the Pull requests | @NathanUA
i have noticed that you are not merging the Pull requests, some of them are very important to fix the bugs and reduce memory.
did you abandon this repository? | closed | 2021-01-23T22:22:25Z | 2021-02-03T14:24:50Z | https://github.com/xuebinqin/U-2-Net/issues/153 | [] | seekingdeep | 2 |
Lightning-AI/pytorch-lightning | pytorch | 20,296 | Error encountered while using multiple optimizers inside a loop. | ### Bug description
I am working on a big project for which I need to call manual_backward and optimizer.step inside a loop for every batch.
Here is some reference code for a training_function that works, and another that doesn’t:
```
def loss_fn_working(self, batch: Any, batch_idx: int):
    env = self.envs[self.p]
    actions = None
    prev_log_rewards = torch.empty(env.done.shape[0]).type_as(env.state)
    prev_forward_logprob = None
    loss = torch.tensor(0.0, requires_grad=True)
    TERM = env.terminal_index
    while not torch.all(env.done):
        active = ~env.done
        forward_logprob, back_logprob = self.forward(env)
        log_rewards = -self.get_rewards()
        if actions is not None:
            error = log_rewards - prev_log_rewards[active]
            error += back_logprob.gather(1, actions[actions != TERM, None]).squeeze(1)
            error += prev_forward_logprob[active, -1]
            error -= forward_logprob[:, -1].detach()
            error -= (
                prev_forward_logprob[active]
                .gather(1, actions[actions != TERM, None])
                .squeeze(1)
            )
            loss = loss + F.huber_loss(
                error,
                torch.zeros_like(error),
                delta=1.0,
                reduction="none",
            )
            loss = loss * log_rewards.softmax(0)
            loss = loss.mean(0)
        actions = self.sample_actions(forward_logprob, active, TERM)
        env.step(actions)
        # save previous log-probs and log-rewards
        if prev_forward_logprob is None:
            prev_forward_logprob = torch.empty_like(forward_logprob)
        prev_forward_logprob[active] = forward_logprob
        prev_log_rewards[active] = log_rewards
    return loss, log_rewards


def loss_fn_not_working(self, batch, batch_size, prefix, batch_idx):
    gfn_opt, rep_opt = self.optimizers()
    # some code here
    losses = []
    rep_losses = []
    prev_forward_log_prob = None
    prev_stop_prob = torch.zeros(batch_size, device='cuda')
    loss = torch.tensor(0.0, requires_grad=True, device='cuda')
    active = torch.ones((batch_size,), dtype=bool, device='cuda')
    graph = torch.diag_embed(torch.ones(batch_size, self.n_dim)).cuda()
    while active.any():
        graph_hat = graph[active].clone()
        adj_mat = graph_hat.clone()
        rep_loss, latent_var = self.rep_model(torch.cat((adj_mat, next_id.unsqueeze(-1)), axis=-1))
        rep_loss_tensor = torch.tensor(0.0, requires_grad=True) + rep_loss
        forward_log_prob, Fs_masked, back_log_prob, next_prob, stop_prob = (
            self.gfn_model(latent_var)
        )
        with torch.no_grad():
            actions = self.sample_actions(Fs_masked)
            graph = self.update_graph(actions)
        #######################
        log_rewards = -self.energy_model(graph_hat, batch, False, self.current_epoch)
        if counter == 0:
            loss = self.calculate_loss(loss, log_rewards, prev_log_rewards, back_log_prob, actions, stop_prob, prefix)
        else:
            loss = self.calculate_loss(loss, log_rewards, prev_log_rewards, back_log_prob, actions, stop_prob, prefix, prev_stop_prob[active], prev_forward_log_prob[active])
        losses.append(loss.item())
        rep_losses.append(rep_loss.item())
        if prefix == 'train':
            rep_opt.zero_grad()
            self.manual_backward(rep_loss_tensor, retain_graph=True)
            rep_opt.step()
            gfn_opt.zero_grad()
            self.manual_backward(loss)
            gfn_opt.step()
        with torch.no_grad():
            active[indices_to_deactivate] = False  # active updated appropriately
            indices = indices[~current_stop]
            # active_indices = ~current_stop  # Not being used?
            next_id = F.one_hot(indices, num_classes=self.n_dim)
            prev_log_rewards = log_rewards[~current_stop]
        counter += 1
        if prev_forward_log_prob is None:
            prev_forward_log_prob = torch.empty_like(forward_log_prob)
        prev_forward_log_prob[active] = forward_log_prob[~current_stop]
        prev_stop_prob[active] = stop_prob[~current_stop]
    return losses, graph, log_rewards, counter, rep_losses


def calculate_loss(
    self,
    loss,
    log_rewards,
    prev_log_rewards,
    back_log_prob,
    actions,
    stop_prob,
    prefix,  # Added for debugging
    prev_stop_prob=None,
    prev_forward_log_prob=None,
):
    error = torch.tensor(0.0, requires_grad=True) + log_rewards - prev_log_rewards  # [B]
    error = error + (back_log_prob).gather(1, actions.unsqueeze(1)).squeeze(1)  # P_B(s|s')
    error = error - stop_prob.detach()  # P(s_f|s')
    if prev_stop_prob is not None and prev_forward_log_prob is not None:
        error = error + prev_stop_prob.detach()  # P(s_f|s)
        error = error - (prev_forward_log_prob).gather(
            1, actions.unsqueeze(1)
        ).squeeze(1)
    loss = loss + F.huber_loss(  # accumulate losses
        error,
        torch.zeros_like(error),
        delta=1.0,
        reduction="none",
    )
    loss = loss * log_rewards.softmax(0)
    return loss.mean(0)
```
Here, the main variable of importance is `prev_forward_log_prob` in `loss_fn_not_working`. The loss is being calculated using calculate_loss() function.
I have kept manual_optimization as True.
When using `loss_fn_not_working`, and keeping retain_graph as false for loss, I get the following error:
"Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward."
If I do keep `retain_graph` as True for loss (i.e. the loss for the second optimizer), I get the following error instead:
"
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64, 10]], which is output 0 of AsStridedBackward0, is at version 3; expected version 1 instead.
"
If I use loss_fn_working, there is no problem. So, I understand that the problem arises when using backward calls inside the loop. I am not really making any in-place operations, so why is the second loss_fn not working?
(PS: I am on version 1.8.3 because of some other libraries. Is it solved in newer version?)
### What version are you seeing the problem on?
v1.x
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64, 10]], which is output 0 of AsStridedBackward0, is at version 3; expected version 1 instead.
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_ | open | 2024-09-23T05:32:25Z | 2024-09-23T05:32:25Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20296 | [
"bug",
"needs triage"
] | RAraghavarora | 0 |
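A minimal sketch of the first error, outside Lightning (plain PyTorch; illustrative, not the reporter's model): the accumulated `loss` keeps references to earlier iterations' graphs, so the second `backward()` re-traverses buffers the first one freed.

```python
import torch

w = torch.ones(1, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss = torch.tensor(0.0, requires_grad=True)
failed = False
for step in range(2):
    loss = loss + (w * w).sum()  # the accumulator drags old graphs along
    opt.zero_grad()
    try:
        loss.backward()          # 2nd iteration walks into the freed graph
    except RuntimeError:
        failed = True
        break
    opt.step()

print(failed)  # True
```

Detaching the accumulator each iteration (`loss = loss.detach() + new_term`) or backpropagating only the per-iteration term typically avoids both error messages without needing `retain_graph=True`.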
dynaconf/dynaconf | flask | 782 | RFC: Add a way to disable envvar loading and default prefix | It's a great library and I am enjoying it. However I would like to make a small feature request if it is not already there. I want a way to disable environmental variable overwriting the variables declared in config files.
I removed the below line from the Dynaconf() class parameter , still it's the same.
envvar_prefix = "DYNACONF"
I ever created a new "envvar_prefix" however , it seems the env prefix "DYNACONF_.." is a default and always being checked even if you have a different envvar_prefix set.
Could you please consider below as the feature requests if there is no answer:
1) Is there a way to completely disable the environmental variable overwriting the existing ?
2) Is there a way to completely disable the check up of environmental variable "DYNACONF_" when different envvar_prefix is like below ?
envvar_prefix = "MYAPP"
_Originally posted by @swagatsourav in https://github.com/dynaconf/dynaconf/discussions/781_ | closed | 2022-08-02T09:00:29Z | 2024-07-08T18:42:01Z | https://github.com/dynaconf/dynaconf/issues/782 | [
"RFC"
] | rochacbruno | 7 |
itamarst/eliot | numpy | 410 | Document/handle large numpy arrays in JSON serializer | A numpy array with 10M entries isn't rare, but shouldn't be serialized to JSON.
1. By default only log a sample.
2. Document other ways of handling it (writing to large file, omitting array completely). | closed | 2019-05-09T18:13:57Z | 2019-05-19T23:52:46Z | https://github.com/itamarst/eliot/issues/410 | [] | itamarst | 0 |
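The "log only a sample" default could look roughly like the sketch below (plain lists as a stand-in; a real implementation would special-case `numpy.ndarray`):

```python
import json

def summarize(obj, limit=5):
    # Replace huge sequences with their length plus a small sample
    # instead of serializing every element to JSON.
    if isinstance(obj, (list, tuple)) and len(obj) > limit:
        return {"len": len(obj), "sample": list(obj[:limit])}
    return obj

print(json.dumps(summarize(list(range(10)))))
# {"len": 10, "sample": [0, 1, 2, 3, 4]}
```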
tensorpack/tensorpack | tensorflow | 1,331 | How to stop training of example FasterRCNN | Hello, I want to stop FasterRCNN training, but when I use `kill <pid>',some processes are still alive.
```
1405 pts/0 S 0:00 /usr/bin/python3 -c from multiprocessing.semaphore_tracker import main;main(3)
1441 pts/0 Sl 0:46 /usr/bin/python3 -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=4, pipe_handle=19) --multiprocessing-fork
1442 pts/0 Sl 0:43 /usr/bin/python3 -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=4, pipe_handle=20) --multiprocessing-fork
1443 pts/0 Sl 0:51 /usr/bin/python3 -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=4, pipe_handle=21) --multiprocessing-fork
1444 pts/0 Sl 0:40 /usr/bin/python3 -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=4, pipe_handle=22) --multiprocessing-fork
1445 pts/0 Sl 0:45 /usr/bin/python3 -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=4, pipe_handle=23) --multiprocessing-fork
1597 pts/0 Sl 0:05 /usr/bin/python3 -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=4, pipe_handle=31) --multiprocessing-fork
```
Is there any better way to stop training? | closed | 2019-09-24T02:08:45Z | 2019-09-24T03:26:01Z | https://github.com/tensorpack/tensorpack/issues/1331 | [
"installation/environment"
] | linzaihui | 4 |
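A more reliable stop is to signal the whole process group so that `multiprocessing` children die with the trainer; a POSIX-only sketch (using `sleep` as a stand-in for the training process):

```python
import os
import signal
import subprocess

# Start a child in its own process group (stand-in for the trainer),
# then signal the group: killpg reaches every forked worker too.
child = subprocess.Popen(["sleep", "30"], preexec_fn=os.setsid)
os.killpg(os.getpgid(child.pid), signal.SIGTERM)
child.wait()
print(child.returncode)  # -15: terminated by SIGTERM
```

The shell equivalent is `kill -- -<pgid>`, where the process group id can be read with `ps -o pgid= <pid>`.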
huggingface/pytorch-image-models | pytorch | 2,237 | [BUG] MultiQueryAttention2d incorrect Upsample usage (MobileNet v4) | **Describe the bug**
While going through the MobileNet v4 implementation, I noticed that the MultiQueryAttention2d might not work correctly with query_strides > 1.
At https://github.com/huggingface/pytorch-image-models/blob/474c9cf768345f2b9ec74c10f4f4d5545ad26ea0/timm/layers/attention2d.py#L193
The upsample is defined as `nn.Upsample(self.query_strides, mode='bilinear', align_corners=False))`
While (I'm pretty sure) it should be `nn.Upsample(scale_factor=self.query_strides, mode='bilinear', align_corners=False))`
As the first argument is actual size and the second one `scale_factor` is the one we want here.
**Additional context**
* https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html
`torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)`
| closed | 2024-07-19T18:32:28Z | 2024-07-20T00:12:03Z | https://github.com/huggingface/pytorch-image-models/issues/2237 | [
"bug"
] | hassonofer | 5 |
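The difference is easy to demonstrate: the first positional argument of `nn.Upsample` is `size`, not `scale_factor` (a small sketch):

```python
import torch
import torch.nn as nn

x = torch.zeros(1, 1, 8, 8)

# Positional argument is `size`: output is pinned to 2x2...
as_size = nn.Upsample(2, mode="bilinear", align_corners=False)(x)
# ...while scale_factor=2 doubles the spatial dims, as intended.
as_scale = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)(x)

print(tuple(as_size.shape), tuple(as_scale.shape))  # (1, 1, 2, 2) (1, 1, 16, 16)
```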
kizniche/Mycodo | automation | 1,096 | No internet connection after restoring previous version of Mycodo | After restoring (from version 8.12.6) a version 8.11.0 (see #1092) the `upgrade` page shows the message below and prevents an update of Mycodo.
```
No internet connection detected. To upgrade Mycodo automatically, you will need an internet connection. Refresh the page when one is connected.
```
The raspberry is connected to the internet and I have verified this with `ping`.
### Versions:
- Mycodo Version: 8.11.0
- Model: Raspberry Pi 4 Model B Rev 1.4
- Release:
- Distributor ID: Raspbian
- Description: Raspbian GNU/Linux 10 (buster)
- Release: 10
- Codename: buster
Firmware:
- Aug 3 2021 18:14:56
- Copyright (c) 2012 Broadcom
- version 40787ee5905644f639a2a0f6e00ae12e517a2211 (clean) (release) (start)
### Reproducibility
Please list specific setup details that are involved and the steps to reproduce the behavior:
1. Restore a backup of a previous version (in my case from `8.12.6` to `8.11`)
2. Upgrade button becomes red (to indicate there is an update)
3. See update page presenting the error
4.
### Expected behavior
Not receiving a message about not having an internet connection (even though the Pi is connect to the internet) and being able to upgrade.
### Screenshots
https://imgur.com/FBVwLSk
| closed | 2021-09-26T14:09:33Z | 2021-12-03T04:06:41Z | https://github.com/kizniche/Mycodo/issues/1096 | [
"bug",
"Fixed and Committed"
] | sjoerdschouten | 8 |
sktime/pytorch-forecasting | pandas | 1,187 | Handling "Large Datasets" that do not fit into memory | - PyTorch-Forecasting version: '0.10.2'
- PyTorch version: '1.13.0'
- Python version: 3.9.13
- Operating System: Ubuntu
Hi,
thank you for this amazing work! I am really excited to use the library; however, I am running into a slight problem: the actual dataset I have is about 200 million rows and cannot fit into a pandas DataFrame. Currently, I am working with vaex. I am therefore wondering if there are any suggestions or ideas to support this. I would also be open to working on this as a contribution. Thanks in advance!
Edit:
Don't know how I missed the info box for "Large Datasets". Seeing the suggestion there, maybe it would be helpful to write an example tutorial that explains this. I will try applying the suggestion to my use case, and if there is interest, I could contribute a tutorial or discuss other extensions for large datasets.
| open | 2022-11-16T15:51:10Z | 2025-01-22T23:59:07Z | https://github.com/sktime/pytorch-forecasting/issues/1187 | [] | nilsleh | 4 |
ultralytics/yolov5 | pytorch | 12,663 | png format training error reporting | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
WARNING ⚠️ data\images2\999.jpg: ignoring corrupt image/label: [Errno 2] No such file or directory: 'data\\images2\\999.jpg'. I use the PNG format, so why does the error show a JPG suffix, and how can I fix it?
### Additional
_No response_ | closed | 2024-01-24T04:02:23Z | 2024-01-24T11:48:10Z | https://github.com/ultralytics/yolov5/issues/12663 | [
"question"
] | wq247726404 | 2 |
autokey/autokey | automation | 84 | <alt>+<tab> doesn't work | ## Classification:
Bug
## Reproducibility:
Always
## Summary
`keyboard.send_keys("<alt>+<tab>")` doesn't work on Ubuntu (Unity interface).
## Steps to Reproduce
Create a new script with the following content:
`keyboard.send_keys("<alt>+<tab>")`
Open Gedit, select a couple of lines and execute the script.
## Expected Results
Window should be changed.
## Actual Results
The selected lines get replaced by a tab and no window is changed.
`autokey-gtk --verbose` output:
`2017-06-20 13:27:16,257 DEBUG - interface - Send modified key: modifiers: [u'<alt>'] key: <tab>`
## Version
AutoKey (Qt) Version 0.90.4
Installed via: apt-get.
Distro:
Ubuntu 16.04, 64bits
| closed | 2017-06-20T16:29:18Z | 2023-10-27T10:29:12Z | https://github.com/autokey/autokey/issues/84 | [
"wontfix",
"invalid"
] | leonardofl | 7 |
huggingface/datasets | numpy | 6,563 | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py) | ### Describe the bug
Yep, it's not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_size 4 --num_train_epochs 1 --learning_rate 1.41e-5 --gradient_accumulation_steps 8 --seq_length 4096 --output_dir output --log_with wandb
Traceback (most recent call last):
File "/home/trainer/sft_train.py", line 22, in <module>
from datasets import load_dataset
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 66, in <module>
from .arrow_reader import ArrowReader
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_reader.py", line 30, in <module>
from .download.download_config import DownloadConfig
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/__init__.py", line 9, in <module>
from .download_manager import DownloadManager, DownloadMode
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/download_manager.py", line 31, in <module>
from ..utils import tqdm as hf_tqdm
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/__init__.py", line 19, in <module>
from .info_utils import VerificationMode
File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 5, in <module>
from huggingface_hub.utils import insecure_hashlib
ImportError: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (/home/trainer/llm-train/lib/python3.8/site-packages/huggingface_hub/utils/__init__.py)
```
### Steps to reproduce the bug
Using `datasets==2.16.1` and `huggingface_hub== 0.17.3`, load a dataset with `load_dataset`.
### Expected behavior
The dataset should be (downloaded - if needed - and) returned.
### Environment info
```text
trainer@a311ae86939e:/mnt$ pip show datasets
Name: datasets
Version: 2.16.1
Summary: HuggingFace community-driven open-source library of datasets
Home-page: https://github.com/huggingface/datasets
Author: HuggingFace Inc.
Author-email: thomas@huggingface.co
License: Apache 2.0
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: packaging, pyyaml, multiprocess, pyarrow-hotfix, pandas, pyarrow, xxhash, dill, numpy, aiohttp, tqdm, fsspec, requests, filelock, huggingface-hub
Required-by: trl, lm-eval, evaluate
trainer@a311ae86939e:/mnt$ pip show huggingface_hub
Name: huggingface-hub
Version: 0.17.3
Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub
Home-page: https://github.com/huggingface/huggingface_hub
Author: Hugging Face, Inc.
Author-email: julien@huggingface.co
License: Apache
Location: /home/trainer/llm-train/lib/python3.8/site-packages
Requires: requests, pyyaml, packaging, typing-extensions, tqdm, filelock, fsspec
Required-by: transformers, tokenizers, peft, evaluate, datasets, accelerate
trainer@a311ae86939e:/mnt$ huggingface-cli env
Copy-and-paste the text below in your GitHub issue.
- huggingface_hub version: 0.17.3
- Platform: Linux-6.5.13-7-MANJARO-x86_64-with-glibc2.29
- Python version: 3.8.10
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/trainer/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: wasertech
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.1.2
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 10.2.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.24.4
- pydantic: N/A
- aiohttp: 3.9.1
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/trainer/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/trainer/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/trainer/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
``` | closed | 2024-01-06T02:28:54Z | 2024-03-14T02:59:42Z | https://github.com/huggingface/datasets/issues/6563 | [] | wasertech | 7 |
Python3WebSpider/ProxyPool | flask | 171 | Tested it: HTTPS is not supported, but most websites nowadays use HTTPS | closed | 2022-06-06T08:47:37Z | 2023-05-04T16:54:03Z | https://github.com/Python3WebSpider/ProxyPool/issues/171 | [
"bug"
] | hilbp | 3 | |
jina-ai/clip-as-service | pytorch | 426 | Model accuracy after quantization (fp16) | I was wondering if the model accuracy after converting the optimized graph to FP16 has been evaluated. I am seeing significant accuracy drops (on a fine-tuned BERT) after converting the graph. This might be a BERT issue that has to be solved during training (quantization preparation), but I am using the bert-as-service code for graph optimization right now.
Also, I tried to quantize the BERT graph fine-tuned on SQuAD 2.0, which should return probability values as floats. However, all of the outputs are NaN or Inf after graph conversion. Could you point me in the direction of the error here?
Thanks so much! | open | 2019-07-29T00:55:15Z | 2019-07-29T00:55:15Z | https://github.com/jina-ai/clip-as-service/issues/426 | [] | volker42maru | 0 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 801 | Query is sending INSERT request with back_populates | I am not sure whether this is a bug, but it seems strange that running a query would send an INSERT request.
Here is a repository to replicate the bug. https://github.com/Noezor/example_flask_sqlalchemy_bug/
### Expected Behavior
For the models :
```python
from config import db
class Parent(db.Model):
__tablename__ = "parent"
id = db.Column(db.Integer(), primary_key = True)
name = db.Column(db.String, unique = True)
children = db.relationship("Child", back_populates="parent")
class Child(db.Model):
__tablename__ = "child"
id = db.Column(db.Integer(), primary_key = True)
name = db.Column(db.String(32), unique = True)
parent_id = db.Column(db.Integer, db.ForeignKey("parent.id"))
parent = db.relationship("Parent", back_populates="children")
```
And now the testscript.
```python
from config import db
from model import Child, Parent
parent = Parent(name='John')
if not Parent.query.filter(Parent.name == parent.name).one_or_none():
db.session.add(parent)
db.session.commit()
else :
parent = Parent.query.filter(Parent.name == parent.name).one_or_none()
child1 = Child(name="Toto",parent = parent)
if not Child.query.filter(Child.name == "Toto").one_or_none() :
db.session.add(child1)
db.session.commit()
else :
child1 = Child.query.filter(Child.name == "Toto").one_or_none()
print("success")
```
On the first run, the script should work fine. On the second run, once the database is populated, there should not be a problem either, as the queries will detect that the database already contains the added elements.
### Actual Behavior
On the first run, everything works fine. On the second run, however, the line `if not Child.query.filter(Child.name == "Toto").one_or_none():` sends an INSERT request.
```pytb
2020-02-01 15:23:28,552 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2020-02-01 15:23:28,552 INFO sqlalchemy.engine.base.Engine ()
2020-02-01 15:23:28,553 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1
2020-02-01 15:23:28,554 INFO sqlalchemy.engine.base.Engine ()
2020-02-01 15:23:28,555 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2020-02-01 15:23:28,556 INFO sqlalchemy.engine.base.Engine SELECT parent.id AS parent_id, parent.name AS parent_name
FROM parent
WHERE parent.name = ?
2020-02-01 15:23:28,557 INFO sqlalchemy.engine.base.Engine ('John',)
2020-02-01 15:23:28,560 INFO sqlalchemy.engine.base.Engine SELECT parent.id AS parent_id, parent.name AS parent_name
FROM parent
WHERE parent.name = ?
2020-02-01 15:23:28,560 INFO sqlalchemy.engine.base.Engine ('John',)
**2020-02-01 15:23:28,567 INFO sqlalchemy.engine.base.Engine INSERT INTO child (name, parent_id) VALUES (?, ?)**
2020-02-01 15:23:28,568 INFO sqlalchemy.engine.base.Engine ('Toto', 1)
2020-02-01 15:23:28,569 INFO sqlalchemy.engine.base.Engine ROLLBACK
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlite3.IntegrityError: UNIQUE constraint failed: child.name
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pionn/.vscode-insiders/extensions/ms-python.python-2020.1.58038/pythonFiles/ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "/home/pionn/.vscode-insiders/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 432, in main
run()
File "/home/pionn/.vscode-insiders/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "/usr/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
**File "/home/pionn/minimum_bug_sqlalchemy/test.py", line 13, in <module>
if not Child.query.filter(Child.name == "Toto").one_or_none() :**
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2784, in one_or_none
ret = list(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2854, in __iter__
self.session._autoflush()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 1407, in _autoflush
util.raise_from_cause(e)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 1397, in _autoflush
self.flush()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2171, in flush
self._flush(objects)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2291, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2255, in _flush
flush_context.execute()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 389, in execute
rec.execute(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 548, in execute
uow
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
mapper, table, insert)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 835, in _emit_insert_statements
execute(statement, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
exc_info
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (sqlite3.IntegrityError) UNIQUE constraint failed: child.name [SQL: 'INSERT INTO child (name, parent_id) VALUES (?, ?)'] [parameters: ('Toto', 1)]
```
I believe it happens through `back_populates`: if it is removed, the "bug" disappears. Same if I don't specify a parent for the child. (The hint in the traceback points at SQLAlchemy's autoflush: `child1` becomes pending in the session through the relationship's save-update cascade, and the query flushes it, emitting the INSERT, before running its SELECT.)
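The IntegrityError at the bottom of the traceback is then just an ordinary UNIQUE violation once that autoflushed INSERT runs a second time. A stripped-down repro of that final step using only the stdlib `sqlite3` module (no SQLAlchemy):

```python
import sqlite3

# Same schema constraint as the Child model: name is UNIQUE.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE child (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("INSERT INTO child (name) VALUES ('Toto')")  # first run succeeds
try:
    # Second run: the same INSERT the autoflush emits, now violating the constraint.
    conn.execute("INSERT INTO child (name) VALUES ('Toto')")
except sqlite3.IntegrityError as exc:
    print(exc)  # UNIQUE constraint failed: child.name
```

The SQLAlchemy-level workaround the error message itself suggests is wrapping the lookup in `with db.session.no_autoflush:` so the pending `child1` is not flushed before the SELECT.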
### Environment
* Operating system: Ubuntu 18.14
* Python version: 3.6.3
* Flask-SQLAlchemy version: 2.4.1
* SQLAlchemy version: 1.3.12
| closed | 2020-02-01T14:49:13Z | 2020-12-05T20:21:37Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/801 | [] | Noezor | 1 |
deepspeedai/DeepSpeed | machine-learning | 5,644 | [BUG] RuntimeError encountered when generating tokens from a Meta-Llama-3-8B-Instruct model initialized with 4-bit or 8-bit quantization | **Describe the bug**
I get ``RuntimeError: probability tensor contains either `inf`, `nan` or element < 0`` when running `deepspeed_engine.generate` on a `Meta-Llama-3-8B-Instruct` model initialized with either 4-bit or 8-bit quantization.
**To Reproduce**
Run the following code
```python
from typing import cast
from transformers.models.llama.modeling_llama import LlamaDecoderLayer
from deepspeed.module_inject.containers.llama import LLAMALayerPolicy
from functools import wraps
if not getattr(LLAMALayerPolicy, "is_get_hidden_heads_patched", False):
# Apply the monkey patch copied from https://github.com/microsoft/DeepSpeed/pull/5624
@wraps(LLAMALayerPolicy.get_hidden_heads)
def patched_get_hidden_heads(self: LLAMALayerPolicy) -> tuple[int, int, float, int]:
client_module = cast(LlamaDecoderLayer, self.client_module)
hidden_heads = (
client_module.self_attn.q_proj.in_features,
client_module.self_attn.num_heads,
client_module.input_layernorm.variance_epsilon,
client_module.mlp.gate_proj.out_features,
)
return hidden_heads
LLAMALayerPolicy.get_hidden_heads = patched_get_hidden_heads
setattr(LLAMALayerPolicy, "is_get_hidden_heads_patched", True)
from os import environ
rank = 0
environ["RANK"] = str(rank)
local_rank = 0
environ["LOCAL_RANK"] = str(local_rank)
world_size = 1
environ["WORLD_SIZE"] = str(world_size)
deepspeed_config = {
"zero_optimization": {
"load_from_fp32_weights": False,
"stage": 3,
"zero_quantized_weights": True,
"zero_quantized_nontrainable_weights": True,
},
"train_micro_batch_size_per_gpu": 1,
"fp16": {"enabled": True},
"weight_quantization": {
"quantized_initialization": {
# The same error occurs with either 4 or 8 bit quantization.
# "num_bits": 4,
"num_bits": 8,
"group_size": 64,
"group_dim": 1,
"symmetric": False,
}
},
}
from transformers.integrations.deepspeed import HfDeepSpeedConfig
hf_deepspeed_config = HfDeepSpeedConfig(deepspeed_config)
import deepspeed.comm
deepspeed.comm.init_distributed(
dist_backend="nccl",
rank=rank,
world_size=world_size,
auto_mpi_discovery=False,
init_method=f"tcp://127.0.0.1:9999",
)
from transformers import AutoModelForCausalLM
import torch
model_path = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
use_flash_attention_2=True,
)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path)
from deepspeed.runtime.config import DeepSpeedConfig
from deepspeed import DeepSpeedEngine
deepspeed_engine = DeepSpeedEngine(
args={},
model=model,
config=deepspeed_config,
config_class=DeepSpeedConfig(deepspeed_config),
)
from transformers import GenerationConfig
generation_config = GenerationConfig.from_pretrained(model_path, max_new_tokens=20)
with torch.no_grad():
deepspeed_engine.eval()
print(
tokenizer.batch_decode(
deepspeed_engine.generate(
torch.tensor(
[[tokenizer.bos_token_id]],
dtype=torch.int,
device=deepspeed_engine.device,
),
synced_gpus=True,
generation_config=generation_config,
)
)
)
```
Then the output is
```
[2024-06-12 06:39:01,111] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible
[2024-06-12 06:39:01,813] [INFO] [comm.py:637:init_distributed] cdb=None
[2024-06-12 06:39:01,814] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
Using /home/bo/.cache/torch_extensions/py311_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/bo/.cache/torch_extensions/py311_cu121/quantizer/build.ninja...
/home/bo/peftai/.venv/lib/python3.11/site-packages/torch/utils/cpp_extension.py:1967: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
Building extension module quantizer...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module quantizer...
Time to load quantizer op: 0.12922883033752441 seconds
Using quantizer for weights: CUDAQuantizer
The model was loaded with use_flash_attention_2=True, which is deprecated and may be removed in a future release. Please use `attn_implementation="flash_attention_2"` instead.
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
[2024-06-12 06:39:01,975] [INFO] [partition_parameters.py:562:patch_init_and_builtins] Enable Zero3 engine with INT4 quantization.
[2024-06-12 06:39:02,874] [INFO] [partition_parameters.py:345:__exit__] finished initializing model - num_params = 291, num_elems = 8.03B
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:09<00:00, 2.45s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[2024-06-12 06:39:13,036] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2024-06-12 06:39:13,038] [INFO] [logging.py:96:log_dist] [Rank 0] Creating ZeRO Offload
[2024-06-12 06:39:13,142] [INFO] [utils.py:779:see_memory_usage] DeepSpeedZeRoOffload initialize [begin]
[2024-06-12 06:39:13,142] [INFO] [utils.py:780:see_memory_usage] MA 8.08 GB Max_MA 10.05 GB CA 13.26 GB Max_CA 13 GB
[2024-06-12 06:39:13,143] [INFO] [utils.py:787:see_memory_usage] CPU Virtual Memory: used = 67.62 GB, percent = 13.4%
Parameter Offload: Total persistent parameters: 266240 in 65 params
[2024-06-12 06:39:13,256] [INFO] [utils.py:779:see_memory_usage] DeepSpeedZeRoOffload initialize [end]
[2024-06-12 06:39:13,257] [INFO] [utils.py:780:see_memory_usage] MA 8.08 GB Max_MA 8.08 GB CA 13.26 GB Max_CA 13 GB
[2024-06-12 06:39:13,257] [INFO] [utils.py:787:see_memory_usage] CPU Virtual Memory: used = 67.62 GB, percent = 13.4%
[2024-06-12 06:39:13,258] [INFO] [config.py:996:print] DeepSpeedEngine configuration:
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] amp_enabled .................. False
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] amp_params ................... False
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] autotuning_config ............ {
"enabled": false,
"start_step": null,
"end_step": null,
"metric_path": null,
"arg_mappings": null,
"metric": "throughput",
"model_info": null,
"results_dir": "autotuning_results",
"exps_dir": "autotuning_exps",
"overwrite": true,
"fast": true,
"start_profile_step": 3,
"end_profile_step": 5,
"tuner_type": "gridsearch",
"tuner_early_stopping": 5,
"tuner_num_trials": 50,
"model_info_path": null,
"mp_size": 1,
"max_train_batch_size": null,
"min_train_batch_size": 1,
"max_train_micro_batch_size_per_gpu": 1.024000e+03,
"min_train_micro_batch_size_per_gpu": 1,
"num_tuning_micro_batch_sizes": 3
}
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] bfloat16_enabled ............. False
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] bfloat16_immediate_grad_update False
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] checkpoint_parallel_write_pipeline False
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] checkpoint_tag_validation_enabled True
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] checkpoint_tag_validation_fail False
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f5dda12c890>
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] communication_data_type ...... None
[2024-06-12 06:39:13,258] [INFO] [config.py:1000:print] compile_config ............... enabled=False backend='inductor' kwargs={}
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] curriculum_enabled_legacy .... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] curriculum_params_legacy ..... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] data_efficiency_enabled ...... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] dataloader_drop_last ......... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] disable_allgather ............ False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] dump_state ................... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] dynamic_loss_scale_args ...... None
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] eigenvalue_enabled ........... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] eigenvalue_gas_boundary_resolution 1
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] eigenvalue_layer_name ........ bert.encoder.layer
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] eigenvalue_layer_num ......... 0
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] eigenvalue_max_iter .......... 100
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] eigenvalue_stability ......... 1e-06
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] eigenvalue_tol ............... 0.01
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] eigenvalue_verbose ........... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] elasticity_enabled ........... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] flops_profiler_config ........ {
"enabled": false,
"recompute_fwd_factor": 0.0,
"profile_step": 1,
"module_depth": -1,
"top_modules": 1,
"detailed": true,
"output_file": null
}
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] fp16_auto_cast ............... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] fp16_enabled ................. True
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] fp16_master_weights_and_gradients False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] global_rank .................. 0
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] grad_accum_dtype ............. None
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] gradient_accumulation_steps .. 1
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] gradient_clipping ............ 0.0
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] gradient_predivide_factor .... 1.0
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] graph_harvesting ............. False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] initial_dynamic_scale ........ 65536
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] load_universal_checkpoint .... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] loss_scale ................... 0
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] memory_breakdown ............. False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] mics_hierarchial_params_gather False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] mics_shard_size .............. -1
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] nebula_config ................ {
"enabled": false,
"persistent_storage_path": null,
"persistent_time_interval": 100,
"num_of_version_in_retention": 2,
"enable_nebula_load": true,
"load_path": null
}
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] optimizer_legacy_fusion ...... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] optimizer_name ............... None
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] optimizer_params ............. None
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] pld_enabled .................. False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] pld_params ................... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] prescale_gradients ........... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] scheduler_name ............... None
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] scheduler_params ............. None
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] seq_parallel_communication_data_type torch.float32
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] sparse_attention ............. None
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] sparse_gradients_enabled ..... False
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] steps_per_print .............. 10
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] train_batch_size ............. 1
[2024-06-12 06:39:13,259] [INFO] [config.py:1000:print] train_micro_batch_size_per_gpu 1
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] use_data_before_expert_parallel_ False
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] use_node_local_storage ....... False
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] wall_clock_breakdown ......... False
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] weight_quantization_config ... q_type='symmetric' q_groups=1 enabled=True num_bits=8 quantized_initialization={'num_bits': 8, 'group_size': 64, 'group_dim': 1, 'symmetric': False} post_init_quant={}
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] world_size ................... 1
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] zero_allow_untested_optimizer False
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500,000,000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=True load_from_fp32_weights=False elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50,000,000 param_persistence_threshold=100,000 model_persistence_threshold=sys.maxsize max_live_parameters=1,000,000,000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=True zero_quantized_nontrainable_weights=True zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] zero_enabled ................. True
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] zero_force_ds_cpu_optimizer .. True
[2024-06-12 06:39:13,260] [INFO] [config.py:1000:print] zero_optimization_stage ...... 3
[2024-06-12 06:39:13,260] [INFO] [config.py:986:print_user_config] json = {
"zero_optimization": {
"load_from_fp32_weights": false,
"stage": 3,
"zero_quantized_weights": true,
"zero_quantized_nontrainable_weights": true
},
"train_micro_batch_size_per_gpu": 1,
"fp16": {
"enabled": true
},
"weight_quantization": {
"quantized_initialization": {
"num_bits": 8,
"group_size": 64,
"group_dim": 1,
"symmetric": false
}
}
}
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/bo/peftai/.deepspeed_llama3_8b.py", line 103, in <module>
[rank0]: deepspeed_engine.generate(
[rank0]: File "/home/bo/peftai/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/bo/peftai/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 1622, in generate
[rank0]: result = self._sample(
[rank0]: ^^^^^^^^^^^^^
[rank0]: File "/home/bo/peftai/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 2829, in _sample
[rank0]: next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
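The failing call is `torch.multinomial(probs, ...)`, which rejects sampling weights containing inf, nan, or negative entries. A stdlib mirror of that validity check, as a diagnostic sketch (not DeepSpeed code), for probing which logits the quantized fp16 path produces:

```python
import math

def valid_sampling_weights(probs):
    # Mirrors the condition torch.multinomial enforces: every weight must be
    # finite and non-negative. The quantized fp16 logits in this report
    # apparently violate it.
    return all(math.isfinite(p) and p >= 0.0 for p in probs)

print(valid_sampling_weights([0.2, 0.8]))           # True
print(valid_sampling_weights([float("nan"), 0.5]))  # False: the reported failure
```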
**Expected behavior**
No error
**ds_report output**
```
[2024-06-12 06:42:07,584] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.3.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/bo/peftai/.venv/lib/python3.11/site-packages/torch']
torch version .................... 2.3.0+cu121
deepspeed install path ........... ['/home/bo/peftai/.venv/lib/python3.11/site-packages/deepspeed']
deepspeed info ................... 0.14.2, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.2
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 251.77 GB
```
**Screenshots**
Not applicable
**System info (please complete the following information):**
- OS: Ubuntu 22.04
- GPU count and types: 1 x RTX 3090
- Python version: 3.11.9
**Launcher context**
Just `python` cli, not `deepspeed` cli.
**Docker context**
Not using Docker
**Additional context**
```
accelerate==0.23.0
aiofiles==23.2.1
aiohttp==3.8.6
aiohttp-cors==0.7.0
aiosignal==1.3.13
annotated-types==0.6.0
anyio==4.3.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
arrow==1.3.0
asttokens==2.4.0
async-lru==2.0.4
async-timeout==4.0.3
asyncstdlib==3.10.9
attrs==23.1.0
autoawq==0.2.5
autoawq_kernels==0.0.6
autoflake==2.2.1
azure-cli==2.60.0
Babel==2.14.0
backcall==0.2.0
beautifulsoup4==4.12.2
bitsandbytes==0.43.0
black==24.3.0
bleach==6.1.0
cached_classproperty==1.0.1
cachetools==5.3.1
certifi==2023.7.22
cffi==1.16.0
charset-normalizer==3.3.0
click==8.1.7
cloudpickle==3.0.0
cmake==3.29.2
colorful==0.5.6
comm==0.1.4
coverage==7.5.1
cryptography==41.0.4
datasets==2.18.0
debugpy==1.8.1
decorator==5.1.1
deepmerge==2.0b0
deepspeed==0.14.2
defusedxml==0.7.1
dill==0.3.8
diskcache==5.6.3
distlib==0.3.8
distro==1.9.0
ecdsa==0.18.0
einops==0.7.0
executing==2.0.0
fastapi==0.110.0
fastjsonschema==2.18.1
filelock==3.12.4
flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.5.8/flash_attn-2.5.8+cu122torch2.3cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
fqdn==1.5.1
frozenlist==1.4.0
fsspec==2023.9.2
google-api-core==2.8.0
google-auth==2.29.0
googleapis-common-protos==1.56.1
gptcache==0.1.42
grpcio==1.63.0
guidance==0.0.64
h11==0.14.0
hiredis==2.2.3
hjson==3.1.0
httpcore==1.0.5
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.19.4
idna==3.4
immutables==0.20
iniconfig==2.0.0
interegular==0.3.3
ipykernel==6.25.2
ipython==8.16.1
ipywidgets==8.1.2
isoduration==20.11.0
isort==5.13.2
jaraco.functools==3.9.0
jedi==0.19.1
Jinja2==3.1.2
joblib==1.3.2
json5==0.9.24
jsonpointer==2.4
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-events==0.10.0
jupyter-lsp==2.2.4
jupyter_client==8.4.0
jupyter_core==5.4.0
jupyter_server==2.13.0
jupyter_server_terminals==0.5.3
jupyterlab==4.1.5
jupyterlab-pygments==0.2.2
jupyterlab_server==2.25.4
jupyterlab_widgets==3.0.10
lark==1.1.9
lazy-object-proxy==1.10.0
linkify-it-py==2.0.3
llvmlite==0.42.0
lm-format-enforcer==0.9.8
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib-inline==0.1.6
mdit-py-plugins==0.4.1
mdurl==0.1.2
memray==1.12.0
mistune==3.0.2
more-itertools==9.1.0
mpmath==1.3.0
msal==1.24.1
msgpack==1.0.8
multidict==6.0.4
multiprocess==0.70.16
mypy-extensions==1.0.0
nbclient==0.8.0
nbconvert==7.9.2
nbformat==5.9.2
nbval==0.11.0
nest-asyncio==1.5.8
networkx==3.1
ninja==1.11.1.1
nodeenv==1.8.0
notebook==7.1.2
notebook_shim==0.2.4
numba==0.59.1
numpy==1.26.0
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-ml-py==12.550.52
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.4.99
nvidia-nvtx-cu12==12.1.105
openai==1.25.2
opencensus==0.11.4
opencensus-context==0.1.3
outlines==0.0.34
overrides==7.7.0
packaging==23.2
pandas==2.2.1
pandocfilters==1.5.0
parso==0.8.3
pathspec==0.12.1
peft==0.5.0
pexpect==4.8.0
pickleshare==0.7.5
platformdirs==3.11.0
pluggy==1.5.0
poetry==1.8.3
pre_commit==3.7.1
prometheus-fastapi-instrumentator==7.0.0
prometheus_client==0.20.0
prompt-toolkit==3.0.39
protobuf==5.26.0
psutil==5.9.5
ptyprocess==0.7.0
pure-eval==0.2.2
py-cord==2.4.1
py-cpuinfo==9.0.0
py-spy==0.3.14
pyarrow==15.0.2
pyarrow-hotfix==0.6
pyasn1==0.5.0
pyasn1_modules==0.4.0
pycparser==2.21
pydantic==2.7.3
pydantic_core==2.18.4
pyflakes==3.1.0
pyflyby==1.9.2
Pygments==2.16.1
pygtrie==2.5.0
PyJWT==2.8.0
pynvml==11.5.0
pyparsing==3.1.1
pyright==1.1.359
PySide6==6.6.3
PySide6_Addons==6.6.3
PySide6_Essentials==6.6.3
pytest==8.2.0
python-dateutil==2.8.2
python-dotenv==1.0.1
python-jose==3.3.0
python-json-logger==2.0.7
python-ulid==1.1.0
pytz==2024.1
pyxll==5.8.0
pyxll_jupyter==0.5.2
PyYAML==6.0.1
pyzmq==25.1.1
qtconsole==5.5.1
QtPy==2.4.1
ray==2.23.0
redis==4.6.0
redis-om==0.3.1
referencing==0.30.2
regex==2023.10.3
requests==2.31.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.7.1
rpds-py==0.10.6
rsa==4.9
safetensors==0.4.2
scipy==1.11.3
Send2Trash==1.8.2
sentencepiece==0.2.0
shiboken6==6.6.3
six==1.16.0
smart-open==7.0.4
sniffio==1.3.1
soupsieve==2.5
stack-data==0.6.3
starlette==0.36.3
sympy==1.12
terminado==0.18.1
textual==0.65.2
tiktoken==0.6.0
tinycss2==1.2.1
tokenizers==0.19.1
toml==0.10.2
torch==2.3.0
tornado==6.3.3
tqdm==4.66.1
traitlets==5.11.2
transformers==4.40.1
triton==2.3.0
typeguard==4.1.5
types-pyOpenSSL==23.2.0.2
types-python-dateutil==2.9.0.20240316
types-redis==4.6.0.7
typing_extensions==4.8.0
tzdata==2024.1
uc-micro-py==1.0.3
uri-template==1.3.0
urllib3==2.0.6
uvicorn==0.29.0
uvloop==0.19.0
virtualenv==20.26.2
vllm==0.4.2
vllm_nccl_cu12==2.18.1.0.4.0
vulnix==1.10.2.dev0
watchfiles==0.21.0
wcwidth==0.2.8
webcolors==1.13
webencodings==0.5.1
websocket-client==1.7.0
websockets==12.0
widgetsnbextension==4.0.10
wrapt==1.16.0
xformers==0.0.26.post1
xxhash==3.4.1
yarl==1.9.2
zstandard==0.22.0
```
| open | 2024-06-11T22:44:26Z | 2024-06-11T22:50:01Z | https://github.com/deepspeedai/DeepSpeed/issues/5644 | [
"bug",
"compression"
] | Atry | 2 |
sinaptik-ai/pandas-ai | data-visualization | 821 | Agent rephrase_query Error | ### System Info
pandasai version: 1.5.9
### 🐛 Describe the bug
Using the latest pandasai version (1.5.9) for `rephrase_query`, I'm getting the error below.
Code Snippet::
```
config = {
"llm": llm
}
agent= Agent(dfs, config=config, memory_size=10)
response = agent.rephrase_query(question)
print(response)
```
Error::
```
Unfortunately, I was not able to repharse query, because of the following error:
'RephraseQueryPrompt' object has no attribute 'conversation_text'
``` | closed | 2023-12-15T10:38:39Z | 2024-01-17T08:22:15Z | https://github.com/sinaptik-ai/pandas-ai/issues/821 | [
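The error shape can be reproduced with a minimal mock — the classes below are illustrative stand-ins, not the actual pandasai implementation: the agent reads a prompt attribute that this prompt class never sets.

```python
class RephraseQueryPrompt:
    """Hypothetical stand-in: the prompt stores the query but never
    sets the `conversation_text` attribute the agent later reads."""
    def __init__(self, query):
        self.query = query

def rephrase_query(prompt):
    try:
        return prompt.conversation_text  # AttributeError raised here
    except AttributeError as exc:
        # message text (including the "repharse" typo) mirrors the error quoted above
        return ("Unfortunately, I was not able to repharse query, "
                f"because of the following error:\n{exc}")

print(rephrase_query(RephraseQueryPrompt("top sales by region")))
```

A fix along these lines would set `conversation_text` wherever the prompt object is constructed, or have the agent pass the conversation in explicitly.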
"bug",
"good first issue"
] | zaranaramani | 15 |
freqtrade/freqtrade | python | 11407 | Unreasonable ban | Froggleston found a Discord server linked to me on Disboard that mentioned that I sell Steam accounts. He assumed I was selling hacked accounts and confronted me about it. I denied it, which seems to have made him suspicious. He insisted that it was me based on the avatar and username, and instead of discussing the situation or providing evidence, he immediately concluded that my sales were for "hacked accounts", without proof.
I have an interest in trading, and I have never engaged in hacking, sold any stolen goods, or violated any platform rules.
The Steam accounts I sold ARE LEGITIMATE, and no investigation was conducted before the ban was issued. I asked why he was digging into my information, and pointed out that my sales (if any) were none of his business. Instead of acknowledging this, he completely disregarded my words and doubled down on his accusation, outright stating that I was selling hacked accounts, and banned me. When I denied it again, he ignored it, making it clear he had already made up his mind.
Froggleston initiated the conversation with a bias, ignored my responses, and banned me based on assumption rather than evidence. He never actually engaged in a fair discussion and disregarded anything I said that didn't align with his initial belief.
 | closed | 2025-02-20T14:32:51Z | 2025-02-20T15:10:14Z | https://github.com/freqtrade/freqtrade/issues/11407 | [] | PoPzQ | 1 |
mljar/mljar-supervised | scikit-learn | 64 | Generate data information once | Right now, when generating the parameters for each model, whole dataset is checked for required preprocessing: each column is examined for categorical or missing values. It took a lot of time. What is more, it will be nice to generate a markdown report about dataset. | closed | 2020-04-21T07:01:08Z | 2020-05-21T12:42:08Z | https://github.com/mljar/mljar-supervised/issues/64 | [
"enhancement"
] | pplonski | 3 |
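A single-pass scan along these lines would compute everything both steps need — the names and structure below are illustrative, not mljar-supervised's API:

```python
def column_info(columns):
    """One pass over {name: values}, producing the facts needed both for
    preprocessing setup and for a markdown data report."""
    info = {}
    for name, values in columns.items():
        non_missing = [v for v in values if v is not None]
        info[name] = {
            "missing": len(values) - len(non_missing),
            "categorical": any(isinstance(v, str) for v in non_missing),
            "unique": len(set(non_missing)),
        }
    return info

def to_markdown(info):
    """Render the cached column facts as a markdown table."""
    lines = ["| column | missing | categorical | unique |",
             "|---|---|---|---|"]
    for name, stats in info.items():
        lines.append(
            f"| {name} | {stats['missing']} | {stats['categorical']} | {stats['unique']} |"
        )
    return "\n".join(lines)

info = column_info({"color": ["red", None, "blue"], "price": [1.0, 2.0, 2.0]})
print(to_markdown(info))
```

Computing this once up front means each model's parameter generation reads the cached summary instead of re-scanning the dataset, and the same summary feeds the report.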
coqui-ai/TTS | pytorch | 3,463 | [Bug] Memory Explosion with xtts HifiganGenerator | ### Describe the bug
When running XTTS v2 on an RTX 3090 under WSL2 Ubuntu 22.04 on Windows 11, I would intermittently get memory explosions when doing inference. It seems to happen when I have a Hugging Face transformer LLM loaded at the same time as XTTS. I traced it to the forward pass of HifiganGenerator when it runs `o = self.conv_pre(x)`. Because `self.conv_pre` is just `weight_norm(Conv1d(in_channels, upsample_initial_channel, 7, 1, padding=3))`, I couldn't identify any further what was going on, but for some reason calling this uses all available GPU memory. Prior to hitting this line the system is using 8 GB of VRAM; as soon as it hits it, usage jumps to 23.7+ GB of VRAM and the system starts to freeze.
Any help would be awesome, but it is a weird bug.
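For scale, a rough activation-size estimate for that layer's output suggests it cannot legitimately need 16 GB — the channel count and sequence length below are assumed typical HiFi-GAN values, not measurements from this repro:

```python
def conv1d_out_bytes(batch, out_channels, length, bytes_per_element=2):
    """Activation memory of a Conv1d output tensor: batch x C_out x L (fp16)."""
    return batch * out_channels * length * bytes_per_element

# e.g. upsample_initial_channel = 512 over a few thousand latent frames
mib = conv1d_out_bytes(1, 512, 8192) / 2**20
print(f"{mib:.1f} MiB")  # 8.0 MiB — orders of magnitude below 16 GB
```

That points suspicion toward allocator behavior or an interaction with the co-resident LLM (e.g. fragmentation forcing a huge new allocation) rather than the convolution's own buffers.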
### To Reproduce
I'm not able to reproduce this on any of the leased machines I have; it just happens on my RTX 3090. The steps are:
1. Load the XTTS model
2. Load a Hugging Face LLM
3. Run inference via `inference_stream`
### Expected behavior
Memory pressure may fluctuate a bit but not 16+GB worth of fluxuation
### Logs
_No response_
### Environment
```shell
Windows 11
WSL2 Ubuntu 22.04
Tried on multiple version of python and pytorch and multiple versions of cuda
Reproduced on 11.8 12.2 releases of pytorch
```
### Additional context
_No response_ | closed | 2023-12-25T09:11:33Z | 2023-12-26T01:04:55Z | https://github.com/coqui-ai/TTS/issues/3463 | [
"bug"
] | chaseaucoin | 1 |
huggingface/peft | pytorch | 1,418 | [Documentation] Compatibility between lora + deepspeed + bitsandbytes | ### Feature request
Additional details in the [peft + deepspeed documentation](https://huggingface.co/docs/peft/accelerate/deepspeed-zero3-offload) and/or the [peft quantization documentation](https://huggingface.co/docs/peft/developer_guides/quantization) clarifying `peft` + `deepspeed` + `bitsandbytes` compatibility.
A table/compatibility matrix would be useful (e.g. columns being zero2, zero3, zero3 with CPU offloading and rows being standard precision, 4bit, 8bit, etc.). It would provide a central spot to confirm compatibility rather than trawling github issues, as well as a location to update if compatibility changes.
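A sketch of what such a matrix could look like — every cell below is a placeholder to show the format, not a verified compatibility claim:

```markdown
| base model precision \ DeepSpeed mode | ZeRO-2 | ZeRO-3 | ZeRO-3 + CPU offload |
| --- | --- | --- | --- |
| standard (fp16/bf16/fp32) | ? | ? | ? |
| 8-bit (bitsandbytes) | ? | ? | ? |
| 4-bit (bitsandbytes) | ? | ? | ? |
```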
### Motivation
It's very common for users of `peft` to use it with `bitsandbytes` to load the base model in 4bit/8bit and only perform compute in higher precision. Separately, it's very popular to use `deepspeed` (or `fsdp`) for scalable full parameter training.
It's not particularly clear whether it's possible to use both a quantized base model and model sharding with zero3. I originally supposed that you could use zero3 for parameter sharding, and potentially CPU offloading.
The [documentation](https://huggingface.co/docs/peft/accelerate/deepspeed-zero3-offload) makes it clear that `peft` is compatible with `deepspeed`, but makes no note of its compatibility with `deepspeed` AND `bitsandbytes`.
Including a section (or a warning box) on this in the documentation would be great for reducing ambiguity, particularly if it's updated as things change (e.g. if it is not supported now, but becomes supported).
### Your contribution
I'm happy to actually submit the PR containing the docs (maybe the relevant notes on both the quantization and deepspeed pages) and make them look pretty, but would need someone to clarify the current level of compatibility, or whether `peft` + `deepspeed` + `bitsandbytes` simply does not work right now. | closed | 2024-01-30T13:55:22Z | 2024-03-06T01:25:41Z | https://github.com/huggingface/peft/issues/1418 | [] | nathan-az | 9 |
ultralytics/ultralytics | pytorch | 19388 | Unable to train on MPS using yolov12 models | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Train
### Bug
All YOLOv12 models produce an error when passing `device="mps"` to `model.train()`. Here's the error:
```
Traceback (most recent call last):
  File ".../yolo12_issue.py", line 6, in <module>
    results = model.train(data="mapillary.yaml", epochs=100, imgsz=640, device="mps")
  File ".../pyenv/versions/3.10.0/lib/python3.10/site-packages/ultralytics/engine/model.py", line 810, in train
    self.trainer.train()
  File ".../.pyenv/versions/3.10.0/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 208, in train
    self._do_train(world_size)
  File ".../.pyenv/versions/3.10.0/lib/python3.10/site-packages/ultralytics/engine/trainer.py", line 389, in _do_train
    self.scaler.scale(self.loss).backward()
  File ".../.pyenv/versions/3.10.0/lib/python3.10/site-packages/torch/_tensor.py", line 626, in backward
    torch.autograd.backward(
  File ".../vishy/.pyenv/versions/3.10.0/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
    _engine_run_backward(
  File ".../vishy/.pyenv/versions/3.10.0/lib/python3.10/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
Ultralytics 8.3.78 🚀 Python-3.10.0 torch-2.6.0 CPU (Apple M3 Pro)
Setup complete ✅ (12 CPUs, 36.0 GB RAM, 373.8/460.4 GB disk)
OS macOS-15.3-arm64-arm-64bit
Environment Darwin
Python 3.10.0
Install pip
RAM 36.00 GB
Disk 373.8/460.4 GB
CPU Apple M3 Pro
CPU count 12
GPU None
GPU count None
CUDA None
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.9.1>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 11.0.0>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.0>=1.4.1
torch ✅ 2.6.0>=1.8.0
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.65.0>=4.64.0
psutil ✅ 5.9.4
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
```python
from ultralytics import YOLO
model = YOLO("yolo12n.pt")
# Train the model
results = model.train(data="path_to_dataset.yaml", epochs=100, imgsz=640, device="mps")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | closed | 2025-02-24T01:03:37Z | 2025-02-24T15:53:02Z | https://github.com/ultralytics/ultralytics/issues/19388 | [
"detect"
] | VG-Fish | 5 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 82 | arguments problem? | These errors occur when I run preprocess.py and train.py:


| closed | 2019-01-17T09:33:18Z | 2019-01-22T11:04:14Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/82 | [] | A6Matrix | 14 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 674 | Loading .npy with 4 channels/layers | I believe this is related to #500, but I can't seem to figure out how to pass my 4-channel .npy arrays. After I apply the transforms I get this:
`UserWarning: Using a target size (torch.Size([2, 4, 43, 43])) that is different to the input size (torch.Size([2, 4, 44, 44])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.`
My input arrays are (4,33,33) and (4,43,43). I don't understand where the size 44 is coming from. | closed | 2019-06-12T16:18:15Z | 2019-06-14T18:57:57Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/674 | [] | patrick-han | 4 |
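The 44 is consistent with an odd spatial size passing through the generator's stride-2 down/up path — a sketch under the assumption that downsampling rounds up (ceil) while transposed-conv upsampling exactly doubles:

```python
import math

def down(n):
    """Stride-2 conv with 'same'-style padding: output length is ceil(n / 2)."""
    return math.ceil(n / 2)

def up(n):
    """Stride-2 transposed conv: exact doubling."""
    return n * 2

print(up(down(43)))  # 44: 43 -> 22 -> 44, so the output no longer matches the input
print(up(down(44)))  # 44: even sizes round-trip cleanly
```

Under the same assumption, the 33-sized input would come back as 34. Padding or resizing inputs so each spatial dimension is divisible by 2^(number of downsampling layers) — e.g. 48 instead of 43 — avoids the mismatch.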
xorbitsai/xorbits | numpy | 369 | TST: run current test cases on GPUs |
We have lots of test cases running on CPUs right now. To make sure the code is also working properly on GPUs, we should try to run these test cases on GPUs. | open | 2023-04-14T04:49:25Z | 2023-05-17T04:27:57Z | https://github.com/xorbitsai/xorbits/issues/369 | [
"testing",
"gpu"
] | UranusSeven | 0 |
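One common pattern for this is to mark GPU cases so CPU-only CI skips them cleanly — a stdlib `unittest` sketch, where the CuPy-importability probe is an illustrative stand-in for however Xorbits actually detects a GPU:

```python
import importlib.util
import unittest

def gpu_available() -> bool:
    """Hypothetical probe: treat an importable `cupy` as a usable CUDA GPU."""
    return importlib.util.find_spec("cupy") is not None

class TestElementwiseOnGPU(unittest.TestCase):
    @unittest.skipUnless(gpu_available(), "needs a CUDA GPU (cupy not importable)")
    def test_add(self):
        import cupy as cp
        result = cp.asnumpy(cp.asarray([1, 2]) + cp.asarray([3, 4]))
        self.assertEqual(result.tolist(), [4, 6])
```

Run via `python -m unittest`: on CPU-only machines the case reports as skipped rather than failed, so the same suite works on both kinds of workers.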
pyg-team/pytorch_geometric | pytorch | 8,948 | cpu version instead of cu121 | ### 😵 Describe the installation problem
I tried to run `conda install pyg -c pyg`
Expected behaviour: installing the following file: `linux-64/pyg-2.5.0-py311_torch_2.2.0_cu121.tar.bz2`
Instead, it is trying to install `pyg/linux-64::pyg-2.5.0-py311_torch_2.2.0_cpu`
### Environment
* PyG version: 2.5
* PyTorch version: 2.2.0
* OS: linux64
* Python version: 3.11
* CUDA/cuDNN version: 12.2 on the system - but pytorch-cuda: 12.1
* How you installed PyTorch and PyG (`conda`, `pip`, source): conda
* Any other relevant information (*e.g.*, version of `torch-scatter`):
| open | 2024-02-21T15:59:57Z | 2024-03-08T08:50:30Z | https://github.com/pyg-team/pytorch_geometric/issues/8948 | [
"installation"
] | bolak92 | 15 |
matplotlib/mplfinance | matplotlib | 446 | alines on addplots | Firstly, thank you so much for this tool! It has made my foray into financial algorithms a real joy! I am curious if there is a way to add alines to other panels or addplots? I have RSI, for instance, that I would like to plot and mark up with some lines. The only way I can think of doing this at the moment is to create a second plot using the RSI value instead of the close?
Thanks again!
Warren | open | 2021-09-14T15:18:03Z | 2024-03-20T01:30:18Z | https://github.com/matplotlib/mplfinance/issues/446 | [
"enhancement",
"question",
"hacktoberfest"
] | waemm | 6 |
plotly/dash | plotly | 2,849 | [Feature Request] Is there any way to set the local and global effects of dash's callback? | Is there any way to set the local and global effects of dash's callback? | closed | 2024-05-06T03:13:21Z | 2024-05-08T13:03:43Z | https://github.com/plotly/dash/issues/2849 | [] | jaxonister | 3 |
ahmedfgad/GeneticAlgorithmPython | numpy | 113 | Multi-label output support? | Does PyGAD support multi-label classification? That is, classification where the classes are not mutually exclusive? My y variable is a numpy array created by the MultiLabelBinarizer. I am trying to build a recommendation algorithm for patients, and the patients typically receive anywhere from 1 to 5 recommendations. | open | 2022-05-26T12:33:17Z | 2023-02-25T19:40:23Z | https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/113 | [
"question"
] | Alex-Bujorianu | 1 |
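For concreteness, the target shape in question looks like this — a pure-Python stand-in for `MultiLabelBinarizer.transform`, with made-up recommendation labels:

```python
def multilabel_binarize(label_sets, classes):
    """Minimal stand-in for sklearn's MultiLabelBinarizer: one 0/1 column
    per class, with no mutual exclusivity between columns."""
    index = {c: i for i, c in enumerate(classes)}
    rows = []
    for labels in label_sets:
        row = [0] * len(classes)
        for label in labels:
            row[index[label]] = 1
        rows.append(row)
    return rows

classes = ["diet", "exercise", "medication", "sleep", "screening"]
y = multilabel_binarize([{"diet"}, {"diet", "sleep", "screening"}], classes)
print(y)  # [[1, 0, 0, 0, 0], [1, 0, 0, 1, 1]]
```

A GA fitness function for such a target would score each of the output bits independently (e.g. mean per-label accuracy) rather than assuming one-hot rows.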