| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
marimo-team/marimo | data-visualization | 4,178 | Python 3.14 recursion error in matplotlib | ### Describe the bug
For this project
```toml
[project]
name = "test3"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.14"
dependencies = [
"marimo>=0.11.23",
"matplotlib>=3.10.1",
]
```
And this marimo file
```python
import marimo
__generated_with = "0.11.23"
app = marimo.App(width="medium")
@app.cell
def _():
    import matplotlib.pyplot as plt
    plt.scatter([1, 2], [3, 4])
    return (plt,)


@app.cell
def _():
    return


if __name__ == "__main__":
    app.run()
```
I get
```
Warning: marimo truncated a very large console output.
Traceback (most recent call last):
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/marimo/_output/formatting.py", line 192, in try_format
mimetype, data = formatter(obj)
~~~~~~~~~^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/marimo/_output/formatting.py", line 157, in f_mime
mime, data = obj._mime_() # type: ignore
~~~~~~~~~~^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/marimo/_output/formatters/matplotlib_formatters.py", line 39, in mime_data_artist
artist.figure.savefig(buf, format="png", bbox_inches="tight") # type: ignore
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/figure.py", line 3490, in savefig
self.canvas.print_figure(fname, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/backend_bases.py", line 2155, in print_figure
self.figure.draw(renderer)
~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/artist.py", line 94, in draw_wrapper
result = draw(artist, renderer, *args, **kwargs)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/artist.py", line 71, in draw_wrapper
return draw(artist, renderer)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/figure.py", line 3257, in draw
mimage._draw_list_compositing_images(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
renderer, self, artists, self.suppressComposite)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/image.py", line 134, in _draw_list_compositing_images
a.draw(renderer)
~~~~~~^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/artist.py", line 71, in draw_wrapper
return draw(artist, renderer)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/axes/_base.py", line 3210, in draw
mimage._draw_list_compositing_images(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
renderer, self, artists, self.get_figure(root=True).suppressComposite)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/image.py", line 134, in _draw_list_compositing_images
a.draw(renderer)
~~~~~~^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/artist.py", line 71, in draw_wrapper
return draw(artist, renderer)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/axis.py", line 1404, in draw
ticks_to_draw = self._update_ticks()
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/axis.py", line 1283, in _update_ticks
major_ticks = self.get_major_ticks(len(major_locs))
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/axis.py", line 1666, in get_major_ticks
self._copy_tick_props(self.majorTicks[0], tick)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/axis.py", line 1612, in _copy_tick_props
dest.tick1line.update_from(src.tick1line)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/lines.py", line 1358, in update_from
self._marker = MarkerStyle(marker=other._marker)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/markers.py", line 248, in __init__
self._set_marker(marker)
~~~~~~~~~~~~~~~~^^^^^^^^
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/markers.py", line 323, in _set_marker
self.__dict__ = copy.deepcopy(marker.__dict__)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 131, in deepcopy
y = copier(x, memo)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 202, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
~~~~~~~~^^^^^^^^^^^^^
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 138, in deepcopy
y = copier(memo)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/path.py", line 285, in __deepcopy__
p = copy.deepcopy(super(), memo)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 157, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 234, in _reconstruct
y = func(*args)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 233, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
~~~~~~~~^^^^^^^^^^^
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 138, in deepcopy
y = copier(memo)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/path.py", line 285, in __deepcopy__
p = copy.deepcopy(super(), memo)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 157, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 234, in _reconstruct
y = func(*args)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 233, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
~~~~~~~~^^^^^^^^^^^
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 138, in deepcopy
y = copier(memo)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/path.py", line 285, in __deepcopy__
p = copy.deepcopy(super(), memo)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 157, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 234, in _reconstruct
y = func(*args)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 233, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
~~~~~~~~^^^^^^^^^^^
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 138, in deepcopy
y = copier(memo)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/path.py", line 285, in __deepcopy__
p = copy.deepcopy(super(), memo)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 157, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 234, in _reconstruct
y = func(*args)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 233, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
~~~~~~~~^^^^^^^^^^^
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 138, in deepcopy
y = copier(memo)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/path.py", line 285, in __deepcopy__
p = copy.deepcopy(super(), memo)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 157, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 234, in _reconstruct
y = func(*args)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 233, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
~~~~~~~~^^^^^^^^^^^
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 138, in deepcopy
y = copier(memo)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/path.py", line 285, in __deepcopy__
p = copy.deepcopy(super(), memo)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 157, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 234, in _reconstruct
y = func(*args)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 233, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
~~~~~~~~^^^^^^^^^^^
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 138, in deepcopy
y = copier(memo)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/path.py", line 285, in __deepcopy__
p = copy.deepcopy(super(), memo)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 157, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 234, in _reconstruct
y = func(*args)
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 233, in <genexpr>
args = (deepcopy(arg, memo) for arg in args)
~~~~~~~~^^^^^^^^^^^
File "/home/meowxiik/.local/share/uv/python/cpython-3.14.0a6-linux-x86_64-gnu/lib/python3.14/copy.py", line 138, in deepcopy
y = copier(memo)
File "/home/meowxiik/test3/.venv/lib/python3.14/site-packages/matplotlib/path.py", line 285, in __deepcopy__
p = copy.deepcopy(super(), memo)
......
```
### Environment
<details>
```json
{
"marimo": "0.11.23",
"OS": "Linux",
"OS Version": "6.13.7-arch1-1",
"Processor": "",
"Python Version": "3.14.0a6",
"Binaries": {
"Browser": "--",
"Node": "v20.19.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.31.0",
"packaging": "24.2",
"psutil": "7.0.0",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.11.1",
"starlette": "0.46.1",
"tomlkit": "0.13.2",
"typing-extensions": "missing",
"uvicorn": "0.34.0",
"websockets": "15.0.1"
},
"Optional Dependencies": {},
"Experimental Flags": {}
}
```
</details>
### Code to reproduce
_No response_ | closed | 2025-03-20T21:43:11Z | 2025-03-20T21:56:57Z | https://github.com/marimo-team/marimo/issues/4178 | [
"bug"
] | richard-hajek | 2 |
giotto-ai/giotto-tda | scikit-learn | 330 | No module named 'sklearn.cluster._hierarchical' | Hi 😊,
I'm trying to import the packages as listed in "tutorial_mapper/Christmas Mapper.ipynb", but the import fails. I know for sure that my sklearn package works, since I use it every day.
Is there any chance that sklearn has changed the names of the clustering modules, and therefore it fails to import the required packages?
I attach screenshots of the error it throws:
<img width="1125" alt="Screenshot 2020-02-26 at 11 27 41" src="https://user-images.githubusercontent.com/55786246/75341195-d4ac9500-588b-11ea-817d-47c860503625.png">
<img width="1011" alt="Screenshot 2020-02-26 at 11 27 49" src="https://user-images.githubusercontent.com/55786246/75341203-d8401c00-588b-11ea-9a9e-3d782869cf72.png">
Do you have any suggestion on how could I fix this problem?
Thank you,
Sara | closed | 2020-02-26T11:35:36Z | 2020-02-27T19:27:20Z | https://github.com/giotto-ai/giotto-tda/issues/330 | [] | saramasarone | 8 |
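For the import error above: errors of this shape usually mean the installed scikit-learn release has renamed a private module (the clustering internals were reorganized around scikit-learn 0.22–0.24), so code or pickles that reference the old path fail to import. Pinning scikit-learn to the version giotto-tda was built against is the robust fix; as a stop-gap, an alias can be registered in `sys.modules` before the failing import runs. The sketch below is a hedged illustration using a stand-in module — the real replacement module and class names are assumptions, not verified against giotto-tda.

```python
import importlib
import sys
import types

def register_module_alias(old_name: str, module: types.ModuleType) -> None:
    """Make imports of a renamed/removed module path resolve to `module`."""
    sys.modules[old_name] = module

# Stand-in for the renamed module (in real use this would be the module the
# class moved to, e.g. something under sklearn.cluster -- name assumed).
stand_in = types.ModuleType("new_home")
stand_in.AgglomerativeClustering = type("AgglomerativeClustering", (), {})

register_module_alias("sklearn.cluster._hierarchical", stand_in)

# Subsequent imports of the old path now resolve via the sys.modules cache:
mod = importlib.import_module("sklearn.cluster._hierarchical")
print(mod.AgglomerativeClustering is stand_in.AgglomerativeClustering)  # True
```

The alias must be registered before the library performs the failing import; this only helps when the class still exists somewhere under a new path.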
SALib/SALib | numpy | 515 | Nonuniform distributions for Morris sampling returns inf | Thanks to @jyangfsu for the issue submission.
When the parameters follow norm, lognorm, or truncnorm distributions, the nonuniform_scale_samples function returns inf.
As suggested by Saltelli et al. (2010), this can be avoided by cutting the tails of, for example, the normal distribution at the 5% and 95% quantiles.
Code in the current nonuniform_scale_samples function:
```python
elif dists[i] == 'norm':
    if b2 <= 0:
        raise ValueError("""Normal distribution: stdev must be > 0""")
    else:
        conv_params[:, i] = sp.stats.norm.ppf(params[:, i], loc=b1, scale=b2)
```
Suggest modifying to:
```python
elif dists[i] == 'norm':
    if b2 <= 0:
        raise ValueError('''Normal distribution: stdev must be > 0''')
    else:
        # truncnorm's bounds are standardized: (clip - loc) / scale
        conv_params[:, i] = scipy.stats.truncnorm.ppf(
            params[:, i],
            scipy.stats.norm.ppf(0.05), scipy.stats.norm.ppf(0.95),
            loc=b1, scale=b2)
``` | open | 2022-07-12T06:30:00Z | 2023-09-02T05:36:58Z | https://github.com/SALib/SALib/issues/515 | [] | willu47 | 11 |
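One detail worth checking in truncnorm-based fixes like the one proposed above: `scipy.stats.truncnorm` takes its truncation bounds `a` and `b` in *standardized* units, i.e. `(clip - loc) / scale` — which for 5%/95% tails is simply `norm.ppf(0.05)` and `norm.ppf(0.95)`. A minimal sketch with illustrative `loc`/`scale` values, assuming SciPy is installed:

```python
import numpy as np
from scipy import stats

loc, scale = 10.0, 2.0          # example mean and stdev (illustrative values)
a = stats.norm.ppf(0.05)        # standardized lower bound, about -1.645
b = stats.norm.ppf(0.95)        # standardized upper bound, about +1.645

u = np.linspace(0.0, 1.0, 5)    # uniform samples, as produced by the sampler
x = stats.truncnorm.ppf(u, a, b, loc=loc, scale=scale)

# The transformed samples are finite and stay inside the truncated support,
# even at u = 0 and u = 1, which is where the untruncated ppf returns +/-inf.
print(np.all(np.isfinite(x)))              # True
print(x.min() >= loc + a * scale - 1e-9)   # True
print(x.max() <= loc + b * scale + 1e-9)   # True
```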
yeongpin/cursor-free-vip | automation | 175 | ❌ An error occurred: Config file not found: C:\Users\Administrator\Documents\.cursor-free-vip\config.ini, please retry |  | closed | 2025-03-10T05:55:02Z | 2025-03-10T06:08:36Z | https://github.com/yeongpin/cursor-free-vip/issues/175 | [] | TangMiao1981 | 4 |
httpie/cli | rest-api | 556 | Connection aborted: Bad StatusLine | If the query string contains a colon (`:`), I get an error in the console:
`http: error: ConnectionError: ('Connection aborted.', BadStatusLine("''",))`
Escaping didn't work: I tried escaping the colon with a backslash but got the same error.
Query: `http --debug :3000/games where[id:gt]:=2`
```
HTTPie 0.9.2
HTTPie data: /home/kishan/.httpie
Requests 2.9.1
Pygments 2.1
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] linux2
>>> requests.request({'allow_redirects': False,
'auth': None,
'cert': None,
'data': OrderedDict(),
'files': DataDict(),
'headers': {'User-Agent': 'HTTPie/0.9.2', u'where[id': 'gt]:=2'},
'method': 'get',
'params': ParamsDict(),
'proxies': {},
'stream': True,
'timeout': 30,
'url': u'http://localhost:3000/games',
'verify': True})
Traceback (most recent call last):
File "/usr/bin/http", line 9, in <module>
load_entry_point('httpie==0.9.2', 'console_scripts', 'http')()
File "/usr/lib/python2.7/dist-packages/httpie/core.py", line 112, in main
response = get_response(args, config_dir=env.config.directory)
File "/usr/lib/python2.7/dist-packages/httpie/client.py", line 41, in get_response
response = requests_session.request(**kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 426, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''",))
``` | closed | 2017-01-25T15:36:44Z | 2017-01-25T18:16:57Z | https://github.com/httpie/cli/issues/556 | [] | afm-sayem | 3 |
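A workaround for the issue above that sidesteps HTTPie's `key:value` separator parsing is to percent-encode the query string yourself with the standard library and pass the fully-built URL:

```python
from urllib.parse import urlencode

# Percent-encode the query so ':' and '[' ']' are transmitted literally
# instead of being interpreted by HTTPie's request-item syntax.
query = urlencode({"where[id:gt]": 2})
url = "http://localhost:3000/games?" + query

print(query)  # where%5Bid%3Agt%5D=2
print(url)    # http://localhost:3000/games?where%5Bid%3Agt%5D=2
```

HTTPie can then be invoked with the pre-encoded URL, e.g. `http ':3000/games?where%5Bid%3Agt%5D=2'` — an already-encoded query string should pass through unchanged.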
KevinMusgrave/pytorch-metric-learning | computer-vision | 731 | Error using DynamicSoftMarginLoss | Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 128, 1, 1]] is at version 4; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). | open | 2024-11-12T08:33:21Z | 2024-11-15T01:58:56Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/731 | [
"bug"
] | inakierregueab | 2 |
sammchardy/python-binance | api | 1,527 | How to use websocket request with private key ? | I applied for an Ed25519 API key, but I don't know how to use this API key for websocket requests. | open | 2024-12-25T02:27:35Z | 2025-03-23T15:59:06Z | https://github.com/sammchardy/python-binance/issues/1527 | [
"question"
] | zhangk | 5 |
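For the question above, the general flow for Binance's WebSocket API with an Ed25519 key is to build the parameter payload, sign it with the private key, and send the signature base64-encoded. The sketch below (assuming the `cryptography` package) generates a throwaway key for demonstration — in practice you would load your registered PEM key — and the parameter names are illustrative, not taken from the python-binance source:

```python
import base64
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Throwaway key for the demo; in practice, load the PEM key whose public
# half you registered with Binance.
private_key = Ed25519PrivateKey.generate()

params = {"apiKey": "YOUR_API_KEY", "timestamp": 1700000000000}

# Payload: parameters sorted by key and joined as a query string.
payload = "&".join(f"{k}={v}" for k, v in sorted(params.items()))

# Ed25519 signature, base64-encoded, attached as the `signature` parameter.
signature = base64.b64encode(private_key.sign(payload.encode())).decode()
params["signature"] = signature

print(len(signature) > 0)  # True
```

The resulting `params` dict is what gets sent in the WebSocket request frame; verify the exact payload format against Binance's WebSocket API documentation for your endpoint.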
sktime/pytorch-forecasting | pandas | 1,306 | AssertionError: only regression tasks are supported - target must not be categorical | - PyTorch-Forecasting version:1.0.0
- PyTorch version:2.0.1
- Python version:3.10
- Operating System:Unix
Assume a dataset as follows:
```
group  county  date  value
1      1       0     1
1      1       1     6
1      1       2     16
1      1       3     26
1      2       0     14
1      2       1     24
1      2       2     34
This corresponds to two time series (based on group and county). I have 5000 time series which I prepare data similar to above.
Now this is the code:
```
max_encoder_length = 60
max_prediction_length = 30
context_length = max_encoder_length
prediction_length = max_prediction_length
training = TimeSeriesDataSet(
    Data,
    time_idx="date",
    target="value",
    categorical_encoders={"group": NaNLabelEncoder().fit(Data.group), "county": NaNLabelEncoder().fit(Data.county)},
    group_ids=["group", "county"],
    time_varying_unknown_reals=["value"],
    max_encoder_length=context_length,
    max_prediction_length=prediction_length,
)
```
```
trainer = pl.Trainer(accelerator="auto", gradient_clip_val=0.02)
net = NBeats.from_dataset(training, learning_rate=3e-2, weight_decay=1e-2, widths=[32, 512], backcast_loss_ratio=0.1)
```
Then it throws an error message that
`AssertionError: only regression tasks are supported - target must not be categorical
`
I look into the source code and it seems the error is coming from here:
```
assert not isinstance(
dataset.target_normalizer, NaNLabelEncoder
), "only regression tasks are supported - target must not be categorical"
```
However, the type of `dataset.target_normalizer` is `NaNLabelEncoder`, so I do not really understand what triggers this error or how to fix it, since the target is continuous and not categorical. I would appreciate it if someone could shed some light on this issue.
| closed | 2023-05-18T09:54:29Z | 2023-05-19T10:06:07Z | https://github.com/sktime/pytorch-forecasting/issues/1306 | [] | manitadayon | 0 |
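Regarding the assertion above: pytorch-forecasting typically picks `NaNLabelEncoder` as the target normalizer when the target column's dtype is non-numeric (e.g. `object` after CSV parsing), even if the values look continuous. A hedged sketch of the usual fix — casting the target to float before building the `TimeSeriesDataSet` (the toy frame below stands in for `Data`):

```python
import pandas as pd

# Toy frame standing in for `Data`; the real fix is the cast below.
data = pd.DataFrame({"value": ["1", "6", "16"]})  # CSV-parsed strings -> object
print(data["value"].dtype)                         # object

# Cast the target to a numeric dtype so a continuous normalizer is chosen.
data["value"] = pd.to_numeric(data["value"], errors="raise").astype("float64")
print(data["value"].dtype)                         # float64
```

Alternatively, an explicit continuous normalizer (e.g. `GroupNormalizer`) can be passed via the `target_normalizer` argument of `TimeSeriesDataSet`; check this against the installed pytorch-forecasting version.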
jmcnamara/XlsxWriter | pandas | 389 | Could XlsxWriter run on MicroPython to create smallish Excel files? | Could XlsxWriter run on MicroPython to create smallish Excel files?
| closed | 2016-10-27T01:09:27Z | 2016-11-22T10:27:25Z | https://github.com/jmcnamara/XlsxWriter/issues/389 | [
"question",
"ready to close"
] | Jimbobnz | 2 |
unit8co/darts | data-science | 2,252 | Q: How can I run this on AMD GPU? | I have an AMD GPU that does not have CUDA available; I am on Windows and do not want to switch to Linux. My GPU does have DirectX 12, though, and through the use of Microsoft's 'directml' I am able to use my GPU with TensorFlow algorithms. I have also tried other options, such as installing third-party libraries like webui and others. I have also tried to do this through an Ubuntu-type setup, but darts still says that there is no supported GPU backend found. I have also tried to install Lightning separately, but when I run 'torch.cuda.is_available()' it always returns False. Is there anything I can do, or should I just switch to an NVIDIA GPU? | closed | 2024-02-25T00:14:19Z | 2024-02-26T16:13:52Z | https://github.com/unit8co/darts/issues/2252 | [
"question"
] | Erik-02 | 5 |
piskvorky/gensim | nlp | 2,579 | why skip-gram takes context word as input and predict word itself | #### Problem description
I'm going to use the Python code of skip-gram (sg) in my research, but I noticed a difference between the implementation and the original in Mikolov's paper.
The details of the difference are given below.
Please let me know if this difference is intentional or just a bug.
#### Steps/code/corpus to reproduce
code in:
> gensim/gensim/models/word2vec.py
https://github.com/RaRe-Technologies/gensim/blob/f97d0e793faa57877a2bbedc15c287835463eaa9/gensim/models/word2vec.py#L399-L414
> We can see the input word is treated as the output of the NN, while the context word is embedded by the matrix syn0 (the vectors matrix)
...
https://github.com/RaRe-Technologies/gensim/blob/f97d0e793faa57877a2bbedc15c287835463eaa9/gensim/models/word2vec.py#L443-L456
> As a result, we end up optimizing P(input | context), while in the original paper they optimize P(context | input) in the skip-gram architecture.
| closed | 2019-08-14T08:15:46Z | 2019-09-07T19:14:27Z | https://github.com/piskvorky/gensim/issues/2579 | [
"bug"
] | truythu169 | 4 |
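On the question above: the two formulations enumerate the same window positions, so they generate the same multiset of training pairs with the roles swapped — which is why the two directions lead to equivalent updates in aggregate. A pure-Python sketch of the pair generation:

```python
def skipgram_pairs(tokens, window, predict_context=True):
    """Enumerate skip-gram training pairs within a symmetric window."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j == i:
                continue
            if predict_context:
                pairs.append((center, tokens[j]))   # paper: P(context | input)
            else:
                pairs.append((tokens[j], center))   # roles swapped
    return pairs

sentence = ["the", "quick", "brown", "fox"]
paper_style = skipgram_pairs(sentence, window=1, predict_context=True)
swapped = skipgram_pairs(sentence, window=1, predict_context=False)

# Same pairs, just with the two slots exchanged:
print(sorted(paper_style) == sorted((b, a) for a, b in swapped))  # True
```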
dynaconf/dynaconf | fastapi | 861 | Vault auth login with Dynaconf | Hi, I would like to use Dynaconf to store my Vault secrets. I enabled the "vault_enabled" env and wanted to use VAULT_AUTH_WITH_IAM_FOR_DYNACONF for IAM authentication.
There is a problem when Dynaconf runs client.auth.aws.iam_login(
credentials.access_key,
credentials.secret_key,
credentials.token,
role=obj.VAULT_AUTH_ROLE_FOR_DYNACONF,
)
in the vault_loader class there is no option to add `header_value` (for X-Vault-AWS-IAM-Server-ID) or `mount_point`.
Is there something I'm missing? | open | 2023-02-08T11:09:33Z | 2023-08-21T19:47:45Z | https://github.com/dynaconf/dynaconf/issues/861 | [
"question"
] | eladhaz05 | 1 |
koxudaxi/fastapi-code-generator | fastapi | 115 | Support x-www-form-urlencoded request | closed | 2021-02-19T10:10:59Z | 2021-04-05T14:39:17Z | https://github.com/koxudaxi/fastapi-code-generator/issues/115 | [] | koxudaxi | 0 | |
ITCoders/Human-detection-and-Tracking | numpy | 36 | Labelling Issue | # Issues should contain the following details, which increase the probability of them getting resolved quickly
When any face is detected from the camera, it shows a dynamically changing label, like 4, 7, 22, 34, 53,
so I am not able to understand what these labels are.
Looking forward to clarification on this.
Thanks.
| closed | 2018-08-27T11:35:57Z | 2018-09-10T05:35:47Z | https://github.com/ITCoders/Human-detection-and-Tracking/issues/36 | [] | mailrdhegde | 3 |
python-gino/gino | sqlalchemy | 385 | one and one_or_none sqlalchemy methods support | ### Description
The SQLAlchemy query API supports the methods `one` and `one_or_none`.
https://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.one
https://docs.sqlalchemy.org/en/latest/orm/query.html#sqlalchemy.orm.query.Query.one_or_none
But GINO doesn't support them. These methods are very convenient.
| closed | 2018-11-08T14:28:59Z | 2019-10-25T16:04:21Z | https://github.com/python-gino/gino/issues/385 | [
"feature request"
] | fvolohin | 1 |
holoviz/panel | jupyter | 7,367 | Pipeline network plot should have `shared_axes=False` | ```plaintext
panel 1.5.2
holoviews 1.19.0
```
</details>
The network displaying an outline of pipeline stages is itself a HoloViews plot. However, if you have a plot in your pipeline that has the same (default) names for its axes, they will be linked.
I cannot think of a situation where that would be useful or expected. An easy fix is to add `shared_axes=False` to its opts.
MRE:
```python
import param
import holoviews as hv
import panel as pn
pipeline = pn.pipeline.Pipeline()
class Stage1(param.Parameterized):
@param.output()
def output(self):
return True
def view(self):
plot = hv.Scatter([1, 2, 4, 3])
return pn.Column(plot)
def panel(self):
return pn.Row(self.view,)
class Stage2(param.Parameterized):
def view(self):
return pn.Column('Hey')
def panel(self):
return pn.Row(self.view)
pipeline.add_stage('Stage 1', Stage1)
pipeline.add_stage('Stage 2', Stage2)
pn.Row(pipeline, width=500).servable()
```
- [x] I may be interested in making a pull request to address this
https://github.com/user-attachments/assets/6f669597-836b-43c0-aff7-037d96e07c41
| closed | 2024-10-07T14:17:06Z | 2024-10-08T08:12:22Z | https://github.com/holoviz/panel/issues/7367 | [] | TheoMathurin | 0 |
chatanywhere/GPT_API_free | api | 112 | API_BASE connection problem | Error message
----------
API Error: {"object":null,"data":null,"error":{"message":"OpenAI response error","type":"cf_bad_gateway","param":null,"code":"502"}}
I tried both domains, https://api.chatanywhere.com.cn and api.chatanywhere.cn, and neither works. I have been using these two APIs for more than two months; apart from being a bit unstable, they normally had no other problems. Starting 10/20 (today), I can no longer connect.
| closed | 2023-10-20T02:12:04Z | 2023-10-21T09:48:08Z | https://github.com/chatanywhere/GPT_API_free/issues/112 | [] | sxzhang2009 | 5 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 525 | Excessive GPU memory usage at runtime and no JSON response body returned | ### The following items must be checked before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for solutions in the corresponding projects.
### Issue type
Output quality issue
### Base model
Chinese-Alpaca-2 (7B/13B)
### Operating system
Linux
### Detailed description of the problem
After deploying the chinese-alpaca-2-7b model locally, I tested scripts/openai_server_demo/openai_api_server.py
with the following command:
curl http://localhost:19327/v1/chat/completions \
> -H "Content-Type: application/json" \
> -d '{
> "messages": [
> {"role": "user","content": "给我讲一些有关杭州的故事吧"}
> ],
> "repetition_penalty": 1.0
> }'
1. First, using the GPU, I found that GPU memory usage was too high and it raised an error.
2. Using --only_gpu, I did not get the expected answer.
Questions:
1. Regarding the excessive GPU memory usage, is there a way to optimize it?
2. How can I get the expected question-and-answer style reply?
### Dependencies (must be provided for code-related issues)
```
# Please paste your dependency information here (inside this code block)
```
### Run logs or screenshots

| closed | 2024-02-20T06:56:30Z | 2024-03-16T22:04:45Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/525 | [
"stale"
] | xiaoToby | 17 |
allenai/allennlp | nlp | 5,304 | how to manage memory for modeling seq2seq model using varaible length training dataset? (model:allenai/led-base-16384) | I asked the same question on stackoverflow (https://stackoverflow.com/questions/68298295/allennlp-how-to-manage-memory-for-modeling-seq2seq-model-using-varaible-length)
--------------------------
## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.0 (True)
- GPU: GeForce RTX 3090; 24G;
Models: allenai/led-base-16384
## Issue
I rewrote the original paper code (https://github.com/allenai/qasper-led-baseline) for the `qasper` task (QA over long articles) so that it does not use the `allennlp` lib, due to a dependency problem, but I encounter an OOM issue that didn't happen with the `allennlp` code. So I am curious whether the `allennlp` trainer performs some magic.
## To reproduce
OOM happens when using the code below, but fine-tuning with the `allennlp` trainer under the same maximum sequence length does not encounter the OOM issue. I don't know why OOM does not happen with `allennlp`, and I want to solve the OOM without `allennlp`.
```
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384", gradient_checkpointing=True, use_cache=False)
max_len = 15000
dummy = torch.ones(size=(1, max_len), device='cuda').long()
dummy2 = torch.zeros(size=(1, max_len), device='cuda').long()
dummy2[:100] = 1
dummy3 = torch.zeros(size=(1, 1024), device='cuda').long()
outputs = model(input_ids=dummy, attention_mask=dummy, global_attention_mask=dummy2, decoder_input_ids=dummy3)
```
| closed | 2021-07-08T08:46:25Z | 2021-07-10T12:57:31Z | https://github.com/allenai/allennlp/issues/5304 | [
"question"
] | HenryPaik1 | 3 |
deepinsight/insightface | pytorch | 2,342 | How to train Inswapper model ? | How to train Inswapper model with different resolutions? | open | 2023-06-16T10:02:16Z | 2023-07-13T13:16:58Z | https://github.com/deepinsight/insightface/issues/2342 | [] | LLSean | 2 |
CTFd/CTFd | flask | 1,881 | Users in admin scoreboard show user position instead of team position | In teams mode on the admin panel, users are shown with their user position on the scoreboard instead of their team's position. We should be showing both. | closed | 2021-05-07T18:43:22Z | 2021-06-17T05:36:55Z | https://github.com/CTFd/CTFd/issues/1881 | [
"easy"
] | ColdHeat | 0 |
akfamily/akshare | data-science | 5,418 | AKShare API issue report | ak.futures_rule | Python version: 3.12
AKShare version: 1.15.45
Sample code:
futures_rule_df = ak.futures_rule(date=weekday_str)
Error message:
File "D:\Anaconda\envs\env_hight_futuresproject\Lib\site-packages\akshare\futures\futures_rule.py", line 28, in futures_rule
big_df = pd.read_html(StringIO(r.text), header=1)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\env_hight_futuresproject\Lib\site-packages\pandas\io\html.py", line 1240, in read_html
return _parse(
^^^^^^^
File "D:\Anaconda\envs\env_hight_futuresproject\Lib\site-packages\pandas\io\html.py", line 1003, in _parse
raise retained
File "D:\Anaconda\envs\env_hight_futuresproject\Lib\site-packages\pandas\io\html.py", line 983, in _parse
tables = p.parse_tables()
^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\env_hight_futuresproject\Lib\site-packages\pandas\io\html.py", line 249, in parse_tables
tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\env_hight_futuresproject\Lib\site-packages\pandas\io\html.py", line 598, in _parse_tables
raise ValueError("No tables found")
ValueError: No tables found
python-BaseException
| closed | 2024-12-14T03:55:15Z | 2024-12-14T07:29:58Z | https://github.com/akfamily/akshare/issues/5418 | [
"bug"
] | YPersion | 2 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,116 | yellowbrick.exceptions.DatasetsError: the downloaded dataset was improperly packaged without meta.json - please report this bug to the Yellowbrick maintainers! | **Describe the bug**
yellowbrick.exceptions.DatasetsError: the downloaded dataset was improperly packaged without meta.json - please report this bug to the Yellowbrick maintainers!
**To Reproduce**
```
import pandas as pd
from yellowbrick.datasets import load_bikeshare
X, y = load_bikeshare()
from yellowbrick.regressor import ResidualsPlot
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
# Create training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1
)
visualizer = ResidualsPlot(LinearRegression())
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show()
```
| closed | 2020-10-11T15:15:23Z | 2020-10-12T15:08:41Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1116 | [
"type: question",
"duplicate"
] | KK666-AI | 1 |
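For the error above: this message usually indicates a corrupted or partially downloaded copy of the dataset in yellowbrick's local cache, so `meta.json` is missing; deleting the cached folder forces a clean re-download on the next `load_bikeshare()` call. A standard-library sketch, demonstrated on a throwaway directory because the real cache location (commonly a `data_home` directory under the user's home) is an assumption here:

```python
import os
import shutil
import tempfile

def clear_dataset_cache(data_home: str, name: str) -> bool:
    """Remove a (possibly corrupted) cached dataset so it is re-downloaded."""
    target = os.path.join(data_home, name)
    if os.path.isdir(target):
        shutil.rmtree(target)
        return True
    return False

# Demo on a throwaway directory standing in for the real cache dir.
home = tempfile.mkdtemp()
os.makedirs(os.path.join(home, "bikeshare"))

print(clear_dataset_cache(home, "bikeshare"))           # True (removed)
print(os.path.isdir(os.path.join(home, "bikeshare")))   # False (gone)
```

yellowbrick's dataset loaders also accept a `data_home` argument, so pointing them at a fresh directory is another way to bypass a bad cache — worth verifying against the installed version.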
allenai/allennlp | nlp | 4,864 | Move ModelCard and TaskCard abstractions to the main repository | closed | 2020-12-14T21:51:22Z | 2020-12-22T22:00:01Z | https://github.com/allenai/allennlp/issues/4864 | [] | AkshitaB | 0 | |
clovaai/donut | nlp | 320 | High inference latency | I fine-tuned Donut on custom data, but inference takes 15 seconds on CPU (8 cores). I realized the generate() function is the one that takes too long. Are there any improvements that could be made to reduce this latency, please? Thanks | open | 2024-11-11T12:38:26Z | 2024-11-11T12:38:26Z | https://github.com/clovaai/donut/issues/320 | [] | Altimis | 0 |
liangliangyy/DjangoBlog | django | 685 | css | 

This problem is probably because static is in a subfolder under blog; I don't know how to change the path. | closed | 2023-10-23T04:26:20Z | 2023-11-06T06:15:31Z | https://github.com/liangliangyy/DjangoBlog/issues/685 | [] | Autism-mm | 1 |
newpanjing/simpleui | django | 399 | The layout={} feature cannot be used; the popup window does not appear | The layout={} feature cannot be used; the popup window does not appear. | closed | 2021-09-26T09:05:23Z | 2021-10-27T03:34:28Z | https://github.com/newpanjing/simpleui/issues/399 | [
"bug"
] | weilingwei | 1 |
ResidentMario/geoplot | matplotlib | 232 | Missing Cartopy projections (and suggestion) | Hi, it's suggested in the docs that Cartopy projections are supported but it seems the Eckert 1-6 set are missing. Not sure if this is trivial to add or not.
As an API suggestion, perhaps geoplot should directly accept cartopy CRS objects and convert them internally (and eventually do the same for raw proj4 CRS objects, which look like they may be supported in cartopy at some point). Otherwise we end up with a mess of incompatible CRS object types (geoplot, cartopy, proj4). This would become especially annoying when working with a mix of plain Cartopy plots and geoplot plots, you need to keep track of different CRS objects for each plot type. | closed | 2021-02-16T08:55:31Z | 2021-07-03T23:53:51Z | https://github.com/ResidentMario/geoplot/issues/232 | [] | mangecoeur | 1 |
Avaiga/taipy | automation | 1,516 | Integrate Taipy GUI applications in Dataiku's DSS | ... because it simply does not work.
| open | 2024-07-15T08:56:24Z | 2024-09-25T16:24:17Z | https://github.com/Avaiga/taipy/issues/1516 | [
"🖧 Devops",
"🖰 GUI",
"💥Malfunction",
"🟧 Priority: High",
"❌ Blocked",
"🔒 Staff only"
] | FabienLelaquais | 2 |
graphql-python/graphene-sqlalchemy | graphql | 52 | Does `graphene-sqlalchemy` generate connections based on relations? | I have these two models:
```python
class Organization(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    created = db.Column(db.TIMESTAMP, default=db.func.now())
    modified = db.Column(db.TIMESTAMP, default=db.func.now(), onupdate=db.func.now())
    # Attributes
    name = db.Column(db.String(200), nullable=False)
    logo = db.Column(db.TEXT, default='')
    video = db.Column(db.TEXT, default='')
    about = db.Column(db.TEXT, default='')


class Project(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    created = db.Column(db.TIMESTAMP, default=db.func.now())
    modified = db.Column(db.TIMESTAMP, default=db.func.now(), onupdate=db.func.now())
    # Attributes
    name = db.Column(db.String(200), nullable=False)
    start_date = db.Column(db.TIMESTAMP, nullable=True)
    end_date = db.Column(db.TIMESTAMP, nullable=True)
    # Relations
    organization_id = db.Column(db.Integer, db.ForeignKey('organization.id'), nullable=False)
    organization = db.relationship('Organization', foreign_keys=[organization_id],
                                   cascade="all, delete-orphan", single_parent=True)
```
And this schema:
```python
import graphene
from graphene import relay
from graphene_sqlalchemy import SQLAlchemyObjectType, SQLAlchemyConnectionField

# assuming the models above live in a `models` module
from models import Organization as OrganizationModel
from models import Project as ProjectModel


class Project(SQLAlchemyObjectType):
    class Meta:
        model = ProjectModel
        interfaces = (relay.Node,)


class Organization(SQLAlchemyObjectType):
    class Meta:
        model = OrganizationModel
        interfaces = (relay.Node,)


class Query(graphene.ObjectType):
    node = relay.Node.Field()
    all_organizations = SQLAlchemyConnectionField(Organization)
    all_projects = SQLAlchemyConnectionField(Project)

    @staticmethod
    def resolve_organization(self, args, context, info):
        query = Organization.get_query(context)
        return query.filter_by(**args)

    @staticmethod
    def resolve_project(self, args, context, info):
        query = Project.get_query(context)
        return query.filter_by(**args)


schema = graphene.Schema(query=Query, types=[Organization, Project])
```
I would then expect to be able to write a query to access `projects` from `allOrganizations`, but I seem unable to get it to work. If I do the connection manually like this (with the same model):
```python
import graphene
from graphene import relay
from graphene_sqlalchemy import SQLAlchemyObjectType, SQLAlchemyConnectionField

# assuming the models above live in a `models` module
from models import Organization as OrganizationModel
from models import Project as ProjectModel


class Project(SQLAlchemyObjectType):
    class Meta:
        model = ProjectModel
        interfaces = (relay.Node,)


class OrganizationProjectConnection(graphene.Connection):
    class Meta:
        node = Project


class Organization(SQLAlchemyObjectType):
    class Meta:
        model = OrganizationModel
        interfaces = (relay.Node,)

    projects = SQLAlchemyConnectionField(OrganizationProjectConnection)


class Query(graphene.ObjectType):
    node = relay.Node.Field()
    all_organizations = SQLAlchemyConnectionField(Organization)
    all_projects = SQLAlchemyConnectionField(Project)

    @staticmethod
    def resolve_organization(self, args, context, info):
        query = Organization.get_query(context)
        return query.filter_by(**args)

    @staticmethod
    def resolve_project(self, args, context, info):
        query = Project.get_query(context)
        return query.filter_by(**args)


schema = graphene.Schema(query=Query, types=[Organization, Project])
```
it works as I expect. Is this a feature not yet implemented, or do I have to encourage the framework to make the connection somehow? | closed | 2017-06-07T10:46:26Z | 2023-02-26T00:53:15Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/52 | [] | filleokus | 2 |
deepspeedai/DeepSpeed | machine-learning | 6,859 | [BUG] deepspeed --enable_each_rank_log does not work in multi-node (PDSH) runs | **Describe the bug**
In a single-node training run, the command `deepspeed --enable_each_rank_log logdir <training command here>` will cause each rank to write its stderr/stdout to a unique file in logdir/
However, in a multi-node training run using the default launcher (PDSH), e.g. `deepspeed --hostfile ./hostfile --enable_each_rank_log logdir`, the arg is not passed through to the per-node local launcher script.
**To Reproduce**
Run a deepspeed training run with --enable_each_rank_log, and with a hostfile that specifies multiple nodes. Observe that each rank's stdout/stderr is not directed to a file.
**Expected behavior**
Each rank should log to its own file.
**Launcher context**
deepspeed launcher + PDSH
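Until the launcher forwards the flag over PDSH, a per-rank redirect can be done from inside the training script itself. This is a generic sketch, not DeepSpeed code; it assumes the launcher exports a `RANK` environment variable (which the DeepSpeed launchers do), and the `logdir` name mirrors the command above:

```python
import os
import sys
from pathlib import Path

def redirect_rank_output(logdir="logdir"):
    """Send this process's stdout/stderr to logdir/rank_<RANK>.log."""
    rank = os.environ.get("RANK", "0")
    Path(logdir).mkdir(parents=True, exist_ok=True)
    log_path = Path(logdir) / f"rank_{rank}.log"
    handle = open(log_path, "a", buffering=1)  # line-buffered
    sys.stdout = handle
    sys.stderr = handle
    return log_path

log_path = redirect_rank_output()
print("per-rank logging active")  # lands in this rank's own file
```

Calling this at the top of the training entry point gives one file per rank regardless of how the ranks were launched.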
| closed | 2024-12-12T23:56:51Z | 2024-12-17T17:33:10Z | https://github.com/deepspeedai/DeepSpeed/issues/6859 | [
"bug",
"training"
] | akeshet | 2 |
lukas-blecher/LaTeX-OCR | pytorch | 335 | fails on very simple image | It fails to generate a valid LaTeX string for the simple image below.
Code to reproduce this:
https://colab.research.google.com/gist/murphyk/7727c1ecd1169c16d02600001818c487/latex-ocr-test.ipynb

| open | 2023-11-05T19:50:34Z | 2024-04-28T15:11:19Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/335 | [] | murphyk | 4 |
sammchardy/python-binance | api | 1,434 | pip install python-binance | pip install python-binance | closed | 2024-10-20T18:44:18Z | 2024-10-20T18:44:56Z | https://github.com/sammchardy/python-binance/issues/1434 | [] | i0brahim | 0 |
chatopera/Synonyms | nlp | 112 | [Notice] v3.12.0 released: word vectors expanded to 400K+ vocabulary, download speed optimized. Please upgrade as appropriate! | Downloads the large word list and optimizes download speed.
| closed | 2020-09-21T06:22:14Z | 2021-01-03T04:06:10Z | https://github.com/chatopera/Synonyms/issues/112 | [] | hailiang-wang | 1 |
FlareSolverr/FlareSolverr | api | 344 | [yggtorrent] Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser. (Test) | ### Environment
* **FlareSolverr version**: 2.1.0
* **Last working FlareSolverr version**: 2.1.0
* **Operating system**: Linux
* **Are you using Docker**: yes
* **FlareSolverr User-Agent (see log traces or / endpoint)**: Mozilla/5.0 (X11; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0
* **Are you using a proxy or VPN?** no
* **Are you using Captcha Solver:** no
* **URL to test this issue:** https://www3.yggtorrent.re/engine/search?category=all&name=&description=&file=&uploader=&sub_category=&do=search&order=desc&sort=publish_date
### Description
Trying to access yggtorrent from Jackett. It says my IP is blocked, but it's not: the Cloudflare screen is solved by my computer on the same network.
After many manual retries, it worked.
### Logged Error Messages
```
2022-03-23T07:25:34+00:00 INFO REQ-3 Incoming request => POST /v1 body: {"maxTimeout":120000,"cmd":"request.get","url":"https://www3.yggtorrent.re/engine/search?category=all&name=&description=&file=&uploader=&sub_category=&do=search&order=desc&sort=publish_date"}
2022-03-23T07:26:06+00:00 INFO REQ-3 Cloudflare detected
2022-03-23T07:26:10+00:00 INFO REQ-3 Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser.
2022-03-23T07:26:10+00:00 INFO REQ-3 Response in 35.923 s
```
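Since repeated manual attempts eventually succeeded, a bounded retry around the FlareSolverr request may work around the intermittent block. The endpoint and payload below mirror the logged request; the retry policy and the `"status": "ok"` success marker are assumptions to verify against FlareSolverr's docs:

```python
import json
import time
import urllib.request

def solve_with_retry(url, endpoint="http://localhost:8191/v1",
                     attempts=5, delay=10):
    """POST the same request.get command the log shows, retrying on failure."""
    payload = json.dumps({"cmd": "request.get", "url": url,
                          "maxTimeout": 120000}).encode()
    for _ in range(attempts):
        request = urllib.request.Request(
            endpoint, data=payload,
            headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(request, timeout=130) as response:
                body = json.load(response)
            if body.get("status") == "ok":  # assumed success marker
                return body
        except OSError:
            pass  # FlareSolverr unreachable or request failed; retry
        time.sleep(delay)
    raise RuntimeError(f"still blocked after {attempts} attempts")
```

This only papers over the problem; if every attempt fails, the IP really is being challenged harder by Cloudflare.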
| closed | 2022-03-23T07:29:24Z | 2022-03-23T21:01:49Z | https://github.com/FlareSolverr/FlareSolverr/issues/344 | [
"duplicate"
] | WazoAkaRapace | 1 |
microsoft/nni | data-science | 5,689 | Can't run more than n trials with trialConcurrency=n > 1 | **Describe the issue**:
When I set trialConcurrency > 1, NNI fails out with
```
[2023-09-30 12:57:40] ERROR (nni.runtime.msg_dispatcher_base/Thread-1 (command_queue_worker)) 7
Traceback (most recent call last):
File "/home/wolf/miniconda3/envs/mausspaun/lib/python3.10/site-packages/nni/runtime/msg_dispatcher_base.py", line 108, in command_queue_worker
self.process_command(command, data)
File "/home/wolf/miniconda3/envs/mausspaun/lib/python3.10/site-packages/nni/runtime/msg_dispatcher_base.py", line 154, in process_command
command_handlers[command](data)
File "/home/wolf/miniconda3/envs/mausspaun/lib/python3.10/site-packages/nni/runtime/msg_dispatcher.py", line 148, in handle_report_metric_data
self._handle_final_metric_data(data)
File "/home/wolf/miniconda3/envs/mausspaun/lib/python3.10/site-packages/nni/runtime/msg_dispatcher.py", line 201, in _handle_final_metric_data
self.tuner.receive_trial_result(id_, _trial_params[id_], value, customized=customized,
File "/home/wolf/miniconda3/envs/mausspaun/lib/python3.10/site-packages/nni/algorithms/hpo/tpe_tuner.py", line 197, in receive_trial_result
params = self._running_params.pop(parameter_id)
KeyError: 7
[2023-09-30 12:57:41] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher exiting...
[2023-09-30 12:57:44] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher terminiated
```
When trialConcurrency = n > 1, NNI runs n trials and then fails with this error. This happens for all the n values I've tried (2, 5, 10, 100). With trialConcurrency=1, there are no problems.
**Environment**:
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc): local
- Client OS: ubuntu
- Server OS (for remote mode only):
- Python version: 3.10.8
- PyTorch/TensorFlow version: N/A
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
```
{
"params": {
"experimentType": "hpo",
"searchSpaceFile": "/home/wolf/Dropbox/code/mouse-arm/examples/nni_arm_parameters/search_space.json",
"trialCommand": "python nni_sweep.py",
"trialCodeDirectory": "/home/wolf/Dropbox/code/mouse-arm/examples/nni_arm_parameters",
"trialConcurrency": 5,
"useAnnotation": false,
"debug": false,
"logLevel": "info",
"experimentWorkingDirectory": "/home/wolf/nni-experiments",
"tuner": {
"name": "TPE",
"classArgs": {
"optimize_mode": "minimize"
}
},
"trainingService": {
"platform": "local",
"trialCommand": "python nni_sweep.py",
"trialCodeDirectory": "/home/wolf/Dropbox/code/mouse-arm/examples/nni_arm_parameters",
"debug": false,
"maxTrialNumberPerGpu": 1,
"reuseMode": false
}
},
"execDuration": "13m 8s",
"nextSequenceId": 14,
"revision": 95
}
```
I haven't created a minimal reproducible example yet; I'm hoping someone might recognize this problem, as it seems pretty basic and may just be a version issue somewhere. | open | 2023-09-30T17:10:34Z | 2024-01-25T15:47:56Z | https://github.com/microsoft/nni/issues/5689 | [] | studywolf | 8
littlecodersh/ItChat | api | 345 | How can itchat automatically add a group to the contact list after the bot is pulled into it? Otherwise messages from that group chat cannot be detected | The chatbot was pulled into a group by someone. At that point the group must be added to the contact list manually, otherwise an error is raised and messages cannot be received. Is there a way to detect being pulled into a group chat and then automatically add that group to the contact list? | closed | 2017-05-04T13:00:45Z | 2017-05-29T01:50:53Z | https://github.com/littlecodersh/ItChat/issues/345 | [
"question"
] | lucasjinreal | 3 |
noirbizarre/flask-restplus | flask | 174 | setup.py requires Flask>=0.8 but uses the add_app_template_global feature from 0.10 | This means it doesn't run against the RHEL python-flask 0.9 package
| open | 2016-05-16T11:35:20Z | 2016-09-05T11:36:51Z | https://github.com/noirbizarre/flask-restplus/issues/174 | [
"documentation"
] | xx396 | 0 |
mongkok/fastapi-debug-toolbar | graphql | 52 | Static resource http 404 error under Windows system | The same configuration does not have this problem on Debian
OS: WIN11: Microsoft Windows [Version 10.0.22631.4037]
```
Nekoha-Shizuku Y:\test1 7.74s 6:06 PM
⚡InPRTx ❯❯ pdm run
WARNING: No command is given, default to the Python REPL.
Python 3.12.5 (tags/v3.12.5:ff3bc82, Aug 6 2024, 20:45:27) [MSC v.1940 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
⚡InPRTx ❯❯ pdm run 1.py
INFO: Started server process [25668]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:57991 - "GET / HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:57991 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:57991 - "GET /docs HTTP/1.1" 200 OK
INFO: 127.0.0.1:57991 - "GET /_debug_toolbar/static/css/toolbar.css HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:57994 - "GET /_debug_toolbar/static/js/toolbar.js HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:57995 - "GET /_debug_toolbar/static/js/refresh.js HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:57996 - "GET /_debug_toolbar/static/img/icon-white.svg HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:57996 - "GET /openapi.json HTTP/1.1" 200 OK
INFO: 127.0.0.1:57995 - "GET /_debug_toolbar/static/css/print.css HTTP/1.1" 404 Not Found
⚡InPRTx ❯❯ pdm export --no-hashes
# This file is @generated by PDM.
# Please do not edit it manually.
annotated-types==0.7.0
anyio==4.4.0
click==8.1.7
colorama==0.4.6; platform_system == "Windows"
fastapi==0.112.1
fastapi-debug-toolbar==0.6.3
h11==0.14.0
idna==3.8
jinja2==3.1.4
markupsafe==2.1.5
pydantic==2.8.2
pydantic-core==2.20.1
pydantic-extra-types==2.9.0
pydantic-settings==2.4.0
pyinstrument==4.7.2
python-dotenv==1.0.1
sniffio==1.3.1
sqlparse==0.5.1
starlette==0.38.2
typing-extensions==4.12.2
uvicorn==0.30.6
```
```
⚡InPRTx ❯❯ cat .\1.py
from debug_toolbar.middleware import DebugToolbarMiddleware
from fastapi import FastAPI
app = FastAPI(debug=True)
app.add_middleware(DebugToolbarMiddleware)
if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app)
```

| closed | 2024-08-24T10:11:31Z | 2024-08-24T10:20:30Z | https://github.com/mongkok/fastapi-debug-toolbar/issues/52 | [] | InPRTx | 1 |
uxlfoundation/scikit-learn-intelex | scikit-learn | 1,588 | No module named 'daal4py._oneapi' | I am trying to run the same code and I receive the exact same error message as [#989](https://github.com/intel/scikit-learn-intelex/issues/989)
ModuleNotFoundError: No module named 'daal4py._oneapi'
I am using an Intel i9-12900H with Iris Xe Graphics, running Python 3.8 with conda.
My code is like below:
```python
from sklearnex import patch_sklearn, unpatch_sklearn, config_context
patch_sklearn()
from sklearn.cluster import KMeans
# ...
with config_context(target_offload="gpu:0"):
    clt_3 = KMeans(n_clusters=clusters, n_init=10)
    clt_3.fit(img.reshape(-1, 3))
```
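The traceback suggests the parent `daal4py` package imports fine but its `_oneapi` extension (needed for `target_offload`) is missing from this particular build; whether the installed Windows packages simply ship without the GPU extension is an assumption to verify against Intel's install docs. A quick, library-agnostic probe separates "submodule missing" from "package missing entirely":

```python
import importlib.util

def has_module(name):
    """True when `name` is importable, False when it (or a parent) is missing."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # raised when a parent package is absent entirely
        return False

print(has_module("json"))             # True: stdlib is always importable
print(has_module("daal4py._oneapi"))  # False in the reported environment
```

Running this in both the working Debian environment and the failing one would confirm whether the two installs actually contain different daal4py builds.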
My packages in environment:
```
# packages in environment at D:\Dev\anaconda3\envs\scientificProject-colorsci:
#
# Name Version Build Channel
altgraph 0.17.3 py38haa95532_0
brotli 1.1.0 hcfcfb64_1 intel
brotli-bin 1.1.0 hcfcfb64_1 intel
bzip2 1.0.8 vc14hd4456ca_9 [vc14] intel
ca-certificates 2023.7.22 h56e8100_0 intel
certifi 2023.7.22 pyhd8ed1ab_0 intel
charset-normalizer 3.3.2 pyhd8ed1ab_0 intel
colour-science 0.4.1 pypi_0 pypi
contourpy 1.0.5 py38h59b6b97_0
cycler 0.12.1 pyhd8ed1ab_0 intel
daal 2018.0.3.20180405 0 intel
daal4py 2024.0.0 py38_intel_49555 intel
dal 2024.0.0 intel_49555 intel
dpcpp-cpp-rt 2023.2.2 intel_49531 intel
dpcpp_cpp_rt 2024.0.0 intel_49840 intel
dpctl 0.14.5 py38hdcf5b23_24 intel
et_xmlfile 1.1.0 py38haa95532_0
fonttools 4.25.0 pyhd3eb1b0_0 intel
fortran_rt 2024.0.0 intel_49840 intel
freetype 2.10.4 h43e298c_0 intel
future 0.18.3 pyhd8ed1ab_0 intel
icc_rt 2023.2.2 intel_49531 intel
idna 3.4 py38haa95532_0 intel
imageio 2.33.0 pypi_0 pypi
impi_rt 2021.11.0 intel_49499 intel
importlib-metadata 6.7.0 pyha770c72_0 intel
intel-cmplr-lib-rt 2023.2.2 intel_49531 intel
intel-cmplr-lic-rt 2023.2.2 intel_49531 intel
intel-fortran-rt 2023.2.2 intel_49531 intel
intel-opencl-rt 2023.2.2 intel_49531 intel
intel-openmp 2023.2.2 intel_49531 intel
intelpython 2024.0.0 0 intel
joblib 1.3.2 pyhd8ed1ab_0 intel
kiwisolver 1.4.4 py38hb1fd069_1 intel
libbrotlicommon 1.1.0 hcfcfb64_1 intel
libbrotlidec 1.1.0 hcfcfb64_1 intel
libbrotlienc 1.1.0 hcfcfb64_1 intel
libffi 3.4.2 h8ffe710_5 intel
libpng 1.6.37 vc14h53ad9d4_8 [vc14] intel
libsqlite 3.44.0 hcfcfb64_0 intel
libzlib 1.2.13 hcfcfb64_5 intel
llvmlite 0.40.0 py38h19421c1_0 intel
matplotlib 3.1.2 py38haae3450_12 intel
mkl 2023.2.0 intel_49496 intel
mkl-service 2.4.0 py38h9a4cf0c_35 intel
mkl_fft 1.3.6 py38h5020ddc_56 intel
mkl_random 1.2.2 py38hf267b2b_76 intel
mkl_umath 0.1.1 py38h51af1d9_86 intel
munkres 1.1.4 py_0 intel
numba 0.57.0 py38hb182ae8_2 intel
numpy 1.24.3 py38hcdfd0aa_0 intel
numpy-base 1.24.3 py38h9b12b81_0 intel
opencv-python 4.8.1.78 pypi_0 pypi
openpyxl 3.0.10 py38h2bbff1b_0
openssl 3.1.4 hcfcfb64_0 intel
packaging 23.2 pyhd8ed1ab_0 intel
pandas 1.5.2 py38hb0f345d_0 intel
pefile 2022.5.30 py38haa95532_0
pillow 10.1.0 pypi_0 pypi
pip 23.3.1 pyhd8ed1ab_0 intel
platformdirs 3.11.0 pyhd8ed1ab_0 intel
pooch 1.8.0 pyhd8ed1ab_0 intel
pyinstaller 5.6.2 py38h2bbff1b_0
pyinstaller-hooks-contrib 2022.14 py38haa95532_0
pyparsing 3.1.1 pyhd8ed1ab_0 intel
pysocks 1.7.1 pyh0701188_6 intel
python 3.8.16 h11da44f_20 intel
python-dateutil 2.8.2 py38_1 intel
python_abi 3.8 2_cp38 intel
pytz 2023.3.post1 pyhd8ed1ab_0 intel
pywin32 305 py38h2bbff1b_0 intel
pywin32-ctypes 0.2.0 py38_1000
requests 2.31.0 pyhd8ed1ab_0 intel
scikit-learn 1.2.2 py38h763eb3e_2 intel
vc14_runtime 14.36.32532 hdcecf7f_17 intel
vs2015_runtime 14.36.32532 h05e6639_17 intel
wheel 0.41.3 pyhd8ed1ab_0 intel
win_inet_pton 1.1.0 pyhd8ed1ab_6 intel
xlsxwriter 3.1.1 py38haa95532_0
xz 5.2.8 h8cc25b3_0 intel
zipp 3.15.0 pyhd8ed1ab_0 intel
zlib 1.2.13 hcfcfb64_5 intel
```
| open | 2023-11-24T18:29:45Z | 2023-11-24T19:18:53Z | https://github.com/uxlfoundation/scikit-learn-intelex/issues/1588 | [
"bug"
] | emptinessboy | 1 |
ploomber/ploomber | jupyter | 310 | Tracking changes in external files | Incremental builds allow users to quickly iterate since Ploomber takes care of only executing tasks whose source code or parameters have changed since the last run. However, if the source code is loading external files, changes to them are not detected:
```python
from pathlib import Path
def my_task(product, upstream):
    # changes to some/path.json are not detected!
    content = Path('some/path.json').read_text()
    # do stuff
    # ...
    Path(product).write_text(output)
```
We're looking for a way to enhance this functionality in the simplest possible way for the user. The cleanest approach we've found so far is to embed this logic in `tasks[*].params`:
```yaml
# pipeline.yaml
tasks:
  - source: my_module.my_task
    product: output.txt
    params:
      json_file: some/path.json
```
Then our code would look like this:
```python
from pathlib import Path

def my_task(product, upstream, json_file):
    content = Path(json_file).read_text()
    # do stuff
    # ...
    Path(product).write_text(output)
```
The main benefit is that this is task-agnostic: it works the same whether it's a function, script, or notebook.
However, for a given task, not all parameters are paths to files. Users may not want to trigger task execution on changes to all external files, so they need a way to distinguish between params and files that trigger task execution.
## Option 1: Naming convention
One way to achieve this without any API changes is to have a naming convention: say, add a suffix (e.g., `-resource`) to tell Ploomber to also track the file's contents:
```yaml
# pipeline.yaml
tasks:
  - source: my_module.my_task
    product: output.txt
    params:
      # track contents of this file
      json_file: some/path-resource.json
      # do not track this
      json_file_another: another/path.json
```
## Option 2: Special type of parameters
Alternatively, we may define a special type of param:
```yaml
# pipeline.yaml
tasks:
  - source: my_module.my_task
    product: output.txt
    params:
      # track contents of this file
      json_file:
        # special type of parameter defined by the resource key
        resource: some/path.json
      # do not track this
      json_file_another: another/path.json
```
## Important considerations
1. File size: To keep the metadata file size small, we should only save the file hash, not its contents
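The hash-only idea in consideration 1 can be sketched with the standard library (names like `resource_fingerprint` are illustrative, not Ploomber's API):

```python
import hashlib
import tempfile
from pathlib import Path

def resource_fingerprint(path):
    """Hash a resource file's contents; store this in metadata, not the bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# a tracked "resource" file
resource = Path(tempfile.mkdtemp()) / "path.json"
resource.write_text('{"param": 1}')

before = resource_fingerprint(resource)
resource.write_text('{"param": 2}')  # simulate editing the external file
after = resource_fingerprint(resource)

changed = before != after  # a changed hash would trigger task re-execution
print(changed)  # -> True
```

Comparing the stored hash against a fresh one at build time is enough to decide whether the task is outdated, while keeping the metadata file size constant.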
| closed | 2021-07-10T14:38:34Z | 2021-08-24T02:24:23Z | https://github.com/ploomber/ploomber/issues/310 | [] | edublancas | 7 |
ansible/awx | django | 15,176 | Azure Key Vault Private Key Passphrase not recognized | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
_No response_
### AWX version
24.3.1
### Select the relevant components
- [X] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
Ubuntu 20.04
### Web browser
Edge
### Steps to reproduce
- Created a Microsoft Azure Key Vault credential
- Successfully performed vault test of Secret value
- Successfully used MS-AKV lookup Machine credentials (Password) or Created Credential values in other playbook tasks
- Create a SSH key (RSA 4096) with passphrase that was copied manually from MS-AKV secrets
- Create a Machine Credential copying the SSH Private key
- Use the Private key Passphrase information from the external secret manager (MS-AKV credential/secret)
- Save new credential
### Expected results
Successful creation of the Machine key
### Actual results
Error "must be set when SSH key is encrypted" shown below the Private Key Passphrase field and unable to save.
### Additional information
This was also seen on AWX version 24.0.0 before upgrading to 24.3.1 on the same environment. | open | 2024-05-09T18:17:51Z | 2024-05-15T17:40:04Z | https://github.com/ansible/awx/issues/15176 | [
"type:bug",
"component:ui",
"community"
] | GiuffreLab | 2 |
wger-project/wger | django | 1,909 | Wger Docker install through Portainer errors out with: Internal Server Error: / | ## Steps to Reproduce
1. I only attempted to install wger on docker through Portainer and get the error from the logs below, along with the error in Portainer that: This stack was created outside of Portainer. Control over this stack is limited.
<details>
<summary>Logs</summary>
```bash
Defaulting to user installation because normal site-packages is not writeable
Obtaining file:///home/wger/src
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Checking if build backend supports build_editable: started
Checking if build backend supports build_editable: finished with status 'done'
Getting requirements to build editable: started
Getting requirements to build editable: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing editable metadata (pyproject.toml): started
Preparing editable metadata (pyproject.toml): finished with status 'done'
Building wheels for collected packages: wger
Building editable for wger (pyproject.toml): started
Building editable for wger (pyproject.toml): finished with status 'done'
Created wheel for wger: filename=wger-2.3.0a2-py3-none-any.whl size=17351 sha256=aebaa66990e706dc20d6a8d01aea52931d5352ddba9575632a16ae5530b6d2ec
Stored in directory: /tmp/pip-ephem-wheel-cache-qjpa2h83/wheels/89/63/ff/3e975a91a1d2938a2f2bf90cbf8f0a434e4e94c43d0a28910a
Successfully built wger
Installing collected packages: wger
Attempting uninstall: wger
Found existing installation: wger 2.3.0a2
Uninstalling wger-2.3.0a2:
Successfully uninstalled wger-2.3.0a2
Successfully installed wger-2.3.0a2
INFO 2025-03-07 11:37:36,119 apps AXES: BEGIN version 7.0.1, blocking by ip_address
Running in production mode, running collectstatic now
INFO 2025-03-07 11:37:37,939 apps AXES: BEGIN version 7.0.1, blocking by ip_address
0 static files copied to '/home/wger/static', 10860 unmodified.
Performing database migrations
INFO 2025-03-07 11:37:43,261 apps AXES: BEGIN version 7.0.1, blocking by ip_address
System check identified some issues:
:
?: (wger.W002) exercises without translations
HINT: There are 1 exercises without translations, this will cause problems! You can output or delete them with "python manage.py exercises-health-check --help"
Operations to perform:
Apply all migrations: actstream, auth, authtoken, axes, config, contenttypes, core, easy_thumbnails, exercises, gallery, gym, mailer, manager, measurements, nutrition, sessions, sites, weight
Running migrations:
No migrations to apply.
Your models in app(s): 'exercises', 'nutrition' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
INFO 2025-03-07 11:37:47,507 apps AXES: BEGIN version 7.0.1, blocking by ip_address
System check identified some issues:
:
?: (wger.W002) exercises without translations
HINT: There are 1 exercises without translations, this will cause problems! You can output or delete them with "python manage.py exercises-health-check --help"
Set site URL to gym.v01d.synology.me
Using gunicorn...
[2025-03-07 11:37:48 +0200] [44] [INFO] Starting gunicorn 23.0.0
[2025-03-07 11:37:48 +0200] [44] [INFO] Listening at: http://0.0.0.0:8000 (44)
[2025-03-07 11:37:48 +0200] [44] [INFO] Using worker: sync
[2025-03-07 11:37:48 +0200] [45] [INFO] Booting worker with pid: 45
INFO 2025-03-07 11:37:50,407 apps AXES: BEGIN version 7.0.1, blocking by ip_address
ERROR 2025-03-07 11:37:53,109 log Internal Server Error: /
ERROR 2025-03-07 11:38:03,555 log Internal Server Error: /
ERROR 2025-03-07 11:38:13,920 log Internal Server Error: /
ERROR 2025-03-07 11:38:24,095 log Internal Server Error: /
ERROR 2025-03-07 11:38:34,186 log Internal Server Error: /
ERROR 2025-03-07 11:38:44,327 log Internal Server Error: /
ERROR 2025-03-07 11:38:54,459 log Internal Server Error: /
ERROR 2025-03-07 11:39:04,600 log Internal Server Error: /
ERROR 2025-03-07 11:39:14,689 log Internal Server Error: /
```
</details>
Can you please check the logs and let me know what I'm doing wrong?
Thank you,
Radu | open | 2025-03-07T09:45:06Z | 2025-03-10T08:53:04Z | https://github.com/wger-project/wger/issues/1909 | [] | palradu | 1 |
liangliangyy/DjangoBlog | django | 308 | Leaving my site URL | <!--
If you don't fill in the checklist below carefully, I may close your issue directly.
Before asking, it is recommended to read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**I confirm that I have checked** (mark `[ ]` as `[x]`)
- [x] [DjangoBlog's readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [x] [Configuration guide](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [x] [Other issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am requesting** (mark `[ ]` as `[x]`)
- [ ] Bug report
- [ ] Adding a new feature
- [ ] Technical support
Thanks to the author for this open source project. I deployed it with uwsgi + nginx; the site URL is wruixue.cn
Thanks again! | closed | 2019-08-16T07:08:06Z | 2019-08-19T15:37:40Z | https://github.com/liangliangyy/DjangoBlog/issues/308 | [] | renchong73 | 1
apache/airflow | machine-learning | 48,086 | Configuration default should now be LocalExecutor instead of SequentialExecutor | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
We have disabled the SequentialExecutor, but forgot to change the default in airflow.cfg.
Tasks don't run as a result, unless the configuration is changed to LocalExecutor - or something else.
### What you think should happen instead?
The default should be changed to LocalExecutor
### How to reproduce
Start running airflow with a fresh beta4 install
### Operating System
MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-21T23:50:09Z | 2025-03-22T20:36:22Z | https://github.com/apache/airflow/issues/48086 | [
"kind:bug",
"area:core",
"area:Executors-core",
"affected_version:3.0.0beta"
] | vikramkoka | 1 |
NullArray/AutoSploit | automation | 604 | Unhandled Exception (7f93a8b32) | Autosploit version: `3.0`
OS information: `Linux-4.15.0-46-generic-i686-with-Ubuntu-16.04-xenial`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/goblin/Downloads/AutoSploit-master/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/goblin/Downloads/AutoSploit-master/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
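The traceback itself points at the fix: `except Except:` in `lib/jsonize.py` references a name Python never defines, so the handler itself raises `NameError`; it should almost certainly read `except Exception:`. A minimal, hypothetical reconstruction of the pattern (not AutoSploit's actual function body):

```python
def load_exploits(path):
    """Hypothetical sketch of the failing loader with the typo corrected."""
    try:
        with open(path) as handle:
            return [line.strip() for line in handle if line.strip()]
    except Exception:  # was `except Except:`, which itself raises NameError
        return []

print(load_exploits("/no/such/file"))  # -> [] instead of a NameError crash
```

With the corrected exception class, a missing or unreadable exploit file degrades to an empty list instead of crashing the launcher.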
| closed | 2019-03-28T23:49:41Z | 2019-04-02T20:24:30Z | https://github.com/NullArray/AutoSploit/issues/604 | [] | AutosploitReporter | 0 |
STVIR/pysot | computer-vision | 347 | Where can I find vid.json used in vid/gen_json.py? | I have run "python par_crop.py 511 12" to parse the VID dataset, and now I want to run "python gen_json.py".
However, I can't find vid.json, which is needed to run the code.
Where can I find it, or how can I generate it? Many thanks.
| closed | 2020-04-14T10:46:19Z | 2020-04-17T07:43:45Z | https://github.com/STVIR/pysot/issues/347 | [] | FrogMegane | 7 |
freqtrade/freqtrade | python | 11,401 | AuthenticationError: binance {"code":-2008,"msg":"Invalid Api-Key ID."} | <!--
Have you searched for similar issues before posting it? Yes
Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there - yes
Please do not use the question template to report bugs or to request new features.
-->
## Describe your environment
* Operating system: ____ Windows11
* Python Version: _____ (`python -V`) Python 3.13.2
* CCXT version: _____ (`pip freeze | grep ccxt`) 4.4.50
* Freqtrade Version: ____ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker) freqtrade - 2025.1
## Your question
Good day, Support.
Please can I get assistance with the error below, as shown in my logs. Thank you.
Things I have tried:
- I have reinstalled Freqtrade a few times, but the error still persists
- I have recreated the API keys a few times and double-checked them
- I have contacted Binance support, and they confirm my IP is fine and my country is not being blocked
```
Logs --- PS C:\freqtrade\FT_USERDATA> docker compose run --rm freqtrade trade --strategy MultiIndicatorStrategy
2025-02-18 16:00:37,386 - freqtrade - INFO - freqtrade 2025.1
2025-02-18 16:00:37,979 - numexpr.utils - INFO - NumExpr defaulting to 8 threads.
2025-02-18 16:00:40,872 - freqtrade.worker - INFO - Starting worker 2025.1
2025-02-18 16:00:40,873 - freqtrade.configuration.load_config - INFO - Using config: user_data/config.json ...
2025-02-18 16:00:40,876 - freqtrade.loggers - INFO - Enabling colorized output.
2025-02-18 16:00:40,877 - freqtrade.loggers - INFO - Verbosity set to 0
2025-02-18 16:00:40,878 - freqtrade.configuration.configuration - INFO - Runmode set to live.
2025-02-18 16:00:40,879 - freqtrade.configuration.configuration - INFO - Dry run is disabled
2025-02-18 16:00:40,879 - freqtrade.configuration.configuration - INFO - Using DB: "sqlite:///tradesv3.sqlite"
2025-02-18 16:00:40,880 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 3 ...
2025-02-18 16:00:40,959 - freqtrade.configuration.configuration - INFO - Using user-data directory:
/freqtrade/user_data ...
2025-02-18 16:00:40,962 - freqtrade.configuration.configuration - INFO - Using data directory:
/freqtrade/user_data/data/binance ...
2025-02-18 16:00:40,963 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
2025-02-18 16:00:40,978 - freqtrade.exchange.check_exchange - INFO - Exchange "binance" is officially supported by
the Freqtrade development team.
2025-02-18 16:00:40,979 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2025-02-18 16:00:41,036 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy MultiIndicatorStrategy
from '/freqtrade/user_data/strategies/MultiIndicatorStrategy.py'...
2025-02-18 16:00:41,038 - freqtrade.strategy.hyper - INFO - Found no parameter file.
2025-02-18 16:00:41,039 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with
value in config file: USDT.
2025-02-18 16:00:41,040 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with
value in config file: unlimited.
2025-02-18 16:00:41,040 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with
value in config file: {'entry': 10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}.
2025-02-18 16:00:41,041 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with
value in config file: 3.
2025-02-18 16:00:41,042 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'0': 0.15,
'30': 0.1, '60': 0.05, '120': 0.02}
2025-02-18 16:00:41,043 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 15m
2025-02-18 16:00:41,043 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.07
2025-02-18 16:00:41,044 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: True
2025-02-18 16:00:41,045 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive:
0.02
2025-02-18 16:00:41,046 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using
trailing_stop_positive_offset: 0.03
2025-02-18 16:00:41,047 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using
trailing_only_offset_is_reached: True
2025-02-18 16:00:41,049 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: False
2025-02-18 16:00:41,050 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles:
True
2025-02-18 16:00:41,052 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry':
'limit', 'exit': 'limit', 'stoploss': 'limit', 'stoploss_on_exchange': False, 'stoploss_on_exchange_interval': 60}
2025-02-18 16:00:41,053 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force:
{'entry': 'GTC', 'exit': 'GTC'}
2025-02-18 16:00:41,054 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
2025-02-18 16:00:41,055 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: unlimited
2025-02-18 16:00:41,056 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 0
2025-02-18 16:00:41,057 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry':
10, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}
2025-02-18 16:00:41,059 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
2025-02-18 16:00:41,060 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
2025-02-18 16:00:41,061 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using
ignore_roi_if_entry_signal: False
2025-02-18 16:00:41,063 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
2025-02-18 16:00:41,064 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks:
False
2025-02-18 16:00:41,066 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using
ignore_buying_expired_candle_after: 0
2025-02-18 16:00:41,067 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using
position_adjustment_enable: False
2025-02-18 16:00:41,069 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using
max_entry_position_adjustment: -1
2025-02-18 16:00:41,070 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 3
2025-02-18 16:00:41,072 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2025-02-18 16:00:41,080 - freqtrade.exchange.exchange - INFO - Using CCXT 4.4.50
2025-02-18 16:00:41,127 - freqtrade.exchange.exchange - INFO - Using Exchange "Binance"
2025-02-18 16:00:41,731 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error
in reload_markets due to AuthenticationError. Message: binance {"code":-2008,"msg":"Invalid Api-Key ID."}".
Retrying still for 3 times.
2025-02-18 16:00:42,155 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error
in reload_markets due to AuthenticationError. Message: binance {"code":-2008,"msg":"Invalid Api-Key ID."}".
Retrying still for 2 times.
2025-02-18 16:00:42,575 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error
in reload_markets due to AuthenticationError. Message: binance {"code":-2008,"msg":"Invalid Api-Key ID."}".
Retrying still for 1 times.
2025-02-18 16:00:42,991 - freqtrade.exchange.common - WARNING - _load_async_markets() returned exception: "Error
in reload_markets due to AuthenticationError. Message: binance {"code":-2008,"msg":"Invalid Api-Key ID."}". Giving
up.
2025-02-18 16:00:42,992 - freqtrade.exchange.exchange - ERROR - Could not load markets.
Traceback (most recent call last):
File "/freqtrade/freqtrade/exchange/exchange.py", line 633, in _api_reload_markets
return await self._api_async.load_markets(reload=reload, params={})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in
load_markets
raise e
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in
load_markets
result = await self.markets_loading
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 272, in
load_markets_helper
currencies = await self.fetch_currencies()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 2984, in
fetch_currencies
results = await asyncio.gather(*promises)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 11373, in request
response = await self.fetch2(path, api, method, params, headers, body, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 885, in fetch2
raise e
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 876, in fetch2
return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 254, in fetch
self.handle_errors(http_status_code, http_status_text, url, method, headers, http_response, json_response,
request_headers, request_body)
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 11340, in
handle_errors
self.throw_exactly_matched_exception(self.get_exceptions_by_url(url, 'exact'), error, feedback)
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/base/exchange.py", line 4688, in
throw_exactly_matched_exception
raise exact[string](message)
ccxt.base.errors.AuthenticationError: binance {"code":-2008,"msg":"Invalid Api-Key ID."}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/freqtrade/freqtrade/exchange/common.py", line 187, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/exchange.py", line 645, in _load_async_markets
markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/exchange.py", line 637, in _api_reload_markets
raise TemporaryError(
freqtrade.exceptions.TemporaryError: Error in reload_markets due to AuthenticationError. Message: binance
{"code":-2008,"msg":"Invalid Api-Key ID."}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/freqtrade/freqtrade/exchange/exchange.py", line 633, in _api_reload_markets
return await self._api_async.load_markets(reload=reload, params={})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in
load_markets
raise e
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in
load_markets
result = await self.markets_loading
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 272, in
load_markets_helper
currencies = await self.fetch_currencies()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 2984, in
fetch_currencies
results = await asyncio.gather(*promises)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 11373, in request
response = await self.fetch2(path, api, method, params, headers, body, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 885, in fetch2
raise e
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 876, in fetch2
return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 254, in fetch
self.handle_errors(http_status_code, http_status_text, url, method, headers, http_response, json_response,
request_headers, request_body)
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 11340, in
handle_errors
self.throw_exactly_matched_exception(self.get_exceptions_by_url(url, 'exact'), error, feedback)
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/base/exchange.py", line 4688, in
throw_exactly_matched_exception
raise exact[string](message)
ccxt.base.errors.AuthenticationError: binance {"code":-2008,"msg":"Invalid Api-Key ID."}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/freqtrade/freqtrade/exchange/common.py", line 187, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/exchange.py", line 645, in _load_async_markets
markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/exchange.py", line 637, in _api_reload_markets
raise TemporaryError(
freqtrade.exceptions.TemporaryError: Error in reload_markets due to AuthenticationError. Message: binance
{"code":-2008,"msg":"Invalid Api-Key ID."}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/freqtrade/freqtrade/exchange/exchange.py", line 633, in _api_reload_markets
return await self._api_async.load_markets(reload=reload, params={})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in
load_markets
raise e
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in
load_markets
result = await self.markets_loading
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 272, in
load_markets_helper
currencies = await self.fetch_currencies()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 2984, in
fetch_currencies
results = await asyncio.gather(*promises)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 11373, in request
response = await self.fetch2(path, api, method, params, headers, body, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 885, in fetch2
raise e
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 876, in fetch2
return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 254, in fetch
self.handle_errors(http_status_code, http_status_text, url, method, headers, http_response, json_response,
request_headers, request_body)
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 11340, in
handle_errors
self.throw_exactly_matched_exception(self.get_exceptions_by_url(url, 'exact'), error, feedback)
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/base/exchange.py", line 4688, in
throw_exactly_matched_exception
raise exact[string](message)
ccxt.base.errors.AuthenticationError: binance {"code":-2008,"msg":"Invalid Api-Key ID."}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/freqtrade/freqtrade/exchange/common.py", line 187, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/exchange.py", line 645, in _load_async_markets
markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/exchange.py", line 637, in _api_reload_markets
raise TemporaryError(
freqtrade.exceptions.TemporaryError: Error in reload_markets due to AuthenticationError. Message: binance
{"code":-2008,"msg":"Invalid Api-Key ID."}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/freqtrade/freqtrade/exchange/exchange.py", line 633, in _api_reload_markets
return await self._api_async.load_markets(reload=reload, params={})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 287, in
load_markets
raise e
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 283, in
load_markets
result = await self.markets_loading
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 272, in
load_markets_helper
currencies = await self.fetch_currencies()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 2984, in
fetch_currencies
results = await asyncio.gather(*promises)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 11373, in request
response = await self.fetch2(path, api, method, params, headers, body, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 885, in fetch2
raise e
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 876, in fetch2
return await self.fetch(request['url'], request['method'], request['headers'], request['body'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/base/exchange.py", line 254, in fetch
self.handle_errors(http_status_code, http_status_text, url, method, headers, http_response, json_response,
request_headers, request_body)
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/async_support/binance.py", line 11340, in
handle_errors
self.throw_exactly_matched_exception(self.get_exceptions_by_url(url, 'exact'), error, feedback)
File "/home/ftuser/.local/lib/python3.12/site-packages/ccxt/base/exchange.py", line 4688, in
throw_exactly_matched_exception
raise exact[string](message)
ccxt.base.errors.AuthenticationError: binance {"code":-2008,"msg":"Invalid Api-Key ID."}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/freqtrade/freqtrade/exchange/exchange.py", line 672, in reload_markets
self._markets = retrier(self._load_async_markets, retries=retries)(reload=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/common.py", line 199, in wrapper
return wrapper(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/common.py", line 199, in wrapper
return wrapper(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/common.py", line 199, in wrapper
return wrapper(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/common.py", line 202, in wrapper
raise ex
File "/freqtrade/freqtrade/exchange/common.py", line 187, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/exchange.py", line 645, in _load_async_markets
markets = self.loop.run_until_complete(self._api_reload_markets(reload=reload))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/exchange/exchange.py", line 637, in _api_reload_markets
raise TemporaryError(
freqtrade.exceptions.TemporaryError: Error in reload_markets due to AuthenticationError. Message: binance
{"code":-2008,"msg":"Invalid Api-Key ID."}
2025-02-18 16:00:43,012 - freqtrade - ERROR - Could not load markets, therefore cannot start. Please investigate
the above error for more details.
```
| closed | 2025-02-18T16:09:07Z | 2025-02-19T14:46:27Z | https://github.com/freqtrade/freqtrade/issues/11401 | [
"Question"
] | Shanekesh | 5 |
kymatio/kymatio | numpy | 1,032 | JTFS kwarg-only arguments | The way it's written right now, `TimeFrequencyScattering` requires all arguments to be kwargs, as opposed to the regular scattering classes (e.g., `Scattering1D`). For consistency, it would be good to follow the same convention here. Is there some specific reason for forcing kwargs here?
@lostanlen @cyrusvahidi | closed | 2023-11-01T09:41:27Z | 2024-04-24T14:26:34Z | https://github.com/kymatio/kymatio/issues/1032 | [] | janden | 9 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,174 | autohttps pod unable to obtain certificate BEHIND FW (NAT) | # Intro
Hi All,
I am sorry for re-opening this type of issue. I wasted a lot of time on this and I have no idea how to fix it.
I tried workarounds from [here](https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/2601) and related posts. All are about **deleting/restarting the autohttps pod - this did not help me.**
I installed Z2JH in my local research Kubernetes cluster. I need Let's Encrypt, otherwise the [jupyterhub-ssh](https://github.com/yuvipanda/jupyterhub-ssh) solution won't work (at least I am not able to get it working with manual HTTPS).
# Question
Can anybody please help me troubleshoot autohttps?
# Setup:
JupyterHub is accessible from the public internet at sai-jupyterhub.cttc.es, using GitHub authentication. The URL resolves to 84.88.63.129:
```
nslookup sai-jupyterhub.cttc.es
Name: sai-jupyterhub.cttc.es
Address: 84.88.63.129
```
Local IP of the server/loadbalancer in the DMZ is 10.1.24.200. If I configure manual HTTPS I am able to access JupyterHub; with autohttps I am not.
This is the log after deleting(restarting) autohttps pod (same as in other autohttps related posts):
```
kubectl logs autohttps-798f766ccd-tjp2l
Defaulted container "traefik" out of: traefik, secret-sync, load-acme (init)
time="2023-07-27T15:55:58Z" level=info msg="Configuration loaded from file: /etc/traefik/traefik.yaml"
time="2023-07-27T15:55:58Z" level=warning msg="Traefik Pilot is deprecated and will be removed soon. Please check our Blog for migration instructions later this year."
time="2023-07-27T15:55:59Z" level=warning msg="No domain found in rule PathPrefix(`/`), the TLS options applied for this router will depend on the SNI of each request" routerName=default@file entryPointName=https
time="2023-07-27T15:55:59Z" level=warning msg="No domain found in rule PathPrefix(`/`), the TLS options applied for this router will depend on the SNI of each request" entryPointName=https routerName=default@file
time="2023-07-27T15:56:19Z" level=error msg="Unable to obtain ACME certificate for domains \"sai-jupyterhub.cttc.es\" : unable to generate a certificate for the domains [sai-jupyterhub.cttc.es]: error: one or more domains had a problem:\n[sai-jupyterhub.cttc.es] acme: error: 400 :: urn:ietf:params:acme:error:connection :: 84.88.63.129: Fetching http://sai-jupyterhub.cttc.es/.well-known/acme-challenge/2CFC7lvufZVcxIheS1VWY5qrql9cEl1jVPymujqBewA: Timeout during connect (likely firewall problem)\n" providerName=default.acme
```
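For the record, the `Timeout during connect (likely firewall problem)` line means Let's Encrypt could not reach port 80 on the public IP from the internet, which the HTTP-01 challenge requires. A quick stdlib check (run it from a machine *outside* the firewall; the IP is taken from the report above, everything else is an illustrative sketch):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# HTTP-01 validation needs inbound TCP/80 on the public IP to reach traefik.
print(port_open("84.88.63.129", 80, timeout=5.0))
```

If this prints `False` from outside the NAT, the fix is a firewall/NAT rule forwarding public port 80 to the autohttps service, independent of any pod restarts.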
this is the full config of Z2JH:
```
proxy:
https:
enabled: True
hosts:
- sai-jupyterhub.cttc.es
letsencrypt:
contactEmail: XXX@XXX.XXX
service:
loadBalancerIP: 10.1.24.200
extraPorts:
- name: ssh
port: 22
targetPort: ssh
- name: sftp
port: 2222
targetPort: sftp
traefik:
extraPorts:
- name: ssh
containerPort: 8022
- name: sftp
containerPort: 2222
networkPolicy:
allowedIngressPorts: [http, https, ssh, sftp]
extraStaticConfig:
entryPoints:
ssh-entrypoint:
address: :8022
sftp-entrypoint:
address: :2222
extraDynamicConfig:
tcp:
services:
ssh-service:
loadBalancer:
servers:
- address: jupyterhub-ssh:22
sftp-service:
loadBalancer:
servers:
- address: jupyterhub-sftp:22
routers:
ssh-router:
entrypoints: [ssh-entrypoint]
rule: HostSNI(`*`)
service: ssh-service
sftp-router:
entrypoints: [sftp-entrypoint]
rule: HostSNI(`*`)
service: sftp-service
hub:
config:
GitHubOAuthenticator:
client_id: XXX
client_secret: XXX
oauth_callback_url: https://sai-jupyterhub.cttc.es/hub/oauth_callback
allowed_organizations:
- XXX
scope:
- read:org
JupyterHub:
authenticator_class: github
extraConfig:
spawner_config: |
c.JupyterHub.allow_named_servers = True
c.JupyterHub.spawner_class = "kubespawner.KubeSpawner"
c.KubeSpawner.allow_privilege_escalation = True
c.KubeSpawner.args = ['--allow-root']
c.KubeSpawner.uid = 0
c.KubeSpawner.profile_list = [
{
'display_name': 'Python DataScience',
'slug': 'datascience-small',
'profile_options': {
'memory': {
'display_name': 'CPUs',
'choices': {
'2': {
'display_name': '2 CPUs',
'kubespawner_override': {
'cpu_limit': 2,
'cpu_guarantee': 2,
'mem_guarantee': '8G',
'mem_limit': '8G',
}
},
'4': {
'display_name': '4 CPUs',
'kubespawner_override': {
'cpu_limit': 4,
'cpu_guarantee': 4,
'mem_guarantee': '8G',
'mem_limit': '8G',
}
},
'12': {
'display_name': '12 CPUs',
'kubespawner_override': {
'cpu_limit': 12,
'cpu_guarantee': 12,
'mem_guarantee': '64G',
'mem_limit': '64G',
}
},
}
},
},
'kubespawner_override': {
'image': 'jupyter/datascience-notebook:python-3.10',
}
},
]
cull:
enabled: false
```
| closed | 2023-07-27T16:17:52Z | 2023-08-21T13:37:37Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3174 | [] | 5uperpalo | 3 |
getsentry/sentry | python | 87,589 | Project Settings > Tags and Context will not load for large project with many tags | ### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
1. Be in a large project with many tags, Dropbox's dropbox-ios project is where we have this issue.
2. Project Settings > Tags and Context
Shows 429 responses, likely because of unoptimized code doing many queries for this page to load.
<img width="851" alt="Image" src="https://github.com/user-attachments/assets/a1610219-90f7-41a4-8542-1ccf70cd67fd" />
### Expected Result
Page should load quickly and without an error.
### Actual Result
Spinner, wait, then error message saying there was an error loading data.
### Product Area
Settings - Projects
### Link
https://dropbox.sentry.io/settings/projects/dropbox-ios/tags/
### DSN
_No response_
### Version
_No response_ | open | 2025-03-21T16:21:20Z | 2025-03-24T11:58:47Z | https://github.com/getsentry/sentry/issues/87589 | [
"Product Area: Settings - Projects",
"Waiting for: Product Owner"
] | jboulter11 | 3 |
horovod/horovod | tensorflow | 3,372 | failed call to cuInit after calling hvd.init() | **Environment:**
1. Framework: TensorFlow
2. Framework version: 2.6.0
3. Horovod version: 0.23.0 (also seen with 0.22.1)
4. MPI version: OpenMPI 4.1.1
5. CUDA version: 11.3.1
6. NCCL version: 2.10.3
7. Python version: 3.9.5
8. Spark / PySpark version: NA
9. Ray version: NA
10. OS and version: Debian 10
11. GCC version: 10.3.0
12. CMake version: 3.20.1
**Bug report:**
I'm running into the following error:
```
>>> import tensorflow as tf
>>> import horovod.tensorflow as hvd
>>> hvd.init()
>>> tf.config.experimental.list_physical_devices('GPU')
2022-01-18 17:24:29.162452: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: UNKNOWN ERROR (34)
2022-01-18 17:24:29.162536: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: r29n3.lisa.surfsara.nl
2022-01-18 17:24:29.162558: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: r29n3.lisa.surfsara.nl
2022-01-18 17:24:29.162831: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: Invalid argument: expected %d.%d, %d.%d.%d, or %d.%d.%d.%d form for driver version; got
"1"
2022-01-18 17:24:29.162897: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 470.94.0
```
But note that this runs fine:
```
>>> import tensorflow as tf
>>> import horovod.tensorflow as hvd
>>> tf.config.experimental.list_physical_devices('GPU')
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:2', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:3', device_type='GPU')]
```
Thus, it seems to be the `hvd.init` that triggers the issue.
I have the exact same installation (same version of Horovod & dependencies) on another cluster, where it works well. The only difference is the OS (the other runs `RHEL 8.4`) and the CUDA driver version (`470.57.02` on the system where it works, `470.94` on the system where it fails).
My suspicion is that the newer CUDA driver (`470.94`) maybe only identifies itself by major/minor version, as `nvidia-smi` _also_ only shows major/minor version for this driver. The revision seems to be omitted. I am not sure though how the `hvd.init()` call would alter the behaviour of the CUDA driver initialization - but I'm hoping someone on your side has a hunch on this :)
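For what it's worth, the parser's own error message says it accepts `%d.%d` (major.minor) forms, and reports that libcuda returned literally `"1"`. A rough reconstruction of that format check (an assumption for illustration, not TensorFlow's actual code):

```python
import re

# Accept 2-, 3- or 4-component dotted versions, matching the forms the
# error message lists: %d.%d, %d.%d.%d, %d.%d.%d.%d.
_VERSION_RE = re.compile(r"^\d+\.\d+(\.\d+){0,2}$")

def valid_driver_version(s):
    return bool(_VERSION_RE.match(s))

print(valid_driver_version("470.94"))     # True  - major.minor alone parses fine
print(valid_driver_version("470.57.02"))  # True
print(valid_driver_version("1"))          # False - what libcuda reported here
```

Under this reading, `470.94` on its own would parse fine, so the interesting question is why libcuda reports `"1"` once `hvd.init()` has run.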
Alternatively, if someone has the same driver version `470.94` and can _not_ reproduce the issue, I'd also love to hear it - then we have to look for alternative theories on what is causing this. | open | 2022-01-18T17:17:01Z | 2022-01-18T17:19:16Z | https://github.com/horovod/horovod/issues/3372 | [
"bug"
] | casparvl | 1 |
ivy-llc/ivy | tensorflow | 28,052 | Fix Frontend Failing Test: torch - creation_ops.torch.full | ToDo - 27498
Type- Priority Open | closed | 2024-01-25T21:28:07Z | 2024-01-31T11:01:36Z | https://github.com/ivy-llc/ivy/issues/28052 | [
"Sub Task"
] | sgalpha01 | 2 |
neuml/txtai | nlp | 88 | Switch default backend on Windows to Hnswlib | Currently, Windows defaults to annoy as the index backend on Windows. This should be changed to Hnswlib given that it has been support for all txtai ANN operations. | closed | 2021-05-11T20:27:18Z | 2021-05-13T15:12:42Z | https://github.com/neuml/txtai/issues/88 | [] | davidmezzetti | 0 |
jupyterlab/jupyter-ai | jupyter | 657 | ImportError: cannot import name 'SecretStr' from 'langchain_core.pydantic_v1' | <!-- Welcome! Thank you for contributing. These HTML comments will not render in the issue.
Before creating a new issue:
* Search for relevant issues
* Follow the issue reporting guidelines:
https://jupyterlab.readthedocs.io/en/latest/getting_started/issue.html
-->
## Description
<!--Describe the bug clearly and concisely. Include screenshots if possible-->
Not able to load the magic
## Reproduce
<!--Describe step-by-step instructions to reproduce the behavior-->
```
pip install -U jupyter-ai
```
<!--Describe how you diagnosed the issue. See the guidelines at
https://jupyterlab.readthedocs.io/en/latest/getting_started/issue.html -->
```python
%load_ext jupyter_ai_magics
```
<pre>
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[2], line 1
----> 1 get_ipython().run_line_magic('load_ext', 'jupyter_ai_magics')
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/IPython/core/interactiveshell.py:2456](http://localhost:8888/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/IPython/core/interactiveshell.py#line=2455), in InteractiveShell.run_line_magic(self, magic_name, line, _stack_depth)
2454 kwargs['local_ns'] = self.get_local_scope(stack_depth)
2455 with self.builtin_trap:
-> 2456 result = fn(*args, **kwargs)
2458 # The code below prevents the output from being displayed
2459 # when using magics with decorator @output_can_be_silenced
2460 # when the last Python token in the expression is a ';'.
2461 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/IPython/core/magics/extension.py:33](http://localhost:8888/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/IPython/core/magics/extension.py#line=32), in ExtensionMagics.load_ext(self, module_str)
31 if not module_str:
32 raise UsageError('Missing module name.')
---> 33 res = self.shell.extension_manager.load_extension(module_str)
35 if res == 'already loaded':
36 print("The %s extension is already loaded. To reload it, use:" % module_str)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/IPython/core/extensions.py:76, in ExtensionManager.load_extension(self, module_str)
69 """Load an IPython extension by its module name.
70
71 Returns the string "already loaded" if the extension is already loaded,
72 "no load function" if the module doesn't have a load_ipython_extension
73 function, or None if it succeeded.
74 """
75 try:
---> 76 return self._load_extension(module_str)
77 except ModuleNotFoundError:
78 if module_str in BUILTINS_EXTS:
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/IPython/core/extensions.py:91, in ExtensionManager._load_extension(self, module_str)
89 with self.shell.builtin_trap:
90 if module_str not in sys.modules:
---> 91 mod = import_module(module_str)
92 mod = sys.modules[module_str]
93 if self._call_load_ipython_extension(mod):
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py:126, in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1204, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1176, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1147, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:690, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:940, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/jupyter_ai_magics/__init__.py:4
1 from ._version import __version__
3 # expose embedding model providers on the package root
----> 4 from .embedding_providers import (
5 BaseEmbeddingsProvider,
6 BedrockEmbeddingsProvider,
7 CohereEmbeddingsProvider,
8 GPT4AllEmbeddingsProvider,
9 HfHubEmbeddingsProvider,
10 OpenAIEmbeddingsProvider,
11 QianfanEmbeddingsEndpointProvider,
12 )
13 from .exception import store_exception
14 from .magics import AiMagics
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/jupyter_ai_magics/embedding_providers.py:3
1 from typing import ClassVar, List
----> 3 from jupyter_ai_magics.providers import (
4 AuthStrategy,
5 AwsAuthStrategy,
6 EnvAuthStrategy,
7 Field,
8 MultiEnvAuthStrategy,
9 )
10 from langchain.pydantic_v1 import BaseModel, Extra
11 from langchain_community.embeddings import (
12 BedrockEmbeddings,
13 CohereEmbeddings,
(...)
17 QianfanEmbeddingsEndpoint,
18 )
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/jupyter_ai_magics/providers.py:11
8 from typing import Any, ClassVar, Coroutine, Dict, List, Literal, Optional, Union
10 from jsonpath_ng import parse
---> 11 from langchain.chat_models.base import BaseChatModel
12 from langchain.llms.sagemaker_endpoint import LLMContentHandler
13 from langchain.llms.utils import enforce_stop_tokens
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chat_models/__init__.py:23
19 import warnings
21 from langchain_core._api import LangChainDeprecationWarning
---> 23 from langchain.utils.interactive_env import is_interactive_env
26 def __getattr__(name: str) -> None:
27 from langchain_community import chat_models
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/utils/__init__.py:14
7 from langchain_core.utils.formatting import StrictFormatter, formatter
8 from langchain_core.utils.input import (
9 get_bolded_text,
10 get_color_mapping,
11 get_colored_text,
12 print_text,
13 )
---> 14 from langchain_core.utils.utils import (
15 check_package_version,
16 convert_to_secret_str,
17 get_pydantic_field_names,
18 guard_import,
19 mock_now,
20 raise_for_status_with_text,
21 xor_args,
22 )
24 from langchain.utils.env import get_from_dict_or_env, get_from_env
25 from langchain.utils.math import cosine_similarity, cosine_similarity_top_k
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/utils/__init__.py:18
16 from langchain_core.utils.loading import try_load_from_hub
17 from langchain_core.utils.strings import comma_list, stringify_dict, stringify_value
---> 18 from langchain_core.utils.utils import (
19 build_extra_kwargs,
20 check_package_version,
21 convert_to_secret_str,
22 get_pydantic_field_names,
23 guard_import,
24 mock_now,
25 raise_for_status_with_text,
26 xor_args,
27 )
29 __all__ = [
30 "StrictFormatter",
31 "check_package_version",
(...)
50 "stringify_value",
51 ]
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/utils/utils.py:13
10 from packaging.version import parse
11 from requests import HTTPError, Response
---> 13 from langchain_core.pydantic_v1 import SecretStr
16 def xor_args(*arg_groups: Tuple[str, ...]) -> Callable:
17 """Validate specified keyword args are mutually exclusive."""
ImportError: cannot import name 'SecretStr' from 'langchain_core.pydantic_v1' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/pydantic_v1/__init__.py)
</pre>
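The traceback bottoms out importing `SecretStr` from `langchain_core.pydantic_v1`, which points at a version mismatch between `jupyter_ai_magics` and the installed `langchain`/`langchain-core` rather than a bug in the magic itself. As a hedged diagnostic (not an official jupyter-ai tool), the versions involved can be listed and compared against jupyter-ai's dependency pins:

```python
# Hedged diagnostic: list the versions of the packages in the import chain.
# The exact compatible pins are not known from this report; this only gathers
# the data needed to compare against jupyter-ai's requirements.
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Map each distribution name to its installed version, or None if absent."""
    found = {}
    for pkg in packages:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None
    return found

# e.g. installed_versions(["jupyter-ai", "jupyter-ai-magics", "langchain",
#                          "langchain-core", "langchain-community"])
```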
## Expected behavior
## Context
- Operating System and version: macOS 14.3.1 (MacBook Pro M3)
- Browser and version: Chrome 121.0.6167.184 (Official Build) (arm64)
- JupyterLab version: 4.1.2
<details><summary>Troubleshoot Output</summary>
<pre>
$PATH:
/Users/marksusol/DataScience/google-cloud-sdk/bin
/opt/local/libexec/llvm-16/bin
/opt/local/bin
/opt/local/sbin
/Users/marksusol/bin
/Library/Frameworks/Spark.framework/spark-3.5.0-bin-hadoop3/bin
/Library/Frameworks/Python.framework/Versions/3.11/bin
/usr/local/bin
/System/Cryptexes/App/usr/bin
/usr/bin
/bin
/usr/sbin
/sbin
/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin
/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin
/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin
/Library/Apple/usr/bin
sys.path:
/Library/Frameworks/Python.framework/Versions/3.11/bin
/Library/Frameworks/Python.framework/Versions/3.11/lib/python311.zip
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/lib-dynload
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages
sys.executable:
/Library/Frameworks/Python.framework/Versions/3.11/bin/python3.11
sys.version:
3.11.7 (v3.11.7:fa7a6f2303, Dec 4 2023, 15:22:56) [Clang 13.0.0 (clang-1300.0.29.30)]
platform.platform():
macOS-14.3.1-arm64-arm-64bit
which -a jupyter:
/Library/Frameworks/Python.framework/Versions/3.11/bin/jupyter
pip list:
Package Version
----------------------------- ------------------
absl-py 2.0.0
accelerate 0.26.1
aiohttp 3.9.3
aiosignal 1.3.1
aiosqlite 0.20.0
alabaster 0.7.13
alembic 1.13.1
annotated-types 0.6.0
anyascii 0.3.2
anyio 3.7.1
appdirs 1.4.4
applaunchservices 0.3.0
appnope 0.1.3
apprise 1.7.2
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asgi-lifespan 2.1.0
astroid 3.0.1
asttokens 2.4.1
astunparse 1.6.3
async-lru 2.0.4
async-timeout 4.0.3
asyncpg 0.29.0
atomicwrites 1.4.1
attrs 23.1.0
autopep8 2.0.4
Babel 2.13.1
beautifulsoup4 4.12.2
binaryornot 0.4.4
bitsandbytes 0.42.0
black 23.11.0
bleach 6.1.0
blinker 1.7.0
blis 0.7.11
cachetools 5.3.2
catalogue 2.0.10
catboost 1.2.2
Cerberus 1.3.5
certifi 2023.11.17
cffi 1.16.0
chardet 5.2.0
charset-normalizer 3.3.2
click 8.1.7
cloudpathlib 0.16.0
cloudpickle 3.0.0
colorama 0.4.6
comm 0.2.0
confection 0.1.4
contourpy 1.2.0
contractions 0.1.73
cookiecutter 2.5.0
coolname 2.2.0
croniter 2.0.1
cryptography 42.0.5
cspy 1.0.3
cycler 0.12.1
cymem 2.0.8
dacite 1.8.1
dask 2024.2.1
databricks-cli 0.18.0
dataclasses-json 0.6.4
datasets 2.16.1
dateparser 1.2.0
debugpy 1.8.0
decorator 5.1.1
deepmerge 1.1.1
defusedxml 0.7.1
diff-match-patch 20230430
dill 0.3.7
distlib 0.3.7
distributed 2024.2.1
distro 1.8.0
dm-tree 0.1.8
dnspython 2.6.1
docker 6.1.3
docker-pycreds 0.4.0
docopt 0.6.2
docstring-parser 0.15
docstring-to-markdown 0.13
docutils 0.20.1
einops 0.7.0
email_validator 2.1.1
emoji 2.9.0
en-core-web-sm 3.7.1
en-spacy-pii-distilbert 0.0.0
entrypoints 0.4
et-xmlfile 1.1.0
evaluate 0.4.1
executing 2.0.1
faiss-cpu 1.7.4
Faker 22.6.0
fastjsonschema 2.19.0
filelock 3.13.1
findspark 2.0.1
flake8 6.1.0
Flask 3.0.2
flatbuffers 23.5.26
fonttools 4.46.0
fqdn 1.5.1
frozenlist 1.4.1
fsspec 2023.10.0
funcy 2.0
gast 0.5.4
gensim 4.3.2
gitdb 4.0.11
GitPython 3.1.41
google-auth 2.25.2
google-auth-oauthlib 1.1.0
google-pasta 0.2.0
graphviz 0.20.1
greenlet 3.0.3
griffe 0.41.0
grpcio 1.60.1
grpcio-tools 1.60.1
gunicorn 21.2.0
h11 0.14.0
h2 4.1.0
h5py 3.10.0
hpack 4.0.0
htmlmin 0.1.12
httpcore 1.0.2
httpx 0.26.0
huggingface-hub 0.20.3
hyperframe 6.0.1
ibm-cloud-sdk-core 3.19.1
idna 3.6
ImageHash 4.3.1
imagesize 1.4.1
importlib-metadata 7.0.0
inflection 0.5.1
install 1.3.5
intervaltree 3.1.0
ipykernel 6.28.0
ipython 8.18.1
ipython-genutils 0.2.0
ipython-sql 0.5.0
ipywidgets 8.1.1
isoduration 20.11.0
isort 5.13.0
itsdangerous 2.1.2
jaraco.classes 3.3.0
jedi 0.19.1
jellyfish 1.0.3
Jinja2 3.1.2
joblib 1.3.2
json5 0.9.14
jsonlines 4.0.0
jsonpatch 1.33
jsonpath-ng 1.6.1
jsonpointer 2.4
jsonschema 4.20.0
jsonschema-specifications 2023.11.2
jupyter 1.0.0
jupyter_ai 2.10.0
jupyter_ai_magics 2.10.0
jupyter_client 8.6.0
jupyter-console 6.6.3
jupyter_core 5.5.0
jupyter-events 0.9.0
jupyter-lsp 2.2.1
jupyter_server 2.12.1
jupyter_server_terminals 0.4.4
jupyterlab 4.1.2
jupyterlab_pygments 0.3.0
jupyterlab_server 2.25.2
jupyterlab-widgets 3.0.9
kaggle 1.6.4
keras 2.15.0
keyring 24.3.0
kiwisolver 1.4.5
kubernetes 29.0.0
langchain 0.1.9
langchain-community 0.0.24
langchain-core 0.1.27
langcodes 3.3.0
langsmith 0.1.9
libclang 16.0.6
llvmlite 0.41.1
locket 1.0.0
Mako 1.3.2
Markdown 3.5.1
markdown-it-py 3.0.0
MarkupSafe 2.1.3
marshmallow 3.21.0
matplotlib 3.8.2
matplotlib-inline 0.1.6
mccabe 0.7.0
mdurl 0.1.2
mistune 3.0.2
ml-dtypes 0.2.0
mlflow 2.10.1
more-itertools 10.1.0
mpmath 1.3.0
msgpack 1.0.7
multidict 6.0.5
multimethod 1.10
multiprocess 0.70.15
murmurhash 1.0.10
mypy-extensions 1.0.0
namex 0.0.7
nbclient 0.9.0
nbconvert 7.12.0
nbformat 5.9.2
nest-asyncio 1.5.8
networkx 3.2.1
nltk 3.8.1
notebook 7.0.6
notebook_shim 0.2.3
numba 0.58.1
numexpr 2.8.8
numpy 1.25.2
numpydoc 1.6.0
oauthlib 3.2.2
openai 1.10.0
openpyxl 3.1.2
opt-einsum 3.3.0
orjson 3.9.15
overrides 7.4.0
packaging 23.2
pandas 2.1.4
pandocfilters 1.5.0
parso 0.8.3
partd 1.4.1
pathspec 0.11.2
pathy 0.10.3
patsy 0.5.6
peft 0.8.2
pendulum 2.1.2
pep517 0.13.1
pexpect 4.9.0
phik 0.12.4
pickleshare 0.7.5
Pillow 10.1.0
pip 24.0
pip-api 0.0.30
pipreqs 0.4.13
platformdirs 4.1.0
plette 0.4.4
plotly 5.18.0
pluggy 1.3.0
ply 3.11
prefect 2.16.0
preshed 3.0.9
prettytable 3.9.0
prometheus-client 0.19.0
prompt-toolkit 3.0.41
protobuf 4.21.7
psutil 5.9.6
psycopg2-binary 2.9.9
ptyprocess 0.7.0
PuLP 2.8.0
pure-eval 0.2.2
py4j 0.10.9.7
pyahocorasick 2.0.0
pyarrow 15.0.0
pyarrow-hotfix 0.6
pyasn1 0.5.1
pyasn1-modules 0.3.0
pycodestyle 2.11.1
pycparser 2.21
pydantic 1.10.11
pydantic_core 2.14.6
pydocstyle 6.3.0
pyflakes 3.1.0
Pygments 2.17.2
PyJWT 2.8.0
pyLDAvis 3.4.2
pylint 3.0.2
pylint-venv 3.0.3
pyls-spyder 0.4.0
pyobjc-core 10.1
pyobjc-framework-Cocoa 10.1
pyobjc-framework-CoreServices 10.1
pyobjc-framework-FSEvents 10.1
pyparsing 3.1.1
PyPDF2 3.0.1
PyQt5 5.15.10
PyQt5-Qt5 5.15.11
PyQt5-sip 12.13.0
PyQtWebEngine 5.15.6
PyQtWebEngine-Qt5 5.15.11
pyspark 3.5.0
python-dateutil 2.8.2
python-json-logger 2.0.7
python-lsp-black 1.3.0
python-lsp-jsonrpc 1.1.2
python-lsp-server 1.9.0
python-multipart 0.0.9
python-slugify 8.0.1
pytoolconfig 1.2.6
pytz 2023.3.post1
pytzdata 2020.1
PyWavelets 1.5.0
PyYAML 6.0.1
pyzmq 25.1.2
QDarkStyle 3.2.3
qstylizer 0.2.2
QtAwesome 1.2.3
qtconsole 5.5.1
QtPy 2.4.1
querystring-parser 1.2.4
readchar 4.0.5
referencing 0.32.0
regex 2023.10.3
requests 2.31.0
requests-oauthlib 1.3.1
requirementslib 3.0.0
responses 0.18.0
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rich 13.7.0
rope 1.11.0
rpds-py 0.13.2
rpy2 3.5.14
rsa 4.9
Rtree 1.1.0
ruamel.yaml 0.18.6
ruamel.yaml.clib 0.2.8
safetensors 0.4.2
scikit-learn 1.3.2
scipy 1.11.4
seaborn 0.13.2
Send2Trash 1.8.2
sentencepiece 0.1.99
sentry-sdk 1.40.2
seqeval 1.2.2
setproctitle 1.3.3
setuptools 65.5.0
shtab 1.6.5
six 1.16.0
smart-open 6.4.0
smmap 5.0.1
sniffio 1.3.0
snowballstemmer 2.2.0
sortedcontainers 2.4.0
soupsieve 2.5
spacy 3.7.2
spacy-alignments 0.9.1
spacy-legacy 3.0.12
spacy-loggers 1.0.5
spacy-transformers 1.3.4
Sphinx 7.2.6
sphinxcontrib-applehelp 1.0.7
sphinxcontrib-devhelp 1.0.5
sphinxcontrib-htmlhelp 2.0.4
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.6
sphinxcontrib-serializinghtml 1.1.9
spyder 5.5.0
spyder-kernels 2.5.0
spyder-notebook 0.5.1
SQLAlchemy 2.0.25
sqlparse 0.4.4
srsly 2.4.8
stack-data 0.6.3
statsmodels 0.14.1
sympy 1.12
tabula-py 2.9.0
tabulate 0.9.0
tangled-up-in-unicode 0.2.0
tblib 3.0.0
tenacity 8.2.3
tensorboard 2.15.1
tensorboard-data-server 0.7.2
tensorflow 2.15.0
tensorflow-estimator 2.15.0
tensorflow-hub 0.15.0
tensorflow-io-gcs-filesystem 0.34.0
tensorflow-macos 2.15.0
tensorflow-metal 1.1.0
tensorflow-text 2.15.0
termcolor 2.4.0
terminado 0.18.0
text-unidecode 1.3
textdistance 4.6.0
textsearch 0.0.24
thinc 8.1.12
threadpoolctl 3.2.0
three-merge 0.1.1
tiktoken 0.6.0
tinycss2 1.2.1
tokenizers 0.15.1
toml 0.10.2
tomli 2.0.1
tomlkit 0.12.3
toolz 0.12.1
torch 2.3.0.dev20240208
torchaudio 2.2.0.dev20240208
torchvision 0.18.0.dev20240208
tornado 6.4
tqdm 4.66.1
traitlets 5.14.0
transformers 4.37.2
trl 0.7.10
typeguard 4.1.5
typer 0.9.0
types-python-dateutil 2.8.19.14
typing_extensions 4.8.0
typing-inspect 0.9.0
tyro 0.7.2
tzdata 2023.3
tzlocal 5.2
ujson 5.8.0
Unidecode 1.3.7
uri-template 1.3.0
urllib3 2.1.0
uvicorn 0.27.1
visions 0.7.5
vrpy 0.5.1
wandb 0.16.3
wasabi 0.10.1
watchdog 3.0.0
wcwidth 0.2.12
weasel 0.3.4
webcolors 1.13
webencodings 0.5.1
websocket-client 1.7.0
websockets 12.0
Werkzeug 3.0.1
wget 3.2
whatthepatch 1.0.5
wheel 0.42.0
widgetsnbextension 4.0.9
wordcloud 1.9.3
wrapt 1.14.1
wurlitzer 3.0.3
xgboost 2.0.3
xxhash 3.4.1
yapf 0.40.2
yarg 0.1.9
yarl 1.9.4
ydata-profiling 4.6.4
zict 3.0.0
zipp 3.17.0
</pre>
</details>
| closed | 2024-02-27T16:47:28Z | 2025-02-04T23:29:29Z | https://github.com/jupyterlab/jupyter-ai/issues/657 | [
"bug"
] | msusol | 3 |
clovaai/donut | nlp | 175 | KeyError: 'image' in Dataset | ## Error
While training on my custom dataset, I am getting the following error:
```
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/anaconda/envs/azureml_py38/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/anaconda/envs/azureml_py38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/anaconda/envs/azureml_py38/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/pibit-ml/code/donut-poc/donut/donut/util.py", line 111, in __getitem__
    input_tensor = self.donut_model.encoder.prepare_input(sample["image"], random_padding=self.split == "train")
KeyError: 'image'
```
I previously used Google Colab to run the same code with the same dataset structure, but I did not see this error.
## Dataset logs
```
Using custom data configuration dataset-1000-7363c1598f36d3b2
Reusing dataset json (/home/azureuser/.cache/huggingface/datasets/json/dataset-1000-7363c1598f36d3b2/0.0.0/da492aad5680612e4028e7f6ddc04b1dfcec4b64db470ed7cc5f2bb265b9b6b5)
Dataset({
features: ['file_name', 'ground_truth'],
num_rows: 952
})
Using custom data configuration dataset-1000-7363c1598f36d3b2
Reusing dataset json (/home/azureuser/.cache/huggingface/datasets/json/dataset-1000-7363c1598f36d3b2/0.0.0/da492aad5680612e4028e7f6ddc04b1dfcec4b64db470ed7cc5f2bb265b9b6b5)
Dataset({
features: ['file_name', 'ground_truth'],
num_rows: 25
})
```
## Structure of the dataset
```
dataset-1000
├── test
│ ├── metadata.jsonl
│ ├── {image_path0}
│ ├── {image_path1}
│ .
│ .
├── train
│ ├── metadata.jsonl
│ ├── {image_path0}
│ ├── {image_path1}
│ .
│ .
└── validation
├── metadata.jsonl
├── {image_path0}
├── {image_path1}
│ .
│ .
```
## Command
```shell
$ python train.py --config config/train_cord.yaml \
--pretrained_model_name_or_path "naver-clova-ix/donut-base" \
--dataset_name_or_paths '["dataset-1000"]' \
--exp_version "test_experiment_1000"
```
## Config YAML
```yaml
resume_from_checkpoint_path: null # only used for resume_from_checkpoint option in PL
result_path: "./result"
pretrained_model_name_or_path: "naver-clova-ix/donut-base" # loading a pre-trained model (from moldehub or path)
dataset_name_or_paths: "naver-clova-ix/donut-base" # loading datasets (from moldehub or path)
sort_json_key: False # cord dataset is preprocessed, and publicly available at https://huggingface.co/datasets/naver-clova-ix/cord-v2
train_batch_sizes: [1]
val_batch_sizes: [1]
input_size: [1280, 960] # when the input resolution differs from the pre-training setting, some weights will be newly initialized (but the model training would be okay)
max_length: 768
align_long_axis: False
num_nodes: 1
seed: 2022
lr: 3e-5
warmup_steps: 100
num_training_samples_per_epoch: 900
max_epochs: 50
max_steps: -1
num_workers: 8
val_check_interval: 1.0
check_val_every_n_epoch: 3
gradient_clip_val: 1.0
verbose: True
```
## System Info
OS: Ubuntu 20.04.6 LTS x86_64 (Azure VM)
Python: 3.8.10
donut: 1.0.9
| open | 2023-04-02T10:17:18Z | 2023-10-18T20:45:06Z | https://github.com/clovaai/donut/issues/175 | [] | pathikg | 2 |
apache/airflow | python | 47,603 | Airflow does not handle port properly in cookies when more instances opened in one browser | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### If "Other Airflow 2 version" selected, which one?
2.10.3
### What happened?
When we run two Airflow instances on different IPs/machines, we can open both in one browser, work with them normally, and switch between their windows freely without being logged out.
When two (or more) Airflow instances run on the same machine/IP and differ only in port, switching between them in browser tabs logs the user out: after switching to the other instance and returning, the user has to log in again.
It looks like the port is not part of the cookie's key, only the host name is.
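For what it's worth, this matches standard cookie semantics: RFC 6265 scopes cookies by domain and path, never by port, so both webservers read and overwrite the same `session` cookie. A hedged workaround sketch — assuming `webserver_config.py` is loaded into the Flask app config so the standard Flask `SESSION_COOKIE_NAME` setting applies — is to give each instance its own cookie name:

```python
# webserver_config.py -- one copy per Airflow instance. Assumptions: Airflow
# loads this file into the Flask app config, so the standard Flask
# SESSION_COOKIE_NAME setting takes effect; the AIRFLOW_WEB_PORT env var name
# is invented for this sketch.
import os

port = os.environ.get("AIRFLOW_WEB_PORT", "8080")
SESSION_COOKIE_NAME = f"airflow_session_{port}"  # distinct cookie per instance
```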
### What you think should happen instead?
We expect the behaviour to be the same whether or not the Airflow instances run on the same machine.
### How to reproduce
Configure two Airflow webservers on one machine, differing only in port (an HTTP tunnel can be used to emulate this, for example).
Open both instances in one browser and switch between their windows.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-11T09:43:31Z | 2025-03-16T17:40:17Z | https://github.com/apache/airflow/issues/47603 | [
"kind:bug",
"area:webserver",
"good first issue",
"area:core",
"area:UI"
] | nesp159de | 3 |
zihangdai/xlnet | nlp | 208 | xlnet consistantly underperformed Bert on language modeling | Hi, I am using Xlnet as a language model with code provided by HuggingFace PyTorch-transformers.
However, XLNet consistently underperformed BERT in our experiments. Considering its advanced design, we are curious how that could happen. For example, we tested their coreference-resolution ability on the Winograd Schema Challenge dataset. An example from the dataset:
```
The trophy doesn't fit into the brown suitcase because the trophy is too large. The trophy doesn't fit into the brown suitcase because the suitcase is too large.
```
And we let the model choose the correct one by computing the perplexity of each sentence.
In the end, we got the result:
Bert-large: 62% acc
Xlnet-base: 54.4% acc
Xlnet-large: 63.6% acc
So from my point of view, XLNet-base should be compared with BERT-large since they have similar parameter sizes. Furthermore, we have run experiments on other test datasets, such as SWAG, and saw the same phenomenon. Any thoughts on this problem would be appreciated :)
Code:
```
import torch
#from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
from pytorch_transformers import XLNetLMHeadModel, XLNetTokenizer, XLNetConfig
import numpy as np

# OPTIONAL: if you want to have more information on what's happening, activate the logger as follows
import logging
logging.basicConfig(level=logging.INFO)

def xlnet_score(text, model, tokenizer):
    tokenized_text = tokenizer.tokenize(text)
    # text = "[CLS] Stir the mixture until it is done [SEP]"
    sentence_prob = 0
    # print(len(tokenized_text))
    for masked_index in range(0, len(tokenized_text)):
        # Mask a token that we will try to predict back
        masked_word = tokenized_text[masked_index]
        if masked_word != "<sep>":
            tokenized_text[masked_index] = '<mask>'
            input_ids = torch.tensor(tokenizer.convert_tokens_to_ids(tokenized_text)).unsqueeze(0)
            index = torch.tensor(tokenizer.convert_tokens_to_ids(masked_word))
            perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
            perm_mask[:, :, masked_index] = 1.0  # Previous tokens don't see the masked token
            target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)  # Shape [1, 1, seq_length] => predict one token
            target_mapping[0, 0, masked_index] = 1.0  # The prediction target is the masked token
            input_ids = input_ids.to('cuda')
            perm_mask = perm_mask.to('cuda')
            target_mapping = target_mapping.to('cuda')
            index = index.to('cuda')
            with torch.no_grad():
                outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping, labels=index)
                next_token_logits = outputs[0]
            length = len(tokenized_text)
            # predict_list = predictions[0, masked_index]
            sentence_prob += next_token_logits.item()
            tokenized_text[masked_index] = masked_word
    return sentence_prob / length

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-large-cased')
model.to('cuda')
model.eval()

with open("wsc.txt", "r", encoding='utf-8') as f:
    file = f.readlines()

num = len(file)
count = 0
curr = 0
label_list = ['A', 'B']
for i in file:
    label, sentence_1, sentence_2 = i.split("\001")
    prob_1 = xlnet_score(sentence_1, model=model, tokenizer=tokenizer)
    prob_2 = xlnet_score(sentence_2, model=model, tokenizer=tokenizer)
    answer = min(prob_1, prob_2)
    #print(prob_1, prob_2, prob_3, prob_4)
    index = [prob_1, prob_2].index(answer)
    print(label, label_list[index])
    if label == label_list[index]:
        count += 1
    curr += 1
    print(count, curr, count / curr)
print(count / num)
``` | open | 2019-08-11T03:44:50Z | 2019-08-23T07:01:46Z | https://github.com/zihangdai/xlnet/issues/208 | [] | XuhuiZhou | 2 |
zappa/Zappa | flask | 774 | [Migrated] switching "keep_warm" off doesn't work | Originally from: https://github.com/Miserlou/Zappa/issues/1915 by [filthysocks](https://github.com/filthysocks)
If you switch the keep_warm functionality on and then back off again, the CloudWatch rule is NOT deleted.
## Context
If you set keep_warm to true, deploy, then set it to false and deploy again, the CloudWatch rule is not deleted. Thus, the function will still be kept warm.
## Expected Behavior
If `keep_warm` is false, then the function should not be kept warm.
## Actual Behavior
It is still kept warm.
## Possible Fix
not sure ...
## Steps to Reproduce
1. deploy a function with "keep_warm": true
2. update this function with "keep_warm": false
## Your Environment
* Zappa version used: 0.48.2
* Operating System and Python version: ubuntu 18.04 python 3.7.3
* The output of `pip freeze`:
appdirs==1.4.3
argcomplete==1.9.3
aspy.yaml==1.3.0
atomicwrites==1.3.0
attrs==19.1.0
black==19.3b0
boto3==1.9.194
botocore==1.12.194
certifi==2019.6.16
cfgv==2.0.1
cfn-flip==1.2.1
chardet==3.0.4
Click==7.0
cognitojwt==1.1.0
coverage==4.5.4
dash==1.0.1
dash-auth==1.3.2
dash-bootstrap-components==0.6.3
dash-core-components==1.0.0
dash-daq==0.1.0
dash-google-auth==0.1.2
dash-html-components==1.0.0
dash-renderer==1.0.0
dash-table==4.0.1
docutils==0.14
durationpy==0.5
ecdsa==0.13.2
Flask==1.1.1
Flask-Cognito==1.13
Flask-Compress==1.4.0
Flask-Dance==2.2.0
Flask-SeaSurf==0.2.2
future==0.16.0
hjson==3.0.1
identify==1.4.5
idna==2.8
importlib-metadata==0.18
itsdangerous==1.1.0
Jinja2==2.10.1
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.1
more-itertools==7.2.0
nodeenv==1.3.3
numpy==1.16.4
oauthlib==3.0.2
pandas==0.24.2
pkg-resources==0.0.0
placebo==0.9.0
plotly==4.0.0
pluggy==0.12.0
pre-commit==1.17.0
py==1.8.0
pyasn1==0.4.5
pytest==4.3.1
pytest-cov==2.7.1
python-dateutil==2.6.1
python-jose==3.0.1
python-slugify==1.2.4
pytz==2019.1
PyYAML==5.1.1
requests==2.7.0
requests-oauthlib==1.2.0
retrying==1.3.3
rsa==4.0
s3transfer==0.2.1
six==1.12.0
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.9
ua-parser==0.8.0
Unidecode==1.1.1
urllib3==1.25.3
URLObject==2.4.3
virtualenv==16.6.2
Werkzeug==0.15.5
wsgi-request-logger==0.4.6
zappa==0.48.2
zipp==0.5.2
| closed | 2021-02-20T12:42:15Z | 2024-04-13T18:37:14Z | https://github.com/zappa/Zappa/issues/774 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
cchen156/Learning-to-See-in-the-Dark | tensorflow | 38 | iphone 6s weights | The official website contains a low-light image and its result when run through the SID model (http://web.engr.illinois.edu/~cchen156/SID/examples/6s.html). I could not find the weights for this in the repository. Is it possible to upload the weights for the model trained on the iPhone 6s? | closed | 2018-07-17T18:27:43Z | 2019-03-04T16:20:02Z | https://github.com/cchen156/Learning-to-See-in-the-Dark/issues/38 | [] | allendoss | 5 |
joeyespo/grip | flask | 243 | Render YAML front matter like GitHub does | This file...
```markdown
---
title: "Cancer Mortality Exploration"
author: "w203 Teaching Team"
output: pdf_document
---
## Background
In this lab, imagine that your team is hired by a health government agency. They would like to understand factors that predict cancer mortality rates, with the ultimate aim of identifying communities for social interventions, and of understanding which interventions are likely to have the most impact. Your team was hired to perform an exploratory analysis to help the agency address their goals.
```
...looks like this on GitHub
<img width="953" alt="screen shot 2017-09-13 at 10 57 30 pm" src="https://user-images.githubusercontent.com/8334252/30414165-10852514-98d7-11e7-8a54-8267725fc5b8.png">
... and this on Grip
<img width="933" alt="screen shot 2017-09-13 at 10 57 39 pm" src="https://user-images.githubusercontent.com/8334252/30414175-18388dbe-98d7-11e7-91b1-04f944e5a60d.png">
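For reference, GitHub's behaviour amounts to splitting the leading `---` block off and rendering it as a metadata table instead of literal text. A minimal stdlib sketch of that split (not Grip's actual code, and not a full YAML parser — flat `key: value` pairs only):

```python
# Minimal sketch of GitHub-style front-matter handling. A renderer could feed
# `meta` into an HTML table and render `body` as ordinary Markdown.
import re

def split_front_matter(text):
    """Split a leading '---' YAML front-matter block off a Markdown document.

    Returns (metadata_dict, body). If no front matter is found, the metadata
    dict is empty and the document is returned unchanged.
    """
    match = re.match(r"\A---\n(.*?)\n---\n(.*)", text, re.S)
    if not match:
        return {}, text
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return meta, match.group(2)

doc = '---\ntitle: "Cancer Mortality Exploration"\noutput: pdf_document\n---\n## Background\n'
meta, body = split_front_matter(doc)
# meta == {"title": "Cancer Mortality Exploration", "output": "pdf_document"}
```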
| open | 2017-09-14T05:59:04Z | 2020-01-07T12:41:18Z | https://github.com/joeyespo/grip/issues/243 | [
"enhancement"
] | acarl005 | 5 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,914 | [Bug]: Error code 1 ( windows 10, nvidia ) | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [x] The issue has been reported before but has not been fixed yet
### What happened?
Error code 1 when trying to run the SD web UI.
### Steps to reproduce the problem
1. download web ui from https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre
2. run update.bat
3. run run.bat
### What should have happened?
The web UI should launch normally.
### What browsers do you use to access the UI ?
Brave
### Sysinfo
```
Traceback (most recent call last):
  File "E:\sd.webui\webui\launch.py", line 48, in <module>
    main()
  File "E:\sd.webui\webui\launch.py", line 29, in main
    filename = launch_utils.dump_sysinfo()
  File "E:\sd.webui\webui\modules\launch_utils.py", line 473, in dump_sysinfo
    from modules import sysinfo
  File "E:\sd.webui\webui\modules\sysinfo.py", line 7, in <module>
    import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
Press any key to continue . . .
```
### Console logs
```Shell
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Installing torch and torchvision
F:\system\python\python.exe: No module named pip
Traceback (most recent call last):
  File "F:\webui\launch.py", line 48, in <module>
    main()
  File "F:\webui\launch.py", line 39, in main
    prepare_environment()
  File "F:\webui\modules\launch_utils.py", line 380, in prepare_environment
    run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
  File "F:\webui\modules\launch_utils.py", line 115, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "F:\system\python\python.exe" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
Press any key to continue . . .
```
### Additional information
I have updated my GPU driver recently.
i have updated cuda recently
i have installed python 3.10.6 recently
i already tried ensurepip
i have added python to path | closed | 2024-05-30T14:10:33Z | 2024-06-01T12:40:02Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15914 | [
"bug-report"
] | r00tlol | 15 |
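A common recovery for the two failures logged above (`No module named 'pkg_resources'` and `No module named pip`) is to re-seed pip into the exact interpreter the launcher invokes and then reinstall setuptools, which provides `pkg_resources`. The interpreter path below is taken from the user's log; this is a hedged sketch, not the project's official fix:

```shell
:: Restore pip inside the embedded interpreter the launcher uses
F:\system\python\python.exe -m ensurepip --upgrade

:: setuptools ships pkg_resources, which modules/sysinfo.py imports
F:\system\python\python.exe -m pip install --upgrade pip setuptools
```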
fugue-project/fugue | pandas | 136 | [FEATURE] Rename the limit method to take. Add it in Fugue SQL | **Context**
The limit method pull request is close to being [merged](https://github.com/fugue-project/fugue/pull/131). This method should be renamed to `head` because there is already an ANSI SQL keyword named `LIMIT`. After renaming this to `head`, implement the `HEAD` keyword for Fugue SQL with the same parameters as the execution engine definition.
| closed | 2021-01-11T00:47:14Z | 2021-01-12T05:38:32Z | https://github.com/fugue-project/fugue/issues/136 | [
"good first issue",
"Fugue SQL",
"core feature"
] | kvnkho | 1 |
jina-ai/clip-as-service | pytorch | 210 | change extract feature to end2end by bert classification | 1. change optimize_graph of graph.py
before:
(screenshot)
after:
(screenshot)
| closed | 2019-01-21T06:05:32Z | 2019-02-02T02:27:36Z | https://github.com/jina-ai/clip-as-service/issues/210 | [] | xiongma | 2 |
Ehco1996/django-sspanel | django | 216 | Can the docker deployment be simplified? Or provide a script | "HOST": "127.0.0.1", # used by the normal version
# 'HOST': 'db', # used by the docker version
Things like this could just be changed with a script...
How about it? I spent a long time fiddling with this... | closed | 2019-02-28T14:18:00Z | 2019-03-19T14:37:29Z | https://github.com/Ehco1996/django-sspanel/issues/216 | [] | perfect-network | 4
CorentinJ/Real-Time-Voice-Cloning | python | 1,089 | Attribute Error: 'str' object has no attribute 'tobytes' | I was trying to use the localhost version of this synthesis AI by following the tutorial to get it working on a local server, but every time I try to synthesize text in the command prompt, this is the error that pops up and it doesn't generate the audio:
line 39, in generate
fwav = io.StringIO(output.tobytes())
AttributeError: 'str' object has no attribute 'tobytes' | closed | 2022-06-27T16:19:35Z | 2022-06-27T16:50:43Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1089 | [] | cringegaming64 | 0 |
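The traceback above shows `output` is already a `str` by the time it reaches `generate`, so calling `.tobytes()` on it can never work; separately, `io.StringIO` only holds text, while audio needs `io.BytesIO`, and a waveform should be serialized as a real WAV container rather than bare sample bytes. A stdlib-only sketch of that serialization (the `wav_bytes` helper name, the sample values, and the 16 kHz rate are illustrative assumptions, not the project's code):

```python
import io
import struct
import wave

def wav_bytes(samples, rate=16000):
    """Serialize 16-bit mono integer samples into an in-memory WAV file."""
    buf = io.BytesIO()  # binary buffer; StringIO would reject bytes
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(s)) for s in samples))
    return buf.getvalue()

data = wav_bytes([0, 1000, -1000, 0])
assert data[:4] == b"RIFF"  # a valid WAV starts with a RIFF header
```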
Avaiga/taipy | data-visualization | 2,469 | [Refactor] Improve Data Node visual element (data tab) | ### Description
The `data_node` selector has a `show_data` parameter, which allows to show the "Data" tab.
The visual element is "smart" and "knows" when data is tabular; in that case, the "DATA" tab has 2 sub-tabs: One for a table and one for charts.
There are 2 things I think could be improved with this visual element:
1- Add parameters to not show the charts (default to False). Sometimes this is not useful, and when it's not useful, it can be confusing (also, I imagine adding a chart element if not needed is not great for performance).
2- Add a "summary" tab or sub-tab with the "Data" tab. The idea is to display summary statistics of the table for each column (data type, min, max, mean, Q1/2/, number of missing values). I don't know how to do it, but the concept is similar to Kaggle's data card descriptions (one example: https://www.kaggle.com/datasets/artyomkruglov/gaming-profiles-2025-steam-playstation-xbox/data). Another example is VS Code's Data Wrangler extension for Jupyter notebooks: https://code.visualstudio.com/docs/datascience/data-wrangler-quick-start.
For point number 2, I would also add a parameter to add or remove this component.
### Acceptance Criteria
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | open | 2025-02-28T19:20:47Z | 2025-03-07T13:42:38Z | https://github.com/Avaiga/taipy/issues/2469 | [
"Core",
"📈 Improvement",
"🖰 GUI",
"❓ Question",
"🟩 Priority: Low",
"🔒 Staff only",
"Core: 📁 Data node"
] | enarroied | 2 |
google-research/bert | nlp | 857 | BERT returns different embedding for same sentence | I am using pre-trained BERT for creating features; for the same sentence it produces different results in two different runs. Do we have to set some random state to produce consistent results? I am using **pytorch-transformers** for reading the pre-trained model. | closed | 2019-09-16T11:29:43Z | 2019-09-16T17:12:37Z | https://github.com/google-research/bert/issues/857 | [] | rshah1990 | 1
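The usual cause here is that the model is still in training mode, so dropout injects fresh randomness on every forward pass; with pytorch-transformers, calling `model.eval()` before extracting features (and fixing seeds if training continues) makes the embeddings repeatable. A toy stdlib sketch of the underlying effect (`embed` is a stand-in for a model with dropout, not BERT itself):

```python
import random

def embed(text, training=True):
    """Toy 'model': training mode applies dropout-like random scaling."""
    base = [float(ord(c)) for c in text]
    if training:
        return [x * random.random() for x in base]  # fresh noise each call
    return base  # eval mode: purely deterministic

a = embed("same sentence", training=False)
b = embed("same sentence", training=False)
assert a == b  # eval mode reproduces the same vector every run
```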
tortoise/tortoise-orm | asyncio | 1,339 | How to delete the microseconds? | When I save data in MySQL, created_at is: 2023-02-15 00:00:00.000000,
but I want created_at to be: 2023-02-15 00:00:00. What can I do? I want to drop the microseconds. | open | 2023-02-15T06:36:51Z | 2023-05-10T15:53:15Z | https://github.com/tortoise/tortoise-orm/issues/1339 | [] | kakrotto | 1
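Tortoise's `DatetimeField` maps to a MySQL datetime column that keeps microseconds; one low-tech workaround is to truncate the value in Python before saving (for example in an overridden `save()` or a custom field). The `drop_microseconds` helper below is a hypothetical stdlib-only sketch, not Tortoise API:

```python
from datetime import datetime

def drop_microseconds(dt: datetime) -> datetime:
    """Return the same timestamp with the microsecond component zeroed."""
    return dt.replace(microsecond=0)

saved = drop_microseconds(datetime(2023, 2, 15, 0, 0, 0, 123456))
assert saved == datetime(2023, 2, 15, 0, 0, 0)
assert str(saved) == "2023-02-15 00:00:00"  # no trailing .000000
```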
piskvorky/gensim | machine-learning | 3,046 | Possible unexplainable segfault after save/load cycles of KeyedVectors or Word2Vec | Via: https://stackoverflow.com/questions/66040594/in-gensim-word2vec-is-it-safe-to-resume-training-using-keyedvectors-if-model-is?noredirect=1#comment116793635_66040594
In that SO question, reliable cases of getting a segfault are described after either (1) saving/reloading just a model's `KeyedVectors`; or (2) saving/reloading a full model after `build_vocab()` & before `train()`. Size of training data may be involved, per user's report that problem didn't recur when same steps were tried on a small subset of their full data.
Needs full reproduction case on local (even if synthetic) dataset. | open | 2021-02-17T19:05:01Z | 2021-02-17T19:05:01Z | https://github.com/piskvorky/gensim/issues/3046 | [] | gojomo | 0 |
Asabeneh/30-Days-Of-Python | numpy | 562 | Does this repo go over OOP in python? | I was wondering if this is something that is included. I mean, I'm already on Day 8, and my goal after this is to know Python well enough for coding interviews. It would just be nice to have an OOP section, so we can use Python to build apps also. | open | 2024-07-17T03:42:42Z | 2024-07-17T03:42:42Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/562 | [] | realpyop | 0
iterative/dvc | data-science | 10,657 | Import ignore cache | # Bug Report
## DVC 3.56 Import ignore cache
## Description
I have local DVC repo with json annotations added each one and large data storage with thousand image files added as full folder.
I use symlinks for cache.
But import in external storage doesn't create symlinks for images data storage. DVC download first, than link files.
### Reproduce
##### Local data repo
Config
```
cache.type=symlink
core.autostage=true
```
Local storage dirs:
```
annotations/
master_annotation.json
train_annotation.json
test_annotations.json
data_storage/
image_0
image_1
...
image_N
```
In local storage comands
```
dvc add ./annotations/*
dvc add ./data_storage
```
##### Project repo
Config
```
cache.type=symlink
cache.dir=path/to/local/data/repo/.dvc/cache
core.autostage=true
```
commands:
```
dvc import path/to/local/data/repo data_storage
```
This command starts downloading, copying files from the cache
```
dvc import path/to/local/data/repo data_storage --no-download
```
This checks the data_storage.dvc file and creates it in the project repo, but
```
dvc checkout data_storage.dvc
```
or
```
dvc checkout data_storage.dvc --relink
```
starts downloading the files again
### Expected
I think DVC should create symlinks for the files without downloading the originals
### Environment information
**Output of `dvc doctor` in local data repo:**
```
DVC version: 3.56.0 (deb)
-------------------------
Platform: Python 3.12.7 on Linux-5.15.0-86-generic-x86_64-with-glibc2.31
Subprojects:
Supports:
azure (adlfs = 2024.7.0, knack = 0.12.0, azure-identity = 1.19.0),
gdrive (pydrive2 = 1.21.1),
gs (gcsfs = 2024.10.0),
hdfs (fsspec = 2024.10.0, pyarrow = 18.0.0),
http (aiohttp = 3.10.10, aiohttp-retry = 2.9.0),
https (aiohttp = 3.10.10, aiohttp-retry = 2.9.0),
oss (ossfs = 2023.12.0),
s3 (s3fs = 2024.10.0, boto3 = 1.35.36),
ssh (sshfs = 2024.9.0),
webdav (webdav4 = 0.10.0),
webdavs (webdav4 = 0.10.0),
webhdfs (fsspec = 2024.10.0)
Config:
Global: /home/user/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: nfs on ip-addr:/storage/
Caches: local
Remotes: None
Workspace directory: nfs on ip-addr:/storage/
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/76de345055c7e5635fd954ee44e5d4e2
```
**Output of `dvc doctor` in project repo:**
```
DVC version: 3.56.0 (deb)
-------------------------
Platform: Python 3.12.7 on Linux-5.15.0-86-generic-x86_64-with-glibc2.31
Subprojects:
Supports:
azure (adlfs = 2024.7.0, knack = 0.12.0, azure-identity = 1.19.0),
gdrive (pydrive2 = 1.21.1),
gs (gcsfs = 2024.10.0),
hdfs (fsspec = 2024.10.0, pyarrow = 18.0.0),
http (aiohttp = 3.10.10, aiohttp-retry = 2.9.0),
https (aiohttp = 3.10.10, aiohttp-retry = 2.9.0),
oss (ossfs = 2023.12.0),
s3 (s3fs = 2024.10.0, boto3 = 1.35.36),
ssh (sshfs = 2024.9.0),
webdav (webdav4 = 0.10.0),
webdavs (webdav4 = 0.10.0),
webhdfs (fsspec = 2024.10.0)
Config:
Global: /home/user/.config/dvc
System: /etc/xdg/dvc
Cache types: symlink
Cache directory: nfs on ip-addr:/storage/
Caches: local
Remotes: None
Workspace directory: ext4 on /dev/sda2
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/f967073321531b0cc07fba234dd73d7b
```
| open | 2024-12-20T08:26:33Z | 2025-02-04T08:21:21Z | https://github.com/iterative/dvc/issues/10657 | [
"bug",
"p1-important",
"A: data-sync"
] | konstantin-frolov | 2 |
s3rius/FastAPI-template | graphql | 232 | RabbitMQ service not found when running pytest with poetry | # Problem
As the generated `README.md` suggests, I am running postgres with the command:
```
docker run -p "5432:5432" -e "POSTGRES_PASSWORD=readapp2" -e "POSTGRES_USER=readapp2" -e "POSTGRES_DB=readapp2" postgres:16.3-bullseye
```
and then attempting to run pytest via the command `poetry run pytest -vv .` The problem is that 2 of the tests (the ones defined in the file `test_rabbit.py`) fail because they cannot resolve the `amqp` URL for the RabbitMQ service.
After messing with it a bit, I cannot figure out an easy way to launch an instance of RabbitMQ as a stand-alone container.
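The hostname `readapp2-rmq` in the tracebacks is the docker-compose service name, which only resolves inside the compose network. A stand-alone setup needs RabbitMQ published on localhost and the app's settings pointed at it. A hedged sketch (the `READAPP2_RABBIT_HOST` variable name is a guess based on the template's env-prefix convention; check the generated settings module for the real name):

```shell
docker run -d --name readapp2-rmq -p "5672:5672" rabbitmq:3-management

# Point the tests at localhost instead of the compose hostname
READAPP2_RABBIT_HOST=localhost poetry run pytest tests/test_rabbit.py -vv
```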
# Full output
```
(base) nate@nate-Kudu:~/spyder_projects/readapp/readapp2$ poetry run pytest . -vv
================================================================================= test session starts ==================================================================================
platform linux -- Python 3.12.8, pytest-8.3.4, pluggy-1.5.0 -- /home/nate/.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/bin/python
cachedir: .pytest_cache
rootdir: /home/nate/spyder_projects/readapp/readapp2
configfile: pyproject.toml
plugins: anyio-4.8.0, env-1.1.5, cov-5.0.0
collected 7 items
tests/test_db.py::test_db PASSED [ 14%]
tests/test_echo.py::test_echo PASSED [ 28%]
tests/test_rabbit.py::test_message_publishing ERROR [ 42%]
tests/test_rabbit.py::test_message_wrong_exchange ERROR [ 57%]
tests/test_readapp2.py::test_health PASSED [ 71%]
tests/test_redis.py::test_setting_value PASSED [ 85%]
tests/test_redis.py::test_getting_value PASSED [100%]
======================================================================================== ERRORS ========================================================================================
______________________________________________________________________ ERROR at setup of test_message_publishing _______________________________________________________________________
self = <Connection: "amqp://guest:******@readapp2-rmq:5672/" at 0x784ad9d4edf0>, client_properties = {}
@task
async def connect(
self, client_properties: Optional[FieldTable] = None,
) -> bool:
if self.is_opened:
raise RuntimeError("Connection already opened")
ssl_context = self.ssl_context
if ssl_context is None and self.url.scheme == "amqps":
ssl_context = await self.loop.run_in_executor(
None, self._get_ssl_context,
)
self.ssl_context = ssl_context
log.debug("Connecting to: %s", self)
try:
> reader, writer = await asyncio.open_connection(
self.url.host, self.url.port, ssl=ssl_context,
**self.__create_connection_kwargs,
)
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/connection.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../miniconda3/lib/python3.12/asyncio/streams.py:48: in open_connection
transport, _ = await loop.create_connection(
../../../miniconda3/lib/python3.12/asyncio/base_events.py:1078: in create_connection
infos = await self._ensure_resolved(
../../../miniconda3/lib/python3.12/asyncio/base_events.py:1461: in _ensure_resolved
return await loop.getaddrinfo(host, port, family=family, type=type,
../../../miniconda3/lib/python3.12/asyncio/base_events.py:900: in getaddrinfo
return await self.run_in_executor(
../../../miniconda3/lib/python3.12/concurrent/futures/thread.py:59: in run
result = self.fn(*self.args, **self.kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
host = 'readapp2-rmq', port = 5672, family = 0, type = <SocketKind.SOCK_STREAM: 1>, proto = 0, flags = 0
def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
"""Resolve host and port into list of address info entries.
Translate the host/port argument into a sequence of 5-tuples that contain
all the necessary arguments for creating a socket connected to that service.
host is a domain name, a string representation of an IPv4/v6 address or
None. port is a string service name such as 'http', a numeric port number or
None. By passing None as the value of host and port, you can pass NULL to
the underlying C API.
The family, type and proto arguments can be optionally specified in order to
narrow the list of addresses returned. Passing zero as a value for each of
these arguments selects the full range of results.
"""
# We override this function since we want to translate the numeric family
# and socket type values to enum constants.
addrlist = []
> for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E socket.gaierror: [Errno -2] Name or service not known
../../../miniconda3/lib/python3.12/socket.py:976: gaierror
The above exception was the direct cause of the following exception:
anyio_backend = 'asyncio', request = <SubRequest 'test_exchange' for <Function test_message_publishing>>, args = ()
kwargs = {'test_exchange_name': '1cd4c981028b42628b327156c64487d5', 'test_rmq_pool': <aio_pika.pool.Pool object at 0x784ad9dfde70>}
local_func = <function test_exchange at 0x784ada1a6f20>, backend_name = 'asyncio', backend_options = {}, runner = <anyio._backends._asyncio.TestRunner object at 0x784ada00dd60>
def wrapper(
*args: Any, anyio_backend: Any, request: SubRequest, **kwargs: Any
) -> Any:
# Rebind any fixture methods to the request instance
if (
request.instance
and ismethod(func)
and type(func.__self__) is type(request.instance)
):
local_func = func.__func__.__get__(request.instance)
else:
local_func = func
backend_name, backend_options = extract_backend_and_options(anyio_backend)
if has_backend_arg:
kwargs["anyio_backend"] = anyio_backend
if has_request_arg:
kwargs["request"] = request
with get_runner(backend_name, backend_options) as runner:
if isasyncgenfunction(local_func):
> yield from runner.run_asyncgen_fixture(local_func, kwargs)
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/anyio/pytest_plugin.py:98:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2234: in run_asyncgen_fixture
fixturevalue: T_Retval = self.get_loop().run_until_complete(
../../../miniconda3/lib/python3.12/asyncio/base_events.py:686: in run_until_complete
return future.result()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2226: in _call_in_runner_task
return await future
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2193: in _run_tests_and_fixtures
retval = await coro
tests/conftest.py:140: in test_exchange
async with test_rmq_pool.acquire() as conn:
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:147: in __aenter__
self.item = await self.pool._get()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:104: in _get
return await self._create_item()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:92: in _create_item
item = await self.__constructor(*self.__constructor_args)
readapp2/services/rabbit/lifespan.py:38: in get_channel
async with connection_pool.acquire() as connection:
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:147: in __aenter__
self.item = await self.pool._get()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:104: in _get
return await self._create_item()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:92: in _create_item
item = await self.__constructor(*self.__constructor_args)
readapp2/services/rabbit/lifespan.py:22: in get_connection
return await aio_pika.connect_robust(str(settings.rabbit_url))
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/robust_connection.py:334: in connect_robust
await connection.connect(timeout=timeout)
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/robust_connection.py:180: in connect
await self.__fail_fast_future
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/robust_connection.py:135: in __connection_factory
await Connection.connect(self, self.__connect_timeout)
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/connection.py:125: in connect
self.transport = await UnderlayConnection.connect(
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/abc.py:679: in connect
connection = await cls.make_connection(
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/abc.py:667: in make_connection
connection: aiormq.abc.AbstractConnection = await asyncio.wait_for(
../../../miniconda3/lib/python3.12/asyncio/tasks.py:520: in wait_for
return await fut
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/connection.py:920: in connect
await connection.connect(client_properties or {})
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/base.py:164: in wrap
return await self.create_task(func(self, *args, **kwargs))
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/abc.py:44: in __inner
return await self.task
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Connection: "amqp://guest:******@readapp2-rmq:5672/" at 0x784ad9d4edf0>, client_properties = {}
@task
async def connect(
self, client_properties: Optional[FieldTable] = None,
) -> bool:
if self.is_opened:
raise RuntimeError("Connection already opened")
ssl_context = self.ssl_context
if ssl_context is None and self.url.scheme == "amqps":
ssl_context = await self.loop.run_in_executor(
None, self._get_ssl_context,
)
self.ssl_context = ssl_context
log.debug("Connecting to: %s", self)
try:
reader, writer = await asyncio.open_connection(
self.url.host, self.url.port, ssl=ssl_context,
**self.__create_connection_kwargs,
)
frame_receiver = FrameReceiver(reader)
except OSError as e:
> raise AMQPConnectionError(*e.args) from e
E aiormq.exceptions.AMQPConnectionError: [Errno -2] Name or service not known
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/connection.py:464: AMQPConnectionError
____________________________________________________________________ ERROR at setup of test_message_wrong_exchange _____________________________________________________________________
self = <Connection: "amqp://guest:******@readapp2-rmq:5672/" at 0x784ad93d2fd0>, client_properties = {}
@task
async def connect(
self, client_properties: Optional[FieldTable] = None,
) -> bool:
if self.is_opened:
raise RuntimeError("Connection already opened")
ssl_context = self.ssl_context
if ssl_context is None and self.url.scheme == "amqps":
ssl_context = await self.loop.run_in_executor(
None, self._get_ssl_context,
)
self.ssl_context = ssl_context
log.debug("Connecting to: %s", self)
try:
> reader, writer = await asyncio.open_connection(
self.url.host, self.url.port, ssl=ssl_context,
**self.__create_connection_kwargs,
)
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/connection.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../miniconda3/lib/python3.12/asyncio/streams.py:48: in open_connection
transport, _ = await loop.create_connection(
../../../miniconda3/lib/python3.12/asyncio/base_events.py:1078: in create_connection
infos = await self._ensure_resolved(
../../../miniconda3/lib/python3.12/asyncio/base_events.py:1461: in _ensure_resolved
return await loop.getaddrinfo(host, port, family=family, type=type,
../../../miniconda3/lib/python3.12/asyncio/base_events.py:900: in getaddrinfo
return await self.run_in_executor(
../../../miniconda3/lib/python3.12/concurrent/futures/thread.py:59: in run
result = self.fn(*self.args, **self.kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
host = 'readapp2-rmq', port = 5672, family = 0, type = <SocketKind.SOCK_STREAM: 1>, proto = 0, flags = 0
def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
"""Resolve host and port into list of address info entries.
Translate the host/port argument into a sequence of 5-tuples that contain
all the necessary arguments for creating a socket connected to that service.
host is a domain name, a string representation of an IPv4/v6 address or
None. port is a string service name such as 'http', a numeric port number or
None. By passing None as the value of host and port, you can pass NULL to
the underlying C API.
The family, type and proto arguments can be optionally specified in order to
narrow the list of addresses returned. Passing zero as a value for each of
these arguments selects the full range of results.
"""
# We override this function since we want to translate the numeric family
# and socket type values to enum constants.
addrlist = []
> for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
E socket.gaierror: [Errno -2] Name or service not known
../../../miniconda3/lib/python3.12/socket.py:976: gaierror
The above exception was the direct cause of the following exception:
anyio_backend = 'asyncio', request = <SubRequest 'test_exchange' for <Function test_message_wrong_exchange>>, args = ()
kwargs = {'test_exchange_name': '74502e149edc4d7f8e65679f6dad990c', 'test_rmq_pool': <aio_pika.pool.Pool object at 0x784ad8cdc970>}
local_func = <function test_exchange at 0x784ada1a6f20>, backend_name = 'asyncio', backend_options = {}, runner = <anyio._backends._asyncio.TestRunner object at 0x784ada00dd60>
def wrapper(
*args: Any, anyio_backend: Any, request: SubRequest, **kwargs: Any
) -> Any:
# Rebind any fixture methods to the request instance
if (
request.instance
and ismethod(func)
and type(func.__self__) is type(request.instance)
):
local_func = func.__func__.__get__(request.instance)
else:
local_func = func
backend_name, backend_options = extract_backend_and_options(anyio_backend)
if has_backend_arg:
kwargs["anyio_backend"] = anyio_backend
if has_request_arg:
kwargs["request"] = request
with get_runner(backend_name, backend_options) as runner:
if isasyncgenfunction(local_func):
> yield from runner.run_asyncgen_fixture(local_func, kwargs)
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/anyio/pytest_plugin.py:98:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2234: in run_asyncgen_fixture
fixturevalue: T_Retval = self.get_loop().run_until_complete(
../../../miniconda3/lib/python3.12/asyncio/base_events.py:686: in run_until_complete
return future.result()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2226: in _call_in_runner_task
return await future
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/anyio/_backends/_asyncio.py:2193: in _run_tests_and_fixtures
retval = await coro
tests/conftest.py:140: in test_exchange
async with test_rmq_pool.acquire() as conn:
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:147: in __aenter__
self.item = await self.pool._get()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:104: in _get
return await self._create_item()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:92: in _create_item
item = await self.__constructor(*self.__constructor_args)
readapp2/services/rabbit/lifespan.py:38: in get_channel
async with connection_pool.acquire() as connection:
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:147: in __aenter__
self.item = await self.pool._get()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:104: in _get
return await self._create_item()
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/pool.py:92: in _create_item
item = await self.__constructor(*self.__constructor_args)
readapp2/services/rabbit/lifespan.py:22: in get_connection
return await aio_pika.connect_robust(str(settings.rabbit_url))
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/robust_connection.py:334: in connect_robust
await connection.connect(timeout=timeout)
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/robust_connection.py:180: in connect
await self.__fail_fast_future
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/robust_connection.py:135: in __connection_factory
await Connection.connect(self, self.__connect_timeout)
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/connection.py:125: in connect
self.transport = await UnderlayConnection.connect(
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/abc.py:679: in connect
connection = await cls.make_connection(
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aio_pika/abc.py:667: in make_connection
connection: aiormq.abc.AbstractConnection = await asyncio.wait_for(
../../../miniconda3/lib/python3.12/asyncio/tasks.py:520: in wait_for
return await fut
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/connection.py:920: in connect
await connection.connect(client_properties or {})
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/base.py:164: in wrap
return await self.create_task(func(self, *args, **kwargs))
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/abc.py:44: in __inner
return await self.task
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Connection: "amqp://guest:******@readapp2-rmq:5672/" at 0x784ad93d2fd0>, client_properties = {}
@task
async def connect(
self, client_properties: Optional[FieldTable] = None,
) -> bool:
if self.is_opened:
raise RuntimeError("Connection already opened")
ssl_context = self.ssl_context
if ssl_context is None and self.url.scheme == "amqps":
ssl_context = await self.loop.run_in_executor(
None, self._get_ssl_context,
)
self.ssl_context = ssl_context
log.debug("Connecting to: %s", self)
try:
reader, writer = await asyncio.open_connection(
self.url.host, self.url.port, ssl=ssl_context,
**self.__create_connection_kwargs,
)
frame_receiver = FrameReceiver(reader)
except OSError as e:
> raise AMQPConnectionError(*e.args) from e
E aiormq.exceptions.AMQPConnectionError: [Errno -2] Name or service not known
../../../.cache/pypoetry/virtualenvs/readapp2-vunyLE_l-py3.12/lib/python3.12/site-packages/aiormq/connection.py:464: AMQPConnectionError
=============================================================================== short test summary info ================================================================================
ERROR tests/test_rabbit.py::test_message_publishing - aiormq.exceptions.AMQPConnectionError: [Errno -2] Name or service not known
ERROR tests/test_rabbit.py::test_message_wrong_exchange - aiormq.exceptions.AMQPConnectionError: [Errno -2] Name or service not known
============================================================================= 5 passed, 2 errors in 1.02s ==============================================================================
```
| closed | 2025-02-06T13:48:00Z | 2025-02-17T22:49:59Z | https://github.com/s3rius/FastAPI-template/issues/232 | [] | natelastname | 4 |
iperov/DeepFaceLab | machine-learning | 951 | since the core was updated it is not working on linux | What was the core version before the update? It was all working fine for me, but since it was updated it is not working on linux anymore.
I would like to know how to downgrade. | open | 2020-11-16T20:25:27Z | 2023-06-08T21:44:48Z | https://github.com/iperov/DeepFaceLab/issues/951 | [] | danielhama | 7
rthalley/dnspython | asyncio | 183 | Add nameserver information to Message | My case is that I need to know which nameserver replied to a given query.
Since the nameserver selected is random among the ones provided to the resolver, it is impossible to figure out which one responded to a specific query, and thus impossible to determine which would be out-of-date without resorting to iterating over the nameservers, providing them one at a time to the resolver.
| closed | 2016-07-01T12:15:42Z | 2016-09-20T12:31:30Z | https://github.com/rthalley/dnspython/issues/183 | [] | EvaSDK | 2 |
slackapi/bolt-python | fastapi | 1,159 | How can I register messages from the bot? | i have a simple code:
```python
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
app_token = os.getenv("SLACK_APP_TOKEN") # xapp
bot_token = os.getenv("TOKEN_BOT") # xoxb
app = App(token=bot_token,
signing_secret=os.environ.get("SLACK_SIGNING_SECRET"))
@app.event("message")
def handle_message_events(body, logger):
print('!!!!!!!NEW MSG!!!!!!!')
if __name__ == "__main__":
SocketModeHandler(app, app_token).start()
```
This code perfectly registers user messages, but unfortunately does not register messages from the bot. How can I register messages from the bot? | closed | 2024-09-11T06:10:15Z | 2024-09-11T06:54:54Z | https://github.com/slackapi/bolt-python/issues/1159 | [
"question"
] | MihailovAlexander | 2 |
deepinsight/insightface | pytorch | 2,007 | adaface loss | Could the author add AdaFace loss to InsightFace?
AdaFace code: https://github.com/mk-minchul/AdaFace/blob/master/head.py | open | 2022-05-16T02:58:08Z | 2024-07-01T10:05:34Z | https://github.com/deepinsight/insightface/issues/2007 | [] | eeric | 3 |
sunscrapers/djoser | rest-api | 844 | Don't want to use all base endpoints | I don't want to use set username and most of the other base endpoints.
How can I use only what I need, like create and activation?
`/auth/users/` and `/auth/users/activation/` | closed | 2024-10-20T18:08:09Z | 2024-12-11T11:48:16Z | https://github.com/sunscrapers/djoser/issues/844 | [] | r0ld3x | 0 |
pallets/flask | python | 4,452 | why does my flask app print out the Zen of Python every time I run it? | Is this some kind of easter egg?
`(env) xxx@xxx-dev:~/xxx/xxxxxx# flask db upgrade
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
INFO [alembic.runtime.migration] Context impl SQLiteImpl....`
I was using SQLAlchemy to map the database, similar to the following:
table A
id PK
name
table B
id PK
roleA_id FK(tableA id)
roleB_id FK(tableA id)
roleC_id FK(tableA id)
I know there are better ways to map it as a many-to-many relationship, but I chose to do so for reasons. After this db migration the app started to print out the Zen of Python every time I run flask run/shell/db migrate/db upgrade, sometimes even printing it out multiple times. Everything else works normally, which makes me quite confused.
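For reference, an unverified hypothesis: Python's built-in `this` module prints the Zen of Python as a side effect at import time, so any dependency that ends up importing it (directly or indirectly) would reproduce exactly this output once per process start:

```python
# Unverified hypothesis for the behavior above: importing the stdlib "this"
# module prints the Zen of Python as an import-time side effect. If anything
# pulled in at startup imports it, the Zen appears on every command.
import this  # the Zen is printed here, during import
```

If that is the cause, the question reduces to finding which module in my environment imports `this`.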
Environment:
- Python version: 3.9.2
Package Version
------------------- -------
alembic 1.7.5
astroid 2.9.3
autopep8 1.6.0
blinker 1.4
click 8.0.3
dnspython 2.2.0
email-validator 1.1.3
Flask 2.0.2
Flask-DebugToolbar 0.11.0
Flask-Login 0.5.0
Flask-Migrate 3.1.0
Flask-SQLAlchemy 2.5.1
Flask-WTF 1.0.0
greenlet 1.1.2
idna 3.3
importlib-metadata 4.10.1
importlib-resources 5.4.0
isort 5.10.1
itsdangerous 2.0.1
Jinja2 3.0.3
lazy-object-proxy 1.7.1
ldap3 2.9.1
Mako 1.1.6
MarkupSafe 2.0.1
mccabe 0.6.1
pip 21.3.1
pkg_resources 0.0.0
platformdirs 2.4.1
pyasn1 0.4.8
pycodestyle 2.8.0
pylint 2.12.2
python-dotenv 0.19.2
setuptools 44.1.1
SQLAlchemy 1.4.31
toml 0.10.2
typing_extensions 4.0.1
Werkzeug 2.0.2
wrapt 1.13.3
WTForms 3.0.1
zipp 3.7.0
| closed | 2022-02-16T07:25:11Z | 2022-03-03T00:05:02Z | https://github.com/pallets/flask/issues/4452 | [] | JiewenGuan | 1 |
dask/dask | scikit-learn | 11,120 | Dask .head() raises an error while .compute() returns ok! | Hello guys!
I've noticed a weird bug when trying to view dataframe.head(). As I could not identify whether the error is related to the "original" DataFrame or to the operations done after importing it, I kept it all.
This is a sample DataFrame in a CSV file:
[test3.csv](https://github.com/dask/dask/files/15310736/test3.csv)
These are the operations done afterwards:
```
# imports and load assumed for this repro (not shown in the original snippet)
import numpy as np
import dask.dataframe as dd

dataframe = dd.read_csv("test3.csv")  # the sample CSV attached above

dataframe['status'] = 'cancelado'
dataframe['status'] = dataframe['status'].mask(cond=(dataframe['cd_tip_est_slct'].isin([5,9])), other='confirmado')
dataframe['produto'] = str()
dataframe['produto'] = 'Giftcards'
dataframe['parceiro'] = str()
dataframe['parceiro'] = np.nan
dataframe['parceiro'] = dataframe['parceiro'].mask(cond=(dataframe['cd_idfr_pcr'] == 516515741), other='X1')
dataframe['parceiro'] = dataframe['parceiro'].mask(cond=(dataframe['cd_idfr_pcr'] == 935454784), other='X2')
dataframe['parceiro'] = dataframe['parceiro'].fillna('novo parceiro')
dataframe['data_transacao'] = dataframe['tscompra']
dataframe['mci'] = dataframe['cd_cli_cprd']
dataframe['marca'] = dataframe['ProductLine']
dataframe['marca'] = dataframe['marca'].fillna('novo produto')
dataframe['marca'] = dataframe['parceiro'].mask(cond=(dataframe['marca'] == 'Sony PlayStation'), other='PlayStation')
dataframe['forma_pagamento'] = np.nan
dataframe['forma_pagamento'] = dataframe['forma_pagamento'].mask(cond=(dataframe['cd_tip_fma_pgto'] == 0), other=np.nan)
dataframe['forma_pagamento'] = dataframe['forma_pagamento'].mask(cond=(dataframe['cd_tip_fma_pgto'] == 1), other='débito em conta corrente')
dataframe['forma_pagamento'] = dataframe['forma_pagamento'].mask(cond=(dataframe['cd_tip_fma_pgto'] == 2), other='cartão de crédito')
dataframe['forma_pagamento'] = dataframe['forma_pagamento'].mask(cond=(dataframe['cd_tip_fma_pgto'] == 3), other='pix')
dataframe['forma_pagamento'] = dataframe['forma_pagamento'].mask(cond=(dataframe['cd_tip_fma_pgto'] == 4), other='pix open banking')
dataframe['forma_pagamento'] = dataframe['forma_pagamento'].mask(cond=(dataframe['cd_tip_fma_pgto'] == 5), other='débito em conta poupança')
```
If I run compute, it works:
`dataframe.compute()`

If I run head, it does not work:
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[42], line 1
----> 1 dataframe.head()

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_collection.py:702, in FrameBase.head(self, n, npartitions, compute)
    700 out = new_collection(expr.Head(self, n=n, npartitions=npartitions))
    701 if compute:
--> 702     out = out.compute()
    703 return out

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_collection.py:475, in FrameBase.compute(self, fuse, **kwargs)
    473 if not isinstance(out, Scalar):
    474     out = out.repartition(npartitions=1)
--> 475 out = out.optimize(fuse=fuse)
    476 return DaskMethodsMixin.compute(out, **kwargs)

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_collection.py:590, in FrameBase.optimize(self, fuse)
    572 def optimize(self, fuse: bool = True):
    573     """Optimizes the DataFrame.
    574
    575     Runs the optimizer with all steps over the DataFrame and wraps the result in a
   (...)
    588     The optimized Dask Dataframe
    589     """
--> 590 return new_collection(self.expr.optimize(fuse=fuse))

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_expr.py:94, in Expr.optimize(self, **kwargs)
     93 def optimize(self, **kwargs):
---> 94     return optimize(self, **kwargs)

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_expr.py:3028, in optimize(expr, fuse)
   3007 """High level query optimization
   3008
   3009 This leverages three optimization passes:
   (...)
   3024 optimize_blockwise_fusion
   3025 """
   3026 stage: core.OptimizerStage = "fused" if fuse else "simplified-physical"
-> 3028 return optimize_until(expr, stage)

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_expr.py:2989, in optimize_until(expr, stage)
   2986     return expr
   2988 # Lower
-> 2989 expr = expr.lower_completely()
   2990 if stage == "physical":
   2991     return expr

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_core.py:436, in Expr.lower_completely(self)
    434 expr = self
    435 while True:
--> 436     new = expr.lower_once()
    437     if new._name == expr._name:
    438         break

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_core.py:404, in Expr.lower_once(self)
    402 for operand in out.operands:
    403     if isinstance(operand, Expr):
--> 404         new = operand.lower_once()
    405         if new._name != operand._name:
    406             changed = True

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_core.py:404, in Expr.lower_once(self)
    402 for operand in out.operands:
    403     if isinstance(operand, Expr):
--> 404         new = operand.lower_once()
    405         if new._name != operand._name:
    406             changed = True

    [... skipping similar frames: Expr.lower_once at line 404 (10 times)]

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_core.py:404, in Expr.lower_once(self)
    402 for operand in out.operands:
    403     if isinstance(operand, Expr):
--> 404         new = operand.lower_once()
    405         if new._name != operand._name:
    406             changed = True

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_core.py:393, in Expr.lower_once(self)
    390 expr = self
    392 # Lower this node
--> 393 out = expr._lower()
    394 if out is None:
    395     out = expr

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_expr.py:2439, in Head._lower(self)
   2435     raise ValueError(
   2436         f"only {self.frame.npartitions} partitions, head received {npartitions}"
   2437     )
   2438 partitions = self._partitions
-> 2439 if is_index_like(self._meta):
   2440     return BlockwiseHeadIndex(
   2441         Partitions(self.frame, partitions), self.n, safe=False
   2442     )
   2444 safe = True if npartitions == 1 and self.frame.npartitions != 1 else False

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\functools.py:1001, in cached_property.__get__(self, instance, owner)
    999 val = cache.get(self.attrname, _NOT_FOUND)
   1000 if val is _NOT_FOUND:
-> 1001     val = self.func(instance)
   1002 try:
   1003     cache[self.attrname] = val

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_expr.py:2393, in Head._meta(self)
   2391 @functools.cached_property
   2392 def _meta(self):
-> 2393     return self.frame._meta

File c:\Users\F3164582\AppData\Local\Programs\Python\Python311\Lib\site-packages\dask_expr\_core.py:455, in Expr._meta(self)
    453 @property
    454 def _meta(self):
--> 455     raise NotImplementedError()
NotImplementedError:
```
**Environment**:
- Dask version: dask 2024.5.0, dask-expr 1.1.0
- Python version: 3.11.7
- Operating System: windows
- Install method (conda, pip, source): pip
| closed | 2024-05-14T15:40:10Z | 2024-05-16T23:45:32Z | https://github.com/dask/dask/issues/11120 | [
"needs triage"
] | frbelotto | 2 |
biolab/orange3 | numpy | 6,350 | Multilingual Orange | On macOS I modified an installed bundle:
- renamed Orange to Orange-si; same for orangewidget and orangecanvas.
- copied translated sources to Orange-si, orangewidget-si, orangecanvas-si
Now I can do either `ln -s Orange-en Orange` or `ln -s Orange-si Orange` (and the same for others) to switch between English and Slovenian.
- Could this (where `ln -s` would of course happen within Orange settings) be a way to distribute multilingual Orange?
- This would also work for Linux, I guess. What about Windows? In the worst case, we could simply rename.
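The switch itself could be sketched like this (a self-contained demo in a scratch directory; the real paths inside the Orange.app bundle are the renamed packages described above):

```shell
# Self-contained demo of the symlink-switching idea in a scratch directory;
# the real targets are the renamed Orange/orangewidget/orangecanvas packages.
demo=$(mktemp -d)
cd "$demo"
mkdir Orange-en Orange-si
ln -sfn Orange-si Orange    # activate Slovenian
readlink Orange             # prints: Orange-si
ln -sfn Orange-en Orange    # switch back to English
readlink Orange             # prints: Orange-en
```

In Orange itself, the settings dialog would run the equivalent of these `ln -s` calls (or renames, on Windows).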
Reminder: we would need a Slovenian distribution of Orange by mid-March. | closed | 2023-02-25T16:37:17Z | 2025-01-02T16:01:22Z | https://github.com/biolab/orange3/issues/6350 | [] | janezd | 5 |
STVIR/pysot | computer-vision | 537 | About testing VOT2018 | loading VOT2018: 100%|██████████████████████████████████| 60/60 [00:00<00:00, 82.80it/s, zebrafish1]
( 1) Video: ants1 Time: 2.8s Speed: 114.2fps Lost: 0
( 2) Video: ants3 Time: 4.6s Speed: 124.4fps Lost: 1
Traceback (most recent call last):
File "../../tools/test.py", line 236, in <module>
main()
File "../../tools/test.py", line 89, in main
outputs = tracker.track(img)
File "pysot/tracker/siamrpn_tracker.py", line 116, in track
round(s_x), self.channel_average)
File "pysot/tracker/base_tracker.py", line 50, in get_subwindow
im_sz = im.shape
AttributeError: 'NoneType' object has no attribute 'shape'
There was an error when testing VOT2018. Loading the dataset was fine, but the traceback appeared after testing 2 videos. Testing on VOT2016 and VOT2019 shows nothing like this. Would you help me solve this problem? | open | 2021-06-02T02:38:09Z | 2021-06-02T02:38:09Z | https://github.com/STVIR/pysot/issues/537 | [] | leonardozxcasdqwe | 0 |
ckan/ckan | api | 7,602 | 2.10.1 - fails to start with suggested beaker configuration from migration notes | ## CKAN version
2.10.1
## Describe the bug
2.10.1 changelog: https://github.com/ckan/ckan/blob/2.10/CHANGELOG.rst#migration-notes mentions to change/add:
```
# ckan.ini
beaker.session.type = cookie
beaker.session.data_serializer = json
beaker.session.validate_key = CHANGE_ME
beaker.session.httponly = True
beaker.session.secure = True
beaker.session.samesite = Lax # or Strict, depending on your setup
```
First, it would be nice to know what kind of string `CHANGE_ME` expects (the recommended length/format), and maybe to suggest a command for creating the random string.
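For what it's worth, a sketch of generating such a key (an assumption on my part: beaker just needs a sufficiently long random secret, and `secrets.token_hex` from the stdlib is one way to produce it):

```python
import secrets

def generate_validate_key(n_bytes: int = 32) -> str:
    # Hypothetical helper: beaker does not document a required format for
    # beaker.session.validate_key, so a long random hex string is assumed safe.
    return secrets.token_hex(n_bytes)

print(generate_validate_key())  # 64 hex characters for the default 32 bytes
```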
However, the actual problem is that with this change CKAN 2.10.1 fails to start, as it is lacking an AES library.
```
Traceback (most recent call last):
File "/usr/lib/ckan/default/src/ckan/ckan/config/middleware/flask_app.py", line 82, in __call__
return self.app(environ, start_response)
File "/usr/lib/ckan/default/src/ckan/ckan/config/middleware/common_middleware.py", line 87, in __call__
return self.app(environ, start_response)
File "/usr/lib/ckan/default/lib/python3.10/site-packages/flask/app.py", line 2091, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/lib/ckan/default/lib/python3.10/site-packages/beaker/middleware.py", line 156, in __call__
return self.wrap_app(environ, session_start_response)
File "/usr/lib/ckan/default/src/ckan/ckan/config/middleware/common_middleware.py", line 31, in __call__
return self.app(environ, start_response)
File "/usr/lib/ckan/default/lib/python3.10/site-packages/flask/app.py", line 2076, in wsgi_app
response = self.handle_exception(e)
File "/usr/lib/ckan/default/lib/python3.10/site-packages/flask/app.py", line 1440, in handle_exception
server_error = self.ensure_sync(handler)(server_error)
File "/usr/lib/ckan/default/src/ckan/ckan/config/middleware/flask_app.py", line 615, in error_handler
show_login_redirect_link = current_user.is_anonymous and type(
File "/usr/lib/ckan/default/lib/python3.10/site-packages/werkzeug/local.py", line 436, in __get__
obj = instance._get_current_object()
File "/usr/lib/ckan/default/lib/python3.10/site-packages/werkzeug/local.py", line 565, in _get_current_object
return self.__local() # type: ignore
File "/usr/lib/ckan/default/lib/python3.10/site-packages/flask_login/utils.py", line 26, in <lambda>
current_user = LocalProxy(lambda: _get_user())
File "/usr/lib/ckan/default/lib/python3.10/site-packages/flask_login/utils.py", line 384, in _get_user
current_app.login_manager._load_user()
File "/usr/lib/ckan/default/lib/python3.10/site-packages/flask_login/login_manager.py", line 347, in _load_user
if self._session_protection_failed():
File "/usr/lib/ckan/default/lib/python3.10/site-packages/flask_login/login_manager.py", line 388, in _session_protection_failed
if sess and ident != sess.get("_id", None):
File "/usr/lib/ckan/default/lib/python3.10/site-packages/beaker/session.py", line 814, in __getattr__
return getattr(self._session(), attr)
File "/usr/lib/ckan/default/lib/python3.10/site-packages/beaker/session.py", line 810, in _session
self.__dict__['_sess'] = session_cls(req, **params)
File "/usr/lib/ckan/default/lib/python3.10/site-packages/beaker/session.py", line 622, in __init__
raise InvalidCryptoBackendError("No AES library is installed, can't generate "
beaker.exceptions.InvalidCryptoBackendError: No AES library is installed, can't generate encrypted cookie-only Session.
```
CKAN has been installed/upgraded via `pip install -e 'git+https://github.com/ckan/ckan.git@ckan-2.10.1#egg=ckan[requirements]'`, so the missing Python library should be added to the requirements as well.
### Steps to reproduce
Steps to reproduce the behavior:
1. Install CKAN `pip install -e 'git+https://github.com/ckan/ckan.git@ckan-2.10.1#egg=ckan[requirements]'`
2. Configure beaker like suggested
3. Observe error at `/etc/ckan/default/uwsgi.ERR`
### Expected behavior
Have everything CKAN uses available/installed so that it can successfully run with the recommended (and, according to the changelog, future default) configuration.
### Additional details
beaker documentation:
https://beaker.readthedocs.io/en/latest/configuration.html#encryption-options
```
crypto_type (optional, string)
Encryption backend to use. If default is used, one of the installed backends is picked.
Other valid choices are cryptography, nss, pycrypto.
Note: You may need to install additional libraries to use Beaker’s cookie-based session encryption. See the Encryption section for more information.
```
https://beaker.readthedocs.io/en/latest/sessions.html#encryption
```
Depending on the Python implementation used, Beaker may require an additional library to provide AES encryption.
On CPython (the regular Python), one of the following libraries is required:
The [python-nss](https://pypi.python.org/pypi/python-nss/) library
The [pycryptopp](https://pypi.python.org/pypi/pycryptopp/) library
The [cryptography](https://pypi.python.org/pypi/cryptography/) library
The [PyCrypto](https://pypi.python.org/pypi/pycrypto/) library
``` | closed | 2023-05-25T08:00:23Z | 2023-05-30T01:20:37Z | https://github.com/ckan/ckan/issues/7602 | [] | tgurr | 2 |
modelscope/data-juicer | data-visualization | 213 | some problems with the quality filtering part on keep ratio | ### Before Asking 在提问之前
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully. 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引。
- [X] I have pulled the latest code of main branch to run again and the problem still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
### Search before asking 先搜索,再提问
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的问题。
### Question
I'm checking the quality filtering in Data-Juicer, but I find that there may be some problems with the keep-ratio part. The keep ratio in the table below is only 2%~3%. It's so low; is that right?
model | keep ratio @ label | keep ratio @ pareto
-- | -- | --
GPT-3 quality classifier (estimated) | - | ~1.3%
gpt3 | 3.22% | 1.41%
chinese | 1.81% | -
### Additional 额外信息
_No response_ | closed | 2024-02-21T11:48:08Z | 2024-02-22T12:21:03Z | https://github.com/modelscope/data-juicer/issues/213 | [
"question"
] | applewangtao | 2 |
ray-project/ray | machine-learning | 51,242 | [Serve] Detailed Analysis of Errors Related to 'Ray does not allocate any GPUs on the driver node' && 'No CUDA GPUs are available' | ### What happened + What you expected to happen
When deploying platforms based on the Ray framework, such as Ray Serve and Ray LLM, together with vLLM's OpenAI server, the errors "No CUDA GPUs are available" or "Ray does not allocate any GPUs on the driver node" have become recurring issues.
In this issue, I will provide a detailed analysis of these problems, along with a brief solution and experimental records. I sincerely invite developers from the Ray and vLLM communities to participate in the discussion, point out any shortcomings, and share your suggestions!
<h2><strong>Quick Troubleshoot</strong></h2>

<p>For older versions of vLLM, I have also provided a hack to temporarily resolve this issue. Please refer to: <a href="https://github.com/ray-project/ray/issues/51154">Ray Issue #51154</a>.</p>
<p>For <strong>Ray LLM</strong> and <strong>Ray Serve</strong> documentation:</p>
<ul>
<li><strong>Ray LLM</strong>: <a href="https://docs.ray.io/en/latest/ray-llm/index.html">Ray LLM Documentation</a></li>
<li><strong>Ray Serve</strong>: <a href="https://docs.ray.io/en/latest/serve/index.html">Ray Serve vLLM Example</a></li>
</ul>
<p>A proper configuration for TP=1 involves modifying the <code inline="">build_app</code> function in the example code from the <strong>Ray Serve</strong> documentation by applying the following change:</p>
```diff
pg_resources = []
- pg_resources.append({"CPU": 1}) # for the deployment replica
for i in range(tp):
pg_resources.append({"CPU": 1, accelerator: 1}) # for the vLLM actors
# We use the "STRICT_PACK" strategy below to ensure all vLLM actors are placed on
# the same Ray node.
return VLLMDeployment.options(
+ ray_actor_options={"num_gpus": 1,"num_cpus": 1},
placement_group_bundles=pg_resources, placement_group_strategy="STRICT_PACK"
).bind(
```
<hr>
<h2><strong>Introduction</strong></h2>
<p>The issue can be summarized simply: the framework design of <strong>vLLM</strong> does not fully accommodate <code inline="">LLMEngine</code> running within a <strong>placement group</strong>.</p>
<p>The process that creates <code inline="">RayDistributedExecutor</code>, which serves as the entry point, must have access to a <strong>GPU</strong> while <strong>not occupying GPU resources within Ray</strong>. This conflicts with the typical configuration of <strong>Ray Serve</strong>. Additionally, since <strong>vLLM always requests a whole number of GPUs when <code inline="">world_size > 1</code></strong>, it is not possible to work around this limitation by allocating fractional GPUs.</p>
<p>

</p>
<p>Whether using <code inline="">LLM</code> (offline inference) or <code inline="">OpenAIServingCompletion</code> (online deployment), both are considered <strong>entry points</strong>. The class responsible for managing the specific processes during initialization is called an <code inline="">Executor</code>. The <code inline="">Executor</code> itself creates a <strong>local actor</strong> to use the <strong>GPU</strong> and also spawns a <strong>dummy actor</strong> to reserve resources in the <strong>placement group</strong>.</p>

<p>However, when integrating this framework into <strong>Ray</strong>, several issues arise:</p>
<ul>
<li>In <strong>Ray</strong>, the <code inline="">Executor</code> itself also runs within an <strong>Actor</strong> and uses the <strong>first bundle of the placement group</strong>.
<ul>
<li>If <strong>no GPU resources</strong> are assigned to it, <code inline="">CUDA_VISIBLE_DEVICES</code> will be an <strong>empty string</strong>, leading to the <em>"No CUDA GPUs are available"</em> error when trying to call <code inline="">set_device</code>.</li>
<li>On the other hand, if we <strong>do allocate a GPU</strong> to it, <strong>vLLM</strong> will still use a <code inline="">dummy_driver_worker</code> that occupies a <strong>GPU</strong>, which causes the total number of requested workers to exceed the <strong>placement group capacity</strong>.</li>
<li>Since <strong>vLLM does not allocate resources based on bundles</strong> but instead forces each worker to use exactly <strong>one GPU when <code inline="">world_size > 1</code></strong>, we cannot work around this limitation by assigning fractional GPUs.</li>
</ul>
</li>
</ul>
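<p>The two bundle layouts discussed above can be sketched in plain Python (no Ray import is needed to see the resource shapes; <code inline="">tp=2</code> is an assumption for illustration):</p>

```python
# Pure-Python sketch of the two placement-group bundle layouts discussed
# above; only the resource shapes matter here, tp=2 is illustrative.
def serve_default_bundles(tp: int) -> list:
    # Ray Serve docs example: a CPU-only first bundle for the replica actor,
    # then one {CPU, GPU} bundle per tensor-parallel worker.
    return [{"CPU": 1}] + [{"CPU": 1, "GPU": 1} for _ in range(tp)]

def gpu_only_bundles(tp: int) -> list:
    # Alternative layout: every bundle carries a GPU, so the replica actor
    # scheduled on the first bundle gets GPU access of its own.
    return [{"CPU": 1, "GPU": 1} for _ in range(tp)]

print(serve_default_bundles(2))
# prints: [{'CPU': 1}, {'CPU': 1, 'GPU': 1}, {'CPU': 1, 'GPU': 1}]
```

<p>Whether the first bundle carries a GPU decides whether the replica actor (and thus the <code inline="">Executor</code> inside it) is scheduled with any GPU of its own.</p>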
<h3><strong>A Deadlock!</strong></h3>
<hr>
<h2><strong>Experiments</strong></h2>
<p>Because of how the code is written, there are actually <strong>two executable scenarios</strong>. I will first present the experimental table and then analyze each case one by one.</p>
<table>
<thead>
<tr><th>vLLM Version</th><th>Placement Group Configuration</th><th>TP</th><th>Status</th><th>Notes</th></tr>
</thead>
<tbody>
<tr><td>vLLM 0.7.3</td><td>[{'CPU':1} + {'GPU':1} * TP]</td><td>&gt;1</td><td>✅ Works</td><td>Replica actor has no GPU but gains access via update_environment_variables</td></tr>
<tr><td>vLLM 0.7.3</td><td>[{'GPU':1} * TP]</td><td>&gt;1</td><td>❌ Fails</td><td>Extra worker creation causes deadlock due to loop in ray_distributed_executor.py#L187</td></tr>
<tr><td>vLLM 0.7.3</td><td>[{'CPU':1} + {'GPU':1} * TP]</td><td>1</td><td>❌ Fails</td><td>Replica actor has no GPU, and Executor can no longer "borrow" CUDA_VISIBLE_DEVICES</td></tr>
<tr><td>vLLM 0.7.3</td><td>[{'GPU':1} * TP]</td><td>1</td><td>✅ Works</td><td>Replica actor has no GPU, but uniproc_executor avoids dummy worker creation</td></tr>
</tbody>
</table>
<hr>
<h2><strong>Analysis</strong></h2>
<p>In the existing code, there are actually <strong>two scenarios</strong> where execution is possible:</p>
<ol>
<li><strong>TP > 1 without explicitly assigning GPUs</strong> (this is the default setting in <strong>Ray Serve</strong>). This explains why the issue has not become a critical blocker—under the current configuration, execution is still possible.</li>
<li><strong>TP = 1 with GPU assignment</strong> (as mentioned earlier, using an appropriate configuration combined with <strong>Ray Serve</strong> to resolve the issue).</li>
</ol>
<h3><strong>Case 1: Default Configuration (<code inline="">TP > 1</code> & No GPU Assigned)</strong></h3>

<p>Even if <strong>Ray</strong> does not allocate any <strong>GPUs</strong> to the <strong>Replica Actor</strong> (i.e., the <code inline="">RayDistributedExecutor</code> within the <strong>Serve</strong> framework), <code inline="">CUDA_VISIBLE_DEVICES</code> will still <strong>not be empty</strong>.</p>
<p>This happens because of this line of code, which calls <code inline="">self.driver_worker</code> and modifies the <strong>environment variables</strong> of the current process.</p>
<p>As a result, in the <strong>default configuration</strong>, the code functions correctly, allowing a process to access <strong>GPUs</strong> without directly occupying them.</p>
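<p>A minimal sketch of this "borrowing" trick (illustrative helper, not vLLM's actual code; the real mechanism is the driver worker calling <code inline="">update_environment_variables</code> on the entry-point process):</p>

```python
import os

def borrow_cuda_visible_devices(worker_gpu_ids):
    """Sketch: a GPU-less driver process adopts the device list of a
    co-located worker by rewriting its own environment, so a later
    set_device call succeeds even though Ray never accounted a GPU
    to this process."""
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in worker_gpu_ids)
    return os.environ["CUDA_VISIBLE_DEVICES"]
```

<p>This is why the process can use GPUs without occupying any in Ray's resource accounting.</p>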
<h3><strong>Case 2: <code inline="">TP = 1</code> Changes the Behavior</strong></h3>
<p>When <strong>TP = 1</strong>, <strong>vLLM</strong> switches to using <code inline="">UniprocExecutor</code>, as seen in this line of code.</p>
<p>In this case, if <code inline="">CUDA_VISIBLE_DEVICES</code> is <strong>empty</strong>, it will cause an <strong>error</strong>, as <code inline="">UniprocExecutor</code> does <strong>not</strong> inherit the same <strong>environment variable handling</strong> as the multi-process setup.</p>

<hr>
<h2><strong>Supplementary Notes on Ray Serve and Ray LLM</strong></h2>
<p>After an initial review of the <strong>source code</strong> and conducting <strong>simple experiments</strong>, I believe that the <strong>new and old APIs</strong> of <strong>Ray Serve</strong> are fundamentally the same, except for the addition of a <strong>router</strong> and deeper <strong>integration with vLLM</strong>.</p>
<p>The core interaction between <strong>Ray</strong> and <strong>vLLM</strong> still revolves around the <strong>placement group (PG) allocation</strong> during deployment.</p>
<p>Therefore, these two approaches are essentially equivalent:</p>
<ol>
<li><strong>Manually integrating</strong> <code inline="">vllm.entrypoints.openai.serving_completion</code> into <strong>Ray Serve</strong>.</li>
<li><strong>Using the <code inline="">ray[llm]</code> library</strong> for deployment.</li>
</ol>
<hr>
<h2><strong>Related Issues</strong></h2>
<p>Based on my preliminary review, the following issues are all related to the analysis presented here:</p>
<ul>
<li><a href="https://github.com/vllm-project/vllm/issues/12983">vLLM Issue #12983</a></li>
<li><a href="https://github.com/vllm-project/vllm/issues/13521">vLLM Issue #13521</a></li>
<li><a href="https://github.com/vllm-project/vllm/issues/14415">vLLM Issue #14415</a></li>
<li><a href="https://github.com/vllm-project/vllm/issues/14456">vLLM Issue #14456</a></li>
<li><a href="https://github.com/ray-project/ray/issues/51154">Ray Issue #51154</a></li>
<li><a href="https://github.com/ray-project/ray/issues/51193">Ray Issue #51193</a></li>
<li><a href="https://github.com/ray-project/ray/issues/50275">Ray Issue #50275</a></li>
</ul></body></html>
### Versions / Dependencies
vllm>=0.7.2
ray[serve,llm,default] -U
### Reproduction script
Demo code is available in the following documentation:
- Ray LLM: [Ray LLM Documentation](https://docs.ray.io/en/latest/serve/llm/overview.html)
- Ray Serve: [Ray Serve vLLM Example](https://docs.ray.io/en/latest/serve/tutorials/vllm-example.html)
### Issue Severity
High: It blocks me from completing my task. | open | 2025-03-11T11:58:54Z | 2025-03-14T20:20:45Z | https://github.com/ray-project/ray/issues/51242 | [
"bug",
"serve",
"llm"
] | huiyeruzhou | 5 |
kennethreitz/records | sqlalchemy | 13 | race condition in ResultSet.__iter__() | If you create two or more iterators over a ResultSet, then they consume `ResultSet._rows` together, meaning they all iterate over different rows. I.e. something like:
``` python
rows = db.query('select * from active_users')
itera = iter(rows)
iterb = iter(rows)
print(next(itera)) # row 1
print(next(iterb)) # row 2
```
Of course, once all rows have been iterated over and fully cached, then iterators behave correctly, since they are independently iterating over the `ResultSet._all_rows` list, something like:
``` python
rows = db.query('select * from active_users')
for row in rows: continue
itera = iter(rows)
iterb = iter(rows)
print(next(itera)) # row 1
print(next(iterb)) # row 1
```
Maybe this is intentional, but to me it seems confusing / not a standard iterator behavior. :bus:
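For what it's worth, iterators can be made independent by giving each its own index into the shared row cache and only pulling from the cursor past the cached rows. A minimal sketch of the idea (not the actual `records` internals):

```python
class CachedResultSet:
    """Sketch: each iterator tracks its own position in a shared cache,
    so every fresh iterator starts at row 1."""

    def __init__(self, cursor):
        self._cursor = iter(cursor)  # stands in for the DB cursor
        self._cache = []             # rows fetched so far

    def __iter__(self):
        i = 0
        while True:
            if i < len(self._cache):
                # Serve already-fetched rows from the cache.
                yield self._cache[i]
            else:
                # Only the iterator that is furthest ahead advances the cursor.
                try:
                    row = next(self._cursor)
                except StopIteration:
                    return
                self._cache.append(row)
                yield row
            i += 1
```

With this, `next(a)` and `next(b)` on two fresh iterators both return the first row.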
| closed | 2016-02-07T22:55:01Z | 2018-04-28T23:00:46Z | https://github.com/kennethreitz/records/issues/13 | [
"bug"
] | hrldcpr | 3 |
feature-engine/feature_engine | scikit-learn | 765 | cyclical features may not always get the right period | CyclicalFeatures obtains the period by looking for the maximum value. So if months are coded from 1-12, the period will be 12 and that'll be fine.
For hours, for example: if the hours are coded 1-24, the period is estimated as 24 and that is also fine; but if the hours are coded 0-23, the period is automatically set to 23, when in reality it is still 24.
Maybe we could use n_unique instead of max value?
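As a sketch of that suggestion (hypothetical helper, not the CyclicalFeatures API): deriving the period from the number of distinct values handles hours coded 0-23, where `max()` would give 23:

```python
import math

def cyclical_encode(values, period):
    """Map a cyclical feature onto the unit circle with an explicit period."""
    two_pi = 2 * math.pi
    return ([math.sin(two_pi * v / period) for v in values],
            [math.cos(two_pi * v / period) for v in values])

hours = list(range(24))      # coded 0-23
period = len(set(hours))     # 24, while max(hours) would give 23
sin_h, cos_h = cyclical_encode(hours, period)
```

With period=24, hour 23 lands next to hour 0 on the circle; with period=23 it would wrap exactly onto hour 0, conflating the two.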
In both cases there is still an issue to resolve: if the training set does not have the maximum value, or does not contain all elements, then the period will still be wrongly estimated, but I guess we could make that clear in the docs, I am not sure we can sort that out from our end. | closed | 2024-05-18T12:25:49Z | 2024-08-25T08:50:51Z | https://github.com/feature-engine/feature_engine/issues/765 | [] | solegalli | 18 |
modin-project/modin | pandas | 6,868 | Missed `gcsfs` dependency in CI | Tests `test_read_multiple_csv_cloud_store` and `test_read_csv_google_cloud_storage` are not actually testing anything; they simply fail with the same error (the required `gcsfs` module is missing) when comparing results with pandas.
The main reason: https://github.com/modin-project/modin/issues/6016
Found in https://github.com/modin-project/modin/pull/6867 | closed | 2024-01-19T12:42:50Z | 2024-01-26T12:07:54Z | https://github.com/modin-project/modin/issues/6868 | [
"Testing 📈",
"P1"
] | anmyachev | 0 |
xuebinqin/U-2-Net | computer-vision | 128 | Why do we concatenate and then also sum the outputs before calculating loss? | In your diagram, there are 6 "side" outputs, which are then concatenated into a 7th tensor.
Then, in your code the whole 7 outputs are further summed into a final loss.
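In pseudocode, that pattern looks like the following sketch, where `bce` stands in for the per-map BCE loss: each of the 6 side maps gets its own loss term, and the 7th term supervises the fused map built from their concatenation.

```python
def u2net_style_loss(side_outputs, fused_output, target, bce):
    """Deep-supervision sketch: one loss per decoder side output, plus
    one on the fused map. The fused map already depends on all side
    maps, but the per-side terms give each decoder stage a direct
    gradient signal, which is the usual motivation in such setups."""
    side_losses = [bce(s, target) for s in side_outputs]
    return bce(fused_output, target) + sum(side_losses)
```

With toy numbers and `bce = lambda p, t: abs(p - t)`, six side maps at 0.1 and a fused map at 0.1 against target 0.0 give a total of 0.7.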
Isn't the sum of all outputs redundant with their concatenation? | open | 2020-12-17T11:25:07Z | 2020-12-21T00:18:06Z | https://github.com/xuebinqin/U-2-Net/issues/128 | [] | shgidi | 1
Miksus/rocketry | pydantic | 69 | ENH: Cron-style time periods (and conditions) | **Is your feature request related to a problem? Please describe.**
Some people might prefer Crontab-like scheduling options over the built-in time periods, even though one can compose pretty much the same schedules with them as well.
**Describe the solution you'd like**
The solution will consist of two parts: time periods, and then a condition wrapper for them. The solution should extend the existing time system to support intervals mimicking Cron's; it should then be trivial to add these to the conditions. A condition could then be added to the condition API and to the condition syntax to compose such combinations.
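For illustration, here is a single crontab field matcher, the kind of primitive such a time period could be built on (hypothetical helper, not a proposed Rocketry API):

```python
def cron_field_matches(field, value, lo_default=0, hi_default=59):
    """Match one crontab field ('*', '5', '1-5', '*/15', '1,15,30')
    against an integer value."""
    for part in field.split(","):
        expr, _, step = part.partition("/")
        step = int(step) if step else 1
        if expr == "*":
            lo, hi = lo_default, hi_default
        elif "-" in expr:
            lo, hi = (int(x) for x in expr.split("-"))
        else:
            lo = hi = int(expr)
        if lo <= value <= hi and (value - lo) % step == 0:
            return True
    return False
```

A full cron-style period would AND five such fields (minute, hour, day of month, month, day of week); for example `cron_field_matches("*/15", 30)` is True while `cron_field_matches("*/15", 7)` is False.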
**Describe alternatives you've considered**
Using existing time periods and conditions.
| closed | 2022-08-07T20:48:32Z | 2022-08-15T07:26:01Z | https://github.com/Miksus/rocketry/issues/69 | [
"enhancement",
"built-in"
] | Miksus | 2 |
ultralytics/ultralytics | pytorch | 19,811 | Some questions about how to modify YAML files after improving the network | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I have been conducting experiments with YOLOv3 recently. After replacing the Darknet53 backbone network with MobileNetV3, I modified the number of channels in each layer of the head, but it always went wrong and the code kept reporting errors. However, if I made the same modifications to YOLOv5, it ran normally. I want to know how to determine the number of channels. After modifying the backbone, do we only need to change the channel numbers in the head, or does anything else need to change as well?
### Additional
```yaml
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple

# darknet53 backbone
backbone:
  # [from, number, module, args]
  - [-1, 1, Conv, [32, 3, 1]] # 0
  - [-1, 1, Conv, [64, 3, 2]] # 1-P1/2
  - [-1, 1, Bottleneck, [64]]
  - [-1, 1, Conv, [128, 3, 2]] # 3-P2/4
  - [-1, 2, Bottleneck, [128]]
  - [-1, 1, Conv, [256, 3, 2]] # 5-P3/8
  - [-1, 8, Bottleneck, [256]]
  - [-1, 1, Conv, [512, 3, 2]] # 7-P4/16
  - [-1, 8, Bottleneck, [512]]
  - [-1, 1, Conv, [1024, 3, 2]] # 9-P5/32
  - [-1, 4, Bottleneck, [1024]] # 10

# YOLOv3 head
head:
  - [-1, 1, Bottleneck, [1024, False]]
  - [-1, 1, Conv, [512, 1, 1]]
  - [-1, 1, Conv, [1024, 3, 1]]
  - [-1, 1, Conv, [512, 1, 1]]
  - [-1, 1, Conv, [1024, 3, 1]] # 15 (P5/32-large)
  - [-2, 1, Conv, [256, 1, 1]]
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 8], 1, Concat, [1]] # cat backbone P4
  - [-1, 1, Bottleneck, [512, False]]
  - [-1, 1, Bottleneck, [512, False]]
  - [-1, 1, Conv, [256, 1, 1]]
  - [-1, 1, Conv, [512, 3, 1]] # 22 (P4/16-medium)
  - [-2, 1, Conv, [128, 1, 1]]
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P3
  - [-1, 1, Bottleneck, [256, False]]
  - [-1, 2, Bottleneck, [256, False]] # 27 (P3/8-small)
  - [[27, 22, 15], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
Replace the backbone with MobileNetV3
```
[[-1, 1, conv_bn_hswish, [16, 2]], # 0-p1/2
 [-1, 1, MobileNetV3, [ 16, 16, 3, 1, 0, 0]], # 1-p1/2
 [-1, 1, MobileNetV3, [ 24, 64, 3, 2, 0, 0]], # 2-p2/4
 [-1, 1, MobileNetV3, [ 24, 72, 3, 1, 0, 0]], # 3-p2/4
 [-1, 1, MobileNetV3, [ 40, 72, 5, 2, 1, 0]], # 4-p3/8
 [-1, 1, MobileNetV3, [ 40, 120, 5, 1, 1, 0]], # 5-p3/8
 [-1, 1, MobileNetV3, [ 40, 120, 5, 1, 1, 0]], # 6-p3/8
 [-1, 1, MobileNetV3, [ 80, 240, 3, 2, 0, 1]], # 7-p4/16
 [-1, 1, MobileNetV3, [ 80, 200, 3, 1, 0, 1]], # 8-p4/16
 [-1, 1, MobileNetV3, [ 80, 184, 3, 1, 0, 1]], # 9-p4/16
 [-1, 1, MobileNetV3, [ 80, 184, 3, 1, 0, 1]], # 10-p4/16
 [-1, 1, MobileNetV3, [112, 480, 3, 1, 1, 1]], # 11-p4/16
 [-1, 1, MobileNetV3, [112, 672, 3, 1, 1, 1]], # 12-p4/16
 [-1, 1, MobileNetV3, [160, 672, 5, 1, 1, 1]], # 13-p4/16
 [-1, 1, MobileNetV3, [160, 960, 5, 2, 1, 1]], # 14-p5/32 (originally 672, changed to 960 to match the original architecture)
 [-1, 1, MobileNetV3, [160, 960, 5, 1, 1, 1]], # 15-p5/32
]
```
"question",
"detect"
] | Meaccy | 2 |
thtrieu/darkflow | tensorflow | 921 | What do width, height and random mean in cfg | In the cfg file (yolo.cfg), there are width=608, height=608 and random=1.
As far as I know, if random=1, YOLO randomly resizes the input every 10 batches.
But when I print the shape of the image in shuffle(), it is always 608 x 608.
What do width, height and random mean in the cfg file?
Also, I removed random=1 and it still works well.
So does Darkflow not randomly resize?
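For reference, the Darknet random=1 behaviour is usually described as follows (illustrative sketch only; darkflow's own loader may simply not implement this step, which would match what you observe):

```python
import random

class MultiScaleSchedule:
    """Every `every` batches, pick a new square input size that stays a
    multiple of the network stride, as Darknet's random=1 does."""

    def __init__(self, base=608, stride=32, every=10, seed=0):
        self.base, self.stride, self.every = base, stride, every
        self.rng = random.Random(seed)
        self.size = base

    def size_for(self, batch_idx):
        # Re-draw a size at the start of each 10-batch window.
        if batch_idx % self.every == 0:
            self.size = self.base + self.stride * self.rng.randint(-3, 3)
        return self.size
```

If shuffle() always yields 608 x 608, it suggests no such resize step runs in darkflow's pipeline.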
| open | 2018-10-18T04:58:33Z | 2018-10-18T05:22:05Z | https://github.com/thtrieu/darkflow/issues/921 | [] | omg777 | 0 |
ExpDev07/coronavirus-tracker-api | rest-api | 321 | [UPDATE ISSUE] | urgent: confirmed and death cases have been updated, but recovered cases have not been updated yet!!! Recovered cases are as of 29/06/2020 | closed | 2020-07-01T05:42:48Z | 2020-07-28T16:12:27Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/321 | [
"bug",
"duplicate"
] | elizaan | 1 |
aleju/imgaug | machine-learning | 241 | simultaneously augment images and boxes | Hi,
I just wondered if it was possible to augment simultaneously images and boxes ?
One of the use-cases would be to randomly crop around objects in images (see for example SSD paper https://arxiv.org/abs/1512.02325).
Thank you for your help,
Regards,
| open | 2019-01-23T13:59:33Z | 2019-01-27T15:13:34Z | https://github.com/aleju/imgaug/issues/241 | [] | MaximeTuab | 1 |
babysor/MockingBird | pytorch | 393 | [Author, please look] Which model do you think works best? | Hi everyone, the model I am currently using sounds nothing like the original voice. Does anyone have a recommendation for a model that clones more realistically? Could you share it with me? Thank you all! | closed | 2022-02-17T11:57:54Z | 2022-07-27T05:45:11Z | https://github.com/babysor/MockingBird/issues/393 | [] | PEK-ZBAA | 3
d2l-ai/d2l-en | data-science | 1,719 | text of sec 13.1.2 needs updating for pytorch | Sec 13.1.2 (Using an Image Augmentation Training Model) refers to `transform_first`, which exists only in MXNet, not PyTorch. (There are also several references to Gluon that should be removed.) | closed | 2021-04-12T04:21:56Z | 2021-04-19T01:27:21Z | https://github.com/d2l-ai/d2l-en/issues/1719 | [] | murphyk | 1
pyro-ppl/numpyro | numpy | 1,299 | HMC Gibbs Hierarchical Prior Issue | I'm trying to fit a hierarchical model with HMC Gibbs. Much simplified model in the attached notebook. Works fine with a fixed hyperparameter for tau, but not at all with a prior over tau sampled in the model function. I've seen this behaviour in several models and trying to get to the bottom of it. Quite possible that I am missing something... but seems as though this should work?
[hmc gibbs hierarchical.ipynb.zip](https://github.com/pyro-ppl/numpyro/files/7916630/hmc.gibbs.hierarchical.ipynb.zip)
| closed | 2022-01-21T21:25:05Z | 2022-03-10T15:50:50Z | https://github.com/pyro-ppl/numpyro/issues/1299 | [
"question"
] | ross-h1 | 8 |
PaddlePaddle/ERNIE | nlp | 511 | Why does ERNIE download the pre-trained model from the server every time? | https://github.com/PaddlePaddle/ERNIE/blob/9b6c6837f85ca8b232237bb9001710d820e11b95/ernie/modeling_ernie.py#L193
I have a question regarding loading the pre-trained model. This particular line does not check whether the pre-trained dir already exists; instead, it downloads the model into a temp dir first.
Suggestion:
Instead of creating a tempdir, create one static dir and store the weights there; later, check whether the weights are already present before downloading from the URL.
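A minimal sketch of that suggestion (names and the download callable are hypothetical, not the ERNIE codebase):

```python
import os

def fetch_pretrained(name, url, cache_root="~/.cache/ernie", download=None):
    """Return a per-model cache dir, downloading only on a cache miss.

    `download` is a stand-in for the project's existing downloader,
    called as download(url, dest_dir).
    """
    dest = os.path.join(os.path.expanduser(cache_root), name)
    if not os.path.isdir(dest):        # cache miss: fetch once
        os.makedirs(dest, exist_ok=True)
        if download is not None:
            download(url, dest)
    return dest                        # cache hit: reuse local weights
```

The second call for the same name returns immediately without touching the network.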
| closed | 2020-07-03T12:12:24Z | 2020-09-09T12:52:00Z | https://github.com/PaddlePaddle/ERNIE/issues/511 | [
"wontfix"
] | monk1337 | 2 |
open-mmlab/mmdetection | pytorch | 11,559 | fp16 overflow in CrossEntropyLossCost | Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.
**Describe the bug**
When training with amp, `torch.einsum` may cause fp16 overflow.
https://github.com/open-mmlab/mmdetection/blob/cfd5d3a985b0249de009b67d04f37263e11cdf3d/mmdet/models/task_modules/assigners/match_cost.py#L495-L496
**Reproduction**
Training on my custom dataset using DINO with convnext-v2 arch.
**Error traceback**
```python
File "mmdetection/mmdet/models/task_modules/assigners/hungarian_assigner.py", line 131, in assign
matched_row_inds, matched_col_inds = linear_sum_assignment(cost)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: cost matrix is infeasible
```
**Bug fix**
```python
with torch.cuda.amp.autocast(enabled=False):
    cls_cost = torch.einsum('nc,mc->nm', pos, gt_labels) + \
               torch.einsum('nc,mc->nm', neg, 1 - gt_labels)
```
cchen156/Learning-to-See-in-the-Dark | tensorflow | 45 | misalignment problem in the dataset | A question about **test_Sony.py**: ```scipy.io``` is really slow and ```cv2.imwrite``` is really fast. When saving as 8-bit, do these two methods influence the final PSNR & SSIM? | open | 2018-08-07T09:46:09Z | 2021-08-03T09:42:52Z | https://github.com/cchen156/Learning-to-See-in-the-Dark/issues/45 | [] | oneTaken | 13