| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
taverntesting/tavern | pytest | 301 | Issue with sending delete method | I always receive this error when running a DELETE command
tavern.util.exceptions.TestFailError: Test 'Delete non-existant experiment' failed:
- Status code was 400, expected 404:
{"error": "The server could not handle your request: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand."}
------------------------------ Captured log call -------------------------------
base.py 37 ERROR Status code was 400, expected 404:
{"error": "The server could not handle your request: 400 Bad Request: The browser (or proxy) sent a request that this server could not understand."}
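Tavern sends each stage through `requests` under the hood; as a sanity check outside Tavern, the same DELETE can be built with the stdlib (the URL below is a placeholder, not taken from the report):

```python
from urllib.request import Request

# Placeholder endpoint -- substitute the experiment URL from the failing stage.
req = Request("http://localhost:5000/experiments/does-not-exist", method="DELETE")

# Built this way, the DELETE carries no body and no Content-Type header.
# If the failing Tavern stage attaches a JSON body to the DELETE, the server
# may reject the request with 400 before it ever looks up the resource.
print(req.get_method())  # DELETE
print(req.data)          # None (no request body)
```

Comparing the headers Tavern sends against what Postman sends (which works) would narrow down whether an unexpected body or header triggers the 400.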
It started happening all of a sudden. I am on version 0.22.1. If I execute the same request in Postman, it runs just fine. | closed | 2019-03-07T13:56:59Z | 2019-03-07T18:49:55Z | https://github.com/taverntesting/tavern/issues/301 | [] | mrwatchmework | 1 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 664 | I suspect a breaking change related to "include_relationships=True" and schema.load() in version 1.4.1 | Hi! I recently had to do some testing locally for a portion of our code that's been running happily on staging. I didn't specify a version number for my local container instance, so it pulled `1.4.1`, which is the latest. I noticed that it seems to enforce implicitly defined fields (from the include_relationships=True option) during a `schema.load()`, which I THINK isn't the desired or USUAL behavior.
Typically (in previous versions, specifically 1.1.0) we've been able to deserialize into objects with schemas defined with the `include_relationships` option set to True, without having to provide values for relationship fields (which intuitively makes sense). However, for some reason it raises a Validation (Missing field) error on `1.4.1`. This behavior wasn't reproducible on `1.1.0` (not that I would know whether it occurs in later versions with `1.1.0 < version < 1.4.1`, because we hadn't used any other version until my recent local testing, which used `1.4.1`).
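Stated minimally, the regression being described is: a field that used to be optional on load is now treated as required. A dependency-free sketch of the two behaviors (this mimics marshmallow's required-field check for illustration only; it does not use the library):

```python
def load(data, fields, required):
    """Tiny stand-in for schema.load(): error out on missing required fields."""
    missing = [f for f in fields if f in required and f not in data]
    if missing:
        raise ValueError(f"Missing data for required field(s): {missing}")
    return {f: data.get(f) for f in fields}

fields = ["name", "address"]   # 'address' stands in for an include_relationships field
data = {"name": "Ada"}

print(load(data, fields, required=set()))     # 1.1.0-like: loads fine
try:
    load(data, fields, required={"address"})  # 1.4.1-like: raises
except ValueError as e:
    print(e)
```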
```python
class UserSchema(Base):
class Meta:
model = User
        include_relationships = True  # includes e.g. an 'address' relationship attribute (a 'backref' or an explicitly declared relationship on the SQLAlchemy model)
include_fk = True
load_instance = True
# usage and expected behavior
schema = UserSchema(unknown='exclude', session=session)
user = schema.load(data)
# returns <User> object
# Observed behavior
# .. same steps
# throws 'Missing data for required 'address' field - error'
``` | open | 2025-03-14T18:27:03Z | 2025-03-22T10:59:09Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/664 | [] | Curiouspaul1 | 2 |
vllm-project/vllm | pytorch | 14,531 | [Bug]: [Tests] Initialization Test Does Not Work With V1 | ### Your current environment
- While converting the tests over to use V1, this is not working
### 🐛 Describe the bug
- works
```bash
VLLM_USE_V1=0 pytest -v -x models/test_initialization.py -k "not Cohere"
```
- fails on Grok 1
```bash
VLLM_USE_V1=1 pytest -v -x models/test_initialization.py -k "not Cohere"
```
- works! (this is wild)
```bash
VLLM_USE_V1=1 pytest -v -x models/test_initialization.py -k "Grok"
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-10T02:38:38Z | 2025-03-10T02:38:38Z | https://github.com/vllm-project/vllm/issues/14531 | [
"bug"
] | robertgshaw2-redhat | 0 |
stanfordnlp/stanza | nlp | 971 | Slovak multiword doesn't work | **Describe the bug**
The Slovak models do not handle multiword tokens such as "naňho"
**To Reproduce**
```py
>>> import stanza
>>> stanza.download("sk")
>>> nlp=stanza.Pipeline("sk")
>>> doc=nlp("Ten naňho spadol a zabil ho.")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/.local/lib/python3.9/site-packages/stanza/pipeline/core.py", line 231, in __call__
doc = self.process(doc)
File "~/.local/lib/python3.9/site-packages/stanza/pipeline/core.py", line 225, in process
doc = process(doc)
File "~/.local/lib/python3.9/site-packages/stanza/pipeline/depparse_processor.py", line 51, in process
sentence.build_dependencies()
File "~/.local/lib/python3.9/site-packages/stanza/models/common/doc.py", line 555, in build_dependencies
assert(word.head == head.id)
AssertionError
```
**Expected behavior**
"naňho" should be split into two words "na neho"
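For illustration only, the expected multiword expansion is a token-to-words mapping; a toy stdlib sketch (the table entry is taken from this report, not from Stanza's models):

```python
# Toy multiword-token expander -- NOT Stanza's implementation.
MWT_TABLE = {
    "naňho": ["na", "neho"],  # from the expected behavior above
}

def expand(token):
    """Return the word sequence for a multiword token, or the token itself."""
    return MWT_TABLE.get(token, [token])

print(expand("naňho"))   # ['na', 'neho']
print(expand("spadol"))  # ['spadol']
```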
**Environment (please complete the following information):**
- OS: Debian
- Python version: Python 3.9.2
- Stanza version: 1.3.0
| closed | 2022-03-06T04:43:30Z | 2022-03-06T10:46:42Z | https://github.com/stanfordnlp/stanza/issues/971 | [
"bug",
"fixed on dev"
] | KoichiYasuoka | 2 |
horovod/horovod | tensorflow | 3,501 | Is there a problem with ProcessSetTable Finalize when elastic? | Background:
Suppose there are currently 4 ranks on 4 machines.
Due to a failure on machine 1, rank 1 exits directly, and the final shutdown logic is not executed.
Then the remaining machines perform the elastic shutdown and call the `process_set_table.Finalize` function. This function uses allgather to determine whether a process set needs to be removed, but at this point rank 1 has already exited, so the allgather operation should in theory leave the remaining processes stuck, preventing a normal shutdown and breaking elasticity.
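The deadlock risk can be illustrated without MPI: a collective over N participants blocks forever if one of them has already exited. A stdlib sketch with a 4-party barrier that only 3 threads ever reach (the barrier stands in for allgather, under the assumption that rank 1 died):

```python
import threading

barrier = threading.Barrier(parties=4)  # ranks 0..3 expected
results = []

def rank(i):
    try:
        barrier.wait(timeout=0.2)  # real collectives have no timeout
        results.append((i, "ok"))
    except threading.BrokenBarrierError:
        results.append((i, "stuck"))

# rank 1 "crashed" before shutdown, so only 3 of 4 participants arrive
threads = [threading.Thread(target=rank, args=(i,)) for i in (0, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # all three survivors end up "stuck"
```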
@maxhgerlach | closed | 2022-04-02T08:23:35Z | 2022-05-12T03:16:01Z | https://github.com/horovod/horovod/issues/3501 | [
"bug"
] | Richie-yan | 1 |
neuml/txtai | nlp | 265 | Add scripts to train query translation models | Add training scripts for building query translation models. | closed | 2022-04-18T09:41:28Z | 2022-04-18T14:39:26Z | https://github.com/neuml/txtai/issues/265 | [] | davidmezzetti | 0 |
ymcui/Chinese-BERT-wwm | tensorflow | 184 | Some questions about fill-mask | 中国[MASK]:
```
{'sequence': '中 国 :', 'score': 0.5457051992416382, 'token': 8038, 'token_str': ':'}
{'sequence': '中 国 :', 'score': 0.09207046031951904, 'token': 131, 'token_str': ':'}
{'sequence': '中 国 -', 'score': 0.06536566466093063, 'token': 118, 'token_str': '-'}
{'sequence': '中 国 。', 'score': 0.06007284298539162, 'token': 511, 'token_str': '。'}
{'sequence': '中 国 版', 'score': 0.03868889436125755, 'token': 4276, 'token_str': '版'}
{'sequence': '中 国 ;', 'score': 0.01822206936776638, 'token': 8039, 'token_str': ';'}
{'sequence': '中 国 的', 'score': 0.013966748490929604, 'token': 4638, 'token_str': '的'}
{'sequence': '中 国 ,', 'score': 0.007958734408020973, 'token': 8024, 'token_str': ','}
{'sequence': '中 国 网', 'score': 0.006388372275978327, 'token': 5381, 'token_str': '网'}
{'sequence': '中 国,', 'score': 0.005788101349025965, 'token': 117, 'token_str': ','}
```
机器[MASK]:
```
{'sequence': '机 器 。', 'score': 0.2849466800689697, 'token': 511, 'token_str': '。'}
{'sequence': '机 器 :', 'score': 0.21833810210227966, 'token': 8038, 'token_str': ':'}
{'sequence': '机 器 ;', 'score': 0.13236992061138153, 'token': 8039, 'token_str': ';'}
{'sequence': '机 器 :', 'score': 0.08217491209506989, 'token': 131, 'token_str': ':'}
{'sequence': '机 器 人', 'score': 0.028695881366729736, 'token': 782, 'token_str': '人'}
{'sequence': '机 器 )', 'score': 0.02431340701878071, 'token': 8021, 'token_str': ')'}
{'sequence': '机 器 ;', 'score': 0.023457376286387444, 'token': 132, 'token_str': ';'}
{'sequence': '机 器 的', 'score': 0.012613171711564064, 'token': 4638, 'token_str': '的'}
{'sequence': '机 器 、', 'score': 0.010766545310616493, 'token': 510, 'token_str': '、'}
{'sequence': '机 器 (', 'score': 0.010289286263287067, 'token': 8020, 'token_str': '('}
```
From experience, if the prefix is "中国" (China), the next character "人" (person) should have a higher probability. Why do the results of this experiment contain so many punctuation marks?
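The `score` values are softmax probabilities over the vocabulary, so high punctuation mass just means those tokens had the largest logits in this context (plausibly because "中国" often ends a clause in the pretraining data, and whole-word masking changes what the model expects after a complete word). A minimal stdlib softmax, with made-up logits for illustration:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a {token: logit} dict."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Illustrative logits only -- not taken from the model.
probs = softmax({":": 3.2, "。": 1.1, "人": 0.4})
print(probs)  # probabilities sum to 1; ":" dominates because its logit is largest
```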
| closed | 2021-05-17T12:52:17Z | 2021-05-27T21:43:27Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/184 | [
"stale"
] | yoopaan | 3 |
pydantic/pydantic-core | pydantic | 884 | MultiHostUrl build function return type is str | The output type in `_pydantic_core.pyi` for the `build` function in the `MultiHostUrl` class is `str`, but in the [migration guide](https://docs.pydantic.dev/latest/migration/#url-and-dsn-types-in-pydanticnetworks-no-longer-inherit-from-str) you said DSN types no longer inherit from `str`.
Selected Assignee: @dmontagu | closed | 2023-08-15T09:05:04Z | 2023-08-15T09:28:08Z | https://github.com/pydantic/pydantic-core/issues/884 | [
"duplicate"
] | mojtabaAmir | 1 |
MycroftAI/mycroft-core | nlp | 3,038 | Can I download the entire package just once? | So far I've setup Mycroft about 20 times trying different ways, VBox, Virtmgr, Docker mycroft, ubuntu, minideb.. etc.
All this is burning up my bandwidth and seems really unnecessary as they are all accessing the same files.
Can I just download the entire package once instead of having dev_mycroft download all the packages every time? I'm using the same Ubuntu or Debian hosts every time. Would a list of packages for apt help? Then at least I could move them from /var/cache/apt/archives. Or could they be put into a reusable .deb file? Or how about an AppImage?
drivendataorg/cookiecutter-data-science | data-science | 282 | Check for existing directory earlier | As soon as the user enters the project name (or repo name? which is used for the directory name), check if the directory already exists and either quit or prompt for a new name.
Otherwise, the user has to answer all of the questions and THEN get an error message. | open | 2022-08-26T18:53:52Z | 2025-03-07T18:58:23Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/282 | [
"enhancement"
] | AllenDowney | 1 |
ultralytics/ultralytics | machine-learning | 18,826 | Failing to compute gradients, RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm researching perturbations in neural networks. I have a video in which YOLOv11 correctly detects several objects. I'd like to add a gradient to each frame, so that it would fail to detect objects in the modified frames.
My current approach is:
```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

# NOTE: `epsilon` is defined elsewhere in the original script.

def fgsm(gradients, tensor):
    perturbation = epsilon * gradients.sign()
    alt_img = tensor + perturbation
    alt_img = torch.clamp(alt_img, 0, 1)  # clipping pixel values
    alt_img_np = alt_img.squeeze().permute(1, 2, 0).detach().numpy()
    alt_img_np = (alt_img_np * 255).astype(np.uint8)
    return alt_img_np

def perturb(model, cap):
    out = cv2.VideoWriter('perturbed.mp4', 0x7634706d, 30.0, (640, 640))
    print("CUDA Available: ", torch.cuda.is_available())
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model.to(device)
    while cap.isOpened():
        ret, img = cap.read()
        resized = cv2.resize(img, (640, 640))
        rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).float()
        tensor = tensor.to(device)
        tensor = tensor.permute(2, 0, 1).unsqueeze(0)  # change tensor dimensions
        tensor /= 255.0  # normalize
        tensor.requires_grad = True
        output = model(tensor)
        target = output[0].boxes.cls.long()
        logits = output[0].boxes.data
        loss = -F.cross_entropy(logits, target)
        loss.backward()  # backpropagation
        gradients = tensor.grad
        if gradients is not None:
            alt_img = fgsm(gradients, tensor)
            cv2.imshow('Perturbed video', alt_img)
            out.write(alt_img)
```
Without `loss.requires_grad = True`, I receive:
```
loss.backward() #Backpropagation
^^^^^^^^^^^^^^^
File "/var/data/python/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/var/data/python/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/var/data/python/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
If I enable `loss.requires_grad = True`, I am able to extract gradients from the loss, but those don't look like they are correctly applied (and don't lead to a decrease in detection/classification performance).
What am I missing?
Thanks.
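As background, the `grad_fn` error likely arises because `output[0].boxes` is produced by non-differentiable post-processing (NMS) outside the autograd graph, so an attack would need the raw model output. The FGSM step itself only needs the sign of the gradient; a dependency-free sketch on a scalar "pixel" with a hand-computed gradient (the real case differs only in where the gradient comes from):

```python
# FGSM on a toy scalar: loss(x) = (x - target)^2, so d loss/dx = 2*(x - target).
def fgsm_step(x, grad, epsilon=0.1):
    sign = (grad > 0) - (grad < 0)                 # sign() without numpy
    return min(max(x + epsilon * sign, 0.0), 1.0)  # clamp to [0, 1]

x, target = 0.5, 0.2
grad = 2 * (x - target)    # positive gradient -> step pushes x upward
x_adv = fgsm_step(x, grad)
print(x_adv)  # ~0.6: x moved *up* the loss surface, away from the target
```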
### Additional
_No response_ | closed | 2025-01-22T13:49:32Z | 2025-03-02T15:59:00Z | https://github.com/ultralytics/ultralytics/issues/18826 | [
"question",
"detect"
] | gp-000 | 10 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 194 | Release 0.2.2 version | We have merged #180 PR and need to provide a new version of lib to fix the issue in apache superset
@xzkostyan Can we do this? | closed | 2022-08-23T07:26:15Z | 2022-08-24T07:33:19Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/194 | [] | EugeneTorap | 2 |
ets-labs/python-dependency-injector | flask | 73 | Review and update ExternalDependency provider docs | closed | 2015-07-13T07:33:19Z | 2015-07-16T22:15:30Z | https://github.com/ets-labs/python-dependency-injector/issues/73 | [
"docs"
] | rmk135 | 0 | |
encode/apistar | api | 51 | Using gunicorn if using http.QueryParams without query params causes KeyError | ```
[2017-04-16 20:24:24 -0700] [67301] [ERROR] Traceback (most recent call last):
File "venv/lib/python3.6/site-packages/apistar/app.py", line 68, in func
state[output] = function(**kwargs)
File "app.py", line 27, in build
return cls(url_decode(environ['QUERY_STRING']))
KeyError: 'QUERY_STRING'
``` | closed | 2017-04-17T05:07:57Z | 2017-04-17T16:40:25Z | https://github.com/encode/apistar/issues/51 | [] | kinabalu | 1 |
pytest-dev/pytest-cov | pytest | 270 | Local test failure: ModuleNotFoundError: No module named 'helper' | I am seeing the following test failure locally, also when using tox, even from
a fresh git clone:
```
platform linux -- Python 3.7.2, pytest-4.3.0, py-1.8.0, pluggy-0.9.0
rootdir: …/Vcs/pytest-cov, inifile: setup.cfg
plugins: forked-1.0.2, cov-2.6.1
collected 113 items
tests/test_pytest_cov.py F
=========================================================================================== FAILURES ===========================================================================================
____________________________________________________________________________________ test_central[branch2x] ____________________________________________________________________________________
…/Vcs/pytest-cov/tests/test_pytest_cov.py:187: in test_central
'*10 passed*'
E Failed: nomatch: '*- coverage: platform *, python * -*'
E and: '============================= test session starts =============================='
E and: 'platform linux -- Python 3.7.2, pytest-4.3.0, py-1.8.0, pluggy-0.9.0 -- …/Vcs/pytest-cov/.venv/bin/python'
E and: 'cachedir: .pytest_cache'
E and: 'rootdir: /tmp/pytest-of-user/pytest-647/test_central0, inifile:'
E and: 'plugins: forked-1.0.2, cov-2.6.1'
E and: 'collecting ... collected 0 items / 1 errors'
E and: ''
E and: '==================================== ERRORS ===================================='
E and: '_______________________ ERROR collecting test_central.py _______________________'
E and: "ImportError while importing test module '/tmp/pytest-of-user/pytest-647/test_central0/test_central.py'."
E and: 'Hint: make sure your test modules/packages have valid Python names.'
E and: 'Traceback:'
E and: 'test_central.py:1: in <module>'
E and: ' import sys, helper'
E and: "E ModuleNotFoundError: No module named 'helper'"
E and: ''
E fnmatch: '*- coverage: platform *, python * -*'
E with: '----------- coverage: platform linux, python 3.7.2-final-0 -----------'
E nomatch: 'test_central* 9 * 85% *'
E and: 'Name Stmts Miss Branch BrPart Cover Missing'
E and: '-------------------------------------------------------------'
E and: 'test_central.py 9 8 4 0 8% 3-11'
E and: ''
E and: '!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!'
E and: '=========================== 1 error in 0.05 seconds ============================'
E remains unmatched: 'test_central* 9 * 85% *'
------------------------------------------------------------------------------------- Captured stdout call -------------------------------------------------------------------------------------
running: …/Vcs/pytest-cov/.venv/bin/python -mpytest --basetemp=/tmp/pytest-of-user/pytest-647/test_central0/runpytest-0 -v --cov=/tmp/pytest-of-user/pytest-647/test_central0 --cov-report=term-missing /tmp/pytest-of-user/pytest-647/test_central0/test_central.py --cov-branch --basetemp=/tmp/pytest-of-user/pytest-647/basetemp
in: /tmp/pytest-of-user/pytest-647/test_central0
============================= test session starts ==============================
platform linux -- Python 3.7.2, pytest-4.3.0, py-1.8.0, pluggy-0.9.0 -- …/Vcs/pytest-cov/.venv/bin/python
cachedir: .pytest_cache
rootdir: /tmp/pytest-of-user/pytest-647/test_central0, inifile:
plugins: forked-1.0.2, cov-2.6.1
collecting ... collected 0 items / 1 errors
==================================== ERRORS ====================================
_______________________ ERROR collecting test_central.py _______________________
ImportError while importing test module '/tmp/pytest-of-user/pytest-647/test_central0/test_central.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_central.py:1: in <module>
import sys, helper
E ModuleNotFoundError: No module named 'helper'
----------- coverage: platform linux, python 3.7.2-final-0 -----------
Name Stmts Miss Branch BrPart Cover Missing
-------------------------------------------------------------
test_central.py 9 8 4 0 8% 3-11
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.05 seconds ============================
=================================================================================== short test summary info ====================================================================================
FAILED tests/test_pytest_cov.py::test_central[branch2x]
```
| closed | 2019-03-08T02:45:40Z | 2019-03-09T17:33:54Z | https://github.com/pytest-dev/pytest-cov/issues/270 | [] | blueyed | 3 |
tortoise/tortoise-orm | asyncio | 1,439 | pip install tortoise-orm==0.20.0 shows an error | I run `pip install -U tortoise-orm==0.20.0 -i https://pypi.python.org/simple` from the command line, but I get this error:

https://tortoise.github.io/index.html shows that 0.20.0 is published
| open | 2023-07-26T07:10:07Z | 2023-10-20T07:55:58Z | https://github.com/tortoise/tortoise-orm/issues/1439 | [] | Hillsir | 6 |
autogluon/autogluon | scikit-learn | 4,805 | AutoGluon Compatibility Issue on M4 Pro MacBook (Model Loading Stuck) | Description:
I encountered an issue while running AutoGluon on my M4 Pro MacBook. The application gets stuck indefinitely while loading models, without any explicit error messages. The same code works flawlessly on my Intel-based MacBook.
Here’s the relevant part of the log:
```
Loading: models/conversion_model/models/KNeighborsDist/model.pkl
Loading: models/conversion_model/models/LightGBM/model.pkl
Loading: models/conversion_model/models/LightGBMLarge/model.pkl
Loading: models/conversion_model/models/NeuralNetFastAI/model.pkl
Loading: models/conversion_model/models/NeuralNetFastAI/model-internals.pkl
```
The issue persists even after ensuring all dependencies are correctly installed and compatible with the M4 architecture. The logs also show repeated deprecation warnings and debug messages related to Graphviz and Matplotlib, but nothing directly indicative of a failure.
Steps to Reproduce:
1. Run the code on an M4 Pro MacBook.
2. Observe the process stuck while loading models (e.g., model.pkl files).
3. No explicit errors are raised; the application just hangs indefinitely.
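For a hang with no traceback, the stdlib can show where the process is stuck; a general-purpose debugging sketch (not AutoGluon-specific) that dumps every thread's stack:

```python
import faulthandler
import tempfile

# In the stuck process, call faulthandler.dump_traceback_later(30) right after
# start-up: if 30 s pass, all thread stacks are dumped, pinpointing which
# model.pkl load is hanging. Demonstrated here on a healthy process by dumping
# the current stacks into a temporary file:
with tempfile.TemporaryFile(mode="w+") as f:
    faulthandler.dump_traceback(file=f, all_threads=True)
    f.seek(0)
    report = f.read()

print(report.splitlines()[0])  # header line of the stack dump
```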
System Details:
• Device: M4 Pro MacBook
• Python Version: 3.9
• AutoGluon Version: 0.8.3b20230917
• Other Logs: Includes deprecation warnings from Graphviz and Matplotlib.
This seems to be a compatibility issue with the M4 chip. It would be great if AutoGluon could address this or provide guidance on how to debug this further. | open | 2025-01-16T19:48:41Z | 2025-01-18T06:05:46Z | https://github.com/autogluon/autogluon/issues/4805 | [
"bug: unconfirmed",
"OS: Mac"
] | ranmalmendis | 3 |
aminalaee/sqladmin | fastapi | 609 | Allow `list_query` be defined as ModelAdmin method | ### Discussed in https://github.com/aminalaee/sqladmin/discussions/607
<div type='discussions-op-text'>
<sup>Originally posted by **abdawwa1** September 4, 2023</sup>
Hello there , how can i get access for request.session in a class that inherits from ModelView class ?
Ex :
```
class UserProfileAdmin(ModelView, model=User):
list_query = select(User).filter(User.id == request.session.get("user"))
```</div> | closed | 2023-09-04T18:35:58Z | 2023-09-06T09:25:57Z | https://github.com/aminalaee/sqladmin/issues/609 | [] | aminalaee | 0 |
sinaptik-ai/pandas-ai | data-science | 938 | Large number of dataframes to handle at once | ### 🚀 The feature
I have a use case where I need to plug in, say, 100 dataframes, each with about 15-20 columns but not many rows. For that I would use SmartDatalake and pass in all dataframes as a list, but when I input a query and it has to choose the right dataframe based on my request, it passes a data snippet from all 100 dataframes into the prompt, which can easily result in token-limit errors.
I'm not sure how we can overcome this issue, maybe vector database with context from each dataframe, and extracting relevant dataframe based on that can help.
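One cheap version of the proposed routing is to score each dataframe's description against the query before anything is sent to the LLM; a stdlib keyword-overlap sketch (an embedding/vector store would replace this scoring in practice, and the frame descriptions here are invented):

```python
def pick_frames(query, descriptions, top_k=3):
    """Rank dataframe descriptions by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        descriptions.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

frames = {
    "sales": "monthly sales revenue per region",
    "hr": "employee names departments salaries",
    "web": "site traffic page views sessions",
}
print(pick_frames("total revenue per region last month", frames, top_k=1))  # ['sales']
```

Only the top-scoring dataframes would then be handed to SmartDatalake, keeping the prompt within the token limit.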
### Motivation, pitch
I was just testing out PandasAI and this scenario came to mind: in the real world, if we have a large number of dataframes and want to provide chat functionality over all that data, the current implementation of PandasAI (passing a dataframe snippet with the prompt) won't be able to handle it.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-02-15T22:44:30Z | 2024-06-01T00:19:10Z | https://github.com/sinaptik-ai/pandas-ai/issues/938 | [] | BYZANTINE26 | 0 |
OpenInterpreter/open-interpreter | python | 1,529 | generated files contain echo "##active_lineN##" lines | ### Describe the bug
When I ask it to create some files, the actual files often contain these tracing-support lines, and I'm not able to instruct it to avoid this in any way.
```
echo "##active_line2##"
# frozen_string_literal: true
echo "##active_line3##"
echo "##active_line4##"
class Tasks::CleanupLimitJob < ApplicationJob
echo "##active_line5##"
queue_as :default
echo "##active_line6##"
echo "##active_line7##"
def perform(tag: "", limit: 14)
echo "##active_line8##"
Ops::Backup.retain_last_limit_cleanup_policy(tag: tag, limit: limit)
echo "##active_line9##"
end
echo "##active_line10##"
end
echo "##active_line11##"
```
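Until the root cause is fixed, the markers are mechanical enough to strip after the fact; a stdlib sketch (the marker format is copied from the sample above):

```python
import re

ACTIVE_LINE = re.compile(r'^\s*echo "##active_line\d+##"\s*$')

def strip_markers(text):
    """Remove open-interpreter's per-line tracing markers from generated files."""
    return "\n".join(l for l in text.splitlines() if not ACTIVE_LINE.match(l))

sample = 'echo "##active_line2##"\n# frozen_string_literal: true\nqueue_as :default'
print(strip_markers(sample))
# # frozen_string_literal: true
# queue_as :default
```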
### Reproduce
Have it read a gem project and ask it to generate, e.g., a new ActiveJob instance.
### Expected behavior
Written files should not contain `echo` lines that serve debugging.
### Screenshots
_No response_
### Open Interpreter version
0.4.3 Developer Preview
### Python version
3.10.13
### Operating System name and version
macOS (latest greatest)
### Additional context
_No response_ | open | 2024-11-10T16:27:44Z | 2024-11-29T10:32:55Z | https://github.com/OpenInterpreter/open-interpreter/issues/1529 | [] | koenhandekyn | 1 |
nerfstudio-project/nerfstudio | computer-vision | 2,883 | Docker dromni/nerfstudio ns-train crashes | **Describe the bug**
Docker `dromni/nerfstudio` `ns-train` crashes with tinycudann/modules.py:19 `Unknown compute capability. Ensure PyTorch with CUDA support`
**To Reproduce**
Steps to reproduce the behavior:
1. Download data
```
docker run --gpus all \
--user $(id -u):$(id -g) \
-v $(pwd)/workspace:/workspace/ \
-v $(pwd)/datasets:/datasets/ \
-v $(pwd)/cache:/home/user/.cache/ \
--rm -it \
--shm-size=12gb \
dromni/nerfstudio:1.0.1 \
ns-download-data blender --save-dir /datasets
```
2. Train
```
docker run --gpus all \
--user $(id -u):$(id -g) \
-v $(pwd)/workspace:/workspace/ \
-v $(pwd)/datasets:/datasets/ \
-v $(pwd)/cache:/home/user/.cache/ \
-p 0.0.0.0:7007:7007 \
--rm -it \
--shm-size=12gb \
dromni/nerfstudio:1.0.1 \
ns-train nerfacto --data datasets/blender/mic
```
5. See error
```
/home/user/.local/lib/python3.10/site-packages/torch/cuda/__init__.py:138: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
Could not load tinycudann: Unknown compute capability. Ensure PyTorch with CUDA support is installed.
...
OSError: Unknown compute capability. Ensure PyTorch with CUDA support is installed.
```
**Expected behavior**
Expect ns-train nerfacto --data datasets/blender/mic call to successsfully run while show progress similar to [documentation](https://docs.nerf.studio/quickstart/first_nerf.html)
Note that I can run
```
docker run --gpus all dromni/nerfstudio:1.0.1 /bin/bash -c "nvidia-smi"
==========
== CUDA ==
==========
CUDA Version 11.8.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
Wed Feb 7 17:00:58 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.223.02 Driver Version: 470.223.02 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 30% 32C P8 N/A / 75W | 6MiB / 1999MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... Off | 00000000:02:00.0 Off | N/A |
| 27% 32C P8 6W / 180W | 2MiB / 8119MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
```
with `nvidia-smi -L`
```
GPU 0: NVIDIA GeForce GTX 1050
GPU 1: NVIDIA GeForce GTX 1080
```
docker run --gpus all dromni/nerfstudio:1.0.1 /bin/bash -c "nvcc -V"
```
==========
== CUDA ==
==========
CUDA Version 11.8.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```
**Additional context**
Following #1177, I suspect the issue is with `tinycudann` and [architecture-related issues](https://github.com/nerfstudio-project/nerfstudio/issues/1317)?
vastsa/FileCodeBox | fastapi | 148 | Cannot share files using Cloudflare R2 | I configured Cloudflare R2 via S3; uploading works fine, but downloading a file through the share page pops up this error:
UnauthorizedSigV2 authorization is not supported. Please use SigV4 instead.
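In boto3/botocore terms (which FileCodeBox's S3 storage would typically go through), the client-side fix is selecting the v4 signer. Sketched here as plain data so it runs anywhere; the endpoint is a placeholder and the real client construction is only commented:

```python
# Client settings for an S3-compatible store that rejects SigV2 (e.g. R2).
s3_client_kwargs = {
    "endpoint_url": "https://<account-id>.r2.cloudflarestorage.com",  # placeholder
    "config": {"signature_version": "s3v4"},
}

# With botocore installed, this would become:
#   from botocore.client import Config
#   boto3.client("s3", endpoint_url=..., config=Config(signature_version="s3v4"))
print(s3_client_kwargs["config"]["signature_version"])  # s3v4
```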
It needs to be updated to SigV4. | closed | 2024-04-13T06:59:12Z | 2024-04-29T14:13:20Z | https://github.com/vastsa/FileCodeBox/issues/148 | [] | Emtier | 5 |
pydata/xarray | numpy | 9,098 | ⚠️ Nightly upstream-dev CI failed ⚠️ | [Workflow Run URL](https://github.com/pydata/xarray/actions/runs/9654194590)
<details><summary>Python 3.12 Test Summary</summary>
```
xarray/tests/test_missing.py::test_scipy_methods_function[barycentric]: FutureWarning: 'd' is deprecated and will be removed in a future version. Please use 'D' instead of 'd'.
xarray/tests/test_missing.py::test_scipy_methods_function[krogh]: FutureWarning: 'd' is deprecated and will be removed in a future version. Please use 'D' instead of 'd'.
xarray/tests/test_missing.py::test_scipy_methods_function[pchip]: FutureWarning: 'd' is deprecated and will be removed in a future version. Please use 'D' instead of 'd'.
xarray/tests/test_missing.py::test_scipy_methods_function[spline]: FutureWarning: 'd' is deprecated and will be removed in a future version. Please use 'D' instead of 'd'.
xarray/tests/test_missing.py::test_scipy_methods_function[akima]: FutureWarning: 'd' is deprecated and will be removed in a future version. Please use 'D' instead of 'd'.
xarray/tests/test_missing.py::test_interpolate_pd_compat_non_uniform_index: FutureWarning: 'd' is deprecated and will be removed in a future version. Please use 'D' instead of 'd'.
```
</details>
| closed | 2024-06-12T00:23:50Z | 2024-07-01T14:47:11Z | https://github.com/pydata/xarray/issues/9098 | [
"CI"
] | github-actions[bot] | 3 |
pyg-team/pytorch_geometric | deep-learning | 9,036 | Creating a graph with `torch_geometric.nn.pool.radius` using `max_num_neighbors` behaves differently on GPU than on CPU | ### 🐛 Describe the bug
`torch_geometric.nn.pool.radius` with `max_num_neighbors` behaves differently on CPU than on GPU. On CPU, it adds connections based on the angle within the circle (i.e., it starts adding connections to nodes in the 1st quadrant, then moves to the 2nd quadrant, ...). On GPU, it scatters the connections uniformly across the circle. The GPU behavior is the one I would expect. The following script visualizes this:
```python
import os
from argparse import ArgumentParser

import matplotlib.pyplot as plt
import torch
from torch_geometric.nn.pool import radius


def parse_args():
    parser = ArgumentParser()
    parser.add_argument("--accelerator", type=str, required=True, choices=["gpu", "cpu"])
    return vars(parser.parse_args())


def main(accelerator):
    if accelerator == "cpu":
        dev = torch.device("cpu")
    else:
        dev = torch.device("cuda:0")

    torch.manual_seed(0)
    x = torch.rand(512)
    y = torch.rand(512)
    pos = torch.stack([x, y], dim=1).to(dev)
    center = torch.tensor([0.5, 0.5]).unsqueeze(0).to(dev)
    # BUG: with CPU, the edges will all be from the first quadrant instead of
    # uniformly distributed across the whole circle
    edges = radius(x=pos, y=center, r=0.5, max_num_neighbors=128)

    ax = plt.gcf().gca()
    ax.add_patch(plt.Circle((0.5, 0.5), 0.5, color="g", fill=False))
    plt.scatter([0.5], [0.5])
    colors = ["red" if i in edges[1] else "black" for i in range(len(x))]
    plt.scatter(x, y, c=colors)
    title = f"{os.name}_{accelerator}"
    plt.title(title)
    # plt.show()
    plt.savefig(f"{title}.svg")


if __name__ == "__main__":
    main(**parse_args())
```
# With --accelerator gpu
Connections to other nodes within the radius will be scattered uniformly.

# With --accelerator cpu
Connections to other nodes within the radius will be taken first from the 1. quadrant; once all points from the 1. quadrant are taken, nodes from the 2. quadrant will be taken, ...

### Versions
Versions of relevant libraries:

```
Python version: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:39:03) [GCC 11.3.0] (64-bit runtime)
[conda] torch            2.1.1+cu121       pypi_0  pypi
[conda] torch-cluster    1.6.3+pt21cu121   pypi_0  pypi
[conda] torch-geometric  2.4.0             pypi_0  pypi
[conda] torch-harmonics  0.6.4             pypi_0  pypi
[conda] torch-scatter    2.1.2+pt21cu121   pypi_0  pypi
[conda] torch-sparse     0.6.18+pt21cu121  pypi_0  pypi
```
"bug"
] | BenediktAlkin | 3 |
sczhou/CodeFormer | pytorch | 175 | upload issue | Where do we upload our image? The dialogue box for uploading an image is not working.
| open | 2023-03-11T08:46:22Z | 2023-03-11T17:44:37Z | https://github.com/sczhou/CodeFormer/issues/175 | [] | satyabirkumar87 | 1 |
indico/indico | sqlalchemy | 5,987 | Registration notification email are in the user language, not the organiser language | **Describe the bug**
I have enabled automatic email notifications when a user registers for my event. This email is in the user's default language (German, Polish...) instead of my (the organiser's) default language (French, or at least English).
**To Reproduce**
Not easily, as one would have to have dummy accounts set up with different languages.
**Expected behavior**
I would expect the email to be in my language (or the default language of the event, if there is such a thing).
**Screenshots**
Email I've received:
<img width="1068" alt="Capture d’écran 2023-10-11 à 18 34 18" src="https://github.com/indico/indico/assets/7677384/9b02fbfc-5ac2-447a-9ba6-0b043322dcbc">
(I got another one in Polish, but the bulk of the emails are in English.)
**Additional context**
This is the event concerned
[Indico event](https://indico.in2p3.fr/event/30589/) using indico v3.2.7
| open | 2023-10-11T16:38:26Z | 2025-01-29T16:14:44Z | https://github.com/indico/indico/issues/5987 | [
"bug"
] | dhrou | 4 |
assafelovic/gpt-researcher | automation | 1,247 | Calling the FastAPI will always returned no sources even if report type is web_search | Hello all,
I need to convert GPT Researcher into an API endpoint, so I have written the following:
```python
from fastapi import APIRouter, HTTPException, FastAPI
from pydantic import BaseModel
from typing import Optional, List, Dict
from gpt_researcher import GPTResearcher
import asyncio
import os
import time
from dotenv import load_dotenv
from backend.utils import write_md_to_word

# Load environment variables from .env file
load_dotenv()

# Verify API keys are present
if not os.getenv("OPENAI_API_KEY"):
    raise ValueError("OPENAI_API_KEY environment variable is not set")
if not os.getenv("TAVILY_API_KEY"):
    raise ValueError("TAVILY_API_KEY environment variable is not set")

app = FastAPI()
router = APIRouter()


class ResearchRequest(BaseModel):
    query: str
    report_type: str = "detailed_report"
    report_source: str = "deep_research"
    source_urls: Optional[List[str]] = []
    query_domains: Optional[List[str]] = []
    headers: Optional[Dict] = None
    verbose: bool = True  # Set verbose to True by default
    tone: Optional[str] = "Objective"


@router.post("/api/v1/research")
async def generate_research(request: ResearchRequest):
    try:
        if not request.query.strip():
            raise HTTPException(status_code=400, detail="Query cannot be empty")

        # Initialize researcher with web search settings
        researcher = GPTResearcher(
            query=request.query,
            report_type=request.report_type,
            report_source=request.report_source,
            source_urls=request.source_urls,
            query_domains=request.query_domains,
            headers=request.headers,
            verbose=request.verbose,
            tone=request.tone
        )

        # Conduct research and generate report
        await researcher.conduct_research()
        report = await researcher.write_report()

        # Get additional information
        source_urls = researcher.get_source_urls()
        research_costs = researcher.get_costs()
        research_images = researcher.get_research_images()
        research_sources = researcher.get_research_sources()

        # Generate DOCX file
        sanitized_filename = f"task_{int(time.time())}_{request.query[:50]}"
        docx_path = await write_md_to_word(report, sanitized_filename)

        response_data = {
            "report": report,
            "source_urls": source_urls,
            "research_costs": research_costs,
            "context": researcher.context,
            "visited_urls": list(researcher.visited_urls),
            "research_images": research_images,
            "research_sources": research_sources,
            "agent_info": {
                "type": researcher.agent,
                "role": researcher.role
            },
            "docx_path": docx_path
        }

        return {
            "status": "success",
            "data": response_data
        }
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


# Add this at the end of the file
app.include_router(router)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8001)
```
The response I get back is:

> I'm sorry, but I cannot provide a detailed report on news on March 9, 2025, as the information provided is empty ("[]"), and I do not have access to real-time or future news updates beyond my last training cut-off date in October 2023. If you have specific information or context you'd like me to analyze or expand upon, please provide it, and I will do my best to assist you.
However, even if I remove the source_urls block, GPT always returns that it has been given no sources for this.
Is there another function crawling and filling the source array?
Has anyone done something similar and gotten it to work?
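For anyone comparing notes: one thing I would sanity-check (my guess; the helper and its name are hypothetical, not part of gpt-researcher) is the interplay between `report_source` and an empty `source_urls` list. A small request-normalization step can fall back to a web-search source whenever no URLs are supplied:

```python
def normalize_report_source(report_source, source_urls):
    """If no explicit sources are given, fall back to web search.

    Hypothetical helper: the argument names mirror the request model above.
    """
    if not source_urls:  # covers both [] and None
        return "web"
    return report_source

print(normalize_report_source("static", []))                        # falls back to "web"
print(normalize_report_source("static", ["https://example.com"]))   # keeps "static"
```

You would call this before constructing the researcher, so an empty list never forces a URL-only report.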
| closed | 2025-03-09T22:53:22Z | 2025-03-20T09:26:07Z | https://github.com/assafelovic/gpt-researcher/issues/1247 | [] | Cloudore | 1 |
deepinsight/insightface | pytorch | 1,926 | Request for the evaluation dataset | There are several algorithms evaluated on the **IFRT** dataset. Can you show developers the link or the location of the IFRT dataset? Thanks! | closed | 2022-03-06T03:46:51Z | 2022-03-29T13:20:46Z | https://github.com/deepinsight/insightface/issues/1926 | [] | HowToNameMe | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 616 | Output n_class is less than len(class_name) in training? | Hi, Thanks for your work.
I used your code to train a multi-class segmentation task. I have 10 classes, but after training, the output mask just has 8 classes (using np.unique).
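Not from the issue, but a minimal illustration of one common cause: `np.unique` on the predicted mask only shows classes that the model actually predicted somewhere. If some of the ten classes never win the per-pixel argmax, the mask contains fewer distinct values even though the model has ten output channels. A plain-Python sketch:

```python
# 10-class "logits" for 6 pixels; classes 3, 7, 8, 9 never have the max score.
num_classes = 10
logits = [
    [0.9 if c == target else 0.01 for c in range(num_classes)]
    for target in [0, 1, 2, 4, 5, 6]
]

def argmax(row):
    return max(range(len(row)), key=lambda c: row[c])

mask = [argmax(row) for row in logits]
predicted_classes = sorted(set(mask))  # what np.unique(mask) would report
print(predicted_classes)  # [0, 1, 2, 4, 5, 6], fewer than num_classes
```

So a count below 10 does not necessarily mean the model has fewer output channels; it may just never predict two of the classes.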
Do you know how this happened? Thanks for your reply! | closed | 2022-07-07T09:26:55Z | 2022-10-15T02:18:36Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/616 | [
"Stale"
] | ymzlygw | 7 |
Lightning-AI/pytorch-lightning | pytorch | 20,145 | mps and manual_seed_all | ### Description & Motivation
Hi,
I'm sorry to disturb here, but I can't find any information anywhere about this.
I'm struggling with a function used with CUDA to get the same seed on both GPU and CPU.
Here is the function I worked from:

###### Function for setting the seed:
```python
def set_seed(seed):
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

set_seed(42)

# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```
Here is the way I tried to transpose the function for mps:

###### Function for setting the seed
```python
def set_seed(seed):
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.backends.mps.is_available():  # GPU operations have separate seeds
        torch.mps.manual_seed(seed)
        torch.mps.manual_seed_all(seed)
    set_seed(42)
```
But actually (I read the PyTorch documentation), it seems the manual_seed_all() function doesn't exist.
I don't know how to get around this problem while keeping my reproducibility with mps.
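For what it's worth, a guard-based sketch (my assumption, not an official API): since `torch.mps` has no `manual_seed_all`, one can call backend-specific seeding functions only if the backend actually provides them, e.g. via `hasattr`. Also, per the traceback below, `torch.manual_seed` already imports and seeds `torch.mps` itself, and the repeated frames suggest the `set_seed(42)` call ended up inside `set_seed`, which alone would explain the RecursionError. Illustrated here with stub backends so the pattern is runnable without torch:

```python
class CudaLikeBackend:
    def __init__(self):
        self.calls = []
    def manual_seed(self, seed):
        self.calls.append(("manual_seed", seed))
    def manual_seed_all(self, seed):
        self.calls.append(("manual_seed_all", seed))

class MpsLikeBackend:
    def __init__(self):
        self.calls = []
    def manual_seed(self, seed):
        self.calls.append(("manual_seed", seed))
    # note: no manual_seed_all, like torch.mps

def seed_backend(backend, seed):
    backend.manual_seed(seed)
    if hasattr(backend, "manual_seed_all"):  # only call it if it exists
        backend.manual_seed_all(seed)

cuda, mps = CudaLikeBackend(), MpsLikeBackend()
seed_backend(cuda, 42)
seed_backend(mps, 42)
print(cuda.calls, mps.calls)
```

With real torch, the same `hasattr(torch.mps, "manual_seed_all")` guard would skip the missing call cleanly.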
If I comment out the torch.mps.manual_seed_all() line, I get this error:
```
---------------------------------------------------------------------------
RecursionError                            Traceback (most recent call last)
Cell In[35], line 6
      4 ## Create a plot for every activation function
      5 for i, act_fn_name in enumerate(act_fn_by_name):
----> 6     set_seed(42) # Setting the seed ensures that we have the same weight initialization for each activation function
      7     act_fn = act_fn_by_name[act_fn_name]()
      8     net_actfn = BaseNetwork(act_fn=act_fn).to(device)

Cell In[34], line 13, in set_seed(seed)
     11 torch.mps.manual_seed(seed)
     12 # torch.mps.manual_seed_all(seed)
---> 13 set_seed(42)

Cell In[34], line 13, in set_seed(seed)
     11 torch.mps.manual_seed(seed)
     12 # torch.mps.manual_seed_all(seed)
---> 13 set_seed(42)

    [... skipping similar frames: set_seed at line 13 (2957 times)]

Cell In[34], line 13, in set_seed(seed)
     11 torch.mps.manual_seed(seed)
     12 # torch.mps.manual_seed_all(seed)
---> 13 set_seed(42)

Cell In[34], line 9, in set_seed(seed)
      7 def set_seed(seed):
      8     np.random.seed(seed)
----> 9     torch.manual_seed(seed)
     10     if torch.backends.mps.is_available(): # GPU operation have separate seeds
     11         torch.mps.manual_seed(seed)

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py:451, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
    449 prior = set_eval_frame(callback)
    450 try:
--> 451     return fn(*args, **kwargs)
    452 finally:
    453     set_eval_frame(prior)

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/site-packages/torch/_dynamo/external_utils.py:36, in wrap_inline.<locals>.inner(*args, **kwargs)
     34 @functools.wraps(fn)
     35 def inner(*args, **kwargs):
---> 36     return fn(*args, **kwargs)

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/site-packages/torch/random.py:45, in manual_seed(seed)
     42 import torch.cuda
     44 if not torch.cuda._is_in_bad_fork():
---> 45     torch.cuda.manual_seed_all(seed)
     47 import torch.mps
     48 if not torch.mps._is_in_bad_fork():

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/site-packages/torch/cuda/random.py:126, in manual_seed_all(seed)
    123     default_generator = torch.cuda.default_generators[i]
    124     default_generator.manual_seed(seed)
--> 126 _lazy_call(cb, seed_all=True)

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/site-packages/torch/cuda/__init__.py:230, in _lazy_call(callable, **kwargs)
    228 global _lazy_seed_tracker
    229 if kwargs.get("seed_all", False):
--> 230     _lazy_seed_tracker.queue_seed_all(callable, traceback.format_stack())
    231 elif kwargs.get("seed", False):
    232     _lazy_seed_tracker.queue_seed(callable, traceback.format_stack())

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/traceback.py:213, in format_stack(f, limit)
    211 if f is None:
    212     f = sys._getframe().f_back
--> 213 return format_list(extract_stack(f, limit=limit))

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/traceback.py:227, in extract_stack(f, limit)
    225 if f is None:
    226     f = sys._getframe().f_back
--> 227 stack = StackSummary.extract(walk_stack(f), limit=limit)
    228 stack.reverse()
    229 return stack

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/traceback.py:383, in StackSummary.extract(klass, frame_gen, limit, lookup_lines, capture_locals)
    381 if lookup_lines:
    382     for f in result:
--> 383         f.line
    384 return result

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/traceback.py:306, in FrameSummary.line(self)
    304 if self.lineno is None:
    305     return None
--> 306 self._line = linecache.getline(self.filename, self.lineno)
    307 return self._line.strip()

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/linecache.py:30, in getline(filename, lineno, module_globals)
     26 def getline(filename, lineno, module_globals=None):
     27     """Get a line for a Python source file from the cache.
     28     Update the cache if it doesn't contain an entry for this file already."""
---> 30     lines = getlines(filename, module_globals)
     31     if 1 <= lineno <= len(lines):
     32         return lines[lineno - 1]

File /Applications/anaconda3/envs/DL-torch-arm64/lib/python3.10/linecache.py:42, in getlines(filename, module_globals)
     40 if filename in cache:
     41     entry = cache[filename]
---> 42     if len(entry) != 1:
     43         return cache[filename][2]
     45 try:

RecursionError: maximum recursion depth exceeded while calling a Python object
```
Well, I'm sorry if my message is not clear or it's too big a demand for so small a use case.
Thank you for your answer.
I can add information if needed.
### Pitch
I would like to use:
`torch.mps.manual_seed_all`
After a night of thinking, I'm wondering how mps is considered.
Is it considered a GPU? Or some kind of GPU/CPU mix? In the second case, it seems I maybe wouldn't need the manual_seed_all function, because my whole processing environment would get the same seed from the torch.mps.manual_seed function.
BUT it seems I still have this issue with the RecursionError.
And it seems the error comes from the point where torch.cuda.manual_seed_all is imported at line 45 of the manual_seed function.
Well thank you for your help and answers
### Alternatives
I wish i had the knowledge to do so
### Additional context
active environment : DL-torch-arm64
active env location : /Applications/anaconda3/envs/DL-torch-arm64
shell level : 2
user config file : /Users/.../.condarc
populated config files : /Users/.../.condarc
conda version : 24.7.1
conda-build version : 24.5.1
python version : 3.12.2.final.0
solver : libmamba (default)
virtual packages : __archspec=1=m1
__conda=24.7.1=0
__osx=14.5=0
__unix=0=0
base environment : /Applications/anaconda3 (writable)
conda av data dir : /Applications/anaconda3/etc/conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/conda-forge/osx-arm64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/osx-arm64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/osx-arm64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /Applications/anaconda3/pkgs
/Users/.../.conda/pkgs
envs directories : /Applications/anaconda3/envs
/Users/.../.conda/envs
platform : osx-arm64
user-agent : conda/24.7.1 requests/2.32.2 CPython/3.12.2 Darwin/23.5.0 OSX/14.5 solver/libmamba conda-libmamba-solver/24.1.0 libmambapy/1.5.8 aau/0.4.4 c/. s/. e/.
UID:GID : 501:20
netrc file : None
offline mode : False
cc @borda | closed | 2024-07-31T14:31:09Z | 2024-08-01T09:04:56Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20145 | [
"question"
] | Tonys21 | 2 |
mirumee/ariadne | graphql | 1,114 | Cannot install 0.15 | A syntax error introduced in `0.15.0` ([commit](https://github.com/kaktus42/ariadne/commit/dbdb87d650c124afde00d160e2484dabd4ebddcc)) that was not corrected until `0.16.0` ([commit](https://github.com/mirumee/ariadne/commit/dbc57a79d4ccbcf12c3af9feadfad897cddfb1ef)) prevents the installation of ariadne into a fresh environment unless a supported version of starlette is either already installed or explicitly defined as a dependency.
When using poetry, attempting to install ariadne `0.15.1`, I get the following error:
```Invalid requirement (starlette (>0.17<0.20)) found in ariadne-0.15.1 dependencies, skipping```
Might we want to go ahead and cut a bugfix release - `0.15.2`? - to make the package installable?
While I don't think it's a super high priority, it's likely that there are active projects out there where the environment was upgraded incrementally from an older version that now specify `^0.15`, where the environment cannot be rebuilt from the requirements files. | closed | 2023-07-14T23:24:07Z | 2023-10-25T15:50:48Z | https://github.com/mirumee/ariadne/issues/1114 | [
"question",
"waiting"
] | lyndsysimon | 2 |
plotly/jupyter-dash | dash | 32 | alive_url should use server url to run behind proxies | JupyterDash.run_server launches the server and then query health by using the[ alive_url composed of host and port ](https://github.com/plotly/jupyter-dash/blob/86cd38869925a4b096fe55714aa8997fb84a963c/jupyter_dash/jupyter_app.py#L296). When running behind a proxy, the publically available url is arbitrary. There is already a JupyterDash.server_url that is used in [dashboard_url](https://github.com/plotly/jupyter-dash/blob/86cd38869925a4b096fe55714aa8997fb84a963c/jupyter_dash/jupyter_app.py#L242). Shouldn't alive_url follow the same construction? | open | 2020-08-13T20:17:25Z | 2020-08-13T20:17:25Z | https://github.com/plotly/jupyter-dash/issues/32 | [] | ccdavid | 0 |
hootnot/oanda-api-v20 | rest-api | 191 | Proxy uses HTTP and not HTTPS | requests.exceptions.ProxyError: HTTPSConnectionPool(host='[stream-fxpractice.oanda.com](http://stream-fxpractice.oanda.com/)', port=443)
Caused by ProxyError('Your proxy appears to only use HTTP and not HTTPS, try changing your proxy URL to be HTTP. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#https-proxy-error-http-proxy', SSLError(SSLError(1, '[SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:852) | closed | 2022-04-21T07:50:30Z | 2022-04-22T20:07:24Z | https://github.com/hootnot/oanda-api-v20/issues/191 | [] | QuanTurtle-founder | 1 |
lepture/authlib | flask | 43 | OAuth2 ClientCredential grant custom expiration not being read from Flask configuration. | ## Summary
When registering a client credential grant for the `authlib.flask.oauth2.AuthorizationServer` in a Flask application and attempting to set a custom expiration time by setting `OAUTH2_EXPIRES_CLIENT_CREDENTIAL` as specified in the docs:
https://github.com/lepture/authlib/blob/23ea76a4d9099581cd1cb43e0a8a9a49a9328361/docs/flask/oauth2.rst#define-server
the specified custom expiration time is not being read.
## Investigation
At first I believe that there was simply an error in the docs, in that the configuration value key should be pluralized, e.g. `OAUTH2_EXPIRES_CLIENT_CREDENTIALS`. However, upon a deeper look, it seems as though the default credential grant expiration time set in the authlib code base was not being used at all; the value being returned was `3600` instead of `864000` as specified in the mapping:
https://github.com/lepture/authlib/blob/7d2a7b55475e458c7043238bc4642e55c39fd449/authlib/flask/oauth2/authorization_server.py#L15-L20
After a bit more digging, it seems that the `create_expires_generator` is returning the default `BearerToken.DEFAULT_EXPIRES_IN` value because the calculated `conf_key = 'OAUTH2_EXPIRES_{}'.format(grant_type.upper())` only produces `client_credentials` instead of the expected `client_credential`:
https://github.com/lepture/authlib/blob/7d2a7b55475e458c7043238bc4642e55c39fd449/authlib/flask/oauth2/authorization_server.py#L124-L136
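To make the mismatch concrete, here is a minimal stand-alone reproduction of the lookup pattern, with the constants inlined and simplified from the snippets above (the dicts are illustrative, not authlib's exact structures):

```python
DEFAULT_EXPIRES_IN = 3600  # BearerToken.DEFAULT_EXPIRES_IN

# the defaults mapping is keyed with the singular form
GRANT_TYPES_EXPIRES_KEYS = {
    'authorization_code': 'AUTHORIZATION_CODE',
    'implicit': 'IMPLICIT',
    'password': 'PASSWORD',
    'client_credential': 'CLIENT_CREDENTIAL',   # note: singular
}

config = {'OAUTH2_EXPIRES_CLIENT_CREDENTIAL': 864000}

def create_expires_in(grant_type):
    # mirrors create_expires_generator: key is built from the wire grant_type
    conf_key = 'OAUTH2_EXPIRES_{}'.format(grant_type.upper())
    return config.get(conf_key, DEFAULT_EXPIRES_IN)

# The wire value for this grant is plural, so the lookup misses:
print(create_expires_in('client_credentials'))  # 3600, not 864000
print(create_expires_in('client_credential'))   # 864000
```

The wire grant type is always `client_credentials` (plural, per RFC 6749), so the config key built from it never matches the `CLIENT_CREDENTIAL` entry, and the default of 3600 wins.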
## Notes
A very subtle bug that took me a while to track down; I attempted to create a test case, but it's not very obvious as to where the test case should exist since `tests/flask/test_oauth2/test_client_credentials_grant.py` is composed of functional tests as opposed to integration/unit tests.
If this is at all not clear, please let me know and I'll attempt to provide additional information.
And thank you for all your hard work! Authlib is fantastic, and has been a pleasure to use even in its not-completely-done state.
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,001 | Error in pix2pix backward_G? | Hi,
I have a question about the implementation of the `backward_G` function of the pix2pix model. In the file `models/pix2pix_model.py` I see the following
def backward_G(self):
"""Calculate GAN and L1 loss for the generator"""
# First, G(A) should fake the discriminator
fake_AB = torch.cat((self.real_A, self.fake_B), 1)
pred_fake = self.netD(fake_AB)
self.loss_G_GAN = self.criterionGAN(pred_fake, True)
...
Here we call `criterionGAN` to make a prediction for a fake input, but we set the target value as if it were a `True` (real) input. Shouldn't that be set to False instead? If not, why not?
Btw. I understand the concept that the generator needs to fool the discriminator. But I thought this was done while still giving the right information to the discriminator at all times.
Now we are just fooling the discriminator by giving it wrong information...
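For context (my understanding of the standard GAN formulation, not from the repo docs): using `True` targets on fake predictions in the generator step is the usual non-saturating generator loss. The discriminator's own update still sees correct labels; only the generator's loss asks "how real does D think my fake is", i.e. it minimizes -log(D(G(x))) rather than maximizing log(1 - D(G(x))). A tiny numeric check that this loss rewards fooling the discriminator:

```python
import math

def bce(pred, target):
    """Binary cross-entropy for a single probability prediction."""
    eps = 1e-12
    return -(target * math.log(pred + eps)
             + (1 - target) * math.log(1 - pred + eps))

# generator loss with target=True is -log(D(fake)):
rejected = bce(0.1, 1.0)  # D confidently calls the fake "fake": high loss
fooled = bce(0.9, 1.0)    # D is fooled: low loss
print(rejected, fooled)
```

So the `True` target is not wrong labeling; it is the direction the generator's gradient should push D's output for its fakes.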
| closed | 2020-04-23T08:50:38Z | 2020-04-24T09:53:25Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1001 | [] | zwep | 1 |
HIT-SCIR/ltp | nlp | 366 | Import error, how to fix? Thanks | Error:
```
from ltp import LTP
File "/opt/conda/envs/env/lib/python3.6/site-packages/ltp/__init__.py", line 7, in <module>
from .data import Dataset
File "/opt/conda/envs/env/lib/python3.6/site-packages/ltp/data/__init__.py", line 7, in <module>
from .fields import Field
File "/opt/conda/envs/env/lib/python3.6/site-packages/ltp/data/fields/__init__.py", line 14, in <module>
class Field(Generic[DataArray], metaclass=Registrable):
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
```
How can I fix this? Thanks | closed | 2020-06-17T11:49:02Z | 2020-06-18T03:38:20Z | https://github.com/HIT-SCIR/ltp/issues/366 | [] | MrRace | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,847 | Onion site not reachable | ### What version of GlobaLeaks are you using?
GlobaLeaks version: 4.13.18
Database version: 66
OS: Ubuntu 22.04.3
### What browser(s) are you seeing the problem on?
_No response_
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
The onion site is down and has been for several weeks. The GL application talks to the Tor socket, so this appears to be an application issue. There are no logs of any sort, so I have no idea what the issue could be.
Brought this to your attention here since [apparently the discussion board goes unanswered](https://github.com/orgs/globaleaks/discussions/3814)
### Proposed solution
Well. Restarting GL, Tor, and the entire server does nothing, so fuck if I know what the issue is. Probably the code. Maybe add some logging so we can debug ourselves and then also fix it.
"T: Bug",
"Triage"
] | brassy-endomorph | 20 |
horovod/horovod | machine-learning | 3,281 | Containerized horovod | Hi all,
I have a problem running horovod using containerized environment.
I'm running it on the host and trying to run on one single machine first:
```
horovodrun -np 4 -H localhost:4 python keras_mnist_advanced.py
2021-11-18 00:12:14.851827: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-11-18 00:12:17.677706: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,2]<stderr>:2021-11-18 00:12:17.677699: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-11-18 00:12:17.678337: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,3]<stderr>:2021-11-18 00:12:17.715725: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:Traceback (most recent call last):
[1,1]<stderr>: File "keras_mnist_advanced.py", line 3, in <module>
[1,1]<stderr>: import keras
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/__init__.py", line 20, in <module>
[1,1]<stderr>: from . import initializers
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 124, in <module>
[1,1]<stderr>: populate_deserializable_objects()
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
[1,1]<stderr>: generic_utils.populate_dict_with_module_objects(
[1,1]<stderr>:AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
[1,2]<stderr>:Traceback (most recent call last):
[1,2]<stderr>: File "keras_mnist_advanced.py", line 3, in <module>
[1,2]<stderr>: import keras
[1,2]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/__init__.py", line 20, in <module>
[1,2]<stderr>: from . import initializers
[1,2]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 124, in <module>
[1,2]<stderr>: populate_deserializable_objects()
[1,2]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
[1,2]<stderr>: generic_utils.populate_dict_with_module_objects(
[1,2]<stderr>:AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
[1,0]<stderr>:Traceback (most recent call last):
[1,0]<stderr>: File "keras_mnist_advanced.py", line 3, in <module>
[1,0]<stderr>: import keras
[1,0]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/__init__.py", line 20, in <module>
[1,0]<stderr>: from . import initializers
[1,0]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 124, in <module>
[1,0]<stderr>: populate_deserializable_objects()
[1,0]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
[1,0]<stderr>: generic_utils.populate_dict_with_module_objects(
[1,0]<stderr>:AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
[1,3]<stderr>:Traceback (most recent call last):
[1,3]<stderr>: File "keras_mnist_advanced.py", line 3, in <module>
[1,3]<stderr>: import keras
[1,3]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/__init__.py", line 20, in <module>
[1,3]<stderr>: from . import initializers
[1,3]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 124, in <module>
[1,3]<stderr>: populate_deserializable_objects()
[1,3]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
[1,3]<stderr>: generic_utils.populate_dict_with_module_objects(
[1,3]<stderr>:AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[44863,1],2]
Exit code: 1
--------------------------------------------------------------------------
```
Any idea on how to fix this?
| closed | 2021-11-18T00:13:28Z | 2021-11-18T17:09:04Z | https://github.com/horovod/horovod/issues/3281 | [] | dimanzt | 1 |
pytest-dev/pytest-html | pytest | 536 | How to add additional column in pytest html report | I need help with pytest-html report customization. I want to print failed network request status codes (per test case) in the report, so I wrote the code below. The StatusCode column is created successfully, but no data shows up in the HTML report, and the per-test-case statuscode row does not appear either.
`Conftest.py`:

```python
@pytest.mark.optionalhook
def pytest_html_results_table_header(cells):
    cells.append(html.th('Statuscode'))


@pytest.mark.optionalhook
def pytest_html_result_table_row(report, cells):
    cells.append(html.td(report.statuscode))


def pytest_runtest_makereport(item):
    """
    Extends the PyTest Plugin to take and embed screenshot in html report, whenever test fails.
    :param item:
    """
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    setattr(report, "duration_formatter", "%H:%M:%S.%f")
    extra = getattr(report, 'extra', [])
    statuscode = []
    if report.when == 'call' or report.when == "setup":
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            file_name = report.nodeid.replace("::", "_") + ".png"
            _capture_screenshot(file_name)
            if file_name:
                html = '<div><img src="%s" alt="screenshot" style="width:304px;height:228px;" ' \
                       'onclick="window.open(this.src)" align="right"/></div>' % file_name
                extra.append(pytest_html.extras.html(html))
        for request in driver.requests:
            if url in request.url and request.response.status_code >= 400 and request.response.status_code <= 512:
                statuscode.append(request.response.status_code)
        print("*********Status codes************", statuscode)
        report.statuscode = statuscode
    report.extra = extra
```
``` | closed | 2022-07-19T13:23:53Z | 2023-03-05T16:16:07Z | https://github.com/pytest-dev/pytest-html/issues/536 | [] | Alfeshani-Kachhot | 6 |
gradio-app/gradio | python | 9,975 | It isn't possible to disable the heading of a Label | ### Describe the bug
By default, `gr.Label` will always show the top class in an `h2` tag, even if the confidence for that class is <0.5. There doesn't seem to be any way to disable this.
See also https://discuss.huggingface.co/t/how-to-hide-first-label-in-label-component/58036
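A possible interim workaround sketch (mine; as far as I can tell gradio does not expose a switch for this, and the helper name is hypothetical): post-process the confidences before handing them to `gr.Label`, dropping entries below a threshold so a low-confidence top class is never promoted to the heading:

```python
def confident_labels(confidences, threshold=0.5):
    """Keep only classes at or above the threshold (may return {})."""
    return {k: v for k, v in confidences.items() if v >= threshold}

preds = {"cat": 0.42, "dog": 0.33, "bird": 0.25}
print(confident_labels(preds))         # {}: nothing to promote to a heading
print(confident_labels({"cat": 0.9}))  # {'cat': 0.9}
```

Whether `gr.Label` renders the heading for an empty dict would still need checking; this only controls what data reaches the component.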
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
See above.
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
N/A
```
### Severity
Blocking usage of gradio | closed | 2024-11-17T09:34:39Z | 2024-11-27T19:26:25Z | https://github.com/gradio-app/gradio/issues/9975 | [
"bug"
] | umarbutler | 1 |
dynaconf/dynaconf | flask | 379 | [bug] dict-like iteration seems broken (when using example from Home page) | **Describe the bug**
When I try to run the example from section https://www.dynaconf.com/#initialize-dynaconf-on-your-project ("Using Python only") and try to use the dict-like iteration from section https://www.dynaconf.com/#reading-settings-variables then the assertions pass but it returns a TypeError trying to iterate over the settings object.
**To Reproduce**
Run the following `dynaconf_settings.py` file in Python 3.7 (with Dynaconf 3.0.0) with the following `settings.toml` file (copied and edited from Home page).
1. Having the following folder structure
<details>
<summary> Project structure </summary>
```bash
$ tree -v
.
├── SCRIMMAGE_1.md
├── curio_examples.py
├── dynaconf_settings.py
├── ini2json.py
├── settings.toml
├── trio_cancel.py
├── trio_rib.py
└── watchservers.sh
0 directories, 8 files
```
</details>
2. Having the following config files:
<details>
<summary> Config files </summary>
**settings.toml**
```toml
key = "value"
a_boolean = false
number = 789 # had to edit this to match assertion above; previous value: 1234
a_float = 56.8
a_list = [1, 2, 3, 4]
a_dict = {hello="world"}
[a_dict.nested]
other_level = "nested value"
```
</details>
3. Having the following app code:
<details>
<summary> Code </summary>
**dynaconf_settings.py**
```python
from dynaconf import Dynaconf
settings = Dynaconf(
settings_files=["settings.toml"],
)
assert settings.key == "value"
assert settings.number == 789
assert settings.a_dict.nested.other_level == "nested value"
assert settings['a_boolean'] is False
assert settings.get("DONTEXIST", default=1) == 1
for key, value in settings: # dict like iteration
print(key, value)
```
</details>
4. Executing under the following environment
<details>
<summary> Execution </summary>
```bash
$ python3 --version
Python 3.7.7
$ python3 dynaconf_settings.py
Traceback (most recent call last):
File "dynaconf_settings.py", line 15, in <module>
for key, value in settings: # dict like iteration
File "/usr/local/lib/python3.7/site-packages/dynaconf/base.py", line 285, in __getitem__
value = self.get(item, default=empty)
File "/usr/local/lib/python3.7/site-packages/dynaconf/utils/parse_conf.py", line 195, in evaluate
value = f(settings, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/dynaconf/base.py", line 377, in get
if "." in key and dotted_lookup:
TypeError: argument of type 'int' is not iterable
```
</details>
**Expected behavior**
No exception thrown when looping over dictionary.
**Environment (please complete the following information):**
- OS: Mac OS 10.14
- Dynaconf Version 3.0.0 (via pip3)
| closed | 2020-07-30T17:18:34Z | 2020-08-06T17:51:53Z | https://github.com/dynaconf/dynaconf/issues/379 | [
"bug",
"Pending Release"
] | lilalinda | 1 |
axnsan12/drf-yasg | django | 295 | Model | closed | 2019-01-18T00:33:37Z | 2019-01-18T00:33:51Z | https://github.com/axnsan12/drf-yasg/issues/295 | [] | oneandonlyonebutyou | 0 | |
CanopyTax/asyncpgsa | sqlalchemy | 69 | Dependency on old version of asyncpg | When using `asyncpgsa` it's hard to use latest version of asyncpg, because there's `asyncpg~=0.12.0` in `install_requires`.
What's more, I'd like to use `asyncpgsa` only as an SQL query compiler, without using its context managers (as shown [here](http://asyncpgsa.readthedocs.io/en/latest/#compile)). In this case, `asyncpg` is in practice no longer a dependency of `asyncpgsa`; it's just a query compiler.
How about moving `asyncpg` to `extras_require`? Or splitting the library into 2 separate packages (compiler and asyncpg adapter (any better name?)). | closed | 2018-01-25T14:44:04Z | 2018-02-02T17:49:59Z | https://github.com/CanopyTax/asyncpgsa/issues/69 | [] | bitrut | 3 |
ivy-llc/ivy | tensorflow | 28,459 | Fix Ivy Failing Test: paddle - sorting.msort | To-do List: https://github.com/unifyai/ivy/issues/27501 | closed | 2024-03-01T10:01:31Z | 2024-03-08T11:16:09Z | https://github.com/ivy-llc/ivy/issues/28459 | [
"Sub Task"
] | MuhammadNizamani | 0 |
joeyespo/grip | flask | 100 | TypeError: Can't use a string pattern on a bytes-like object | When I try to export a file with the flags `--gfm --export`, I get the following error:
```
grip --gfm --export Kalender.md
Exporting to Kalender.html
Traceback (most recent call last):
File "/usr/local/bin/grip", line 9, in <module>
load_entry_point('grip==3.2.0', 'console_scripts', 'grip')()
File "/usr/local/lib/python3.4/dist-packages/grip/command.py", line 78, in main
True, args['<address>'])
File "/usr/local/lib/python3.4/dist-packages/grip/exporter.py", line 36, in export
render_offline, render_wide, render_inline)
File "/usr/local/lib/python3.4/dist-packages/grip/exporter.py", line 18, in render_page
return render_app(app)
File "/usr/local/lib/python3.4/dist-packages/grip/renderer.py", line 8, in render_app
response = c.get('/')
File "/usr/local/lib/python3.4/dist-packages/werkzeug/test.py", line 774, in get
return self.open(*args, **kw)
File "/usr/local/lib/python3.4/dist-packages/flask/testing.py", line 108, in open
follow_redirects=follow_redirects)
File "/usr/local/lib/python3.4/dist-packages/werkzeug/test.py", line 742, in open
response = self.run_wsgi_app(environ, buffered=buffered)
File "/usr/local/lib/python3.4/dist-packages/werkzeug/test.py", line 659, in run_wsgi_app
rv = run_wsgi_app(self.application, environ, buffered=buffered)
File "/usr/local/lib/python3.4/dist-packages/werkzeug/test.py", line 867, in run_wsgi_app
app_iter = app(environ, start_response)
File "/usr/local/lib/python3.4/dist-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib/python3.4/dist-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/usr/local/lib/python3.4/dist-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.4/dist-packages/flask/_compat.py", line 33, in reraise
raise value
File "/usr/local/lib/python3.4/dist-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.4/dist-packages/flask/app.py", line 1470, in full_dispatch_request
self.try_trigger_before_first_request_functions()
File "/usr/local/lib/python3.4/dist-packages/flask/app.py", line 1497, in try_trigger_before_first_request_functions
func()
File "/usr/local/lib/python3.4/dist-packages/grip/server.py", line 111, in retrieve_styles
app.config['STYLE_ASSET_URLS_INLINE']))
File "/usr/local/lib/python3.4/dist-packages/grip/server.py", line 304, in _get_styles
content = re.sub(asset_pattern, match_asset, _download(app, style_url))
File "/usr/lib/python3.4/re.py", line 175, in sub
return _compile(pattern, flags).sub(repl, string, count)
TypeError: can't use a string pattern on a bytes-like object
```
| closed | 2015-03-04T22:17:11Z | 2015-06-01T02:16:32Z | https://github.com/joeyespo/grip/issues/100 | [
"bug"
] | clawoflight | 5 |
plotly/dash | plotly | 3,230 | document type checking for Dash apps | we want `pyright dash` to produce few (or no) errors - we should add a note saying we are working toward this goal to the documentation, while also explaining that `pyright dash` does currently produce errors and that piecemal community contributions are very welcome. | closed | 2025-03-20T16:01:18Z | 2025-03-21T17:40:33Z | https://github.com/plotly/dash/issues/3230 | [
"documentation",
"P1"
] | gvwilson | 0 |
lepture/authlib | django | 238 | Allow Developers to use encrypted public and private keys for JWT | **Is your feature request related to a problem? Please describe.**
- When using `jwt.encode(header, payload, key)`, if the key is protected with a passphrase, an error is thrown. This is because when the classes `authlib.jose.rfc7518._backends._key_cryptography.RSAKey` and `authlib.jose.rfc7518._backends._key_cryptography.ECKey` are created, `cryptography.hazmat.primitives.serialization.load_pem_private_key` is called with the `password` argument set to `None`, and a developer using `authlib` is given no option to pass in a password.
**Describe the solution you'd like**
- Allow the developers to use password-protected keys


| closed | 2020-06-16T00:14:11Z | 2020-06-16T23:06:19Z | https://github.com/lepture/authlib/issues/238 | [
"feature request"
] | moswil | 2 |
nonebot/nonebot2 | fastapi | 3,303 | Plugin: nonebot-plugin-VividusFakeAI | ### PyPI project name
nonebot-plugin-vividusfakeai
### Plugin import package name
nonebot_plugin_VividusFakeAI
### Tags
[{"label":"调情","color":"#eded0b"},{"label":"群友","color":"#ed0b1f"},{"label":"Play","color":"#4eed0b"}]
### Plugin configuration
```dotenv
```
### Plugin test
- [ ] Check this box if the plugin test needs to be re-run | closed | 2025-02-06T15:35:49Z | 2025-02-10T12:35:38Z | https://github.com/nonebot/nonebot2/issues/3303 | [
"Plugin",
"Publish"
] | hlfzsi | 3 |
graphql-python/graphene | graphql | 905 | Django model with custom ModelManager causes GraphQL query to fail | Hi everyone,
I have narrowed down the problem. If I don't define a custom `objects` attribute on my model `Reference` queries in GraphQL are executed fine. But if I do, I get this response:
```
{
"errors": [
{
"message": "object of type 'ManyRelatedManager' has no len()",
"locations": [
{
"line": 7,
"column": 9
}
],
"path": [
"requests",
"edges",
0,
"node",
"references"
]
}
],
"data": {
"requests": {
"edges": [
{
"node": {
"id": "UmVxdWVzdFR5cGU6NDc1MjQ=",
"requestId": 182564,
"references": null
}
}
]
}
}
}
```
On the CLI the Django devel server prints `graphql.error.located_error.GraphQLLocatedError: object of type 'ManyRelatedManager' has no len()`
My `Reference` model is defined like:
```
class Reference(..., models.Model):
...
objects = ReferenceManager()
```
My manager `ReferenceManager` is defined this way:
```
class ExtendedQuerySet(QuerySet):
def filter_or_create(self, defaults=None, **kwargs):
...
class ReferenceQuerySet(ExtendedQuerySet):
def get_or_create(self, defaults=None, **kwargs):
...
class ReferenceManager(BaseManager.from_queryset(ReferenceQuerySet)):
pass
```
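One direction worth checking (sketched below with stand-in classes; none of this is real graphene or Django code): the error suggests something downstream calls `len()` on the related manager, so an explicit resolver that materializes the manager into a concrete list may sidestep it.

```python
class FakeRelatedManager:
    # Stand-in for Django's ManyRelatedManager: iterable, but has no __len__,
    # which is exactly what the "has no len()" error complains about.
    def __init__(self, items):
        self._items = list(items)

    def all(self):
        return list(self._items)

    def __iter__(self):
        return iter(self._items)


class FakeRequest:
    def __init__(self, references):
        self.references = references


def resolve_references(root, info=None):
    # Materialize the manager into a list before anything tries len(...) on it.
    return root.references.all()
```

With a real schema the analogous move would be defining a `resolve_references` on the object type that returns the queryset explicitly.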
I'm not really sure what to make of this situation. Maybe someone experienced with graphene can point me in the right direction for further investigation? | closed | 2019-02-19T15:34:23Z | 2019-02-20T08:37:05Z | https://github.com/graphql-python/graphene/issues/905 | [] | crazyscientist | 2 |
tensorflow/tensor2tensor | deep-learning | 1,210 | Shuffle buffer causes OOM error on CPU (1.10.0) | I noticed that with 1.10.0 a shuffle buffer get build up before training:
```
2018-11-09 11:48:04.525172: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 391 of 512
2018-11-09 11:48:14.233178: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 396 of 512
2018-11-09 11:48:29.700824: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 400 of 512
2018-11-09 11:48:33.617605: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 402 of 512
2018-11-09 11:48:50.017594: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 406 of 512
2018-11-09 11:48:56.350018: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 407 of 512
```
However, for one of my larger t2t problems this seems to cause an OOM error (CPU RAM). I am not sure whether this operation happened before 1.10.0, but in any case I'd like to avoid this OOM error.
Why is a shuffle buffer being built up, and can I disable it or at least control its size so that it fits into memory?
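For intuition, here is a plain-Python sketch of what a tf.data-style shuffle buffer does (illustrative only, not t2t or TensorFlow code): the buffer holds up to `buffer_size` elements in memory at once, which is why a large buffer can exhaust host RAM.

```python
import random

def buffered_shuffle(stream, buffer_size, seed=0):
    # Keep at most `buffer_size` items in memory; once the buffer is full,
    # yield a randomly chosen item for each new one read. Memory use is
    # proportional to buffer_size, independent of the stream length.
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) >= buffer_size:
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]
            yield buf.pop()
    rng.shuffle(buf)
    yield from buf
```

A smaller buffer trades shuffle quality for memory, which is presumably the knob to look for in the input pipeline.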
----
Error output:
```
2018-11-09 11:49:16.324220: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 413 of 512
2018-11-09 11:49:25.588304: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 415 of 512
2018-11-09 11:49:33.819391: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:97] Filling up shuffle buffer (this may take a while): 419 of 512
./train.sh: line 96: 712 Killed t2t-trainer --generate_data --t2t_usr_dir=$USER_DIR --worker_gpu=$WORKER_GPU --data_dir=$DATA_DIR --tmp_dir=$TMP_DIR --problem=$PROBLEM --model=$MODEL --hparams_set=$HPARAMS --output_dir=$TRAIN_DIR --train_steps=50000000 --save_checkpoints_secs=3600 --keep_checkpoint_max=5
``` | closed | 2018-11-09T10:57:18Z | 2019-02-13T09:00:03Z | https://github.com/tensorflow/tensor2tensor/issues/1210 | [] | stefan-falk | 7 |
jina-ai/serve | deep-learning | 6,089 | Update the Twitter Logo. | **Describe your proposal/problem**
In the footer of the docs, we are still using the old logo of Twitter. https://docs.jina.ai/
---
**Screenshots**

**Solution**
Update the Twitter logo to the latest logo (X).
| closed | 2023-10-18T18:32:57Z | 2024-04-12T00:17:46Z | https://github.com/jina-ai/serve/issues/6089 | [
"Stale"
] | niranjan-kurhade | 20 |
ultrafunkamsterdam/undetected-chromedriver | automation | 905 | hCaptcha | **Key issue:**
I'm trying to log into a site that's protected by hCaptcha. I'm able to log in on chrome normally, and I'm not asked to complete a captcha at all. When using selenium I'm asked to complete a captcha, which I do manually, and then receive an error message: _Login failed, please try again._
Has anyone had success bypassing hCaptcha with undetected-chromedriver?
| open | 2022-11-17T16:38:36Z | 2023-05-20T11:22:18Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/905 | [] | jab2727 | 3 |
deepspeedai/DeepSpeed | machine-learning | 6,653 | Do I need to install the apex library to enhance the performance of deepspeed under mixed precision training? | Do I need to install the apex library to enhance the performance of deepspeed under mixed precision training? | closed | 2024-10-23T02:01:22Z | 2024-12-25T09:20:00Z | https://github.com/deepspeedai/DeepSpeed/issues/6653 | [] | yangtian6781 | 2 |
scikit-multilearn/scikit-multilearn | scikit-learn | 135 | Weka wrapper issue | Hello everyone,
I am trying to run the MEKA wrapper from my Python code using skmultilearn. I am following the code in section 4.2 of http://scikit.ml/meka.html step by step. However, I got this error:
File "C:\Users\ferna\Anaconda3\lib\site-packages\skmultilearn\ext\meka.py", line 374, in _parse_output
predictions = self.output_.split(predictions_split_head)[1].split(
IndexError: list index out of range
I have tried the code on three different machines and the error persists. You can find it in the attached figure.

What is wrong?
| closed | 2018-12-02T17:09:52Z | 2019-01-11T09:43:25Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/135 | [] | FernandoSaez95 | 10 |
deeppavlov/DeepPavlov | nlp | 866 | ODQA answers from ru_odqa_infer_wiki and from demo.ipavlov.ai mismatch | Hi!
When I ask russian ODQA at demo.ipavlov.ai it gives me more or less relevant answers.
But when I try to use russian ODQA, using python -m deeppavlov interact ru_odqa_infer_wiki -d it gives irrelevant answers. For example:
```
question_raw::кто такой Владимир Путин
>> 29 %
question_raw::Рим это столица чего?
>> Atelier des båtisseurs
question_raw::Рим - это солица
>> лембос
question_raw::Как отводятся излишки тепла у млекопитающих?
>> Leptoptilos robustus
```
What is the problem? | closed | 2019-06-03T06:28:19Z | 2019-06-04T07:25:54Z | https://github.com/deeppavlov/DeepPavlov/issues/866 | [] | vitalyuf | 2 |
pytest-dev/pytest-flask | pytest | 23 | use monkeypatch fixture to set app config servername | closed | 2015-02-24T16:59:00Z | 2015-02-26T14:10:58Z | https://github.com/pytest-dev/pytest-flask/issues/23 | [] | RonnyPfannschmidt | 1 | |
pytorch/pytorch | machine-learning | 149,829 | TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support | The warning message
> /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/Context.cpp:148.)
> torch._C._set_onednn_allow_tf32(_allow_tf32)
has been triggering with a normal CPU installation of PyTorch from PyPI, which is annoying, and it is unclear what the user needs to do. Would it make sense to suppress or improve this warning? (For 2.7 as well)
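As a user-side stopgap in the meantime, the warning can presumably be filtered with the standard `warnings` machinery (the message pattern below is copied from the warning text quoted above; this is a workaround, not a fix):

```python
import warnings

def silence_tf32_onednn_warning():
    # Ignore only the specific UserWarning quoted in this report;
    # all other warnings are left untouched.
    warnings.filterwarnings(
        "ignore",
        message=r".*TF32 acceleration on top of oneDNN.*",
        category=UserWarning,
    )
```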
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @albanD @malfet | open | 2025-03-23T15:31:50Z | 2025-03-24T18:42:04Z | https://github.com/pytorch/pytorch/issues/149829 | [
"module: cpu",
"triaged",
"module: python frontend"
] | justinchuby | 2 |
plotly/dash-table | plotly | 522 | Validate React version of standalone table against Dash React version | First mentioned in https://github.com/plotly/dash-table/pull/508#discussion_r308946435.
Making sure the versions are aligned would ensure the table is always truly tested against the expected version.
Since the table generation needs to happen against the latest version of Dash and is part of the normal development process, it should be possible to retrieve the version in dash-renderer and compare it with the version referenced in the standalone `index.htm`. | open | 2019-07-30T23:42:53Z | 2020-01-30T20:47:32Z | https://github.com/plotly/dash-table/issues/522 | [
"dash-type-maintenance",
"dash-stage-revision_needed"
] | Marc-Andre-Rivet | 0 |
davidsandberg/facenet | tensorflow | 400 | Detecting multiple faces in one image using compare.py | Hi,
I was working with the compare.py code. I am wondering how I can get the bounding box coordinates of the multiple people detected in a single image. A single picture may contain more than one face.
But I see that only one face is detected in an image with more than one person? | closed | 2017-07-28T08:09:21Z | 2017-10-21T11:18:16Z | https://github.com/davidsandberg/facenet/issues/400 | [] | surajitsaikia27 | 2 |
slackapi/bolt-python | fastapi | 753 | Running a function on app start? | I am making a slack app the makes polls. We're using redis to keep track of polls in progress so they persist through a restart of the app (server goes down, code is published, etc.). Because polls have an amount of time they are allowed to run for I want to close the polls that should have ended while the bot was down as well as re-add the currently running ones to threads to they can end at the appropriate time. Is there a way to run the given function when the app starts? I feel like this is super easy and I'm just making it more difficult than it is. Thanks!
### Reproducible in:
#### The `slack_bolt` version
slack-bolt==1.15.1
slack-sdk==3.19.1
#### Python runtime version
Python 3.9.13
#### Steps to reproduce:
```python
def cleanup():
if r.exists('polls'):
polls = json.loads(r.get('polls'))
for index, poll in enumerate(polls):
timer = poll['timer']
end_time = int(poll['timestamp']) + (int(timer) * 60)
print(end_time)
print(time.time())
if end_time > time.time():
endPoll(WebClient, index)
polls.pop(index, None)
r.set('polls', json.dumps(polls))
else:
add(WebClient, index, int(end_time - time.time()))
```
### Expected result:
Run the given function when the app starts
### Actual result:
Trying to bind it to app.start predictably throws an error as it overrides the function
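For reference, since `app.start()` blocks, one common pattern is simply to call the startup function yourself right before starting the app, rather than hooking into Bolt. Sketched below with a stand-in `App` class (hypothetical, not the real `slack_bolt.App`):

```python
startup_log = []

class App:
    # Stand-in for slack_bolt.App; the real start() blocks serving events,
    # here it just returns so the sketch is runnable.
    def start(self, port):
        return f"listening on {port}"

def cleanup():
    # One-time startup work (e.g. closing polls that expired while down).
    startup_log.append("cleanup ran")

app = App()

def main():
    cleanup()                  # runs exactly once, before the server starts
    return app.start(port=3000)
```

With the real library the shape is the same: do the one-time work first, then call `app.start(...)`.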
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2022-11-01T15:09:49Z | 2022-11-02T15:11:29Z | https://github.com/slackapi/bolt-python/issues/753 | [
"question"
] | IronicTrout | 4 |
amidaware/tacticalrmm | django | 2,075 | Usage Monitoring Endpoint via Docker / MON_TOKEN via (docker-) environment variable | Hi is it possible to use active the monitor feature like descripted here:
[https://docs.tacticalrmm.com/tipsntricks/#monitor-your-trmm-instance-via-the-built-in-monitoring-endpoint](url)
for the docker stack?
I have tried setting the environment variable in several places in docker-compose.yml, but it does not seem to be recognized.
Thank you. | closed | 2024-11-22T14:56:45Z | 2024-11-22T18:52:07Z | https://github.com/amidaware/tacticalrmm/issues/2075 | [] | tobfel | 1 |
gradio-app/gradio | data-science | 10,045 | Feature request: Allow conversation retention & multiple conversations in `gr.ChatInterface` | - [ X ] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
Chatbots created using `gr.ChatInterface` currently do not retain conversation histories when the web page is closed. Each time a user closes and reopens the web page, the conversation history is lost. This can be frustrating in real-world scenarios where users may want to ask follow-up questions periodically.
Additionally, users are currently limited to creating a single conversation, which is inconvenient. Different topics may require separate conversations, and users need the ability to manage multiple conversations simultaneously.
**Describe the solution you'd like**
It would be similar to many existing chatbot applications that retain conversation histories using front-end caching. As long as users do not clear their browser history, the conversation histories will remain intact.

| closed | 2024-11-27T02:26:19Z | 2025-02-28T04:45:56Z | https://github.com/gradio-app/gradio/issues/10045 | [
"enhancement"
] | jamie0725 | 2 |
mckinsey/vizro | data-visualization | 682 | consider a default dash theme on top of vizro | ### Which package?
None
### What's the problem this feature will solve?
Currently I cannot use Vizro CSS with custom Dash pages. It would be good if Vizro had a 'vanilla' option to just use Dash CSS (including the Bootstrap styles), so I could use Vizro like a constructor. I know there's a custom static CSS folder, but it is too much work to include the Dash configs there.
### Describe the solution you'd like
For example: 'light-theme', 'dark-theme', 'dash-theme'.
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-09-04T23:13:49Z | 2024-09-06T12:22:56Z | https://github.com/mckinsey/vizro/issues/682 | [
"Feature Request :nerd_face:",
"Needs triage :mag:"
] | vks2 | 2 |
onnx/onnx | scikit-learn | 6,757 | Support more dtypes in Range | https://onnx.ai/onnx/operators/onnx__Range.html currently doesn't support float16, bfloat16 etc. We should add dtypes for this op. | open | 2025-03-03T20:29:17Z | 2025-03-03T23:51:44Z | https://github.com/onnx/onnx/issues/6757 | [
"topic: operator"
] | justinchuby | 0 |
akurgat/automating-technical-analysis | plotly | 13 | Over resource limits on Streamlit Cloud | Hey there :wave: Just wanted to let you know that [your app on Streamlit Cloud deployed from this repo](https://akurgat-automating-technical-analysis-trade-qn1uzx.streamlit.app/akurgat/automating-technical-analysis/Trade.py) has gone over its resource limits. Access to the app is temporarily limited. Visit the app to see more details and possible solutions. | closed | 2023-06-15T07:49:58Z | 2023-06-22T08:50:45Z | https://github.com/akurgat/automating-technical-analysis/issues/13 | [] | nitek29 | 2 |
pytorch/vision | computer-vision | 8,880 | `alpha` argument of `ElasticTransform()` should completely avoid negative values, giving error and the doc should have the explanation. | ### 📚 The doc issue
Setting the `alpha` argument of [ElasticTransform()](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.ElasticTransform.html) to `1000` and to `-1000` produces the same kind of results, as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import ElasticTransform
my_data = OxfordIIITPet(
root="data"
)
import matplotlib.pyplot as plt
def show_images2(data, main_title=None, a=50, s=5, f=0):
plt.figure(figsize=(10, 5))
plt.suptitle(t=main_title, y=0.8, fontsize=14)
for i, (im, _) in zip(range(1, 6), data):
plt.subplot(1, 5, i)
et = ElasticTransform(alpha=a, sigma=s, fill=f) # Here
plt.imshow(X=et(im)) # Here
plt.xticks(ticks=[])
plt.yticks(ticks=[])
plt.tight_layout()
plt.show()
show_images2(data=my_data, main_title="alpha1000_data", a=1000) # Here
show_images2(data=my_data, main_title="alpha-1000_data", a=-1000) # Here
```


### Suggest a potential alternative/fix
So, the `alpha` argument should reject negative values outright, raising an error, and [the doc](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.ElasticTransform.html) should explain this. | open | 2025-01-24T03:39:12Z | 2025-02-19T13:39:41Z | https://github.com/pytorch/vision/issues/8880 | [] | hyperkai | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 307 | A question about the fine-tuning template | chinese-alpaca uses the original Stanford Alpaca template without the input field, which is in English. But the corpus is Chinese; doesn't that create a gap?
Would a Chinese template work better? In other words, should the fine-tuning template be translated into Chinese?
The template used by chinese-alpaca is as follows:

Which should I use for my subsequent fine-tuning, a Chinese or an English template?
| closed | 2023-05-11T05:06:59Z | 2023-05-11T05:39:57Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/307 | [] | yuanzhiyong1999 | 2 |
pallets/flask | python | 5,434 | Starter example results in 404 error | The basic example from the readme / flask docs throws a 404 error instead of returning the Hello World message
If you run the code from the example:
```
# save this as app.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello, World!"
```
and then open 127.0.0.1:5000
Instead of getting the Hello World message, you get a 404.
When retrying, the app exits with code 0 instead of launching the server.
Environment:
- Python version: 3.12
- Flask version: 3.0.2
| closed | 2024-03-08T19:28:48Z | 2024-03-25T00:06:30Z | https://github.com/pallets/flask/issues/5434 | [] | gwilku | 4 |
dfm/corner.py | data-visualization | 281 | Installation error: No matching distribution found for setuptools>=62.0 | Hello,
I am installing corner as a dependency for [PHOEBE](https://phoebe-project.org/install#source). I face an error in installing corner -- pip and conda can't find and install corner nor corner.py. I try to install from the git source, and face this kind of error when running `python -m pip install .` inside the corner directory:
```
...
ERROR: Could not find a version that satisfies the requirement setuptools>=62.0 (from versions: none)
ERROR: No matching distribution found for setuptools>=62.0
...
```
I have installed setuptools version 75.1.0, and this error still appears.
Is there any way to solve this installation problem? | closed | 2024-11-18T07:41:36Z | 2024-11-23T11:45:12Z | https://github.com/dfm/corner.py/issues/281 | [] | aliyyanurr | 0 |
jupyter-incubator/sparkmagic | jupyter | 401 | Adding default CSRF header as a good security practice . | It is no harm to set X-Requested-By when csrf protection is disabled. This will help user experience so when livy CsrfFilter check for "X-Requested-By" header it doesn't return a ""Missing Required Header for CSRF protection."
Check the usage of CSRF headers at owasp.
The main idea is to check the presence of a custom header (agreed-upon between the server and a client – e.g. X-CSRF or X-Requested-By) in all state-changing requests coming from the client. | closed | 2017-08-23T23:02:48Z | 2017-09-16T23:20:07Z | https://github.com/jupyter-incubator/sparkmagic/issues/401 | [] | JeffRodriguez | 7 |
OFA-Sys/Chinese-CLIP | computer-vision | 147 | Model testing | How do I test my fine-tuned weights? I tried to evaluate my fine-tuned weights with the following script:

but I get the following error:

How can I resolve this? | closed | 2023-06-25T07:26:19Z | 2023-07-17T07:12:23Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/147 | [] | Duyz232 | 1 |
ray-project/ray | deep-learning | 51,207 | [Data] Adding streaming capability for `ray.data.Dataset.unique` | ### Description
The current [doc](https://docs.ray.io/en/latest/data/api/doc/ray.data.Dataset.unique.html) indicates that `ray.data.Dataset.unique` is a blocking operation: **_This operation requires all inputs to be materialized in object store for it to execute._**.
But I presume, conceptually, it's possible to implement a streaming one: keep a record of "seen" values and drop an entry when its value is already in the "seen" collection.
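As a plain-Python sketch of that idea (conceptual only; nothing here is Ray API):

```python
def stream_unique(rows, column):
    # Yield each row the first time its value in `column` is seen,
    # keeping only the set of seen values in memory, not the rows.
    seen = set()
    for row in rows:
        value = row[column]
        if value not in seen:
            seen.add(value)
            yield row
```

Memory still grows with the number of distinct values, so truly unbounded streams would need an approximate structure (e.g. a Bloom filter) or a shuffle-by-key step.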
### Use case
A streaming `unique` function will be very useful when the amount of data is too large to be materialized. | open | 2025-03-10T05:26:33Z | 2025-03-13T12:37:57Z | https://github.com/ray-project/ray/issues/51207 | [
"enhancement",
"triage",
"data"
] | marcmk6 | 7 |
allure-framework/allure-python | pytest | 86 | Provide 'Host' and 'Tread' labels | #### I'm submitting a ...
- [ ] bug report
- [X] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
#### What is the expected behavior?
#### What is the motivation / use case for changing the behavior?
#### Please tell us about your environment:
- Allure version: 2.1.0
- Test framework: pytest@3.0
- Allure adaptor: allure-pytest@2.0.0b1
#### Other information
| closed | 2017-07-08T14:22:16Z | 2017-10-17T15:20:41Z | https://github.com/allure-framework/allure-python/issues/86 | [] | sseliverstov | 0 |
benbusby/whoogle-search | flask | 926 | [BUG] Services defined in WHOOGLE_ALT_<> that start with https:// or http:// are prepended with "//" | **Describe the bug**
Services defined in WHOOGLE_ALT_<> that start with https:// or http:// are prepended with "//". This sometimes causes issues on my browser when trying to hit the alt sites.
**To Reproduce**
Steps to reproduce the behavior:
1. Set the Reddit alt site variable to a http://<site> value
2. Run the application
3. Go to the main page
4. Enable alt site replacements
5. Search "Reddit buildapcsales"
6. See error (despite the URL looking accurate in the source, it links to "http//<site>", missing a ":")
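A guard of the kind the fix presumably needs (illustrative only, not Whoogle's actual code): leave absolute or scheme-relative URLs alone and only prefix bare hosts:

```python
def alt_site_url(site):
    # Already has a scheme or is scheme-relative: use as-is, never prepend "//".
    if site.startswith(("http://", "https://", "//")):
        return site
    # Bare host: make it scheme-relative.
    return "//" + site
```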
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [x] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: Linux Mint 21.1
- Browser: Firefox | closed | 2023-01-04T13:46:59Z | 2023-01-04T17:10:33Z | https://github.com/benbusby/whoogle-search/issues/926 | [
"bug"
] | cazwacki | 1 |
biolab/orange3 | numpy | 6,700 | show help not working | **What's wrong?**
The "Show Help" function of widgets isn't working properly, apart from the preview: the "More" option in the "Show Help" preview box doesn't open when clicked.
When opening a widget, clicking its "Show Help" icon does nothing either.
Uploading Screen Recording 2024-01-09 at 15.09.14.mov…
**How can we reproduce the problem?**
Place any widget on the canvas, click on it, click on the orange "Show Help" icon at the bottom left of the window. Click on the option "More" in the "Show Help" preview window. Nothing happens.
Place any widget on canvas, double click on it, in the new window, click on the "Show Help" icon bottom left. Nothing happens.
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system: Mac
- Orange version: 3.36.1
- How you installed Orange: from DMG
| closed | 2024-01-09T14:22:54Z | 2024-01-18T15:11:34Z | https://github.com/biolab/orange3/issues/6700 | [
"bug report"
] | erikafuna | 4 |
jacobgil/pytorch-grad-cam | computer-vision | 468 | How to separately visualize heatmaps for classification tasks and localization tasks in object detection | Excuse me, how should I visualize the heatmaps of classification tasks and localization tasks in object detection respectively? Can you give me some ideas?
Thanks!!! | open | 2023-12-05T08:28:59Z | 2023-12-05T08:28:59Z | https://github.com/jacobgil/pytorch-grad-cam/issues/468 | [] | a1b2c3s4d4 | 0 |
d2l-ai/d2l-en | machine-learning | 1,930 | Remember a user's framework selection? | Hello! I'm wondering if it would be a nice convenience for the user if the website remembered their preferred framework (mxnet, pytorch, tensorflow) across pages or even across sessions. I would be happy to contribute if this is a good idea.
I'm not sure how the routing is done, but maybe there would be a way to save some local state in JS or in the URL within a session. Or this could be done across sessions through cookies or local storage (but tbh I don't like the idea of having to add a GDPR cookie warning).
Keep up the awesome work!! | open | 2021-10-16T22:17:13Z | 2021-10-25T12:06:50Z | https://github.com/d2l-ai/d2l-en/issues/1930 | [] | harryli0088 | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 649 | Reduction type for NTXentLoss should be "element" | closed | 2023-07-12T04:19:52Z | 2023-07-12T04:27:39Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/649 | [
"enhancement"
] | KevinMusgrave | 1 | |
huggingface/transformers | deep-learning | 36,564 | Add support for StableAdamW optimizer in Trainer | ### Feature request
StableAdamW is an optimizer first introduced in [Stable and low-precision training for large-scale vision-language models](https://arxiv.org/pdf/2304.13013), an AdamW and AdaFactor hybrid optimizer, leading to more stable training. Most notably, however, it has been used in the [modernBERT paper](https://arxiv.org/pdf/2412.13663):
> StableAdamW’s learning rate clipping outperformed standard gradient clipping on downstream tasks and led to more stable training
It would be great if this were available as an optimizer in `Trainer`!
### Motivation
More models in the future may use StableAdamW because of its success in training modernBERT, and having it as an option in `Trainer` (as `optim` in `TrainingArguments`) would be convenient.
### Your contribution
I'm interested to contribute! The modernBERT paper uses the implementation from [optimi](https://github.com/warner-benjamin/optimi), which can be added as an import. I'd love to submit a PR. | open | 2025-03-05T15:14:19Z | 2025-03-06T10:38:17Z | https://github.com/huggingface/transformers/issues/36564 | [
"Feature request"
] | capemox | 2 |
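For intuition, here is a rough pure-Python sketch of the idea behind StableAdamW as described in the paper (an illustration only, not optimi's implementation): AdamW whose step size is clipped, AdaFactor-style, by the RMS of g²/v. The hyperparameter defaults below are my own assumptions:

```python
import math

def stable_adamw_step(params, grads, state, lr=0.01, betas=(0.9, 0.99),
                      eps=1e-8, weight_decay=0.0):
    """One illustrative StableAdamW update over a flat list of floats."""
    m, v, t = state["m"], state["v"], state["t"] + 1
    state["t"] = t
    b1, b2 = betas
    # Update biased first/second moments, then compute the tensor-level
    # RMS of g^2 / v used to clip the learning rate (the "stable" part).
    rms_acc = 0.0
    for i, g in enumerate(grads):
        m[i] = b1 * m[i] + (1 - b1) * g
        v[i] = b2 * v[i] + (1 - b2) * g * g
        rms_acc += (g * g) / max(v[i], eps)
    rms = math.sqrt(rms_acc / len(grads))
    lr_t = lr / max(1.0, rms)  # shrink the step when gradients look unstable
    for i in range(len(params)):
        m_hat = m[i] / (1 - b1 ** t)   # bias-corrected first moment
        v_hat = v[i] / (1 - b2 ** t)   # bias-corrected second moment
        params[i] -= lr_t * (m_hat / (math.sqrt(v_hat) + eps)
                             + weight_decay * params[i])
    return params

# Minimize f(x) = x^2 for a few steps to see it converge
params = [5.0]
state = {"m": [0.0], "v": [0.0], "t": 0}
for _ in range(200):
    grads = [2 * params[0]]
    stable_adamw_step(params, grads, state, lr=0.1)
print(round(params[0], 3))
```

Running it on f(x) = x² shows the clipped learning rate staying small while v is still warming up, then approaching the nominal lr.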
PokeAPI/pokeapi | graphql | 331 | Missing location_area_encounters | It appears that at least a few pokemon (seemingly many) are missing entries in their `location_area_encounters` endpoint.
For instance, [Dratini's endpoint](https://pokeapi.co/api/v2/pokemon/147/encounters) does not list silver or gold, though it's definitely catchable in [Dragon's Den](https://bulbapedia.bulbagarden.net/wiki/Dragon%27s_Den) (there may be other locations as well, I'm not sure).
I randomly spot-checked a few others - [Girafarig](https://pokeapi.co/api/v2/pokemon/203/encounters) and [Jynx](https://pokeapi.co/api/v2/pokemon/124/encounters). Gold and silver are also conspicuously absent from their URLs. Looking even more closely, red and blue are missing entirely from Jynx's results, and only point to the Safari Zone for Dratini.
PS. There are some entries for "heartgold", but that's a different set of version details.
| closed | 2018-04-05T01:44:19Z | 2018-04-07T08:54:22Z | https://github.com/PokeAPI/pokeapi/issues/331 | [
"veekun"
] | jrubinator | 3 |
gevent/gevent | asyncio | 1,314 | test__util.TestAssertSwitches.test_time_sleep flaky on windows | * gevent version: master
* Python version: multiple
* Operating System: Windows/libuv
Seen with multiple versions of python.
[log](https://ci.appveyor.com/project/denik/gevent/builds/20250150/job/1is5rml33wuew5ee#L506)
```
C:\Python36-x64\python.exe -u -mgevent.tests.test__util
507 ...F.....
508 ======================================================================
509 FAIL: test_time_sleep (__main__.TestAssertSwitches)
510 ----------------------------------------------------------------------
511 Traceback (most recent call last):
512 File "C:\Python36-x64\lib\site-packages\gevent\tests\test__util.py", line 241, in test_time_sleep
513 sleep(0)
514 File "C:\Python36-x64\lib\site-packages\gevent\util.py", line 588, in __exit__
515 raise _FailedToSwitch('\n'.join(report_lines))
516 gevent.util._FailedToSwitch
``` | closed | 2018-11-13T11:23:48Z | 2018-11-14T11:57:17Z | https://github.com/gevent/gevent/issues/1314 | [
"Platform: Windows",
"Loop: libuv"
] | jamadden | 0 |
tox-dev/tox | automation | 2,455 | Multiple paths of test files can be set in an env variable, but "\" for a new line is not working | When submitting a bug make sure you can reproduce it via ``tox -rvv`` and attach the output of that to the bug. Ideally, you should also submit a project that allows easily reproducing the bug. Thanks!
I am setting my pytest file paths in an env variable per tox environment: if `test-functional_api` is part of my tox environment as a factor, then the `TEST_PATH` variable is set to the pytest folders/dirs that need to be tested.
The issue is that there are many files and the folder structure varies, so I need to put several Python folders/files in one `TEST_PATH`. I want to place each folder on a new line, but that gives an error, and that's the issue:
```
setenv =
test-functional_api: TEST_PATH= {toxinidir}/pytests/functional/core \
{toxinidir}/pytests/unit/api
```
Here the backslash is not working (it throws an ini-file-related error), nor can I write each path on a separate line, which throws a Python split error:
```
name, rest = line.split('=', 1)
ValueError: not enough values to unpack (expected 2, got 1)
```
Can anyone look into this issue? I have a long list of files and I don't want to write them all on one line, but a new line in the env variable is not working. | closed | 2022-07-04T19:27:39Z | 2022-07-04T21:09:23Z | https://github.com/tox-dev/tox/issues/2455 | [
"bug:normal"
] | rahulmukherjee68 | 3 |
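For what it's worth, `setenv` treats every line as its own `KEY=value` pair, which is why both the trailing backslash and a bare continuation line fail: the continuation line has no `=`, hence the `split('=', 1)` error. One possible workaround (an untested sketch) is to keep all paths on a single line, space-separated, since pytest accepts multiple path arguments:

```ini
setenv =
    test-functional_api: TEST_PATH={toxinidir}/pytests/functional/core {toxinidir}/pytests/unit/api
```

Whether `{env:TEST_PATH}` is then word-split into separate arguments in `commands` depends on the tox version, so verify with `tox -rvv`.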
ageitgey/face_recognition | machine-learning | 1,105 | face_recognitions.face_locations is too slow | * face_recognition version: 1.3.0
* Python version: 3.7.7
* Operating System: Linux Mint 19.3 Cinnamon
### Description
I have code which works correctly, so there should not be any mistakes; the only problem is that it is too slow. Processing a frame (either an image in the unknown_faces dir or the video feed) takes about 20-30 seconds.
### What I Did
I am getting the image from the video feed (my laptop webcam, nothing fancy).
I inserted some `print()`s to determine where exactly the code slows down, and I found that:
`locations = face_recognition.face_locations(image, model=MODEL)`
the delay occurs during the execution of this line.
Is this normal?
`MODEL = "cnn"`, by the way.
| open | 2020-04-03T20:45:28Z | 2020-04-18T09:36:38Z | https://github.com/ageitgey/face_recognition/issues/1105 | [] | muyustan | 4 |
donnemartin/system-design-primer | python | 948 | Test | open | 2024-09-18T11:38:41Z | 2024-12-02T01:13:13Z | https://github.com/donnemartin/system-design-primer/issues/948 | [
"needs-review"
] | kwaker88 | 0 | |
gradio-app/gradio | data-visualization | 10,738 | gradio canvas won't accept images bigger then 600 x 600 on forgewebui | ### Describe the bug
I think it's a gradio problem since the problem started today and forge hasn't updated anything
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
colab on forgewebui
```
### Severity
I can work around it | closed | 2025-03-06T02:57:40Z | 2025-03-06T15:21:24Z | https://github.com/gradio-app/gradio/issues/10738 | [
"bug",
"pending clarification"
] | Darknessssenkrad | 13 |
hzwer/ECCV2022-RIFE | computer-vision | 41 | Model v2 update log | We show some hard-case results for every model version.
v2 google drive download link: (https://drive.google.com/file/d/1wsQIhHZ3Eg4_AfCXItFKqqyDMB4NS0Yd/view).
v1.1 2020.11.16 Link: https://pan.baidu.com/s/1SPRw_u3zjaufn7egMr19Eg Password: orkd
<img width="350" alt="image" src="https://user-images.githubusercontent.com/10103856/100574344-c86ed000-3314-11eb-81ae-7a222afb3d95.png"><img width="350" alt="image" src="https://user-images.githubusercontent.com/10103856/100576188-93647c80-3318-11eb-96c7-f5b368935ede.png">
| closed | 2020-11-30T07:12:53Z | 2021-05-17T06:44:12Z | https://github.com/hzwer/ECCV2022-RIFE/issues/41 | [] | hzwer | 12 |
scanapi/scanapi | rest-api | 261 | Body in the report should be rendered according to its request content type | Today the Body is showing the representation of Python's byte string; the report should be language-agnostic and render the body based on its request content type.
For example:
- A request of application/json can render the body as json in the report and make use of the same tool that is used at the Content field
- A request of binary content type should just show a placeholder text like: "Binary content"
- A request of text content type should show as a text without python's representation | closed | 2020-08-26T16:04:30Z | 2020-12-16T12:49:48Z | https://github.com/scanapi/scanapi/issues/261 | [
"Reporter",
"Hacktoberfest"
] | loop0 | 1 |
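A sketch of how such a content-type dispatch might look (a standalone illustration, not ScanAPI's internals; the function name is hypothetical):

```python
import json
from typing import Optional

def render_body(content_type: Optional[str], body: bytes) -> str:
    """Render a response body for the report based on its content type."""
    if content_type is None:
        return ""
    if "application/json" in content_type:
        # Pretty-print so the report can reuse the same JSON viewer
        # already used for the Content field.
        return json.dumps(json.loads(body.decode("utf-8")), indent=2)
    if content_type.startswith("text/"):
        return body.decode("utf-8", errors="replace")
    return "Binary content"  # placeholder instead of a byte-string repr

print(render_body("application/json", b'{"ok": true}'))
print(render_body("text/plain", b"hello"))
print(render_body("application/octet-stream", b"\x00\x01"))
```

The last case is the key change: binary payloads show a placeholder rather than Python's `b'...'` representation.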
public-apis/public-apis | api | 3,357 | Starter | closed | 2022-11-24T19:23:26Z | 2022-11-24T19:24:41Z | https://github.com/public-apis/public-apis/issues/3357 | [] | TheRealToaster | 0 | |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 457 | [BUG] Douyin API still raises a <class 'NoneType'> type error after changing the Cookie and network provider | ***On which platform did the error occur?***
Douyin
***Which endpoint raised the error?***
- API service
```json
/api/hybrid/video_data?url=https://v.douyin.com/iMrqLnXG/
```
***What input value was submitted?***
- Douyin short URL
```json
https://v.douyin.com/iMrqLnXG/
```
- User-Agent:
```json
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 Edg/126.0.0.0
```
- Cookie:
```json
__ac_nonce=0669b3afb007a9e7f666c; __ac_signature=_02B4Z6wo00f01iePN7AAAIDBdXAXqIA3R5onrzMAAO-Hdf; ttwid=1%7COIZj9G0wAfqNodzhcSS1tWthuiNzuXsF_kK_eO9jLaY%7C1721449211%7C062d189fffdc2fad085780566790bb4b27ada85ce3ebfd4f39f2c57365e64f3e; UIFID_TEMP=1994d5ea02185c59e0ce2f61826e33d5e3041724e0e346378ad909ae2a68ae897d60937dd0c4b1b1902063d06a2aa9dd1e5d54b212c8153c5a7f91a5512b93bb02b01da3fafa0521611ba57d7ad61b52; douyin.com; device_web_cpu_core=8; device_web_memory_size=8; IsDouyinActive=true; home_can_add_dy_2_desktop=%220%22; dy_swidth=1800; dy_sheight=1169; stream_recommend_feed_params=%22%7B%5C%22cookie_enabled%5C%22%3Atrue%2C%5C%22screen_width%5C%22%3A1800%2C%5C%22screen_height%5C%22%3A1169%2C%5C%22browser_online%5C%22%3Atrue%2C%5C%22cpu_core_num%5C%22%3A8%2C%5C%22device_memory%5C%22%3A8%2C%5C%22downlink%5C%22%3A7.4%2C%5C%22effective_type%5C%22%3A%5C%224g%5C%22%2C%5C%22round_trip_time%5C%22%3A250%7D%22; csrf_session_id=4a07038ccf3382a37e8ab9b96491b0ba; strategyABtestKey=%221721449218.314%22; volume_info=%7B%22isUserMute%22%3Afalse%2C%22isMute%22%3Afalse%2C%22volume%22%3A0.5%7D; stream_player_status_params=%22%7B%5C%22is_auto_play%5C%22%3A0%2C%5C%22is_full_screen%5C%22%3A0%2C%5C%22is_full_webscreen%5C%22%3A0%2C%5C%22is_mute%5C%22%3A0%2C%5C%22is_speed%5C%22%3A1%2C%5C%22is_visible%5C%22%3A1%7D%22; FORCE_LOGIN=%7B%22videoConsumedRemainSeconds%22%3A180%7D; s_v_web_id=verify_lytmf2ba_aoSgPpb7_SxDf_4XBa_89N4_ZJKuPC6IU9sI; passport_csrf_token=cd625ebd0c59bfe905e1276465faba81; passport_csrf_token_default=cd625ebd0c59bfe905e1276465faba81; fpk1=U2FsdGVkX19ZUeCkPAkf9vW8q6tIiu6u8efDIBhKxiZm1r3MLCAIAoa3tLtKd240BcHYzMoCTICAOcEsjMBfyw==; fpk2=9af1fd1192d005fa6fee32e72c2ccfb4; biz_trace_id=65ce9008; 
bd_ticket_guard_client_data=eyJiZC10aWNrZXQtZ3VhcmQtdmVyc2lvbiI6MiwiYmQtdGlja2V0LWd1YXJkLWl0ZXJhdGlvbi12ZXJzaW9uIjoxLCJiZC10aWNrZXQtZ3VhcmQtcmVlLXB1YmxpYy1rZXkiOiJCS2tKWlNZNU5rRUM2azI2TFU4YlBiR2lyaSs4YncrL1oxcGR5OVB0R1c2MmhNTHNxcjVYcHpHYzZSK2duOXRVYnI2NlpGSGdibnpKK095NEtSK3MyM2s9IiwiYmQtdGlja2V0LWd1YXJkLXdlYi12ZXJzaW9uIjoxfQ%3D%3D; bd_ticket_guard_client_web_domain=2; odin_tt=59a2959389114d98fdc5bf6d59b0cd91ab3c10cc783deee2484f4bf1ff4d8fadfe31c500636bddb920f5e538b14a3c9673abc8a894140cf97fbdcd6ecec87804522e97a5f59b57e8247159cd36ba4f2b
```
***Did you try again?***
- After changing the Cookie and restarting the service, the problem still persists
***Did you check this project's README or API documentation?***
- After changing the Cookie, the `<class 'NoneType'>` problem still occurs; it also persists after switching among China Telecom, China Unicom, and China Broadcast networks.
- **Could you help me verify whether the `Cookie` pasted here is valid? If it is invalid, could you paste a correct `Cookie` so I can try again?**
```bash
INFO: Will watch for changes in these directories: ['/Users/j/Projects/Douyin_TikTok_Download_API']
INFO: Uvicorn running on http://192.168.8.8:8081 (Press CTRL+C to quit)
INFO: Started reloader process [10831] using StatReload
INFO: Started server process [10836]
INFO: Waiting for application startup.
INFO: Application startup complete.
WARNING Attempt 1: response content was empty, status code: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1920&
screen_height=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_version=124.0&browser_online=true&engine_name=Gecko&engine_version=122.0.0.0&os_name=Windows&os_version=10&cpu
_core_num=12&device_memory=8&platform=PC&msToken=&aweme_id=7391090151822216488&a_bogus=EjmMBf0fdi6k6VWg56OLfY3q6XLVYmml0SVkMD2f9PDOwy39HMOa9exoI3Uv1rWjNs%2FDIeEjy4hbT3ohrQ2y0Hwf9W0L%2F25ksDSkKl5Q5xSSs1X9e
ghgJ04qmkt5SMx2RvB-rOXmqhZHKRbp09oHmhK4b1dzFgf3qJLziD%3D%3D
WARNING Attempt 2: response content was empty, status code: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1920&
screen_height=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_version=124.0&browser_online=true&engine_name=Gecko&engine_version=122.0.0.0&os_name=Windows&os_version=10&cpu
_core_num=12&device_memory=8&platform=PC&msToken=&aweme_id=7391090151822216488&a_bogus=EjmMBf0fdi6k6VWg56OLfY3q6XLVYmml0SVkMD2f9PDOwy39HMOa9exoI3Uv1rWjNs%2FDIeEjy4hbT3ohrQ2y0Hwf9W0L%2F25ksDSkKl5Q5xSSs1X9e
ghgJ04qmkt5SMx2RvB-rOXmqhZHKRbp09oHmhK4b1dzFgf3qJLziD%3D%3D
WARNING Attempt 3: response content was empty, status code: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/detail/?device_platform=webapp&aid=6383&channel=channel_pc_web&pc_client_type=1&version_code=190500&version_name=19.5.0&cookie_enabled=true&screen_width=1920&
screen_height=1080&browser_language=zh-CN&browser_platform=Win32&browser_name=Firefox&browser_version=124.0&browser_online=true&engine_name=Gecko&engine_version=122.0.0.0&os_name=Windows&os_version=10&cpu
_core_num=12&device_memory=8&platform=PC&msToken=&aweme_id=7391090151822216488&a_bogus=EjmMBf0fdi6k6VWg56OLfY3q6XLVYmml0SVkMD2f9PDOwy39HMOa9exoI3Uv1rWjNs%2FDIeEjy4hbT3ohrQ2y0Hwf9W0L%2F25ksDSkKl5Q5xSSs1X9e
ghgJ04qmkt5SMx2RvB-rOXmqhZHKRbp09oHmhK4b1dzFgf3qJLziD%3D%3D
The program raised an exception; please check the error message.
ERROR Invalid response type. Response type: <class 'NoneType'>
The program raised an exception; please check the error message.
INFO: 192.168.8.8:49806 - "GET /api/hybrid/video_data?url=https://v.douyin.com/iMrqLnXG/ HTTP/1.1" 400 Bad Request
```
| closed | 2024-07-20T04:48:10Z | 2024-07-31T07:21:32Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/457 | [
"BUG"
] | Jiohon | 6 |
plotly/dash-recipes | dash | 26 | multi-threading | Does this pattern apply if Dash is deployed in gunicorn with multiple threads? Or is it necessary to use the Flask-SQLAlchemy extension?
Thanks!
| open | 2019-10-01T20:31:43Z | 2019-10-01T20:31:43Z | https://github.com/plotly/dash-recipes/issues/26 | [] | BenMacKenzie | 0 |
ghtmtt/DataPlotly | plotly | 346 | The plugins cannot produce any plot | Is this a real project? It cannot even plot a single point (like 'lat' vs. 'lon'). For both numeric and date fields, the same axis interval [0,6] is displayed. It seems like there is no processing code at all.
<img width="619" alt="image" src="https://github.com/ghtmtt/DataPlotly/assets/7342379/6e1244f5-6c3f-4346-a12b-1ffdd1b802f3">
| closed | 2024-03-13T06:05:40Z | 2024-03-14T17:00:03Z | https://github.com/ghtmtt/DataPlotly/issues/346 | [
"bug"
] | AlexeyPechnikov | 7 |
google-research/bert | tensorflow | 928 | FileNotFoundError: [Errno 2] No such file or directory: 'pybert/output/checkpoints/bert' | Hi,
I am using Google Colab to run a BERT example. When I run with the normal runtime I don't get any error, but when I change the runtime to GPU/TPU I get the following error:
`Traceback (most recent call last):
File "/content/drive/My Drive/Colab_Notebooks/Bert/Bert-Multi-Label-Text-Classification-master/run_bert.py", line 223, in <module>
main()
File "/content/drive/My Drive/Colab_Notebooks/Bert/Bert-Multi-Label-Text-Classification-master/run_bert.py", line 198, in main
config['checkpoint_dir'].mkdir(exist_ok=True)
File "/usr/lib/python3.6/pathlib.py", line 1248, in mkdir
self._accessor.mkdir(self, mode)
File "/usr/lib/python3.6/pathlib.py", line 387, in wrapped
return strfunc(str(pathobj), *args)
FileNotFoundError: [Errno 2] No such file or directory: 'pybert/output/checkpoints/bert'`
When I check this directory, it has no bert folder in Colab, but in Google Drive I can see this directory with the bert folder. Why don't the GPU/TPU runtimes pick up this directory? Please help.
Thanks
Sajjad Ahmed
| open | 2019-11-20T07:48:28Z | 2019-11-20T07:49:11Z | https://github.com/google-research/bert/issues/928 | [] | Sajjadahmed668 | 0 |
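The traceback above comes from `Path.mkdir(exist_ok=True)` raising `FileNotFoundError` when an intermediate directory is missing, which can easily happen on the GPU/TPU runtime if the Drive mount or working directory differs. A generic sketch (not the repo's code) showing the difference `parents=True` makes:

```python
from pathlib import Path
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / "pybert" / "output" / "checkpoints" / "bert"

    # This is effectively what run_bert.py does; it fails when the
    # intermediate directories do not exist yet:
    try:
        target.mkdir(exist_ok=True)
    except FileNotFoundError as exc:
        print("mkdir failed:", exc)

    # parents=True creates the missing intermediate directories too:
    target.mkdir(parents=True, exist_ok=True)
    print(target.is_dir())  # True
```

If the Drive path really is mounted, switching the `mkdir` call to `parents=True` should sidestep the crash.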
gunthercox/ChatterBot | machine-learning | 1,970 | Can't find model 'en' and Spacy install error | OS: Arch Linux 5.6.10
Python version: 3.8.2
Pipenv version: 2018.11.15.dev0
I installed chatterbot in virtual environment with pipenv. When I run the following script, I get this error:
> OSError: [E050] Can't find model 'en'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.
This is my script:
```Python
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer
import json
conversation = [
"Hello",
"Hi there!",
"How are you doing?",
"I'm doing great.",
"That is good to hear",
"Thank you.",
"You're welcome."
]
chatbot = ChatBot("C. Planet")
trainer = ListTrainer(chatbot)
trainer.train(conversation)
while True:
response = str(input("You: "))
if response.lower() == 'stop':
break
response = chatbot.get_response(response)
print(f'C. Planet: {response}')
```
As I found in other threads, I tried activating the shell and running `python -m spacy download en` but I get an error that says module 'spacy' not found. Maybe I'm not running the correct command syntax inside pipenv virtual environment? I otherwise assume I need to explicitly install spacy in order to run this command, but when I run pipenv install spacy, I get dependency errors with the blis package:
> ERROR: Could not find a version that matches blis<0.3.0,<0.5.0,>=0.2.2,>=0.4.0
Tried: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.8, 0.0.10, 0.0.12, 0.0.13, 0.0.16, 0.0.16, 0.0.16, 0.0.16, 0.0.16, 0.0.16, 0.0.16, 0.1.0, 0.1.0, 0.1.0, 0.1.0, 0.1.0, 0.1.0, 0.1.0, 0.1.0, 0.2.0, 0.2.1, 0.2.1, 0.2.1, 0.2.1, 0.2.1, 0.2.1, 0.2.1, 0.2.1, 0.2e.1, 0.2.1, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.2, 0.2.3, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.2.4, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.3.1, 0.4.0, 0.4.0, 0.4.0, 0.4.0, 0.4.0, 0.4.0, 0.4.0, 0.4.0, 0.4.0, 0.4.0, 0.4.0, 0.4.0, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1, 0.4.1
Skipped pre-versions: 0.0.9.dev104, 0.2.0.dev0, 0.2.0.dev0, 0.2.0.dev0, 0.2.0.dev0, 0.2.0.dev0, 0.2.0.dev0, 0.2.0.dev0, 0.2.0.dev0, 0.2.0.dev0, 0.2.2.dev0, 0.2.3.dev0, 0.2.3.dev1, 0.2.3.dev2, 0.2.3.dev3, 0.4.0.dev0, 0.4.0.dev1
There are incompatible versions in the resolved dependencies.
I've tried the following:
- `pipenv lock --pre --clear`
- `pipenv --rm`
- `rm -rf Pipfile.lock`
- `rm -rf ~/.cache/pip`
- `rm -rf ~/.cache/pipenv`
- `pipenv install --pre spacy`
- `pipenv install --skip-lock` it just hangs on installation
- removed spacy from the packages section in the Pipfile before each attempt
Here's my Pipfile:
```
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
[packages]
chatterbot = {editable = true,git = "https://github.com/gunthercox/ChatterBot.git"}
[requires]
python_version = "3.8"
[pipenv]
allow_prereleases = true
```
This is the Pipfile.lock:
[Pipfile.lock](https://pastebin.com/PskTQn7i)
I've been trying to get this to work for hours to no avail. Do I actually need spacy installed to resolve the first error? Please help! | closed | 2020-05-08T08:47:12Z | 2020-05-08T17:21:48Z | https://github.com/gunthercox/ChatterBot/issues/1970 | [] | ElderBlade | 1 |
jina-ai/serve | deep-learning | 5,636 | bug: `log_config` argument is not uniformly used when creating JinaLogger instances | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The `log_config` argument needs to be initialized like the following instances:
- https://github.com/jina-ai/jina/blob/c57fdbb11e4c318e31ab1d778ded3d4183ab1e0c/jina/orchestrate/flow/base.py#L519-L524
- https://github.com/jina-ai/jina/blob/c57fdbb11e4c318e31ab1d778ded3d4183ab1e0c/jina/clients/base/__init__.py#L40
There are several instances where the additional args are not propagated. This means that the `log_config` provided is not used everywhere and causes confusion.
**Describe how you solve it**
<!-- copy past your code/pull request link -->
- Find instances of JinaLogger that don't propagate or respect arguments.
- Update the documentation to make the log configuration explicit.
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. --> | closed | 2023-01-30T09:40:56Z | 2023-02-01T09:49:29Z | https://github.com/jina-ai/serve/issues/5636 | [] | girishc13 | 0 |
pallets/flask | flask | 5,652 | flask==2.2.4 incompatible with werkzeug==3.1.3 | I use flask==2.2.4 for my project and when I install flask, it pulls **any** werkzeug>=2.2.2 (which is the default behaviour). After [werkzeug==3.1.3](https://pypi.org/project/Werkzeug/3.1.3/) got released on 8 Nov, 2024, flask pulls the latest version of it. With this new version of werkzeug, while executing unit tests, I get an error saying `module 'werkzeug' has no attribute '__version__'`
**A small example to reproduce the issue:**
```
(myenv) root@6426d8424cca:~# python3
Python 3.9.16 (main, Sep 2 2024, 12:46:28)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from flask import Flask
>>> app = Flask(__name__)
>>> my_test_client = app.test_client()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/myenv/lib/python3.9/site-packages/flask/app.py", line 1252, in test_client
    return cls( # type: ignore
  File "/root/myenv/lib/python3.9/site-packages/flask/testing.py", line 116, in __init__
    "HTTP_USER_AGENT": f"werkzeug/{werkzeug.__version__}",
AttributeError: module 'werkzeug' has no attribute '__version__'
```
I shouldn't be seeing any error like above.
Environment:
- Python version: 3.9.16
- Flask version: 2.2.4
| closed | 2024-12-05T11:56:45Z | 2024-12-20T00:07:47Z | https://github.com/pallets/flask/issues/5652 | [] | ranaprathapthanneeru | 1 |
computationalmodelling/nbval | pytest | 124 | Databricks support for code-coverage | Though `nbval` is used for Jupyter notebooks, I am not able to use it in Databricks, since Databricks notebooks don't have the `.ipynb` extension. Can a feature be introduced to test Databricks notebooks too?
Community version of Databricks can be used for development - https://community.cloud.databricks.com | closed | 2019-08-02T20:25:12Z | 2020-02-12T12:24:30Z | https://github.com/computationalmodelling/nbval/issues/124 | [] | tintinmj | 14 |
nalepae/pandarallel | pandas | 202 | parallel not working | I have a pandas DataFrame with one text column, and I want to apply a custom text-processing function to that column, but the parallel `.map` and `.apply` get stuck in an infinite loop.
Can you tell me what went wrong? | closed | 2022-09-02T11:03:58Z | 2022-09-12T14:00:05Z | https://github.com/nalepae/pandarallel/issues/202 | [] | riyaj8888 | 3 |
scikit-learn/scikit-learn | machine-learning | 30,774 | Deprecation message of check_estimator does not point to the right replacement | See here
https://github.com/scikit-learn/scikit-learn/blob/e25e8e2119ab6c5aa5072b05c0eb60b10aee4b05/sklearn/utils/estimator_checks.py#L836
I believe it should point to `sklearn.utils.estimator_checks.estimator_checks_generator` as suggested in the doc string.
Also, I'm not sure you want to keep the Sphinx directive in the warning message.
"Bug",
"Documentation"
] | Remi-Gau | 1 |
Python3WebSpider/ProxyPool | flask | 86 | After docker-compose up, redis fails to start; the error is shown in the screenshot |
<img width="855" alt="WX20200813-155849@2x" src="https://user-images.githubusercontent.com/32673411/90109674-ba556d80-dd7e-11ea-89b4-11926e1d6bce.png">
| closed | 2020-08-13T08:05:01Z | 2020-08-13T12:37:38Z | https://github.com/Python3WebSpider/ProxyPool/issues/86 | [
"bug"
] | MazzaWill | 1 |
ndleah/python-mini-project | data-visualization | 143 | Code Issues: Incorrect Structure, Capitalization, Imports, and Variables in Automated_Mailing project | ## Issue: Incorrect Code Structure
### Description
The current code has several structural issues, including incorrect indentation, missing email and password values, and unnecessary variables.
### Type of issue
- [ ] Feature (New Script)
- [x] Bug
- [ ] Documentation
### Checklist:
- [x] I have read the project guidelines.
- [x] I have checked previous issues to avoid duplicates.
- [x] This issue will be meaningful for the project.
## Issue: Subject Field Capitalization
### Description
The subject field should be capitalized as 'Subject' when setting it in the message. It is currently in lowercase as 'subject'.
### Type of issue
- [ ] Feature (New Script)
- [x] Bug
- [ ] Documentation
### Checklist:
- [x] I have read the project guidelines.
- [x] I have checked previous issues to avoid duplicates.
- [x] This issue will be meaningful for the project.
## Issue: Unused Import
### Description
The `name` module is imported from the `os` library (`from os import name`), but it is not used in the code. This import can be removed to improve code readability.
### Type of issue
- [ ] Feature (New Script)
- [x] Bug
- [ ] Documentation
### Checklist:
- [x] I have read the project guidelines.
- [x] I have checked previous issues to avoid duplicates.
- [x] This issue will be meaningful for the project.
## Issue: Incorrect Variable Scope for Email and Password
### Description
The lines for entering your email and password are indented incorrectly, making them part of the loop. These variables should be defined outside the loop as they are constant values.
### Type of issue
- [ ] Feature (New Script)
- [x] Bug
- [ ] Documentation
### Checklist:
- [x] I have read the project guidelines.
- [x] I have checked previous issues to avoid duplicates.
- [x] This issue will be meaningful for the project.
## Issue: Unnecessary Variables
### Description
There are unnecessary variables in the code. For instance, the `name` variable is assigned `data['name'].tolist()` but is not required as `name[i]` is used inside the loop.
### Type of issue
- [ ] Feature (New Script)
- [x] Bug
- [ ] Documentation
### Checklist:
- [x] I have read the project guidelines.
- [x] I have checked previous issues to avoid duplicates.
- [x] This issue will be meaningful for the project.
## Issue: Missing Email and Password Values
### Description
The code is missing the actual values for the `email` and `password` variables, making it impossible to run the script as intended.
### Type of issue
- [ ] Feature (New Script)
- [x] Bug
- [ ] Documentation
### Checklist:
- [x] I have read the project guidelines.
- [x] I have checked previous issues to avoid duplicates.
- [x] This issue will be meaningful for the project.
| closed | 2023-08-24T03:56:20Z | 2023-09-18T04:02:39Z | https://github.com/ndleah/python-mini-project/issues/143 | [] | ChathuraAbeygunawardhana | 1 |
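Pulling the reported fixes together, the corrected structure might look roughly like this standard-library sketch (recipient data and SMTP host are placeholders, the real script reads recipients with pandas, and the send call is left commented out):

```python
from email.message import EmailMessage

# Constant credentials belong outside the loop (placeholder values)
SENDER_EMAIL = "you@example.com"
SENDER_PASSWORD = "app-password"

recipients = [
    {"name": "Ada", "email": "ada@example.com"},
    {"name": "Linus", "email": "linus@example.com"},
]

messages = []
for person in recipients:
    msg = EmailMessage()
    msg["From"] = SENDER_EMAIL
    msg["To"] = person["email"]
    msg["Subject"] = "Hello!"  # capitalized 'Subject' header
    msg.set_content(f"Hi {person['name']},\nthis is an automated mail.")
    messages.append(msg)

# Sending would happen once, after the messages are built:
# import smtplib
# with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
#     server.login(SENDER_EMAIL, SENDER_PASSWORD)
#     for msg in messages:
#         server.send_message(msg)

print(messages[0]["Subject"])  # Hello!
```

Defining the credentials once outside the loop, dropping unused imports and variables, and setting the `Subject` header address the structural points raised above.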