| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
graphistry/pygraphistry | jupyter | 456 | [BUG] Plugins docs nav missing packages and methods | **Describe the bug**

nav issues
- [ ] `graphistry.plugins.cugraph` is missing from the nav
- [ ] methods are not listed in the nav for `graphistry.plugins.cugraph` and `graphistry.plugins.igraph`
| open | 2023-04-03T22:12:56Z | 2023-04-03T22:29:53Z | https://github.com/graphistry/pygraphistry/issues/456 | [
"bug",
"p2",
"docs"
] | lmeyerov | 0 |
gee-community/geemap | jupyter | 845 | Add Ocean Color timelapse (sea surface temperature, chlorophyll a concentrations) | References:
- https://github.com/giswqs/streamlit-geospatial/issues/21
- https://developers.google.com/earth-engine/datasets/catalog/NASA_OCEANDATA_MODIS-Aqua_L3SMI
- https://developers.google.com/earth-engine/datasets/catalog/NASA_OCEANDATA_MODIS-Terra_L3SMI
- https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S3_OLCI | closed | 2021-12-31T03:27:24Z | 2021-12-31T03:40:11Z | https://github.com/gee-community/geemap/issues/845 | [
"Feature Request"
] | giswqs | 1 |
chatopera/Synonyms | nlp | 61 | out of vocabulary | # description
The tokenizer assigns the POS tag `n` to “多少钱” (“how much money”), but looking up synonyms with synonyms.display("多少钱") returns "out of vocabulary". The file vocab.txt does contain the record `多少钱 3 nr`, so where does the problem come from?
## current
## expected
# solution
# environment
* version:
The commit hash (`git rev-parse HEAD`)
| closed | 2018-05-01T15:49:30Z | 2018-05-05T13:14:19Z | https://github.com/chatopera/Synonyms/issues/61 | [] | GitOad | 1 |
autokey/autokey | automation | 624 | Substitution erases characters before it | ## Classification:
Bug
## Reproducibility:
Sometimes
## Version
AutoKey version: 0.95.10
Used GUI (Gtk, Qt, or both): Gtk
Installed via: pip 21.1.3
Linux Distribution: Ubuntu 20.04
## Summary
When abbreviations of phrases are expanded, the front of the substitution overwrites characters in the text before it.
## Steps to Reproduce (if applicable)
- Create a phrase.
- Select Paste using 'Keyboard'.
- Add an abbreviation for the phrase.
- Use the option Trigger on all 'non-word'.
- Check the option Remove Typed Abbreviation
- The error occurs regardless of whether 'Ignore case of typed abbreviation' and 'Trigger immediately' are checked.
- Type the abbreviation after some existing text. Leave a space between the text and the abbreviation.
Let's take this example: the abbreviation is 'cr', the expansion is 'Chorus', and you are typing 'cr' after 'lorem ipsum'.
## Expected Results
`lorem ipsum Chorus`
## Actual Results
`lorem ipsChorus `
Edit: If I use Select Paste using 'Ctrl+V' instead, the results vary; `lorem ipsChorus Chorus` or `lorem ipsumChoruChorus` or something else.
Running `autokey-gtk --verbose` gives me `ModuleNotFoundError: No module named 'dbus'` and installing dbus using pip/conda isn't working out.
## Notes
EDIT: Only occurs while using the Vivaldi browser. I'm using version 4.1. | open | 2021-11-09T06:16:08Z | 2021-11-11T12:18:22Z | https://github.com/autokey/autokey/issues/624 | [
"bug",
"phrase expansion",
"low-priority"
] | arunkumaraqm | 5 |
ultralytics/yolov5 | deep-learning | 13,539 | Issue with Tellu Organoid Classifier using Yolov5 | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am having this issue with Tellu Intestinal Organoid Classifier https://colab.research.google.com/drive/1j-jutA52LlPuB4WGBDaoAhimqXC3xguX?usp=sharing
Has worked perfectly before, but now I cannot get this code to run. Any help would be appreciated, thanks.
detect: weights=['/content/yolov5/TelluWeights.pt'], source=/content/drive/MyDrive/BlatchleyLab/Projects/Role_of_integrins_in_symmetry_breaking/241217_DJT_SC_Rac1i_p30/D4, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.35, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=True, save_conf=True, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=D4, exist_ok=False, line_thickness=3, hide_labels=True, hide_conf=False, half=False, dnn=False, vid_stride=1
YOLOv5 🚀 v7.0-0-g915bbf29 Python-3.11.11 torch-2.6.0+cu124 CPU
Traceback (most recent call last):
File "/content/yolov5/detect.py", line 259, in <module>
main(opt)
File "/content/yolov5/detect.py", line 254, in main
run(**vars(opt))
File "/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/content/yolov5/detect.py", line 96, in run
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/yolov5/models/common.py", line 345, in __init__
model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/content/yolov5/models/experimental.py", line 79, in attempt_load
ckpt = torch.load(attempt_download(w), map_location='cpu') # load
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL models.yolo.Model was not an allowed global by default. Please use `torch.serialization.add_safe_globals([Model])` or the `torch.serialization.safe_globals([Model])` context manager to allowlist this global if you trust this class/function.
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
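For background: PyTorch 2.6 flipped `torch.load` to `weights_only=True`, which unpickles against an allowlist of trusted globals. The same mechanism can be sketched with the stdlib `pickle` module (illustration only; for the real fix follow the error message, i.e. `torch.serialization.add_safe_globals([Model])` or `weights_only=False` for checkpoints you trust):

```python
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Only resolve globals that are explicitly trusted -- the same idea
    behind torch.load(weights_only=True) in PyTorch 2.6."""
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global {module}.{name} is not allowlisted")

def safe_load(data: bytes):
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

A checkpoint that references a class outside the allowlist (here, `models.yolo.Model`) fails in exactly this way until that class is added to the allowlist.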
The script also accepts other parameters. See https://github.com/ultralytics/yolov5/blob/master/detect.py for all the options.
- `--max-detect 1000` Default is 1000; if you require more, add the parameter to the line of code above with a different value.
- `--hide-labels` Adding this parameter will hide the name of the organoid class detected in the result images. Convenient in dense cultures and for publication purposes.

Example:
`!python detect.py --weights $tellu --conf 0.35 --source $dir --save-txt --save-conf --name "results" --hide-labels --max-detect 2000`
### Additional
_No response_ | open | 2025-03-21T18:02:05Z | 2025-03-22T04:50:41Z | https://github.com/ultralytics/yolov5/issues/13539 | [
"question",
"detect"
] | djtigani | 2 |
microsoft/qlib | deep-learning | 1,796 | Question about the `signal` kwarg under `strategy` in `port_analysis_config` | In the benchmark folder, the yaml config file for each model generally uses `signal: <PRED>`:
```yaml
port_analysis_config: &port_analysis_config
    strategy:
        class: TopkDropoutStrategy
        module_path: qlib.contrib.strategy
        kwargs:
            signal: <PRED>
            topk: 50
            n_drop: 5
```
In workflow_by_code.py, `"signal": (model, dataset)` is used instead:
```python
port_analysis_config = {
    ...
    "strategy": {
        "class": "TopkDropoutStrategy",
        "module_path": "qlib.contrib.strategy.signal_strategy",
        "kwargs": {
            "signal": (model, dataset),
            "topk": 50,
            "n_drop": 5,
        },
    },
}
```
My question: what does the `pred` in `<PRED>` refer to? I have not found any documentation on it. And what is the difference between these two approaches? Thanks, everyone. | closed | 2024-05-24T01:14:06Z | 2024-05-24T01:15:13Z | https://github.com/microsoft/qlib/issues/1796 | [
"question"
] | semiparametric | 0 |
deeppavlov/DeepPavlov | tensorflow | 910 | Error while running in Alexa mode | I'm running in Alexa mode and getting the error:
```
ubuntu@ip-172-31-26-238:~/alexa_test$ python -m deeppavlov alexa server_config.json -d
Traceback (most recent call last):
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/deeppavlov/__main__.py", line 3, in <module>
main()
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/deeppavlov/deep.py", line 116, in main
default_skill_wrap=not args.no_default_skill)
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/deeppavlov/utils/alexa/server.py", line 81, in run_alexa_default_agent
ssl_cert=ssl_cert)
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/deeppavlov/utils/alexa/server.py", line 137, in run_alexa_server
bot = Bot(agent_generator, alexa_server_params, input_q, output_q)
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/deeppavlov/utils/alexa/bot.py", line 68, in __init__
self.agent = self._init_agent()
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/deeppavlov/utils/alexa/bot.py", line 95, in _init_agent
agent = self.agent_generator()
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/deeppavlov/utils/alexa/server.py", line 70, in get_default_agent
model = build_model(model_config)
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/deeppavlov/core/commands/infer.py", line 44, in build_model
model_config = config['chainer']
KeyError: 'chainer'
ubuntu@ip-172-31-26-
```
My server_config.json:
```
{
"common_defaults": {
"host": "0.0.0.0",
"port": 5000,
"model_endpoint": "/model",
"model_args_names": ["context"],
"https": false,
"https_cert_path": "",
"https_key_path": "",
"stateful": false,
"multi_instance": false
},
"telegram_defaults": {
"token": ""
},
"ms_bot_framework_defaults": {
"auth_polling_interval": 3500,
"conversation_lifetime": 3600,
"auth_app_id": "",
"auth_app_secret": ""
},
"alexa_defaults": {
"intent_name": "AskDeepPavlov",
"slot_name": "raw_input",
"start_message": "Welcome to DeepPavlov Alexa wrapper!",
"unsupported_message": "Sorry, DeepPavlov can't understand it.",
"conversation_lifetime": 3600
},
"model_defaults": {
"ODQA": {
"host": "",
"port": "",
"model_endpoint": "/odqa",
"model_args_names": ""
}
}
}
```
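The KeyError points at the cause: per the traceback, build_model (deeppavlov/core/commands/infer.py) immediately indexes `config['chainer']`, and a server config like the one above has no top-level `chainer` section; only model configs do. A stdlib-only pre-check, as an illustration (not part of DeepPavlov):

```python
import json

def looks_like_model_config(path):
    """build_model does config['chainer'] (see the traceback), so any
    config without a top-level 'chainer' section raises KeyError.
    Illustrative helper, not DeepPavlov code."""
    with open(path) as f:
        config = json.load(f)
    return "chainer" in config
```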
Why is the error happening?
Seems that everything should be right. I followed the documentation guide. Just deleted other skills from `model_defaults` section. | closed | 2019-06-29T11:49:10Z | 2019-07-02T20:25:54Z | https://github.com/deeppavlov/DeepPavlov/issues/910 | [] | sld | 3 |
MilesCranmer/PySR | scikit-learn | 390 | Warning for `^` | The `^` operator is super inefficient if the user is not using `nested_constraints`, so I think a warning should be raised when it is used without `nested_constraints` set. | closed | 2023-07-24T20:29:16Z | 2023-08-05T22:33:13Z | https://github.com/MilesCranmer/PySR/issues/390 | [] | MilesCranmer | 2 |
dpgaspar/Flask-AppBuilder | flask | 1,412 | Large number of Mongo Role/Permission queries by exist_permission_on_roles on page load |
### Environment
apispec==1.3.3
attrs==19.3.0
Babel==2.8.0
blinker==1.4
click==7.1.2
colorama==0.4.3
defusedxml==0.6.0
dnspython==1.16.0
email-validator==1.1.1
Flask==1.1.2
Flask-AppBuilder==2.3.4
Flask-Babel==1.0.0
Flask-DebugToolbar==0.11.0
Flask-JWT-Extended==3.24.1
Flask-Login==0.4.1
flask-mongoengine==0.9.5
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.3
Flask-WTF==0.14.3
idna==2.9
importlib-metadata==1.6.1
itsdangerous==1.1.0
Jinja2==2.11.2
jsonschema==3.2.0
MarkupSafe==1.1.1
marshmallow==2.21.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
mongoengine==0.20.0
prison==0.1.3
PyJWT==1.7.1
pymongo==3.5.1
pyrsistent==0.16.0
python-dateutil==2.8.1
python3-openid==3.1.0
pytz==2020.1
PyYAML==5.3.1
six==1.15.0
SQLAlchemy==1.3.17
SQLAlchemy-Utils==0.36.6
Werkzeug==1.0.1
WTForms==2.3.1
zipp==3.1.0
### Describe the expected results
To replicate the issue, I used the Contacts example application from the repo, with Flask Debug Toolbar enabled to monitor mongo query times. The updated app/__init__.py and the additions to config.py are shown below. When a basic page is loaded (in this case the "List Groups" page), I would expect a small number of mongo queries related to permissioning and then a small number relating to loading/presenting the data.
```python
import logging
from flask import Flask
from flask_appbuilder import AppBuilder
from flask_appbuilder.security.mongoengine.manager import SecurityManager
from flask_mongoengine import MongoEngine
from flask_debugtoolbar import DebugToolbarExtension
logging.basicConfig(format="%(asctime)s:%(levelname)s:%(name)s:%(message)s")
logging.getLogger().setLevel(logging.DEBUG)
app = Flask(__name__)
app.config.from_object("config")
dbmongo = MongoEngine(app)
app.debug = True
toolbar = DebugToolbarExtension(app)
appbuilder = AppBuilder(app, security_manager_class=SecurityManager)
from . import models, views # noqa
```
```python
DEBUG_TB_PANELS = [
'flask_debugtoolbar.panels.versions.VersionDebugPanel',
'flask_debugtoolbar.panels.timer.TimerDebugPanel',
'flask_debugtoolbar.panels.headers.HeaderDebugPanel',
'flask_debugtoolbar.panels.request_vars.RequestVarsDebugPanel',
'flask_debugtoolbar.panels.template.TemplateDebugPanel',
'flask_debugtoolbar.panels.logger.LoggingPanel',
'flask_debugtoolbar.panels.profiler.ProfilerDebugPanel',
'flask_mongoengine.panels.MongoDebugPanel',
]
```
### Describe the actual results
When a page is loaded, there are 700+ mongo queries made.

When the flask application is communicating with a remote mongo instance and network latency increases, this results in huge page load times.
### Steps to reproduce
This behaviour exists in the mongoengine example app within the repository and code above will recreate.
On investigation using flask_debugtoolbar, I was able to narrow this down to the exist_permission_on_roles function within mongoengine/security/manager.py:SecurityManager. Due to the document structure (i.e. reference fields), a lot of queries are made on each individual call; a single call to this function in our case took 2-3 seconds, and it is actually called multiple times. By immediately returning true from this function, mongo queries were reduced to < 5 and page loads were as expected, but obviously this renders the Roles/Permissions functionality redundant.
It looks like the data model needs to be changed significantly when using mongo. At the moment I've overridden the SecurityManager:exist_permission_on_roles with a version which caches the permissions and roles in memory and so works around this. Will continue to investigate options around data model.
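A minimal sketch of that in-memory caching workaround (method and parameter names are illustrative, not Flask-AppBuilder's exact signatures):

```python
import time

class CachedPermissionCheck:
    """Sketch of the in-memory caching workaround described above.
    `loader` stands in for the expensive Mongo round-trips."""

    def __init__(self, loader, ttl_seconds=60.0):
        self._loader = loader
        self._ttl = ttl_seconds
        self._cache = {}

    def exist_permission_on_roles(self, view_name, permission_name, role_ids):
        key = (view_name, permission_name, tuple(sorted(role_ids)))
        hit = self._cache.get(key)
        now = time.monotonic()
        if hit is not None and now - hit[1] < self._ttl:
            return hit[0]  # serve from memory, no Mongo queries
        result = self._loader(view_name, permission_name, role_ids)
        self._cache[key] = (result, now)
        return result
```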
| closed | 2020-06-23T14:53:49Z | 2022-07-08T14:31:03Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1412 | [
"stale"
] | colinpattison | 4 |
ranaroussi/yfinance | pandas | 1,361 | fast_info failing for 2.7 on timezone | basic_info still worked for me while fast_info fails on timezone
For what it is worth, I wrote code to bring all the values back together into a single dictionary, i.e. like the old Ticker info.
There is of course a time penalty, but I put my data into redis to reduce calls to yfinance.
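The remapping idea boils down to a small helper over plain dicts (a sketch with example keys; my full key map and debug helpers follow below):

```python
def rebuild_info(sinfo, sbinfo_d, removed_d, sym):
    """Merge old Ticker.info keys with their fast_info/basic_info
    replacements, per the mapping in removed_d. Key names are examples."""
    merged = {}
    for key, value in sinfo.items():
        if key in removed_d:
            merged[key] = sym if key == "symbol" else sbinfo_d[removed_d[key]]
        else:
            merged[key] = value
    return merged
```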
```python
sym = sym  # from above
sinfo_d = {}
removed_d = {"regularMarketPrice": "last_price", "dayHigh": "day_high", "dayLow": "day_low", 'fiftyTwoWeekHigh': 'year_high',
             'fiftyTwoWeekLow': "year_low", 'volume': 'last_volume', 'averageVolume': 'three_month_average_volume',
             'regularMarketVolume': 'ten_day_average_volume', 'marketCap': 'market_cap', 'currency': 'currency',
             'regularMarketDayHigh': "day_high", 'regularMarketDayLow': 'day_low', 'averageVolume10days': "ten_day_average_volume",
             'open': "open", "toCurrency": "currency", 'fiftyDayAverage': "fifty_day_average",
             'twoHundredDayAverage': "two_hundred_day_average", 'averageDailyVolume10Day': "ten_day_average_volume",
             'regularMarketPreviousClose': 'previous_close', 'previousClose': "previous_close", 'regularMarketOpen': 'open', 'regularMarketClose': 'close',
             'exchangeTimezoneName': 'timezone', 'exchangeTimezoneShortName': 'timezone', 'exchange': 'exchange', 'symbol': 'symbol', 'currentPrice': 'last_price'
             }
for k in sinfo:
    # dbug(f"chkg k: {k}")
    if k in removed_d:
        dbug(f"Grabbing basic_info for k: {k}")  # dbug/gtable are my own helpers
        if k == 'symbol':
            sinfo_d[k] = sym
        else:
            sinfo_d[k] = sbinfo_d[removed_d[k]]
        dbug(sinfo_d[k])
        continue
    # dbug(f"k: {k} {sinfo[k]}")
    sinfo_d[k] = sinfo[k]
gtable(sinfo_d, 'hdr', 'prnt', title="debugging", footer=dbug('here'), rnd=2, human=True)  # displays table of info dictionary
``` | closed | 2023-01-26T20:13:30Z | 2023-03-27T15:41:14Z | https://github.com/ranaroussi/yfinance/issues/1361 | [] | geoffmcnamara | 3 |
sunscrapers/djoser | rest-api | 351 | Exposure of user pk, small security issue | Thank you for making a good app. I found a small security issue.
For example, when you create an account at `/users/create/`, you will receive this response.
```
{
"email": "my_name@example.com",
"id": 1000,
"username": "my_name"
}
```
For some developers, this becomes a security issue: [from security.stackexchange](https://security.stackexchange.com/questions/56357/should-i-obscure-database-primary-keys-ids-in-application-front-end#comment89550_56358)
> You would also reveal the number of IDs ..... and the current rate of creation (someone creates one, then a set time later finds the new maximum). This might be of interest in the case of user accounts, as eg. it gives clues as to the financial viability of your application to a competitor. – Julia Hayward
Can I remove the user id via djoser settings?
If there is no setting for this, I will write views (overriding the djoser views) that omit the user id from the response, for private use.
Does someone else need this improvement?
If the djoser maintainers think this is useful and would merge this improvement,
I will write the code and send a pull request.
In that case, I will add a Djoser setting as below (in `settings.py`):
```
DJOSER = {
'RETURN_USER_ID': False, # Default is True for compatibility
}
```
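What the proposed setting would gate, reduced to plain Python (illustrative only; djoser's real change would live in its serializers):

```python
def build_user_response(user, return_user_id=True):
    """Assemble the registration response; drop the pk when configured off.
    Field handling is illustrative, not djoser's actual serializer code."""
    data = {"email": user["email"], "username": user["username"]}
    if return_user_id:
        data["id"] = user["id"]
    return data
```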
What do you think? Please tell me your opinion! | closed | 2019-02-09T00:44:16Z | 2019-02-21T13:28:02Z | https://github.com/sunscrapers/djoser/issues/351 | [] | ghost | 2 |
Significant-Gravitas/AutoGPT | python | 8,721 | Condition Block tries to convert everything to a float | You can replicate this by applying this diff for a new test (that you should add):
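The failure mode is ConditionBlock coercing both operands to float, which misbehaves for strings like "hello". A float-first comparison with a raw-value fallback (a sketch of one possible fix, not AutoGPT's actual code; the operator tokens here are illustrative) would satisfy both test cases in the diff below:

```python
def compare(value1, operator, value2):
    """Compare numerically when both operands coerce to float,
    otherwise compare the raw values. Sketch only."""
    ops = {
        "==": lambda a, b: a == b,
        "!=": lambda a, b: a != b,
        ">": lambda a, b: a > b,
        "<": lambda a, b: a < b,
        ">=": lambda a, b: a >= b,
        "<=": lambda a, b: a <= b,
    }
    try:
        a, b = float(value1), float(value2)  # numeric when both coerce
    except (TypeError, ValueError):
        a, b = value1, value2  # otherwise compare as-is
    return ops[operator](a, b)
```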
```diff
diff --git a/autogpt_platform/backend/backend/blocks/branching.py b/autogpt_platform/backend/backend/blocks/branching.py
index 65a01c977..1d3fbe18e 100644
--- a/autogpt_platform/backend/backend/blocks/branching.py
+++ b/autogpt_platform/backend/backend/blocks/branching.py
@@ -57,16 +57,27 @@ class ConditionBlock(Block):
output_schema=ConditionBlock.Output,
description="Handles conditional logic based on comparison operators",
categories={BlockCategory.LOGIC},
- test_input={
- "value1": 10,
- "operator": ComparisonOperator.GREATER_THAN.value,
- "value2": 5,
- "yes_value": "Greater",
- "no_value": "Not greater",
- },
+ test_input=[
+ {
+ "value1": 10,
+ "operator": ComparisonOperator.GREATER_THAN.value,
+ "value2": 5,
+ "yes_value": "Greater",
+ "no_value": "Not greater",
+ },
+ {
+ "value1": "hello",
+ "operator": ComparisonOperator.EQUAL.value,
+ "value2": "hello",
+ "yes_value": "Equal",
+ "no_value": "Not equal",
+ },
+ ],
test_output=[
("result", True),
("yes_output", "Greater"),
+ ("result", True),
+ ("yes_output", "Equal"),
],
)
``` | closed | 2024-11-19T17:22:01Z | 2024-12-05T19:51:22Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8721 | [
"bug"
] | ntindle | 0 |
Lightning-AI/pytorch-lightning | data-science | 20,462 | Type Error in configure_optimizers | ### Bug description
As suggested in the [docs](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.core.LightningModule.html#lightning.pytorch.core.LightningModule.configure_optimizers) my configure_optimizers looks ruffly like this:
```
def configure_optimizers(self) -> OptimizerLRSchedulerConfig:
...
return {'optimizer': optimizer, 'lr_scheduler': scheduler}
```
Note that if I want to specify a return type I have to choose `OptimizerLRSchedulerConfig` (or `OptimizerLRScheduler`) since that is the only sub type of `OptimizerLRScheduler` (the return type of `configure_optimizers`) that is a dict.
When I run that I get
```
lightning_fabric.utilities.exceptions.MisconfigurationException: `configure_optimizers` must include a monitor when a `ReduceLROnPlateau` scheduler is used. For example: {"optimizer": optimizer, "lr_scheduler": scheduler, "monitor": "metric_to_track"}
```
But I cannot add `'monitor': 'metric_to_track'` to my returned dict since then mypy complains
```
error: Extra key "monitor" for TypedDict "OptimizerLRSchedulerConfig" [typeddict-unknown-key]
```
## Suggested fix
I suggest to replace
```
class OptimizerLRSchedulerConfig(TypedDict):
optimizer: Optimizer
lr_scheduler: NotRequired[Union[LRSchedulerTypeUnion, LRSchedulerConfigType]]
```
from utilities/types.py with
```
class OptimizerConfigDict(TypedDict):
optimizer: Optimizer
class OptimizerLRSchedulerConfigDict(TypedDict):
optimizer: Optimizer
lr_scheduler: Union[LRSchedulerTypeUnion, LRSchedulerConfigType]
monitor: str
```
and
```
OptimizerLRScheduler = Optional[
Union[
Optimizer,
Sequence[Optimizer],
Tuple[Sequence[Optimizer], Sequence[Union[LRSchedulerTypeUnion, LRSchedulerConfig]]],
OptimizerLRSchedulerConfig,
Sequence[OptimizerLRSchedulerConfig],
]
]
```
with
```
OptimizerLRScheduler = Optional[
Union[
Optimizer,
Sequence[Optimizer],
Tuple[Sequence[Optimizer], Sequence[Union[LRSchedulerTypeUnion, LRSchedulerConfig]]],
OptimizerConfigDict,
OptimizerLRSchedulerConfigDict,
Sequence[OptimizerConfigDict],
Sequence[OptimizerLRSchedulerConfigDict],
]
]
```
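Under the proposed split, the `ReduceLROnPlateau` case from the error message type-checks, since `monitor` is now a declared key. A runnable sketch with stand-in classes in place of the torch types:

```python
from typing import TypedDict

class Optimizer:  # stand-in for torch.optim.Optimizer
    pass

class LRScheduler:  # stand-in for the LR-scheduler union
    pass

class OptimizerLRSchedulerConfigDict(TypedDict):
    optimizer: Optimizer
    lr_scheduler: LRScheduler
    monitor: str

def configure_optimizers() -> OptimizerLRSchedulerConfigDict:
    # the dict from the docs example, now with a type-legal "monitor" key
    return {
        "optimizer": Optimizer(),
        "lr_scheduler": LRScheduler(),
        "monitor": "val_loss",
    }
```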
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_ | closed | 2024-12-03T16:14:32Z | 2024-12-10T09:22:14Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20462 | [
"bug",
"ver: 2.4.x"
] | LukasSalchow | 2 |
ageitgey/face_recognition | machine-learning | 719 | Windows Chinese gibberish | The Chinese characters labeling the detected faces are displayed as gibberish. How can I set up the Windows environment to support Chinese characters? The effect looks like this:

| open | 2019-01-17T03:51:45Z | 2019-03-25T14:00:57Z | https://github.com/ageitgey/face_recognition/issues/719 | [] | bingws | 1 |
scikit-hep/awkward | numpy | 3,413 | `ctypes` abstraction is not scoped to NumPy only | It seems that somewhere we rely on `array.ctypes` internally. We should try to abstract that away inside the nplike / kernels. | open | 2025-03-07T14:17:48Z | 2025-03-07T14:18:08Z | https://github.com/scikit-hep/awkward/issues/3413 | [
"cleanup"
] | agoose77 | 0 |
iterative/dvc | machine-learning | 9,814 | dvc get: throws error when file is in subfolder | # Bug Report
dvc get: throws error when file is in subfolder
## Description
I tracked a file `$SUBDIR/datafile.txt` with DVC. When trying to retrieve this file with `dvc get`, I get an error
```
ERROR: unexpected error - 'NoneType' object has no attribute 'fs'
```
if `$SUBDIR` is not set to the root folder. In other words, the following script works as expected for `SUBDIR=.` but throws the error for `SUBDIR=sub`.
### Reproduce
Please specify some absolute path in line 2. The script works as expected if `SUBDIR=.` is used in line 1, i.e., it obtains the file `datafile.txt` and saves it as `datafile_get.txt`. However, if a subdirectory name (here: `sub`) is specified, the script crashes.
```
SUBDIR=sub
MY_PATH=<SOME_ABSOLUTE_PATH>
mkdir $MY_PATH
cd $MY_PATH
REMOTE_PATH=$MY_PATH/remote/git_storage.git
LOCAL_PATH=$MY_PATH/local
STORE_PATH=$MY_PATH/store
CACHE_PATH=$MY_PATH/cache
git init --bare -q $REMOTE_PATH
git clone $REMOTE_PATH $LOCAL_PATH
cd $LOCAL_PATH
dvc init -q
cat <<EOT >> .dvc/config
[core]
remote = store
[cache]
dir = $CACHE_PATH
type = reflink,symlink
['remote "store"']
url = $STORE_PATH
EOT
# First commit
mkdir $SUBDIR
cd $SUBDIR
echo "Hello World!" >> datafile.txt
dvc add datafile.txt
git add .
git commit -q -m "initial"
dvc push
git push
# Retrieve file
PREVIOUS_COMMIT=$(git log -n 1 --pretty=format:"%H")
dvc get $REMOTE_PATH $SUBDIR/datafile.txt -o datafile_get.txt --rev $PREVIOUS_COMMIT
```
### Expected
I expect that no error is thrown, the file `sub/datafile.txt` can be successfully retrieved, and will be saved as `sub/datafile_get.txt`.
### Environment information
**Output of `dvc doctor`:**
```
$ dvc doctor
DVC version: 3.12.0 (pip)
-------------------------
Platform: Python 3.10.8 on Linux-3.10.0-1127.8.2.el7.x86_64-x86_64-with-glibc2.17
Subprojects:
dvc_data = 2.11.0
dvc_objects = 0.24.1
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.1.0
Supports:
http (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.6.0, boto3 = 1.26.76)
Config:
Global: /home/kpetersen/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: xfs on /dev/sda1
Caches: local
Remotes: local
Workspace directory: xfs on /dev/sda1
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/0f7f5db67dbdcb6608d56609eb9b05eb
```
**Additional Information (if any):**
| closed | 2023-08-07T06:50:15Z | 2023-08-08T17:39:01Z | https://github.com/iterative/dvc/issues/9814 | [
"awaiting response"
] | kpetersen-hf | 2 |
deepfakes/faceswap | deep-learning | 907 | Return Code: 3221225477 when training the model again after termination |
When I create a model it works very well. Then, when I terminate the training process (it saves the model) and start training again, I get 'Status: Failed - train.py. Return Code: 3221225477' in the status bar at the bottom left.
No error is displayed in the large output text field.
**To Reproduce**
Steps to reproduce the behavior:
Create a Google virtual machine with 6 cores, a Tesla V100, and 10 GB RAM.
Download the Tesla driver and install Faceswap.
Create a new model.
After 100 iterations press terminate and wait until the process stops.
Press train again; after the message 'Enabled TensorBoardLogging', the error occurs.
**Expected behavior**
I expect the model to train again.
**Desktop (please complete the following information):**
- OS: Windows Server Datacenter 2019
- Python Version: the one downloaded by the Windows installer today
- Conda Version: the one downloaded by the Windows installer today
- Commit ID: the one downloaded by the Windows installer today
**Crash Report**
No crash report is generated. | closed | 2019-10-25T19:08:00Z | 2020-09-27T00:04:34Z | https://github.com/deepfakes/faceswap/issues/907 | [] | Maxinger15 | 0 |
sktime/pytorch-forecasting | pandas | 1,321 | [TFT] NaN forecasts for series of length 1 with GroupNormalizer | - PyTorch-Forecasting version: 1.0.0
### Expected behavior
I have series of all sizes in my dataset, including several that are smaller than my `max_prediction_length`. However, the minimum length is 1 (I haven't yet tried the "pure" cold-start experience).
To ensure that TFT is able to provide forecasts regardless of the length of the series, I set `min_encoder_length` to 0. With these configurations, I should be able to produce forecasts for all series.
### Actual behavior
For some reason I don't understand, I have no problem when I don't normalize the target by group, but when I do, the forecasts for **all series of length 1** are **NaNs** ==> **no problem for lengths 2 or more**!
I've already tested and checked a lot of things and dived into the source code to try to understand. It seems that the problem arises at predict time; I haven't seen anything unusual before that, for example in the construction of the datasets.
Have I misunderstood something? Or is it a side-effect bug in the GroupNormalizer during inference?
### Code to reproduce the problem
```
train_ds = TimeSeriesDataSet(
data=data,
time_idx="time_idx",
target="qty",
group_ids=["ts_id"],
min_encoder_length=1,
max_encoder_length=2*prediction_length,
min_prediction_length=prediction_length,
max_prediction_length=prediction_length,
target_normalizer=GroupNormalizer(groups=["model_id"])
)
pred_ds = TimeSeriesDataSet.from_dataset(train_ds, data, predict=True)
batch_size = 512
train_dataloader = train_ds.to_dataloader(train=True, batch_size=batch_size)
pred_dataloader = pred_ds.to_dataloader(train=False, batch_size=batch_size)
trainer = pl.Trainer(
max_epochs=5, # tmp for debug
limit_train_batches=5, # tmp for debug
gradient_clip_val=100.0,
accelerator="auto",
logger=None,
)
estimator = TemporalFusionTransformer.from_dataset(
train_ds,
learning_rate=0.001,
hidden_size=16,
attention_head_size=1,
dropout=0.1,
hidden_continuous_size=8,
output_size=7,
loss=QuantileLoss(),
log_interval=1,
reduce_on_plateau_patience=5,
)
trainer.fit(
estimator,
train_dataloaders=train_dataloader
)
predictions = estimator.predict(pred_dataloader, mode="prediction", return_index=True)
``` | open | 2023-06-02T15:28:35Z | 2023-10-03T08:23:18Z | https://github.com/sktime/pytorch-forecasting/issues/1321 | [] | Antoine-Schwartz | 4 |
sktime/sktime | scikit-learn | 7,763 | [BUG] Conversion of 1-column pd.DataFrame to pd.Series loses the column name. Conversion happens for forecasters with scitype:y = pd.Series. | Suppose XYZ is a forecaster with y scitype of pd.Series.
If you call XYZ.fit(y) with y a pd.DataFrame with a single column, sktime is "robust" and will convert the single column DataFrame to a pd.Series. This is done in routine 'convert_MvS_to_UvS_as_Series' in datatypes/_series/_convert.py.
The problem is that in doing this, the column name from the DataFrame is lost. It should be retained as the `name` attribute of the new Series.
This can be reproduced by calling the fit method with a 1-column DataFrame on any forecaster that has y scitype of pd.Series.
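A sketch of the conversion with the name preserved (illustrative, not sktime's actual converter):

```python
import pandas as pd

def mvs_to_uvs(df: pd.DataFrame) -> pd.Series:
    """Convert a single-column DataFrame to a Series while keeping the
    column name as the Series' `name` attribute -- the detail the
    reported conversion loses."""
    series = df.iloc[:, 0]
    series.name = df.columns[0]  # explicit, though iloc already carries it
    return series
```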
| closed | 2025-02-05T08:17:57Z | 2025-02-18T07:36:42Z | https://github.com/sktime/sktime/issues/7763 | [
"bug",
"module:datatypes"
] | ericjb | 1 |
wagtail/wagtail | django | 12,530 | Allow interactions with locked pages | ### Is your proposal related to a problem?
When a page is locked due to a workflow or otherwise, the locking implementation makes it much harder / impossible to do any interaction with the page. See demo: [Welcome to the Wagtail bakery](https://static-wagtail-v6-1.netlify.app/admin-editor/pages/60/edit/). Here are interactions that are harder than necessary, even though they’re entirely safe:
- Copying content from the locked page to another.
- Using collapsible blocks to read the content more easily
- Navigating to a linked snippet / image / page etc
### Describe the solution you'd like
Change the implementation so that instead of the `content-locked` class, we only disable the specific page elements that are actually problematic. Ideally we would support all of those interactions with the content I mentioned above, while still communicating to page users that the page is un-editable.
### Describe alternatives you've considered
There are quite a few - for example using the "copy" feature to make a new page, or cancelling the workflow / getting the page unlocked temporarily. However they don’t seem particularly commensurate with the problem at hand.
### Additional context
None
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| open | 2024-11-02T00:41:58Z | 2024-12-26T04:51:01Z | https://github.com/wagtail/wagtail/issues/12530 | [
"type:Enhancement",
"Accessibility",
"component:Workflow",
"component:Locking",
"Sprint topic"
] | thibaudcolas | 1 |
akfamily/akshare | data-science | 5,738 | stock_bid_ask_em order-book data has no timestamp attribute | The stock_bid_ask_em interface returns order-book (bid/ask) data without any timestamp, so there is no way to tell which trading day the data belongs to.
Alternatively, is there an interface that reports whether today is an A-share trading day? That alone would give enough information to decide.
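To make the trading-day question concrete, here is the shape of the check I mean, in plain Python (the `holidays` set is a placeholder; the real exchange holiday list is exactly the data I am asking the library to provide):

```python
import datetime

def is_a_share_trading_day(day: datetime.date, holidays=frozenset()) -> bool:
    # Toy check: a weekday that is not an exchange holiday.
    # `holidays` stands in for data from a real trading-calendar source.
    return day.weekday() < 5 and day not in holidays

assert is_a_share_trading_day(datetime.date(2025, 2, 24)) is True   # a Monday
assert is_a_share_trading_day(datetime.date(2025, 2, 23)) is False  # a Sunday
```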
| closed | 2025-02-26T16:25:45Z | 2025-02-26T16:50:04Z | https://github.com/akfamily/akshare/issues/5738 | [] | Falicitas | 2 |
joerick/pyinstrument | django | 39 | How can i use it in windows? | closed | 2018-05-15T17:30:29Z | 2018-05-20T12:33:58Z | https://github.com/joerick/pyinstrument/issues/39 | [] | owandywang | 3 | |
amdegroot/ssd.pytorch | computer-vision | 114 | Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument. | I'm getting this UserWarning:
> Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
> self.softmax(conf.view(-1, self.num_classes)), # conf preds | open | 2018-02-23T12:44:30Z | 2018-02-26T21:48:02Z | https://github.com/amdegroot/ssd.pytorch/issues/114 | [] | santhoshdc1590 | 1 |
alteryx/featuretools | scikit-learn | 1,793 | Add documentation page on Time Series with Featuretools | As a user of Featuretools, I would like to view documentation on how to use Featuretools with a time series problem.
I want to understand how Featuretools can help with feature engineering in a time series problem.
I want to see the helpful primitives that Featuretools has for time series problems.
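To make concrete the kind of time-series primitive I mean, here is a toy lag feature in plain Python (illustrative only, not the Featuretools API):

```python
def lag(values, periods=1, fill=None):
    """Shift a sequence forward in time, padding the start with `fill`."""
    if periods <= 0:
        return list(values)
    return [fill] * periods + list(values)[:-periods]

assert lag([1, 2, 3]) == [None, 1, 2]
assert lag([1, 2, 3], periods=2, fill=0) == [0, 0, 1]
```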
| closed | 2021-11-22T17:30:39Z | 2022-02-18T19:20:46Z | https://github.com/alteryx/featuretools/issues/1793 | [] | gsheni | 0 |
deepset-ai/haystack | nlp | 8,747 | Extend TransformersTextRouter to use other model providers | **Is your feature request related to a problem? Please describe.**
Users would like to use vLLM or other model providers for text routing but currently the only supported option is to load a model from huggingface and run it.
**Describe the solution you'd like**
Similar to the vLLM integration for chat generators, it would be great if there was a text router that could be used with `api_base_url` or a similar parameter.
**Describe alternatives you've considered**
Write a custom component.
**Additional context**
TransformersZeroShotTextRouter could be extended in a similar way.
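For reference, the kind of custom component I have in mind, sketched without Haystack imports. The endpoint path assumes an OpenAI-compatible server such as vLLM, and all names here are hypothetical; `classify` is injectable so the routing logic can be exercised offline:

```python
import json
from urllib import request

def route_text(text, routes, api_base_url, model, classify=None):
    """Route `text` to one of `routes` by asking a hosted chat model.

    By default this POSTs to `{api_base_url}/chat/completions`, assuming an
    OpenAI-compatible server (e.g. vLLM). Pass `classify` to override the
    model call, e.g. for testing.
    """
    if classify is None:
        def classify(prompt):
            payload = json.dumps({
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            }).encode()
            req = request.Request(
                f"{api_base_url}/chat/completions",
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with request.urlopen(req) as resp:
                return json.load(resp)["choices"][0]["message"]["content"]

    prompt = (f"Classify the following text into one of {routes}. "
              f"Reply with the label only.\nText: {text}")
    label = classify(prompt).strip()
    return label if label in routes else routes[0]

# Offline demo with an injected stand-in classifier:
assert route_text("What is 2+2?", ["math", "chat"],
                  "http://localhost:8000/v1", "my-model",
                  classify=lambda p: "math") == "math"
```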
| open | 2025-01-19T11:57:28Z | 2025-01-24T05:27:20Z | https://github.com/deepset-ai/haystack/issues/8747 | [
"type:feature",
"P3"
] | julian-risch | 1 |
PokemonGoF/PokemonGo-Bot | automation | 5,768 | New security measures | FYI, the app may now ask for a captcha when logging in. I got a warning on one of my accounts that 3rd-party software had been detected. The bot software no longer runs after it logs in and gets the profile info. I worry that trying to use the software without dealing with the captcha might put an extra flag on the account.
Thank you for the good times and the help. I must admit that having the bot made me play the game more in real life than I would have otherwise.
| open | 2016-10-05T23:46:54Z | 2016-10-10T06:07:30Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5768 | [] | ajhalls | 23 |
horovod/horovod | tensorflow | 3,256 | Error on KerasEstimator for Spark | @tgaddair
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet): TensorFlow
2. Framework version: 2.6.0
3. Horovod version: 0.22.1
7. Python version: 3.8.10
8. Spark / PySpark version: 3.2.0
**Bug report:**
I have a Spark DataFrame with two columns: `content`, the binary content of images loaded with the Databricks Spark Autoloader for cloud files, and `class`, a column of long type holding the class value.
I defined the `KerasEstimator` to use a `transformation_fn`; this function decodes the binary data in `content` and returns it as a numpy array of shape (256, 256, 3), while the class value is returned as-is.
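For context, the shape of my `transformation_fn` is roughly the following. This is a toy stand-in that pretends `content` holds raw interleaved RGB bytes; the real function decodes JPEG/PNG bytes into a (256, 256, 3) array:

```python
def decode_raw_rgb(content: bytes, height: int = 256, width: int = 256):
    """Toy decoder: raw interleaved RGB bytes -> nested [h][w][3] floats."""
    assert len(content) == height * width * 3
    pixels = [b / 255.0 for b in content]
    rows = []
    for r in range(height):
        row = []
        for c in range(width):
            i = (r * width + c) * 3
            row.append(pixels[i:i + 3])
        rows.append(row)
    return rows

img = decode_raw_rgb(bytes(2 * 2 * 3), height=2, width=2)
assert len(img) == 2 and len(img[0]) == 2 and len(img[0][0]) == 3
```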
When I trigger the fit method of the model I get this error:
```
[1,0]<stderr>:
[1,0]<stderr>:Function call stack:
[1,0]<stderr>:train_function
[1,0]<stderr>:
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
[1,2]<stderr>:2021-11-02 08:50:51.469193: W tensorflow/core/framework/op_kernel.cc:1669] OP_REQUIRES failed at cast_op.cc:121 : Unimplemented: Cast string to float is not supported
[1,2]<stderr>:Traceback (most recent call last):
[1,2]<stderr>: File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
[1,2]<stderr>: return _run_code(code, main_globals, None,
[1,2]<stderr>: File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
[1,2]<stderr>: exec(code, run_globals)
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/horovod/spark/task/mpirun_exec_fn.py", line 52, in <module>
[1,2]<stderr>: main(codec.loads_base64(sys.argv[1]), codec.loads_base64(sys.argv[2]))
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/horovod/spark/task/mpirun_exec_fn.py", line 45, in main
[1,2]<stderr>: task_exec(driver_addresses, settings, 'OMPI_COMM_WORLD_RANK', 'OMPI_COMM_WORLD_LOCAL_RANK')
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/horovod/spark/task/__init__.py", line 61, in task_exec
[1,2]<stderr>: result = fn(*args, **kwargs)
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/horovod/spark/keras/remote.py", line 242, in train
[1,2]<stderr>: history = fit(model, train_data, val_data, steps_per_epoch,
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/horovod/spark/keras/util.py", line 40, in fn
[1,2]<stderr>: return model.fit(
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/keras/engine/training.py", line 1184, in fit
[1,2]<stderr>: tmp_logs = self.train_function(iterator)
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 885, in __call__
[1,2]<stderr>: result = self._call(*args, **kwds)
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 950, in _call
[1,2]<stderr>: return self._stateless_fn(*args, **kwds)
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3039, in __call__
[1,2]<stderr>: return graph_function._call_flat(
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1963, in _call_flat
[1,2]<stderr>: return self._build_call_outputs(self._inference_function.call(
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 591, in call
[1,2]<stderr>: outputs = execute.execute(
[1,2]<stderr>: File "/databricks/python/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
[1,2]<stderr>: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
[1,2]<stderr>:tensorflow.python.framework.errors_impl.UnimplementedError: Cast string to float is not supported
[1,2]<stderr>: [[node model/Cast (defined at databricks/python/lib/python3.8/site-packages/horovod/spark/keras/util.py:40) ]] [Op:__inference_train_function_11503]
```
| open | 2021-11-02T09:05:06Z | 2021-11-04T19:30:16Z | https://github.com/horovod/horovod/issues/3256 | [
"bug"
] | WaterKnight1998 | 2 |
STVIR/pysot | computer-vision | 604 | Has anyone successfully trained SiamMask? | I ran into the following error while training SiamMask. I checked the historical issues and this problem is still unresolved, so I am asking for help!
Traceback (most recent call last):
File "../../tools/train.py", line 317, in <module>
main()
File "../../tools/train.py", line 312, in main
train(train_loader, dist_model, optimizer, lr_scheduler, tb_writer)
File "../../tools/train.py", line 210, in train
outputs = model(data)
File "/home/work/anaconda3/envs/pysot/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/work/dingqishuai/pysot/pysot/utils/distributed.py", line 43, in forward
return self.module(*args, **kwargs)
File "/home/work/anaconda3/envs/pysot/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/work/dingqishuai/pysot/pysot/models/model_builder.py", line 115, in forward
outputs['total_loss'] += cfg.TRAIN.MASK_WEIGHT * mask_loss
TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'
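A guard like the following (a hypothetical sketch, not the actual model_builder code) would avoid the crash when the mask branch returns no loss, though the real question is why `mask_loss` is None in the first place, presumably because the mask branch is not actually training:

```python
def total_loss(cls_loss, loc_loss, mask_loss,
               w_cls=1.0, w_loc=1.0, w_mask=1.0):
    # Only add the mask term when the mask branch produced a loss;
    # mask_loss is None when that branch is disabled or misconfigured.
    total = w_cls * cls_loss + w_loc * loc_loss
    if mask_loss is not None:
        total += w_mask * mask_loss
    return total

assert total_loss(1.0, 2.0, None) == 3.0          # no crash without mask loss
assert total_loss(1.0, 2.0, 0.5, w_mask=2.0) == 4.0
```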
Does pysot only support SiamMask inference and evaluation, but not training? Will pysot continue to be maintained and updated? | open | 2023-12-19T07:21:11Z | 2023-12-19T07:21:11Z | https://github.com/STVIR/pysot/issues/604 | [] | dqs932 | 0 |
widgetti/solara | jupyter | 444 | ipycanvas doesn't work with solara | I want to develop an app that uses solara and [ipycanvas](https://github.com/jupyter-widgets-contrib/ipycanvas), but it seems like ipycanvas isn't supported.
App code:
```python
import solara
from ipycanvas import Canvas
@solara.component
def Page():
canvas = Canvas(width=200, height=200)
return canvas
```
Then, I start the app:
```sh
solara run app.py
```
And I get this error:
```pytb
Traceback (most recent call last):
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/reacton/core.py", line 1675, in _render
root_element = el.component.f(*el.args, **el.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eduardo/Desktop/jupyai-demos/app.py", line 58, in Page
canvas = RoughCanvas(width=width, height=height)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/ipycanvas/canvas.py", line 620, in __init__
super(Canvas, self).__init__(*args, **kwargs)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/ipywidgets/widgets/widget.py", line 506, in __init__
self.open()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/ipywidgets/widgets/widget.py", line 525, in open
state, buffer_paths, buffers = _remove_buffers(self.get_state())
^^^^^^^^^^^^^^^^
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/ipywidgets/widgets/widget.py", line 615, in get_state
value = to_json(getattr(self, k), self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/ipywidgets/widgets/widget.py", line 54, in _widget_to_json
return "IPY_MODEL_" + x.model_id
^^^^^^^^^^
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/solara/server/patch.py", line 367, in model_id_debug
raise RuntimeError(f"Widget has been closed, the stacktrace when the widget was closed is:\n{closed_stack[id(self)]}")
RuntimeError: Widget has been closed, the stacktrace when the widget was closed is:
File "/Users/eduardo/miniconda3/envs/tmp/bin/solara", line 8, in <module>
sys.exit(main())
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/solara/__main__.py", line 706, in main
cli()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/rich_click/rich_command.py", line 126, in main
rv = self.invoke(ctx)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/solara/__main__.py", line 428, in run
start_server()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/solara/__main__.py", line 400, in start_server
server.run()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/uvicorn/server.py", line 61, in run
return asyncio.run(self.serve(sockets=sockets))
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/asyncio/base_events.py", line 640, in run_until_complete
self.run_forever()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
self._run_once()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once
handle._run()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/uvicorn/server.py", line 68, in serve
config.load()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/uvicorn/config.py", line 467, in load
self.loaded_app = import_from_string(self.app)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string
module = importlib.import_module(module_str)
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/solara/server/starlette.py", line 47, in <module>
from . import app as appmod
File "<frozen importlib._bootstrap>", line 1232, in _handle_fromlist
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/solara/server/app.py", line 407, in <module>
apps["__default__"] = AppScript(os.environ.get("SOLARA_APP", "solara.website.pages:Page"))
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/solara/server/app.py", line 79, in __init__
dummy_kernel_context.close()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/solara/server/kernel_context.py", line 92, in close
widgets.Widget.close_all()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/ipywidgets/widgets/widget.py", line 351, in close_all
widget.close()
File "/Users/eduardo/miniconda3/envs/tmp/lib/python3.11/site-packages/solara/server/patch.py", line 387, in close_widget_debug
stacktrace = "".join(traceback.format_stack())
```
Is this expected? Are there any workarounds?
| open | 2024-01-04T22:11:45Z | 2024-01-17T15:25:21Z | https://github.com/widgetti/solara/issues/444 | [] | edublancas | 2 |
oegedijk/explainerdashboard | plotly | 43 | logins with one username and one password | Checking https://explainerdashboard.readthedocs.io/en/latest/deployment.html?highlight=auth#setting-logins-and-password
I was testing with one login and one password.
`logins=["U", "P"]` doesn't work (see below) but `logins=[["U", "P"]]` does.
I don't suppose there is a `login` kwarg, or that it could handle a flat list of length 2? It seems this is coming from dash_auth, so I could upstream this there.
```
File "src/dashboard_cel.py", line 24, in <module>
logins=["Celebrity", "Beyond"],
File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\e\lib\site-packages\explainerdashboard\dashboards.py", line 369, in __init__
self.auth = dash_auth.BasicAuth(self.app, logins)
File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\e\lib\site-packages\dash_auth\basic_auth.py", line 11, in __init__
else {k: v for k, v in username_password_list}
File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\e\lib\site-packages\dash_auth\basic_auth.py", line 11, in <dictcomp>
else {k: v for k, v in username_password_list}
ValueError: too many values to unpack (expected 2)
```
| closed | 2020-12-08T22:17:21Z | 2020-12-10T10:57:47Z | https://github.com/oegedijk/explainerdashboard/issues/43 | [] | raybellwaves | 3 |
tflearn/tflearn | data-science | 278 | Image size more than about 220 pixels, loss is NaN during ImageNet training | As I mentioned in this thread https://github.com/tflearn/tflearn/issues/262, when the image size is more than about 220 pixels, the loss is NaN, but if I decrease the image size, the loss and training accuracy are normal. How can I solve this so I can use a larger image size?
| open | 2016-08-12T15:53:20Z | 2016-08-12T21:09:42Z | https://github.com/tflearn/tflearn/issues/278 | [] | lfwin | 1 |
netbox-community/netbox | django | 18,808 | Clean up nonexistent squashed migrations | ### Proposed Changes
The `sqlmigrate` command fails with errors such as: `django.db.migrations.exceptions.NodeNotFoundError: Migration extras.0002_squashed_0059 dependencies reference nonexistent parent node ('dcim', '0002_auto_20160622_1821')`
This results from migration squashing where the `dependencies` list was not updated to point to the new squashed migrations.
Proposed change is to fix all `dependencies` pointers in existing migrations to ensure they point to the existing migrations containing the original (missing) parent nodes.
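Mechanically, the fix amounts to remapping each dangling dependency onto the squashed migration that lists it in its `replaces` attribute. A small illustration (hypothetical helper, not Django code):

```python
def fix_dependencies(dependencies, replaces_index):
    """Remap dangling (app, migration) dependencies onto squashed names.

    `replaces_index` maps {(app, old_name): new_squashed_name}, i.e. the
    inverse of each squashed migration's `replaces` list.
    """
    return [(app, replaces_index.get((app, name), name))
            for app, name in dependencies]

deps = [("dcim", "0002_auto_20160622_1821"), ("extras", "0001_initial")]
idx = {("dcim", "0002_auto_20160622_1821"): "0002_squashed_0059"}
assert fix_dependencies(deps, idx) == [
    ("dcim", "0002_squashed_0059"), ("extras", "0001_initial")]
```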
### Justification
Though `sqlmigrate` is rarely used (it produces the native SQL operations to be applied during migrations in case they need to be run manually), it is useful as a diagnostic tool and ought to work properly. (Note: `migrate` itself does not seem to have this issue.)
| closed | 2025-03-05T01:05:54Z | 2025-03-10T16:57:46Z | https://github.com/netbox-community/netbox/issues/18808 | [
"status: accepted",
"type: housekeeping"
] | bctiemann | 1 |
graphql-python/graphene-django | django | 971 | DjangoObjectType duplicate models breaks Relay node resolution | I have exactly the same issue as #107.
The proposed solution no longer works
How can this be done now, in the current state?
"🐛bug"
] | boolangery | 8 |
microsoft/JARVIS | pytorch | 65 | Can't run JARVIS in local full mode. | I downloaded all the models (28 models in 431GB) on my PC.
And I have hybrid/minimal mode running successfully.
But I can't have JARVIS running in local/full mode. (I do have 128GB RAM)
I got an error message saying that it can't load a file that actually exists. I checked the file's read permissions for the user running the process, and there was no problem.




| closed | 2023-04-06T09:24:20Z | 2023-04-07T10:31:34Z | https://github.com/microsoft/JARVIS/issues/65 | [] | meeeo | 2 |
inducer/pudb | pytest | 448 | RecursionError is not caught by the debugger | With something like
```py
def test():
raise ValueError
import pudb
pudb.set_trace()
test()
```
If you step over `test()`, it tells you that an exception has been raised. But with
```py
def test():
test()
import pudb
pudb.set_trace()
test()
```
The whole program exits with a traceback. In the case where you instead run `python -m pudb file.py`, the RecursionError is caught, but as an "uncaught exception" (post mortem).
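For reference, outside the debugger a RecursionError behaves like any other catchable exception, which is what makes the behavior above surprising:

```python
def recurse():
    recurse()

caught = False
try:
    recurse()
except RecursionError:   # an ordinary exception, catchable like ValueError
    caught = True
assert caught
```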
I browsed the pudb and bdb code and it isn't clear to me why this is happening. | open | 2021-05-03T23:03:24Z | 2021-05-03T23:29:10Z | https://github.com/inducer/pudb/issues/448 | [] | asmeurer | 2 |
ydataai/ydata-profiling | data-science | 883 | n_rows and n_columns must be positive integer. (Same as #853 and #836) | **Describe the bug**
This is similar to #853 and #836, but posting anyway just as another example with version info and a screenshot in Jupyter Notebook. Feel free to mark as a duplicate if desired.
When attempting the basic profile.to_widgets() example in the README, I encounter the attached error that n-rows and n-columns must be positive integers.
<img width="832" alt="pandas_error" src="https://user-images.githubusercontent.com/42592742/141694005-1c5d9f1e-dc3e-4c9b-8d16-2278f8c11e2d.PNG">
**To Reproduce**
follow the example in picture above
**Version information:**
Windows 10
Python: 3.8.5
Jupyter Notebook: 6.1.4
pandas-profiling: 3.1.0
**Additional context**
Just wanted to throw this out there since I've never used the profiling report and it seems the basic example is broken - I could be doing something obviously wrong though.
Happy to attempt a bug fix and PR if this is an actual issue. Let me know what else you all might need. | closed | 2021-11-14T18:43:49Z | 2023-01-27T19:16:13Z | https://github.com/ydataai/ydata-profiling/issues/883 | [
"bug 🐛"
] | WillTirone | 8 |
falconry/falcon | api | 1,422 | fix: Custom serializers not called for errors that inherit from NoRepresentation, OptionalRepresentation | This is a breakout issue from https://github.com/falconry/falcon/issues/452 - It could be potentially breaking in the case that a custom error serializer is not prepared to deal with these additional error types. | closed | 2019-01-31T21:20:47Z | 2019-02-14T22:18:04Z | https://github.com/falconry/falcon/issues/1422 | [
"bug",
"breaking-change"
] | kgriffs | 0 |
joouha/euporie | jupyter | 129 | Config Location resolve order | I was wondering if we could set an order to the config location? Perhaps it is just me, but on OSX I always have a `${HOME}/.config` directory to make my [dotfiles](https://github.com/stevenwalton/.dotfiles/tree/master/configs) much more portable, since this is where `$XDG_CONFIG_HOME` usually points to.
Request: first check `${HOME}/.config/euporie` and then `${HOME}/Library/Application Support/euporie/`
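A sketch of the lookup order I'm proposing (illustrative only; euporie's actual config loading surely differs):

```python
import os
from pathlib import Path

def config_dir_candidates(app: str = "euporie"):
    # XDG location first (respecting $XDG_CONFIG_HOME), then the
    # macOS default under ~/Library/Application Support.
    xdg = Path(os.environ.get("XDG_CONFIG_HOME", Path.home() / ".config"))
    return [xdg / app, Path.home() / "Library" / "Application Support" / app]

def find_config_dir(app: str = "euporie"):
    for candidate in config_dir_candidates(app):
        if candidate.is_dir():
            return candidate
    return None
```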
It may just be me that does this and if so then it's probably not worth bothering with | open | 2025-02-07T01:15:50Z | 2025-02-07T01:15:50Z | https://github.com/joouha/euporie/issues/129 | [] | stevenwalton | 0 |
Ehco1996/django-sspanel | django | 127 | Problem running the backend, please help take a look. | [root@server1 shadowsocksr]# python server.py
IPv6 support
Traceback (most recent call last):
File "server.py", line 74, in <module>
main()
File "server.py", line 54, in main
if get_config().API_INTERFACE == 'mudbjson':
AttributeError: 'NoneType' object has no attribute 'API_INTERFACE' | closed | 2018-05-28T05:26:30Z | 2018-07-30T03:19:32Z | https://github.com/Ehco1996/django-sspanel/issues/127 | [] | vggh66 | 2 |
onnx/onnx | tensorflow | 5,818 | Built a TRT engine from an ONNX model, but the result is different. Why? | The results of the .pth and ONNX models are the same, but when I build the TensorRT engine from the ONNX model, the result is different. Why?
PyTorch 2, TensorRT 8.6, Linux, 4090 GPU. | closed | 2023-12-21T02:26:34Z | 2023-12-21T08:42:12Z | https://github.com/onnx/onnx/issues/5818 | [
"question"
] | henbucuoshanghai | 2 |
piskvorky/gensim | data-science | 3,168 | Error with older wiki dumps | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I am trying to load wiki dump and extract articles for word2vec training. This works well for more recent dumps. But for older dumps (e.g., 2010 dump), it fails.
#### Steps/code/corpus to reproduce
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
If your problem is with a specific Gensim model (word2vec, lsimodel, doc2vec, fasttext, ldamodel etc), include the following:
```python
import multiprocessing
from gensim.corpora.wikicorpus import WikiCorpus
from gensim.models.word2vec import Word2Vec
import logging
logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
logging.root.setLevel(level=logging.INFO)
wiki_dump= './enwiki-20100312-pages-articles.xml.bz2'
wiki= WikiCorpus(fname= wiki_dump,
lower= False,
lemmatize=False,
dictionary={}, #not needed for word2vec (https://groups.google.com/u/1/g/gensim/c/aI7vbNCxhb8)
processes= max(1, multiprocessing.cpu_count() - 1),
token_min_len=1,
token_max_len=50)
txt_file = './enwiki_20100312_extracted_articles_v1.txt'
with open(txt_file, 'w') as f:
for i, text in enumerate(wiki.get_texts()):
f.write(" ".join(text) + "\n")
if i % 50000 == 0:
logging.info("Saved %d articles" % i)
logging.info("Finished extract wiki, Saved in %s" % txt_file)
```
`Process InputQueue-24:
Traceback (most recent call last):
File "/anaconda/envs/spacy_v3/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/user-sas01/.local/lib/python3.8/site-packages/gensim-4.0.0b0-py3.8-linux-x86_64.egg/gensim/utils.py", line 1215, in run
wrapped_chunk = [list(chunk)]
File "/home/user-sas01/.local/lib/python3.8/site-packages/gensim-4.0.0b0-py3.8-linux-x86_64.egg/gensim/corpora/wikicorpus.py", line 679, in <genexpr>
texts = (
File "/home/user-sas01/.local/lib/python3.8/site-packages/gensim-4.0.0b0-py3.8-linux-x86_64.egg/gensim/corpora/wikicorpus.py", line 430, in extract_pages
ns = elem.find(ns_path).text
AttributeError: 'NoneType' object has no attribute 'text'
`
#### Versions
Please provide the output of:
```python
Linux-5.4.0-1047-azure-x86_64-with-glibc2.10
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0]
Bits 64
NumPy 1.19.2
SciPy 1.6.0
gensim 4.0.0beta
FAST_VERSION 1
```
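From the traceback, my guess is that pre-2011 dumps predate the `<ns>` element on pages, so `elem.find(ns_path)` returns None. A defensive lookup like the following (a hypothetical patch sketch) handles both schemas on a toy element:

```python
import xml.etree.ElementTree as ET

def page_namespace(elem, ns_path="ns", default="0"):
    # Older dump schemas have no <ns> tag; fall back to namespace "0"
    # (articles) instead of dereferencing None.
    node = elem.find(ns_path)
    return node.text if node is not None else default

new_page = ET.fromstring("<page><ns>0</ns><title>A</title></page>")
old_page = ET.fromstring("<page><title>A</title></page>")
assert page_namespace(new_page) == "0"
assert page_namespace(old_page) == "0"
```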
| open | 2021-06-09T06:00:23Z | 2021-06-09T08:33:17Z | https://github.com/piskvorky/gensim/issues/3168 | [] | santoshbs | 3 |
nerfstudio-project/nerfstudio | computer-vision | 2,805 | Got an unexpected keyword argument 'point_shape' | I was trying to train the splatfacto model with the following command:
` ns-train splatfacto --pipeline.model.cull_alpha_thresh=0.005 --pipeline.model.continue_cull_post_densification=False --data ../../data/skoda_ultra_wide/ --output-dir ../../skoda_ultra_wide`
I had this error:
`File "/User/.local/bin/ns-train", line 8, in <module>
sys.exit(entrypoint())
File "/User3D/nerfstudio/nerfstudio/scripts/train.py", line 262, in entrypoint
main(
File "/User/3D/nerfstudio/nerfstudio/scripts/train.py", line 247, in main
launch(
File "/User/3D/nerfstudio/nerfstudio/scripts/train.py", line 189, in launch
main_func(local_rank=0, world_size=world_size, config=config)
File "/User/3D/nerfstudio/nerfstudio/scripts/train.py", line 99, in train_loop
trainer.setup()
File "/User/3D/nerfstudio/nerfstudio/engine/trainer.py", line 178, in setup
self.viewer_state = ViewerState(
File "/User/3D/nerfstudio/nerfstudio/viewer/viewer.py", line 263, in __init__
self.viser_server.add_point_cloud(
TypeError: MessageApi.add_point_cloud() got an unexpected keyword argument 'point_shape' `
I simply run the command on a data folder that was the result of a `ns-process` command. It asked me this:
`load_3D_points is true, but the dataset was processed with an outdated ns-process-data that didn't convert colmap points to .ply! Update the colmap dataset automatically? [y/n]:`
I answered y, and from that moment on it gives me that error. | closed | 2024-01-22T13:21:19Z | 2024-01-22T14:46:00Z | https://github.com/nerfstudio-project/nerfstudio/issues/2805 | [] | SalvoPisciotta | 0 |
JoeanAmier/XHS-Downloader | api | 207 | When will passing a Cookie in API mode be available in the Docker image? | When will passing a Cookie in API mode be available in the Docker image? | open | 2024-12-21T13:43:38Z | 2025-01-17T11:51:38Z | https://github.com/JoeanAmier/XHS-Downloader/issues/207 | [] | kiko923 | 3 |
deepinsight/insightface | pytorch | 2,107 | Why you remove last relu of resnet block? | Hi,
In a standard ResNet block, there is a relu at the end of the block (https://github.com/open-mmlab/mmclassification/blob/master/mmcls/models/backbones/resnet.py#L131), but you don't have one (https://github.com/deepinsight/insightface/blob/master/recognition/arcface_torch/backbones/iresnet.py#L57). I tried adding a relu but got a very bad result on my own dataset.
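My current understanding (happy to be corrected): dropping the final relu keeps the residual stream signed, closer to a pre-activation layout. A toy illustration with conv/bn stubbed out as a plain function:

```python
relu = lambda x: max(x, 0.0)

def standard_block(x, f):
    # torchvision-style BasicBlock: relu applied AFTER the residual add
    return relu(f(x) + x)

def iresnet_block(x, f):
    # arcface iresnet-style block: residual add with no trailing relu
    return f(x) + x

branch = lambda v: 0.5 * v
assert standard_block(-1.0, branch) == 0.0    # negative output clipped away
assert iresnet_block(-1.0, branch) == -1.5    # signed information preserved
```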
Any reason you made the change? | closed | 2022-09-16T09:53:00Z | 2022-09-24T01:35:48Z | https://github.com/deepinsight/insightface/issues/2107 | [] | twmht | 0 |
python-gino/gino | asyncio | 710 | How to turn off SQL statement printing | Setting `echo` does not seem to work; how can I turn it off?
```
db = Gino(
dsn=db_dsn,
pool_min_size=config.DB_POOL_MIN_SIZE,
pool_max_size=config.DB_POOL_MAX_SIZE,
    echo=config.DB_ECHO,
ssl=config.DB_SSL,
use_connection_for_request=config.DB_USE_CONNECTION_FOR_REQUEST,
retry_limit=config.DB_RETRY_LIMIT,
retry_interval=config.DB_RETRY_INTERVAL,
)
```
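One common gotcha (just a guess at what is happening here): if `DB_ECHO` comes from an environment variable, it is a string, and any non-empty string, including "False", is truthy, so echo stays on. A quick check, plus the usual logger-level workaround:

```python
import logging
import os

os.environ["DB_ECHO"] = "False"          # typical env-var configuration
raw = os.environ["DB_ECHO"]
assert bool(raw) is True                 # non-empty string => truthy!

echo = raw.lower() in ("1", "true", "yes")
assert echo is False                     # parse it explicitly instead

# Independently, SQLAlchemy's statement logging can be silenced like this:
logging.getLogger("sqlalchemy.engine").setLevel(logging.WARNING)
```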
| closed | 2020-07-21T13:06:01Z | 2022-11-19T01:10:42Z | https://github.com/python-gino/gino/issues/710 | [
"bug"
] | oleeks | 5 |
Asabeneh/30-Days-Of-Python | pandas | 535 | Close useless issues and merge pull requests. | I was looking through the issues and pull requests tabs and I noticed a lot of proper pull requests that would improve the 30 DoP repo. I would be willing to, but otherwise somebody should clean up those tabs. | open | 2024-06-29T17:44:16Z | 2024-06-29T17:44:16Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/535 | [] | SlothScript | 0 |
Miserlou/Zappa | django | 1,884 | How does Zappa actually work and best practices? | I'm trying to figure out how Zappa actually works and what's going on under-the-hood, along with best practices for deploying into production.
I've been playing around with it, and even though Zappa actually links to "guides", none actually focus on "here are the solid foundations and the `n` best practices for deployment". Instead, they all just kinda have the same 3-4 commands showing how to deploy. No meat, just bones.
As an example, I'm testing a Falcon API with Zappa. So far I've only been able to make it work if Zappa is a requirement for pip. Is that needed? I have some code like this:
```python
import falcon
class TestResource(object):
def on_get(self, req, resp):
resp.status = falcon.HTTP_200
resp.body = "Hello"
app = falcon.API()
test = TestResource()
app.add_route("/hello", test)
```
I've seen example projects, Falcon specific, using custom handlers for WSGI and lambda handling. Again, is this needed? Why? Do I need an empty `__init__.py` in the project source too?
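My mental model so far (please correct me if it is wrong): Zappa's handler just translates the Lambda event into a WSGI `environ` and calls the app object, so any WSGI callable, which `falcon.API()` is, should work without a custom handler. A toy version of that translation:

```python
def app(environ, start_response):
    # stand-in for what falcon.API() ultimately is: a WSGI callable
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello"]

def call_wsgi(wsgi_app, path):
    """Minimal driver, the role Zappa's handler plays for a Lambda event."""
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers

    environ = {"REQUEST_METHOD": "GET", "PATH_INFO": path}
    body = b"".join(wsgi_app(environ, start_response))
    return captured["status"], body

status, body = call_wsgi(app, "/hello")
assert status == "200 OK" and body == b"Hello"
```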
The documentation seems to have a lot of concrete examples for specific use cases only. But nothing telling me *why*. I also can't be bothered to start digging around in the source code for answers right now. | closed | 2019-06-11T13:46:24Z | 2019-06-17T09:26:35Z | https://github.com/Miserlou/Zappa/issues/1884 | [] | ghost | 0 |
dynaconf/dynaconf | django | 858 | Suggestions on how to `reverse` in my Django settings.yaml | I currently define settings like `LOGIN_URL` as `reverse_lazy("account_login")`. Is there a way to replicate this type of configuration into something I can use in my Dynaconf settings.yaml?
If possible, I'd prefer to keep all settings in my Dynaconf settings.yaml and keep my Django settings.py file as empty as possible. | closed | 2023-02-04T15:34:20Z | 2023-03-30T19:30:37Z | https://github.com/dynaconf/dynaconf/issues/858 | [
"question",
"Docs"
] | wgordon17 | 6 |
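One approach to the dynaconf question above is a converter token in the settings file, so the YAML stays declarative and the callable is resolved at load time (dynaconf's documentation describes an `add_converter()` hook along these lines; check your version). Below is a stdlib-only mimic of the token idea with hypothetical names, not dynaconf's actual code.

```python
# Hypothetical token registry: values written as "@token argument" in a
# settings file are cast through a registered callable at load time.
CONVERTERS = {}

def add_converter(token, func):
    CONVERTERS[token] = func

def cast(raw):
    if isinstance(raw, str) and raw.startswith("@"):
        token, _, arg = raw[1:].partition(" ")
        if token in CONVERTERS:
            return CONVERTERS[token](arg)
    return raw

# Stand-in for django.urls.reverse_lazy:
add_converter("reverse_lazy", lambda name: f"/resolved/{name}")

settings = {k: cast(v) for k, v in {
    "LOGIN_URL": "@reverse_lazy account_login",
    "DEBUG": "true",
}.items()}
print(settings["LOGIN_URL"])  # /resolved/account_login
```

With the real library, the registration would live in settings.py (kept minimal, as the reporter wants) while the YAML carries only the token string.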
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,357 | Some problems encountered during testing | After I trained the cyclegan model, why did this result appear in the model test? I trained it twice. | closed | 2021-12-27T01:10:45Z | 2021-12-27T12:50:04Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1357 | [] | supersarr | 2 |
hankcs/HanLP | nlp | 1,344 | The online demo and the code produce different dependency parsing results. | <!--
The notes and version number are required; otherwise the issue will not be answered. To get a reply as soon as possible, please fill in the template carefully. Thank you.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer in any of them:
 - [Home documentation](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that the open-source community is a free community formed out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I put an x in these brackets to confirm the items above
## Version
<!-- For release builds, give the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is:
The version I am using is:
<!-- The items above are required; the rest is free-form -->
The problem exists in both hanlp-1.7.5 and master
## My question
Online demo: "http://www.hanlp.com/?sentence=打开空调调高温度"
differs from the result produced by the source code
<!-- Please describe the problem in detail; the more detail you give, the more likely it is to be solved -->
Comparing the dependency parsing results of the online demo with DemoDependencyParser in master and hanlp-1.7.5, parsing the sentence "打开空调调高温度" ("turn on the air conditioner and raise the temperature") gives different results. Are different models being used?
## Reproducing the problem
<!-- What did you do to cause the problem? E.g., did you modify the code, the dictionaries, or the models? -->
Nothing was modified.
Result from master and the jar package:
1 打开 打开 v v _ 0 核心关系 _ _
2 空调 空调 n n _ 1 动宾关系 _ _
3 调高 调高 v v _ 4 定中关系 _ _
4 温度 温度 n n _ 1 动宾关系 _ _
Online demo result:

| closed | 2019-12-07T07:02:21Z | 2019-12-07T07:26:31Z | https://github.com/hankcs/HanLP/issues/1344 | [
"duplicated"
] | mfxss | 1 |
clovaai/donut | nlp | 173 | OSError: Unable to load vocabulary when using with Windows | Trying to run `train.py` for a new language based on a corpus I generated with SynthDoG, running the command `python train.py --config config/base.yaml --exp_version "base"` on up-to-date Windows 11 inside a conda virtualenv. Dev mode in Windows is activated, and I've launched Anaconda as admin; cmd also shows Administrator: at the top.
The error is as follows:
```
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\tokenization_utils_base.py", line 1958, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\models\xlm_roberta\tokenization_xlm_roberta.py", line 168, in __init__
self.sp_model.Load(str(vocab_file))
File "C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\sentencepiece\__init__.py", line 905, in Load
return self.LoadFromFile(model_file)
File "C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\sentencepiece\__init__.py", line 310, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "C:\Users\Csanád\.cache\models--naver-clova-ix--donut-base\snapshots\a959cf33c20e09215873e338299c900f57047c61\sentencepiece.bpe.model": No such file or directory Error #2
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Csanád\Documents\Kontron\donut-master\train.py", line 149, in <module>
train(config)
File "C:\Users\Csanád\Documents\Kontron\donut-master\train.py", line 57, in train
model_module = DonutModelPLModule(config)
File "C:\Users\Csanád\Documents\Kontron\donut-master\lightning_module.py", line 30, in __init__
self.model = DonutModel.from_pretrained(
File "C:\Users\Csanád\Documents\Kontron\donut-master\donut\model.py", line 594, in from_pretrained
model = super(DonutModel, cls).from_pretrained(pretrained_model_name_or_path, revision="official", *model_args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\modeling_utils.py", line 2498, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "C:\Users\Csanád\Documents\Kontron\donut-master\donut\model.py", line 390, in __init__
self.decoder = BARTDecoder(
File "C:\Users\Csanád\Documents\Kontron\donut-master\donut\model.py", line 159, in __init__
self.tokenizer = XLMRobertaTokenizer.from_pretrained(
File "C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\tokenization_utils_base.py", line 1804, in from_pretrained
return cls._from_pretrained(
File "C:\ProgramData\Anaconda3\envs\Donut\lib\site-packages\transformers\tokenization_utils_base.py", line 1960, in _from_pretrained
raise OSError(
OSError: Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.
```
When running by default, the cache link is incorrect, as it generates something like "C:\User\me/.cache\etc.". I manually changed the cache, but still, after downloading about a GB of model files, the folder only contains SYMLINK files that are 0 KB. Even when I manually download all the files and add them in, the path won't be recognized. Copy-pasting the apparently erroneous file path from the cmd stack trace into a Python `with open()` statement, it seems to open just fine. I have no idea what's wrong, but I'd really like some help, I'm going crazy. It's saying files aren't there, but they are. | open | 2023-03-30T20:37:32Z | 2023-03-30T20:57:07Z | https://github.com/clovaai/donut/issues/173 | [] | csanadpoda | 0 |
biolab/orange3 | scikit-learn | 6,180 | Widget save data shuffle the table columns | Hi
I'm just starting to use Orange, and I realised that the "Save Data" widget shuffles the columns when writing the Excel file.

This is how the Save Data widget sees the input table (just the first columns):

This is how it comes out in the file:

<!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system:
- Orange version:
- How you installed Orange:
| closed | 2022-10-21T13:12:10Z | 2023-01-20T10:07:02Z | https://github.com/biolab/orange3/issues/6180 | [
"wish",
"meal"
] | EGalloni | 3 |
MycroftAI/mycroft-core | nlp | 3,145 | Unsupported locale setting with debug | **Describe the bug**
On my Raspberry Pi 2B, when I run `./start-mycroft.sh debug`, I get the following error:
```
pi@raspi2b:~/mycroft-core $ ./start-mycroft.sh debug
Already up to date.
Starting all mycroft-core services
Initializing...
Starting background service bus
CAUTION: The Mycroft bus is an open websocket with no built-in security
         measures. You are responsible for protecting the local port
         8181 with a firewall as appropriate.
Starting background service skills
Starting background service audio
Starting background service voice
Starting background service enclosure
Starting cli
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/pi/mycroft-core/mycroft/client/text/__main__.py", line 21, in <module>
    from .text_client import (
  File "/home/pi/mycroft-core/mycroft/client/text/text_client.py", line 40, in <module>
    locale.setlocale(locale.LC_ALL, "") # Set LC_ALL to user default
  File "/usr/lib/python3.7/locale.py", line 604, in setlocale
    return _setlocale(category, locale)
locale.Error: unsupported locale setting
```
**To Reproduce**
Steps to reproduce the behavior:
1. Run `./start-mycroft.sh debug`
2. See error in logs
**Expected behavior**
Starting the debug
**Environment (please complete the following information):**
- Device type: Raspberry Pi 2 Model B Rev 1.1
- OS: Raspbian GNU/Linux 10 (buster) armv7l
- Mycroft-core version: just downloaded from main github today | closed | 2022-12-17T21:32:19Z | 2023-06-01T15:06:14Z | https://github.com/MycroftAI/mycroft-core/issues/3145 | [
"bug"
] | FQQD | 2 |
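A stdlib-only illustration of the failure mode in the issue above: `locale.setlocale(locale.LC_ALL, "")` raises `locale.Error` when the environment advertises a locale that is not generated on the system. The fallback below is an assumption about a possible fix, not Mycroft's actual code.

```python
import locale

# setlocale(LC_ALL, "") reads the locale from the environment (LANG,
# LC_ALL, ...); if that locale is not installed, it raises locale.Error.
def set_locale_or_fallback():
    try:
        return locale.setlocale(locale.LC_ALL, "")
    except locale.Error:
        # Fall back to the minimal locale that is always available.
        return locale.setlocale(locale.LC_ALL, "C")

print(set_locale_or_fallback())
```

The more direct fix on the Pi itself is to generate the configured locale (e.g. via `sudo raspi-config` or `dpkg-reconfigure locales`) so the environment and the installed locales agree.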
iperov/DeepFaceLab | machine-learning | 800 | VRAM not fully utilized | Only 7159MiB / 8192MiB of VRAM is used. There was an OOM error even though the GPU's VRAM wasn't full; there is still 1GB left unused. | closed | 2020-06-24T20:25:28Z | 2020-07-06T22:08:36Z | https://github.com/iperov/DeepFaceLab/issues/800 | [] | test1230-lab | 2 |
nerfstudio-project/nerfstudio | computer-vision | 3,501 | Update installation guide to work better for windows, and also simplify setup in general | **Intro**
So the issue I have had is that I got a new laptop recently and had to set it up from scratch - and since I only have the Community edition of Visual Studio, I only have the newest version available (today that is 17.11.x, which is not compatible with CUDA 11.8).
I have tried, I think, every single possible way to install nerfstudio on my computer - trying the unofficial upgrade of the CUDA toolkit to 12.4 with a newer version of torch and so on, but that only led to more headaches. I also tried installing an earlier version of Visual Studio Build Tools (17.8, which is compatible with CUDA 11.8); that almost worked, but crashed when running ns-train with gsplat.
So to sum it up - I spent an embarrassingly long time yesterday trying to install nerfstudio on my new laptop.
**The solution**
So when I woke up today, I remembered that it is possible to install WSL to run Linux on Windows computers - and guess what? It worked almost out of the box.
WSL is also especially nice since you don't need to open a virtual machine or anything; it simply runs as its own command prompt on Windows - while also being able to run Linux code!
I had to make some small tweaks, the first of which was to ensure Miniconda was installed in the Linux environment:
```shell
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
```
Activate it:
```shell
source ~/.bashrc
```
Set the g++ version to 11:
```shell
sudo apt update
sudo apt install gcc-11 g++-11
sudo apt install -y build-essential
```
I also made sure to set GCC 11 as the default for the installation process (not sure if this is strictly necessary):
```shell
export CC=/usr/bin/gcc-11
export CXX=/usr/bin/g++-11
```
I also had to make sure the CUDA toolkit was installed correctly in WSL:
```shell
sudo apt update
sudo apt install nvidia-cuda-toolkit
```
Once all of the previous steps are done, you can simply follow the Linux installation guide, and it worked (for me at least).
I really hope this helps anyone experiencing issues installing nerfstudio on newer Windows PCs, and hopefully it ends up in the actual guide for installing nerfstudio! :D Let's make it easier for everyone to use these amazing tools!
Also a side note: if anyone with better software knowledge than me reads through this, the order of things here might be a bit off (that's because I fixed issues as they came up when trying to install nerfstudio..). | open | 2024-10-27T10:10:54Z | 2025-01-16T19:01:48Z | https://github.com/nerfstudio-project/nerfstudio/issues/3501 | [] | michael-vedeler | 2 |
anselal/antminer-monitor | dash | 175 | max worker must be greater than 0 | 
| closed | 2020-08-28T12:48:01Z | 2021-07-13T05:43:57Z | https://github.com/anselal/antminer-monitor/issues/175 | [
":bug: bug"
] | jcreyesb | 5 |
huggingface/transformers | pytorch | 36,104 | I get OSError: ... is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' for valid models | ### System Info
OS: Windows 11
Python: Both 3.11.6 and 3.12.9
Pytorch: Both 2.2.0 and 2.6.0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import OmDetTurboProcessor, OmDetTurboForObjectDetection
processor = OmDetTurboProcessor.from_pretrained("omlab/omdet-turbo-swin-tiny-hf")
model = OmDetTurboForObjectDetection.from_pretrained("omlab/omdet-turbo-swin-tiny-hf")
```
I can't reproduce this in Colab, so I figure it's my system, but I can't figure out why. I also tried RT-DETR and got similar errors.
Full Traceback:
```
Traceback (most recent call last):
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/omlab/omdet-turbo-swin-tiny-hf/resolve/main/preprocessor_config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
resolved_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 860, in hf_hub_download
return _hf_hub_download_to_cache_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 967, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 1482, in _raise_on_head_call_error
raise head_call_error
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 1374, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 1294, in get_hf_file_metadata
r = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 278, in _request_wrapper
response = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 302, in _request_wrapper
hf_raise_for_status(response)
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-67a9034f-3221be180265fb393ff1b352;afb39851-99d5-491d-8d27-f58783b491da)
Repository Not Found for url: https://huggingface.co/omlab/omdet-turbo-swin-tiny-hf/resolve/main/preprocessor_config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid credentials in Authorization header
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\obkal\Desktop\cbtest\omdet\detect.py", line 5, in <module>
processor = OmDetTurboProcessor.from_pretrained("omlab/omdet-turbo-swin-tiny-hf")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\processing_utils.py", line 974, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\processing_utils.py", line 1020, in _get_arguments_from_pretrained
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\image_processing_base.py", line 209, in from_pretrained
image_processor_dict, kwargs = cls.get_image_processor_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\image_processing_base.py", line 341, in get_image_processor_dict
resolved_image_processor_file = cached_file(
^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
raise EnvironmentError(
OSError: omlab/omdet-turbo-swin-tiny-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
``` | closed | 2025-02-09T20:00:14Z | 2025-02-10T19:36:15Z | https://github.com/huggingface/transformers/issues/36104 | [
"bug"
] | ogkalu2 | 3 |
iperov/DeepFaceLab | machine-learning | 5,516 | RTM WF faceset | My data_src face size is 768x768, while the faces in RTM WF Faceset.zip (https://disk.yandex.ru/d/7i5XTKIKVg5UUg/DeepFaceLab/Facesets) are a different size (768x768 vs 224x224). To create my own RTM model, must I change the RTM WF Faceset.zip face size to 768x768, or can I use it without changing the image size? | open | 2022-05-10T17:00:07Z | 2023-06-08T23:19:00Z | https://github.com/iperov/DeepFaceLab/issues/5516 | [] | CHRISTOPHERMOSLEY1975 | 2 |
JoeanAmier/XHS-Downloader | api | 54 | Could the publication time of the work be added to the front of the saved file name? | open | 2024-02-25T17:21:53Z | 2024-02-26T11:54:53Z | https://github.com/JoeanAmier/XHS-Downloader/issues/54 | [] | mortgoupil | 1 |
lepture/authlib | flask | 349 | Cannot validate custom claims in jwt | https://github.com/lepture/authlib/blob/8b7c35ce5cec230a8fc984486c2ea2c81cfa7c31/authlib/jose/rfc7519/claims.py#L88
I've noticed that this method does not trigger validation for custom claims that I have added via validation options in the constructor.
For example I've added these custom validators:
```python
def validate_int(value):
    return isinstance(value, int)


def validate_game_id(value):
    if not validate_int(value):
        return False
    return None is not Game.query.get(value)


validation_options = {
    'iss': {
        'essential': True,
        'values': ['yeepa'],
    },
    'iat': {
        'essential': True,
        'validate': JWTClaims.validate_iat,
    },
    'exp': {
        'essential': True,
        'validate': JWTClaims.validate_exp,
    },
    'user_id': {
        'essential': True,
        'validate': validate_int,
    },
    'game_id': {
        'essential': True,
        'validate': validate_game_id,
    },
    'game_run_id': {
        'essential': True,
        'validate': validate_int,
    },
}
```
And indeed missing status is [validated here](https://github.com/lepture/authlib/blob/8b7c35ce5cec230a8fc984486c2ea2c81cfa7c31/authlib/jose/rfc7519/claims.py#L90) but the validation routines are never called.
This should either be documented, i.e., that setting custom validation rules is only supported for certain values (iss, sub seem to be covered), or all custom validation rules should be evaluated. | closed | 2021-05-25T15:33:18Z | 2021-06-09T04:01:55Z | https://github.com/lepture/authlib/issues/349 | [
"feature request"
] | dwt | 0 |
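For reference, here is a stdlib-only mimic of the behaviour the reporter expected: iterate the options dict and run any per-claim `validate` callable. This is NOT authlib's implementation; in authlib only registered claims get validated this way, so custom checks are typically done by subclassing `JWTClaims` instead.

```python
# Hypothetical validation loop, stdlib only.
class ClaimError(ValueError):
    pass

def validate_claims(claims, options):
    for name, opts in options.items():
        if name not in claims:
            if opts.get("essential"):
                raise ClaimError(f"missing claim: {name}")
            continue
        value = claims[name]
        # "values" restricts the claim to an allow-list.
        if "values" in opts and value not in opts["values"]:
            raise ClaimError(f"invalid value for {name}")
        # "validate" is a user-supplied predicate on the claim value.
        check = opts.get("validate")
        if check is not None and not check(value):
            raise ClaimError(f"validation failed for {name}")

options = {
    "iss": {"essential": True, "values": ["yeepa"]},
    "user_id": {"essential": True, "validate": lambda v: isinstance(v, int)},
}
validate_claims({"iss": "yeepa", "user_id": 7}, options)  # passes silently
```

Whether such a loop should live in the library (as requested) or in a `JWTClaims` subclass is exactly the documentation question this issue raises.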
yzhao062/pyod | data-science | 96 | Installation is broken due to missing sklearn.externals.funcsigs module | Trying to install pyod as usual:
`pip install pyod`.
By default it now uses sklearn==0.21.0, which does not have sklearn.externals.funcsigs anymore.
Running
`from pyod.models.knn import KNN`
gives an error "No module named sklearn.externals.funcsigs"
Can be fixed by using scikit-learn<0.21 | closed | 2019-05-13T12:37:19Z | 2019-05-13T16:43:09Z | https://github.com/yzhao062/pyod/issues/96 | [] | lavsurgut | 3 |
ydataai/ydata-profiling | jupyter | 906 | HTML rendering KeyError with custom Typeset | **Describe the bug**
I am experimenting with customized Typesets and Summarizers.
When generating the `description_set` I have no issues, but when trying to render the HTML file, I have a `KeyError` issue in the `render_map`
```
~\Anaconda3\envs\py39\lib\site-packages\pandas_profiling\report\structure\report.py in render_variables_section(config, dataframe_summary)
100 # Per type template variables
101 template_variables.update(
--> 102 render_map[summary["type"]](config, template_variables)
103 )
104
KeyError: 'MyNewType'
```
Because my new type is not in the `render_map` definition, I get a KeyError. See the definition of render_map in `model/handler.py`
```python
def get_render_map() -> Dict[str, Callable]:
    import pandas_profiling.report.structure.variables as render_algorithms

    render_map = {
        "Boolean": render_algorithms.render_boolean,
        "Numeric": render_algorithms.render_real,
        "Complex": render_algorithms.render_complex,
        "DateTime": render_algorithms.render_date,
        "Categorical": render_algorithms.render_categorical,
        "URL": render_algorithms.render_url,
        "Path": render_algorithms.render_path,
        "File": render_algorithms.render_file,
        "Image": render_algorithms.render_image,
        "Unsupported": render_algorithms.render_generic,
    }
```
**Quick fix I made in `report\structure\report.py`:**
Since we cannot define a rendering mapping to the ProfileReport object (maybe in the future ?), I decided to treat all unexpected typess as Unsupported for now.
```python
# At line 100
# Per type template variables
if summary["type"] in render_map.keys():
    template_variables.update(
        render_map[summary["type"]](config, template_variables)
    )
else:
    # If we don't have a renderer for this type, use Unsupported
    template_variables.update(
        render_map["Unsupported"](config, template_variables)
    )
```
Let me know if I should make a PR to implement this fix, or if there is another obvious thing I missed when trying to fix it.
**Version information:**
* _Python version_: I tested it on two conda environments, `3.7.6` and `3.9.9`
* _Environment_: On a local Jupyter Notebook
* _`pip`_: I'm using the latest version of pandas-profiling 3.1.0 | closed | 2022-01-13T09:56:29Z | 2022-02-02T09:30:03Z | https://github.com/ydataai/ydata-profiling/issues/906 | [] | ArnaudWald | 1 |
nicodv/kmodes | scikit-learn | 100 | Parallel Computing | Hi!
I somehow can't manage to get the parallel computation working...
I get an error as follows:

Furthermore, I'm pretty new to this topic, but I wondered what the difference would be if I just reduced my data to the objects (X1,X2,...,Xn) which are unique - how would that affect the outcome?
As far as I can see from the theorem in the paper:

it basically tells us that the frequency of an attribute's category should be bigger within a cluster than in the whole set of objects - right?
And if that is so, then if I only take the unique objects, how would that change my result? | closed | 2019-01-16T18:22:23Z | 2019-01-25T19:52:08Z | https://github.com/nicodv/kmodes/issues/100 | [
"question"
] | shmulik90 | 4 |
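A small worked example of the frequency condition quoted in the issue above: within a good cluster, a category's relative frequency should exceed its relative frequency in the whole data set. Toy data with a single categorical attribute and a hypothetical cluster assignment.

```python
from collections import Counter

data = ["a", "a", "a", "b", "b", "c"]   # all objects
cluster = ["a", "a", "a", "b"]          # hypothetical cluster

def rel_freq(values):
    # Relative frequency of each category within `values`.
    n = len(values)
    return {k: v / n for k, v in Counter(values).items()}

global_f = rel_freq(data)      # a: 0.5, b: 0.333..., c: 0.166...
cluster_f = rel_freq(cluster)  # a: 0.75, b: 0.25

print(cluster_f["a"] > global_f["a"])  # True: 'a' is over-represented
```

This also suggests an answer to the deduplication question: keeping only unique objects collapses every category's count to one per distinct object, so the relative frequencies - and therefore the modes k-modes picks - would generally change.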
deepspeedai/DeepSpeed | pytorch | 5,883 | [BUG] deepspeed.utils.safe_get_full_grad get all nan value | **Describe the bug**
Hello, I am trying to train a model with DeepSpeed ZeRO; this is my script:
```
import os
import torch
import deepspeed
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
from torch.utils.data import DataLoader, TensorDataset

# Constants
MODEL_NAME = "facebook/opt-125m"
MAX_LENGTH = 512
PER_DEVICE_TRAIN_BATCH_SIZE = 2
GRADIENT_ACCUMULATION_STEPS = 1
NUM_EPOCHS = 1
LEARNING_RATE = 5e-5
WEIGHT_DECAY = 0.01
LOGGING_STEPS = 10
MAX_STEPS = 100


def main():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    tokenizer.pad_token = tokenizer.eos_token

    # Creating a dummy dataset
    dummy_data = torch.randint(0, tokenizer.vocab_size, (1000, MAX_LENGTH))
    dummy_labels = dummy_data.clone()
    train_dataset = TensorDataset(dummy_data, dummy_labels)
    train_loader = DataLoader(train_dataset, batch_size=PER_DEVICE_TRAIN_BATCH_SIZE, shuffle=True)

    config = AutoConfig.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_config(config)
    model.gradient_checkpointing_enable()

    optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY)

    model_engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        optimizer=optimizer,
        config_params={
            "train_batch_size": 4,
            "train_micro_batch_size_per_gpu": PER_DEVICE_TRAIN_BATCH_SIZE,
            "gradient_accumulation_steps": GRADIENT_ACCUMULATION_STEPS,
            "fp16": {
                "enabled": True
            },
            "zero_optimization": {
                "stage": 1
            }
        }
    )

    for epoch in range(NUM_EPOCHS):
        model_engine.train()
        for step, batch in enumerate(train_loader):
            inputs = batch[0].to(model_engine.local_rank)
            labels = batch[1].to(model_engine.local_rank)

            outputs = model_engine(input_ids=inputs, labels=labels)
            loss = outputs.loss

            model_engine.backward(loss)
            model_engine.step()

            if step % LOGGING_STEPS == 0:
                print(f"Step: {step}, Loss: {loss.item()}")
                for name, param in model.named_parameters():
                    grad = deepspeed.utils.safe_get_full_grad(param)
                    print(f"{name} grad: {grad}")
                    print(f"grad sum: {grad.nan_to_num(0).sum()}")

            if step >= MAX_STEPS:
                break


if __name__ == "__main__":
    torch.manual_seed(0)
    main()
```
I try to get the gradient values for analysis, but I get almost all NaN via **deepspeed.utils.safe_get_full_grad**, as in the following image.
<img width="879" alt="image" src="https://github.com/user-attachments/assets/f69fe2af-515f-47e0-b01d-994cbe91f953">
**To Reproduce**
1. run by `torchrun --nproc_per_node=2 train.py` or `deepspeed --num_gpus=2 train.py`
**Expected behavior**
Expect some normal values for the gradients.
**ds_report output**
<img width="776" alt="image" src="https://github.com/user-attachments/assets/9b40c72d-9c2a-4546-8168-8bd4892799de">
**Screenshots**
<img width="908" alt="image" src="https://github.com/user-attachments/assets/90b49ec8-d47e-4232-92ed-7d858f7265ce">
**System info (please complete the following information):**
- nvcr.io/nvidia/cuda 12.4.1-cudnn-devel-ubuntu22.04
- single machine with x8 H100s
**Launcher context**
`deepspeed` launcher or `torchrun` both have same error
| closed | 2024-08-08T14:25:57Z | 2024-09-12T15:25:11Z | https://github.com/deepspeedai/DeepSpeed/issues/5883 | [
"bug",
"training"
] | gin-xi | 2 |
mars-project/mars | numpy | 2,828 | [BUG] Distributed training failed on Ray cluster | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Mars integrates some deep learning frameworks (PyTorch, TensorFlow). These frameworks usually need some environment variables set for distributed training: `TF_CONFIG` for TensorFlow, `MASTER_ADDR` for PyTorch. We use `ctx.get_worker_addresses()` to collect all worker addresses. This works well for the Oscar backend, but for Ray the addresses start with `ray://`, which is invalid for these frameworks; we need a method to get each worker's host IP, not its address, to fix the issue.
| open | 2022-03-16T10:02:50Z | 2022-03-16T10:02:50Z | https://github.com/mars-project/mars/issues/2828 | [
"type: bug",
"mod: learn"
] | hekaisheng | 0 |
pydantic/pydantic-settings | pydantic | 439 | Pydantic 2.5.0 regressed Discriminated Unions usage with DotEnvSettingsSource and forbidden extra | I haven't tested the latest main, but I'm pretty sure the issue is still not fixed.
https://github.com/pydantic/pydantic-settings/pull/386 introduced a regression which breaks what worked under 2.4.0. The problem probably is the new condition `lenient_issubclass(field.annotation, BaseModel) and env_name.startswith(field_env_name)` that cannot work for `Union` fields -- just simple `BaseModel`.
It causes the following to result in error:
```python
from typing import Annotated, Literal

import pydantic
import pydantic_settings


class Cat(pydantic.BaseModel):
    pet_type: Literal["cat"]
    sound: str


class Dog(pydantic.BaseModel):
    pet_type: Literal["dog"]
    sound: str


class Settings(pydantic_settings.BaseSettings):
    model_config = pydantic_settings.SettingsConfigDict(
        env_file=".env", env_prefix="FOO_", env_nested_delimiter="-", extra="forbid"
    )

    pet: Annotated[Cat | Dog, pydantic.Field(discriminator="pet_type")]


print(Settings())
```
With the following configuration provided with `.env` file:
```dotenv
FOO_PET-SOUND="woof"
FOO_PET-PET_TYPE="dog"
```
Under pydantic-settings 2.5.2 it results in the following error:
```python
pydantic_core._pydantic_core.ValidationError: 2 validation errors for Settings
foo_pet-sound
Extra inputs are not permitted [type=extra_forbidden, input_value='woof', input_type=str]
For further information visit https://errors.pydantic.dev/2.9/v/extra_forbidden
foo_pet-pet_type
Extra inputs are not permitted [type=extra_forbidden, input_value='dog', input_type=str]
For further information visit https://errors.pydantic.dev/2.9/v/extra_forbidden
```
Under pydantic-settings 2.4.0 I'm getting the correct result:
```python
pet=Dog(pet_type='dog', sound='woof')
``` | closed | 2024-10-07T18:46:00Z | 2024-10-07T18:54:40Z | https://github.com/pydantic/pydantic-settings/issues/439 | [
"unconfirmed"
] | pszpetkowski | 1 |
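A stdlib-only sketch of the mechanism at issue above: flat variables like `FOO_PET-SOUND` have to be grouped under their parent field before validation, otherwise a model with `extra="forbid"` sees them as unknown top-level extras. The helper below is hypothetical, not pydantic-settings code.

```python
# Group prefixed, delimiter-separated env vars into a nested dict.
def nest_env(env, prefix="FOO_", delimiter="-"):
    out = {}
    for key, value in env.items():
        if not key.startswith(prefix):
            continue
        parts = key[len(prefix):].lower().split(delimiter)
        node = out
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return out

env = {"FOO_PET-SOUND": "woof", "FOO_PET-PET_TYPE": "dog"}
print(nest_env(env))  # {'pet': {'sound': 'woof', 'pet_type': 'dog'}}
```

The regression described above is essentially this grouping being skipped for `Union`-typed fields, so the keys reach the strict model ungrouped and trip the extra-fields check.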
keras-team/keras | data-science | 20,052 | Is 2D CNN's description correct? | > This layer creates a convolution kernel that is convolved with the layer
input over a single spatial (or temporal) dimension to produce a tensor of
outputs. If `use_bias` is True, a bias vector is created and added to the
outputs. Finally, if `activation` is not `None`, it is applied to the
outputs as well.
[keras docs](https://github.com/keras-team/keras/blob/v3.4.1/keras/src/layers/convolutional/conv2d.py#L5)
Isn't it convolved over *2 spatial dimensions*?
"type:docs"
] | newresu | 1 |
microsoft/unilm | nlp | 1,354 | ValueError: 'layoutlmv3' is already used by a Transformers config, pick another name. | I am trying to fine-tune LayoutLMv3 on my custom data for document layout analysis (distinguishing between text, tables and figures), but I am encountering the following issue:
```
Traceback (most recent call last):
  File "/content/unilm/layoutlmv3/examples/object_detection/train_net.py", line 27, in <module>
    from ditod import add_vit_config
  File "/content/unilm/layoutlmv3/examples/object_detection/ditod/__init__.py", line 11, in <module>
    from .backbone import build_vit_fpn_backbone
  File "/content/unilm/layoutlmv3/examples/object_detection/ditod/backbone.py", line 29, in <module>
    from layoutlmft.models.layoutlmv3 import LayoutLMv3Model
  File "/content/unilm/layoutlmv3/layoutlmft/__init__.py", line 1, in <module>
    from .models import (
  File "/content/unilm/layoutlmv3/layoutlmft/models/__init__.py", line 1, in <module>
    from .layoutlmv3 import (
  File "/content/unilm/layoutlmv3/layoutlmft/models/layoutlmv3/__init__.py", line 16, in <module>
    AutoConfig.register("layoutlmv3", LayoutLMv3Config)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 1094, in register
    CONFIG_MAPPING.register(model_type, config, exist_ok=exist_ok)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 794, in register
    raise ValueError(f"'{key}' is already used by a Transformers config, pick another name.")
ValueError: 'layoutlmv3' is already used by a Transformers config, pick another name.
``` | open | 2023-11-06T04:32:16Z | 2025-01-10T06:58:46Z | https://github.com/microsoft/unilm/issues/1354 | [] | AbdulDD | 3 |
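The failure above comes from registering the model type "layoutlmv3" when a config with that name is already registered (newer transformers releases ship their own layoutlmv3 config). A toy sketch of the registry behavior and the usual guard; the `exist_ok` parameter mirrors the one visible in the traceback, but whether `AutoConfig.register` exposes it depends on the installed transformers version:

```python
# Toy stand-in for transformers' config registry, showing why the second
# registration of "layoutlmv3" raises, and the shape of the usual guard.
class ConfigRegistry:
    def __init__(self):
        self._mapping = {}

    def register(self, key, value, exist_ok=False):
        if key in self._mapping and not exist_ok:
            raise ValueError(
                f"'{key}' is already used by a Transformers config, pick another name."
            )
        self._mapping[key] = value

registry = ConfigRegistry()
registry.register("layoutlmv3", "builtin-config")         # shipped by the library
try:
    registry.register("layoutlmv3", "layoutlmft-config")  # duplicate -> ValueError
except ValueError as exc:
    print(exc)

# Guarded variant: allow overwriting an existing registration.
registry.register("layoutlmv3", "layoutlmft-config", exist_ok=True)
```

Practical workarounds reported for this class of error include renaming the custom `model_type`, pinning transformers to a release that predates the built-in layoutlmv3 config, or passing `exist_ok=True` where the installed version supports it.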
RobertCraigie/prisma-client-py | pydantic | 439 | Support query raw annotations | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Prisma recently changed how raw queries work internally: values are now returned alongside meta information, for example:
```json
{
"count": {"prisma__type": "bigint", "prisma__value": "1"}
}
```
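For concreteness, a sketch of unwrapping this tagged shape (a hypothetical helper, not part of the client):

```python
# Hypothetical helper unwrapping the {"prisma__type", "prisma__value"} pairs
# into plain Python values (not the client's actual implementation).
def deserialize_raw(row):
    out = {}
    for column, tagged in row.items():
        prisma_type = tagged["prisma__type"]
        value = tagged["prisma__value"]
        if prisma_type == "bigint":
            value = int(value)  # bigints arrive serialized as strings
        out[column] = value
    return out

row = {"count": {"prisma__type": "bigint", "prisma__value": "1"}}
print(deserialize_raw(row))  # {'count': 1}
```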
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We need to figure out how to support this internally, as we can't just naively return the results from the raw query anymore; we may have to do some form of parsing using a `BaseModel` / `Field`... | closed | 2022-06-28T10:31:06Z | 2022-12-31T15:30:41Z | https://github.com/RobertCraigie/prisma-client-py/issues/439 | [
"kind/improvement",
"topic: internal",
"level/advanced",
"priority/high"
] | RobertCraigie | 3 |
Nekmo/amazon-dash | dash | 103 | How to use the "access_token" option | Hi,
Can anyone point out how to use the "access_token" option for running commands in Home Assistant?
Thanks. | closed | 2018-10-28T15:23:25Z | 2022-01-28T00:05:33Z | https://github.com/Nekmo/amazon-dash/issues/103 | [] | msanchezt | 8 |
noirbizarre/flask-restplus | flask | 603 | How to use api.expect with validate=True for own date-time type? | I want to check my own date format, for example `'2019-03-07T15:40:01'`.
Here is how to generate it:
```
from datetime import datetime
datetime.now().isoformat(timespec='seconds')
```
I found some useful information in issue #204.
api object:
```
from jsonschema import FormatChecker
api = Api(blueprint,
title='...',
format_checker=FormatChecker()
)
```
data model:
```
testdata = api.model('TestData', {
'timestamp': fields.DateTime(required=True)
})
```
test class:
```
@api.route('/test', methods=["post"])
class Test(Resource):
@api.expect(testdata, validate=True)
def post(self):
'''Test'''
return
```
If I test it with
```
{
"timestamp": "2019-03-01T11:22:34"
}
```
it fails, because the `date-time` format requires the letter `Z` at the end.
But I want to use my own format.
- I think I have to extend the FormatChecker with a new method. Any hints how to implement this?
- Where is the mapping between `fields.DateTime` and `date-time` format? | closed | 2019-03-07T15:07:44Z | 2019-03-14T08:09:10Z | https://github.com/noirbizarre/flask-restplus/issues/603 | [] | nobodyman1 | 5 |
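One possible answer to the questions above, sketched: write a checker for the custom format and attach it to the `FormatChecker` instance passed to `Api`. The jsonschema/flask-restplus wiring in the comments is an assumption, not verified against the library:

```python
from datetime import datetime

def check_local_datetime(value):
    """Accept '2019-03-01T11:22:34' -- seconds precision, no trailing 'Z'."""
    try:
        datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")
        return True
    except (TypeError, ValueError):
        return False

print(check_local_datetime("2019-03-01T11:22:34"))   # True
print(check_local_datetime("2019-03-01T11:22:34Z"))  # False

# Hypothetical wiring (assumed jsonschema / flask-restplus APIs):
# from jsonschema import FormatChecker
# format_checker = FormatChecker()
# format_checker.checks("date-time")(check_local_datetime)  # override built-in
# api = Api(blueprint, format_checker=format_checker)
```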
public-apis/public-apis | api | 3,127 | Crypto price.js | ```javascript
// Variables used by Scriptable.
// These must be at the very top of the file. Do not edit.
// icon-color: green; icon-glyph: dollar-sign;
// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
// The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
const params = args.widgetParameter ? args.widgetParameter.split(",") : [];
const isDarkTheme = params?.[0] === 'dark';
const padding = 2;
const widget = new ListWidget();
if (isDarkTheme) {
  widget.backgroundColor = new Color('#1C1C1E');
}
widget.setPadding(padding, padding, padding, padding);
widget.url = 'https://www.coingecko.com/en/coins/rubic';
const headerStack = widget.addStack();
headerStack.setPadding(0, 0, 25, 0);
const headerText = headerStack.addText("Crypto price");
headerText.font = Font.mediumSystemFont(16);
if (isDarkTheme) {
  headerText.textColor = new Color('#FFFFFF');
}
async function buildWidget() {
  const rubicImage = await loadImage('https://rubic.exchange/assets/images/widget/rubic.png');
  const ethereumImage = await loadImage('https://rubic.exchange/assets/images/widget/ethereum.png');
  const rubicPriceInfo = await getTokenPriceInfo('rubic');
  const ethereumPriceInfo = await getTokenPriceInfo('ethereum');
  const roundedRubicPrice = Math.round(rubicPriceInfo.price * 1000) / 1000;
  const roundedEthereumPrice = Math.round(ethereumPriceInfo.price);
  addCrypto(rubicImage, 'RBC', `$${roundedRubicPrice}`, rubicPriceInfo.grow);
  addCrypto(ethereumImage, 'ETH', `$${roundedEthereumPrice}`, ethereumPriceInfo.grow);
}
function addCrypto(image, symbol, price, grow) {
  const rowStack = widget.addStack();
  rowStack.setPadding(0, 0, 20, 0);
  rowStack.layoutHorizontally();
  const imageStack = rowStack.addStack();
  const symbolStack = rowStack.addStack();
  const priceStack = rowStack.addStack();
  imageStack.setPadding(0, 0, 0, 10);
  symbolStack.setPadding(0, 0, 0, 8);
  const imageNode = imageStack.addImage(image);
  imageNode.imageSize = new Size(20, 20);
  imageNode.leftAlignImage();
  const symbolText = symbolStack.addText(symbol);
  symbolText.font = Font.mediumSystemFont(16);
  const priceText = priceStack.addText(price);
  priceText.font = Font.mediumSystemFont(16);
  if (isDarkTheme) {
    symbolText.textColor = new Color('#FFFFFF');
  }
  if (grow) {
    priceText.textColor = new Color('#4AA956');
  } else {
    priceText.textColor = new Color('#D22E2E');
  }
}
async function getTokenPriceInfo(tokenId) {
  const url = `https://api.coingecko.com/api/v3/coins/markets?vs_currency=usd&ids=${tokenId}`;
  const req = new Request(url);
  const apiResult = await req.loadJSON();
  return { price: apiResult[0].current_price, grow: apiResult[0].price_change_24h > 0 };
}
async function loadImage(imgUrl) {
  const req = new Request(imgUrl);
  return await req.loadImage();
}
await buildWidget();
Script.setWidget(widget);
Script.complete();
widget.presentSmall();
``` | closed | 2022-04-06T16:41:00Z | 2022-04-06T16:44:07Z | https://github.com/public-apis/public-apis/issues/3127 | [] | Joh777 | 0 |
httpie/cli | api | 987 | version check error running on a VM bitnami LAMP AWS Server | ```
$ http --help
Traceback (most recent call last):
  File "/usr/bin/http", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2927, in <module>
    @_call_aside
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2913, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2940, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 635, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 829, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'httpie==0.9.2' distribution was not found and is required by the application
``` | closed | 2020-11-07T23:30:12Z | 2020-12-21T15:38:39Z | https://github.com/httpie/cli/issues/987 | [] | inglesuniversal | 1 |
tartiflette/tartiflette | graphql | 216 | mutation validation error message is not very helpful | ## Request a feature
* [ ] **Tartiflette version:** _0.8.8_
When required fields are missing in an input object (at the root or nested), the returned error message reports that the field is missing from the mutation result object, rather than indicating where in the input it is actually missing.
e.g. this error is returned
```json
{
"data": {
"createFenixShipment": null
},
"errors": [
{
"message": "Invalid value (value: None) for field `createFenixShipment` of type `createFenixShipmentResult`",
"path": [
"createFenixShipment"
],
"locations": [
{
"line": 2,
"column": 3
}
]
}
]
}
```
When I submit this mutation, it would be helpful to get an error about the missing `vendorCode` on `createFenixShipment.fenixShipment.CCIs[0]`, or something that at least mentions `vendorCode` instead of `createFenixShipmentResult`, which is an output type.
```graphql
mutation{
createFenixShipment(input: {
fenixShipment: {
importerCode: "test",
fileNumber: "test",
ccIs: [
{
importerCode: "required",
vendorCode: null
}
]
}
}
) {
status
}
}
```
Part of the schema def is shown below
```graphql
type Mutation {
createFenixShipment(input: FenixShipmentInput!): createFenixShipmentResult
}
type createFenixShipmentResult {
status: String!
message: String
}
input FenixShipmentInput {
fenixShipment: FenixShipment!
}
input FenixShipment {
transactionNumber: String
fileNumber: String!
importerCode: String!
ccIs: [CCI!]
}
input CCI {
importerCode: String!
importerAddress: Address
vendorCode: String!
}
```
| closed | 2019-04-19T16:13:34Z | 2019-09-11T14:51:26Z | https://github.com/tartiflette/tartiflette/issues/216 | [
"enhancement"
] | bkcsfi | 3 |
dnouri/nolearn | scikit-learn | 131 | LICENSE | The PyPi package details (https://pypi.python.org/pypi/nolearn) imply that this code is MIT licensed but there doesn't seem to be a license file included in the git repo.
Could you please confirm what license you have made this code available under, and ideally add a license file to the repo?
| closed | 2015-07-27T11:50:45Z | 2015-07-27T14:33:01Z | https://github.com/dnouri/nolearn/issues/131 | [] | Rob-Bishop | 0 |
fastapi-users/fastapi-users | asyncio | 178 | Problem querying added user with psql | I have just added the "Full example" to my FastAPI project with Postgres.
After adding a user with cookie authentication, I tried to query the recently added user with psql. When I query the "user" table, I just see a table with a single user column containing a postgres value.
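A likely explanation, offered as an assumption (not confirmed in the report): `user` is a reserved word in PostgreSQL, so an unquoted query returns the session user (e.g. `postgres`) instead of the table, which matches the single-column result described above. Quoting the identifier reaches the actual table:

```sql
-- Unquoted: USER is a reserved word, so this returns the session user,
-- e.g. a single column containing 'postgres'
SELECT * FROM user;

-- Quoted: queries the table named "user" created by the app
SELECT * FROM "user";
```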
P.S.: authentication is working, but when I try to log out cookie-authenticated users, I can't.
fastapi: 0.54.1
fastapi-users: 0.8.0 | closed | 2020-05-07T11:50:59Z | 2020-12-01T06:39:05Z | https://github.com/fastapi-users/fastapi-users/issues/178 | [
"question",
"stale"
] | mahdikhashan | 7 |
graphql-python/graphql-core | graphql | 213 | Add the ability to specify `GraphQLResolveInfo.context` type | Currently, `GraphQLResolveInfo.context` is typed as `Any`; making `GraphQLResolveInfo` generic would give users the ability to provide their own type. This is already being done in `GraphQL.js`:
```
export type GraphQLFieldResolver<
TSource,
TContext,
TArgs = any,
TResult = unknown,
> = (
source: TSource,
args: TArgs,
context: TContext,
info: GraphQLResolveInfo,
) => TResult;
```
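Translated to Python, the proposed change would look roughly like this (a sketch of the idea, not graphql-core's current API):

```python
from typing import Generic, TypeVar

TContext = TypeVar("TContext")

class GraphQLResolveInfo(Generic[TContext]):
    """Sketch: making the info object generic over the context type."""
    def __init__(self, field_name: str, context: TContext) -> None:
        self.field_name = field_name
        self.context = context

class AppContext:
    def __init__(self, db: str) -> None:
        self.db = db

info = GraphQLResolveInfo("user", AppContext(db="postgres"))
# A type checker now infers info.context as AppContext rather than Any:
print(info.context.db)  # postgres
```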
If this change is welcome, I'd like to make a pull request implementing it. | open | 2024-02-20T00:51:52Z | 2024-02-20T05:57:38Z | https://github.com/graphql-python/graphql-core/issues/213 | [] | fedirz | 1 |
pyeve/eve | flask | 1,220 | get_internal pagination | I am trying to use get_internal('resource') in my custom controller to collect data from 2 or more resources.
The get_internal result is larger than the pagination limit, so I need some way to paginate the result; ideally, a parameter to return all documents.
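A framework-agnostic sketch of one workaround, walking pages until the result set is exhausted (`fetch_page` is a hypothetical callable standing in for a paginated `get_internal` call, not Eve's actual API):

```python
def fetch_all(fetch_page, page_size=25):
    """Accumulate items from a paginated source until a short page appears."""
    items, page = [], 1
    while True:
        batch = fetch_page(page=page, max_results=page_size)
        items.extend(batch)
        if len(batch) < page_size:   # last page reached
            return items
        page += 1

# Toy source standing in for a paginated resource of 60 documents:
data = list(range(60))
def fake_page(page, max_results):
    start = (page - 1) * max_results
    return data[start:start + max_results]

print(len(fetch_all(fake_page, page_size=25)))  # 60
```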
Could you help to me in this?
| closed | 2019-01-16T16:14:51Z | 2019-07-22T17:47:55Z | https://github.com/pyeve/eve/issues/1220 | [
"stale"
] | dulyts | 1 |
ipython/ipython | jupyter | 14,433 | Released PyPI versions have tags missing from repo | There may be some others, I didn't check, but 8.16.1 was the one I was looking for:
https://pypi.org/project/ipython/8.16.1/
```
$ git checkout 8.16.1
error: pathspec '8.16.1' did not match any file(s) known to git
```
Where can I find [tags](https://github.com/ipython/ipython/tags) for the PyPI releases? | closed | 2024-05-09T20:12:58Z | 2024-05-14T17:33:57Z | https://github.com/ipython/ipython/issues/14433 | [] | wimglenn | 3 |
clovaai/donut | nlp | 4 | Local custom dataset & Potential typo in test.py | Hi, thanks for this interesting work!
I tried to use this model on a local custom dataset and followed the dataset structure as specified, but it failed to load correctly. I ended up having to hard-code some data-loading code to make it work. It would be greatly appreciated if you could provide a demo or example of a local dataset. Thanks!
PS: I think there may be a typo in the test.py: the '--pretrained_path' should probably be '--pretrained_model_name_or_path' ? | closed | 2022-07-25T11:53:00Z | 2022-07-29T08:58:10Z | https://github.com/clovaai/donut/issues/4 | [] | xingjianz | 1 |
facebookresearch/fairseq | pytorch | 4,853 | Request to provide torchscript-ed model | ## 🚀 Feature Request
<!-- A clear and concise description of the feature proposal -->
https://huggingface.co/facebook/wav2vec2-base-960h/tree/main
provides only `model.state_dict()`. Please also consider upload a torchscript-ed model using either `torch.jit.script()`
or `torch.jit.trace()`.
### Motivation
It simplifies users' life as they don't need to install fairseq to use a torchscript-ed model
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
### Additional context
We want to test models trained using CTC loss from various frameworks using [k2](http://github.com/k2-fsa/k2) for decoding.
(See https://github.com/k2-fsa/k2/pull/1096)
We only need a torchscript-ed model but for the current situation, we have to install fairseq and export the pretrained model by ourselves via torchscript. | open | 2022-11-09T05:50:34Z | 2022-11-09T05:50:49Z | https://github.com/facebookresearch/fairseq/issues/4853 | [
"enhancement",
"help wanted",
"needs triage"
] | csukuangfj | 0 |
allenai/allennlp | data-science | 5,537 | Using evaluate command for multi-task model | <!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x] I have verified that the issue exists against the `main` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x] I have included in the "Environment" section below the output of `pip freeze`.
- [x] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
The evaluate command cannot support multi-task models: input_file only takes a single file name as input.
But the multi-task data loader needs a Dict of per-task data paths.
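The crash at the bottom of the traceback can be reproduced in isolation (a toy sketch of the type mismatch, not AllenNLP code):

```python
# The evaluate command forwards a single string where the multi-task data
# loader expects a dict keyed by task name, so calling .keys() blows up:
readers = {"task_a": "reader_a", "task_b": "reader_b"}  # per-task readers
data_paths = "test_data_path"                           # what evaluate passes

try:
    readers.keys() != data_paths.keys()
except AttributeError as exc:
    print(exc)  # 'str' object has no attribute 'keys'
```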
<!-- Please provide a clear and concise description of what the bug is here. -->
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
(allennlp2_8) D:\WorkSpace\GeoQA-with-UniT -2_8>allennlp evaluate save/test test_data_path --include-package NGS_Aux --cuda-device 0
2022-01-07 09:38:22,837 - INFO - allennlp.models.archival - loading archive file save/test
2022-01-07 09:38:23,469 - INFO - allennlp.data.vocabulary - Loading token dictionary from save/test\vocabulary.
##### Checkpoint Loaded! #####
2022-01-07 09:38:28,271 - INFO - allennlp.modules.token_embedders.embedding - Loading a model trained before embedding extension was implemented; pass an explicit vocab namespace if you wan
t to extend the vocabulary.
2022-01-07 09:38:29,051 - INFO - allennlp.common.checks - Pytorch version: 1.10.0+cu113
2022-01-07 09:38:29,053 - INFO - allennlp.commands.evaluate - Reading evaluation data from test_data_path
Traceback (most recent call last):
File "D:\Anaconda\envs\allennlp2_8\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "D:\Anaconda\envs\allennlp2_8\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\Anaconda\envs\allennlp2_8\Scripts\allennlp.exe\__main__.py", line 7, in <module>
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\__main__.py", line 46, in run
main(prog="allennlp")
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\commands\__init__.py", line 123, in main
args.func(args)
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\commands\evaluate.py", line 189, in evaluate_from_args
params=data_loader_params, reader=dataset_reader, data_path=evaluation_data_path
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\common\from_params.py", line 656, in from_params
**extras,
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\common\from_params.py", line 686, in from_params
return constructor_to_call(**kwargs) # type: ignore
File "D:\Anaconda\envs\allennlp2_8\lib\site-packages\allennlp\data\data_loaders\multitask_data_loader.py", line 143, in __init__
if self.readers.keys() != self.data_paths.keys():
AttributeError: 'str' object has no attribute 'keys'
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
Windows:
aiohttp 3.8.0
aiosignal 1.2.0
allennlp 2.8.0
argcomplete 1.12.3
argon2-cffi 21.1.0
async-timeout 4.0.0
asynctest 0.13.0
atomicwrites 1.4.0
attrs 21.2.0
backcall 0.2.0
backports.csv 1.0.7
base58 2.1.1
beautifulsoup4 4.10.0
bleach 4.1.0
blis 0.7.5
boto3 1.19.12
botocore 1.22.12
cached-path 0.3.2
cached-property 1.5.2
cachetools 4.2.4
catalogue 1.0.0
certifi 2021.10.8
cffi 1.15.0
chardet 4.0.0
charset-normalizer 2.0.7
checklist 0.0.11
cheroot 8.5.2
CherryPy 18.6.1
click 8.0.3
colorama 0.4.4
configparser 5.1.0
cryptography 35.0.0
cymem 2.0.6
Cython 0.29.23
datasets 1.15.1
debugpy 1.5.1
decorator 5.1.0
defusedxml 0.7.1
dill 0.3.4
docker-pycreds 0.4.0
en-core-web-sm 2.3.0
entrypoints 0.3
fairscale 0.4.0
feedparser 6.0.8
filelock 3.3.2
frozenlist 1.2.0
fsspec 2021.11.0
future 0.18.2
gensim 4.1.2
gitdb 4.0.9
GitPython 3.1.24
google-api-core 2.2.2
google-auth 2.3.3
google-cloud-core 2.1.0
google-cloud-storage 1.42.3
google-crc32c 1.3.0
google-resumable-media 2.1.0
googleapis-common-protos 1.53.0
h5py 3.5.0
huggingface-hub 0.1.1
idna 3.3
importlib-metadata 4.8.1
importlib-resources 5.4.0
iniconfig 1.1.1
ipykernel 6.5.0
ipython 7.29.0
ipython-genutils 0.2.0
ipywidgets 7.6.5
iso-639 0.4.5
jaraco.classes 3.2.1
jaraco.collections 3.4.0
jaraco.functools 3.4.0
jaraco.text 3.6.0
jedi 0.18.0
jieba 0.42.1
Jinja2 3.0.2
jmespath 0.10.0
joblib 1.1.0
jsonschema 4.2.1
jupyter 1.0.0
jupyter-client 7.0.6
jupyter-console 6.4.0
jupyter-core 4.9.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.2
lmdb 1.2.1
lxml 4.6.4
MarkupSafe 2.0.1
matplotlib-inline 0.1.3
mistune 0.8.4
more-itertools 8.10.0
multidict 5.2.0
multiprocess 0.70.12.2
munch 2.5.0
murmurhash 1.0.6
nbclient 0.5.4
nbconvert 6.2.0
nbformat 5.1.3
nest-asyncio 1.5.1
nltk 3.6.5
notebook 6.4.5
numpy 1.21.4
opencv-python 4.5.4.58
overrides 3.1.0
packaging 21.2
pandas 1.3.4
pandocfilters 1.5.0
parso 0.8.2
pathtools 0.1.2
pathy 0.6.1
patternfork-nosql 3.6
pdfminer.six 20211012
pickleshare 0.7.5
Pillow 8.4.0
pip 21.2.4
plac 1.1.3
pluggy 1.0.0
portend 3.0.0
preshed 3.0.6
prometheus-client 0.12.0
promise 2.3
prompt-toolkit 3.0.22
protobuf 3.19.1
psutil 5.8.0
py 1.11.0
pyarrow 6.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycparser 2.21
pydantic 1.8.2
Pygments 2.10.0
pyparsing 2.4.7
pyrsistent 0.18.0
pytest 6.2.5
python-dateutil 2.8.2
python-docx 0.8.11
pytz 2021.3
pywin32 302
pywinpty 1.1.5
PyYAML 6.0
pyzmq 22.3.0
qtconsole 5.1.1
QtPy 1.11.2
regex 2021.11.2
requests 2.26.0
rsa 4.7.2
s3transfer 0.5.0
sacremoses 0.0.46
scikit-learn 1.0.1
scipy 1.7.2
Send2Trash 1.8.0
sentencepiece 0.1.96
sentry-sdk 1.4.3
setuptools 58.0.4
sgmllib3k 1.0.0
shortuuid 1.0.1
six 1.16.0
smart-open 5.2.1
smmap 5.0.0
soupsieve 2.3
spacy 2.3.7
spacy-legacy 3.0.8
sqlitedict 1.7.0
srsly 1.0.5
subprocess32 3.5.4
tempora 4.1.2
tensorboardX 2.4
termcolor 1.1.0
terminado 0.12.1
testpath 0.5.0
thinc 7.4.5
threadpoolctl 3.0.0
tokenizers 0.10.3
toml 0.10.2
torch 1.10.0+cu113
torchaudio 0.10.0+cu113
torchvision 0.11.1+cu113
tornado 6.1
tqdm 4.62.3
traitlets 5.1.1
transformers 4.12.3
typer 0.4.0
typing-extensions 3.10.0.2
urllib3 1.26.7
wandb 0.12.6
wasabi 0.8.2
wcwidth 0.2.5
webencodings 0.5.1
wheel 0.37.0
widgetsnbextension 3.5.2
wincertstore 0.2
xxhash 2.0.2
yarl 1.7.2
yaspin 2.1.0
zc.lockfile 2.0
zipp 3.6.0
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version:
3.7
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
```
</p>
</details>
| closed | 2022-01-07T02:16:46Z | 2022-02-28T21:42:05Z | https://github.com/allenai/allennlp/issues/5537 | [
"bug",
"stale"
] | ICanFlyGFC | 10 |
strawberry-graphql/strawberry | django | 3,681 | GraphiQL is not disabled by None type | ## Describe the Bug
Setting `graphiq=None` doesn't disable the GraphiQL view. | closed | 2024-10-27T16:34:51Z | 2025-03-20T15:56:54Z | https://github.com/strawberry-graphql/strawberry/issues/3681 | [
"bug"
] | ldynia | 4 |
svc-develop-team/so-vits-svc | deep-learning | 18 | Many speaker voice conversion task | Hi
If I use your model for a voice conversion task with around 100 speakers, would its performance be better than that of FreeVC?
And can I get the checkpoint repository link? | closed | 2023-03-13T09:00:06Z | 2023-03-23T08:38:00Z | https://github.com/svc-develop-team/so-vits-svc/issues/18 | [] | lsw5835 | 1 |
supabase/supabase-py | flask | 667 | 🤓Add an optional access_token keyword to create_client to make user auth easier without signing in | **Is your feature request related to a problem? Please describe.**
Here is my understanding of the recent issues and my explanation of the solutions in PR #656:
#658 #663 #645
___
in _cannot authenticate user_ #645
In short, I think it shows up because nothing refreshes the token passed to postgrest; it was fixed in v2.3.3 by updating
```python
def _listen_to_auth_events(self, event: AuthChangeEvent, session: Session | None):
"""listen to auth events and update auth token"""
access_token = self._access_token
if event in ["SIGNED_IN", "TOKEN_REFRESHED", "SIGNED_OUT"]:
# reset postgrest and storage instance on event change
self._postgrest = None
self._storage = None
self._functions = None
access_token = session.access_token if session else self._access_token
self._auth_token = self._create_auth_header(access_token)
```
so `self._auth_token` is updated on new events and is also used as the new `header` for `postgrest`, etc.
____
in _failed to get_session after create_client by accees_token as supabase_key_ #663
Firstly, `client.auth.get_session()` would return None, because in `AsyncGoTrueClient`, `get_session()` only works properly once we have saved a `session` in `self._storage`, which means we technically need `client.auth.sign_in_xxx()` or `client.auth.set_session()`:
```python
async def get_session(self) -> Union[Session, None]:
...
current_session: Union[Session, None] = None
if self._persist_session:
maybe_session = await self._storage.get_item(self._storage_key)
....
else:
current_session = self._in_memory_session
if not current_session:
return None
...
```
Secondly, calling `postgrest` will fail quickly when `create_client` is passed an access_token as the supabase_key,
for two reasons:
1. `client._auth_token = await client._get_token_header()` is called when creating the client (you have not signed in at that moment), so the `client.auth.get_session()` inside `client._get_token_header()` must return None, and so the access_token for `postgrest` is set to None unless you sign in later
2. `client.init` is called, and it sets `apiKey` in the headers to the `access_token` obtained after the user signs in, which (after testing) is not valid as an `apiKey`
```python
# in `client.init`,
...
options.headers.update(self._get_auth_headers())
self.options = options
...
def _get_auth_headers(self) -> Dict[str, str]:
"""Helper method to get auth headers."""
return {
"apiKey": self.supabase_key,
"Authorization": f"Bearer {self.supabase_key}",
}
```
____
in _Clients should take an optional supabase_token (workaround provided)_ #658
I do agree that it should take an optional `supabase_token`; we cannot auth only by `access_token` (actually, in the current code, there is no way to set `self._auth_token` by passing `access_token` explicitly).
____
My solutions for authenticating a user by just passing a `token`, without changing `supabase-py` (verified by tests):
_you should do `create_client` with the normal `key` and `url`_ first
1. pass `access_token` manually, like
``` python
self.auth.postgrest.auth(access_token)
# and then do crud operations ...
```
2. call `self.auth.set_session()` manually like
```python
# first, get the access_token and refresh_token from the front end
self.auth.set_session(access_token, refresh_token)
```
____
My solution changing the code of `supabase-py` is in PR #656:
1. add an optional `access_token` keyword and set it as the default key; if None, set `supabase_key` as `self._access_token`, and it will be passed by default to `self._create_auth_header()`
```python
# in init
def __init__(
self,
supabase_url: str,
supabase_key: str,
access_token: Union[str, None] = None,
options: ClientOptions = ClientOptions(storage=AsyncMemoryStorage()),
):
# ....
# only can be set by init instance
self._access_token = access_token if access_token else supabase_key
# will be modified by auth state change
self._auth_token = self._create_auth_header(self._access_token)
# ...
# listen to event to refresh access_token if we explicitly sign in
def _listen_to_auth_events(self, event: AuthChangeEvent, session: Session | None):
"""listen to auth events and update auth token"""
access_token = self._access_token
if event in ["SIGNED_IN", "TOKEN_REFRESHED", "SIGNED_OUT"]:
# reset postgrest and storage instance on event change
self._postgrest = None
self._storage = None
self._functions = None
access_token = session.access_token if session else self._access_token
self._auth_token = self._create_auth_header(access_token)
```
2. add `self._update_auth_headers()` to update `self.options.headers` and it also will be setted as `headers` that passes to `postgrest`
```python
def _update_auth_headers(self) -> None:
"""Helper method to get auth headers."""
new_headers = {
"apiKey": self.supabase_key,
**self._auth_token,
}
self.options.headers.update(**new_headers)
@property
def postgrest(self):
if self._postgrest is None:
self._update_auth_headers()
self._postgrest = self._init_postgrest_client(
rest_url=self.rest_url,
headers=self.options.headers,
schema=self.options.schema,
timeout=self.options.postgrest_client_timeout,
)
return self._postgrest
```
In the end you will find that `def _get_token_header(self):` is no longer used anywhere.
In short: add an optional `access_token` as the `auth_token` when first initializing the client, and update `auth_token` by listening to events; if `access_token` is None, it defaults to `supabase_key`.
| closed | 2024-01-15T03:06:26Z | 2024-04-28T22:20:34Z | https://github.com/supabase/supabase-py/issues/667 | [] | AtticusZeller | 1 |
deezer/spleeter | deep-learning | 716 | Colab notebook not working | ## Description
The Google Colab notebook doesn't work. Thanks for your work and your support.
## Step to reproduce
I just used https://colab.research.google.com/github/deezer/spleeter/blob/master/spleeter.ipynb
in a Google Colab Pro GPU session.
I get an error on this line:
!spleeter separate -o output/ audio_example.mp3
## Output
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 1375, in _do_call
return fn(*args)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 1360, in _run_fn
target_list, run_metadata)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d/Conv2D}}]]
[[strided_slice_23/_907]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d/Conv2D}}]]
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/usr/local/lib/python3.7/dist-packages/spleeter/__main__.py", line 256, in entrypoint
spleeter()
File "/usr/local/lib/python3.7/dist-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/usr/local/lib/python3.7/dist-packages/spleeter/__main__.py", line 137, in separate
synchronous=False,
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 378, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 319, in separate
return self._separate_tensorflow(waveform, audio_descriptor)
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 301, in _separate_tensorflow
prediction = next(prediction_generator)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 631, in predict
preds_evaluated = mon_sess.run(predictions)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/monitored_session.py", line 779, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1284, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1385, in run
raise six.reraise(*original_exc_info)
File "/usr/local/lib/python3.7/dist-packages/six.py", line 703, in reraise
raise value
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1370, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1443, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1201, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 968, in run
run_metadata_ptr)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 1369, in _do_run
run_metadata)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 1394, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d/Conv2D (defined at /lib/python3.7/dist-packages/spleeter/model/functions/unet.py:109) ]]
[[strided_slice_23/_907]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d/Conv2D (defined at /lib/python3.7/dist-packages/spleeter/model/functions/unet.py:109) ]]
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node conv2d/Conv2D:
strided_slice_3 (defined at /lib/python3.7/dist-packages/spleeter/model/__init__.py:307)
Input Source operations connected to node conv2d/Conv2D:
strided_slice_3 (defined at /lib/python3.7/dist-packages/spleeter/model/__init__.py:307)
Original stack trace for 'conv2d/Conv2D':
File "/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/lib/python3.7/dist-packages/spleeter/__main__.py", line 256, in entrypoint
spleeter()
File "/lib/python3.7/dist-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/lib/python3.7/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/lib/python3.7/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/lib/python3.7/dist-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/lib/python3.7/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/lib/python3.7/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/lib/python3.7/dist-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/lib/python3.7/dist-packages/spleeter/__main__.py", line 137, in separate
synchronous=False,
File "/lib/python3.7/dist-packages/spleeter/separator.py", line 378, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "/lib/python3.7/dist-packages/spleeter/separator.py", line 319, in separate
return self._separate_tensorflow(waveform, audio_descriptor)
File "/lib/python3.7/dist-packages/spleeter/separator.py", line 301, in _separate_tensorflow
prediction = next(prediction_generator)
File "/lib/python3.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 613, in predict
self.config)
File "/lib/python3.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1163, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 568, in model_fn
return builder.build_predict_model()
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 516, in build_predict_model
tf.estimator.ModeKeys.PREDICT, predictions=self.outputs
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 318, in outputs
self._build_outputs()
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 499, in _build_outputs
self._outputs = self._build_output_waveform(self.masked_stfts)
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 342, in masked_stfts
self._build_masked_stfts()
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 465, in _build_masked_stfts
for instrument, mask in self.masks.items():
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 336, in masks
self._build_masks()
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 432, in _build_masks
output_dict = self.model_outputs
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 312, in model_outputs
self._build_model_outputs()
File "/lib/python3.7/dist-packages/spleeter/model/__init__.py", line 212, in _build_model_outputs
input_tensor, self._instruments, self._params["model"]["params"]
File "/lib/python3.7/dist-packages/spleeter/model/functions/unet.py", line 197, in unet
return apply(apply_unet, input_tensor, instruments, params)
File "/lib/python3.7/dist-packages/spleeter/model/functions/__init__.py", line 45, in apply
input_tensor, output_name=out_name, params=params or {}
File "/lib/python3.7/dist-packages/spleeter/model/functions/unet.py", line 109, in apply_unet
conv1 = conv2d_factory(conv_n_filters[0], (5, 5))(input_tensor)
File "/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 783, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/lib/python3.7/dist-packages/tensorflow/python/keras/layers/convolutional.py", line 249, in call
outputs = self._convolution_op(inputs, self.kernel)
File "/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "/lib/python3.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1019, in convolution_v2
name=name)
File "/lib/python3.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1149, in convolution_internal
name=name)
File "/lib/python3.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 2603, in _conv2d_expanded_batch
name=name)
File "/lib/python3.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 973, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3565, in _create_op_internal
op_def=op_def)
File "/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 2045, in __init__
self._traceback = tf_stack.extract_stack_for_node(self._c_op)
| closed | 2022-01-24T16:00:48Z | 2022-01-28T16:40:53Z | https://github.com/deezer/spleeter/issues/716 | [
"bug",
"invalid"
] | smithee771 | 3 |
plotly/dash | plotly | 2,224 | Gunicorn Dash Refreshing Error [BUG] | Hello, I created a realtime crypto chart Flask & Dash app. My problem is with the Gunicorn + Dash combination. If I start the app with "flask run", I get no refresh errors. But if I start it with "gunicorn wsgi:app" I get refresh errors. Let me explain the refresh error: the Dash callback (or whatever it is, I don't know exactly where the issue comes from) sends my chart fig and then sends None, but if I run the app without Gunicorn the callback doesn't send None. Please help me. | closed | 2022-09-11T17:54:42Z | 2024-07-24T15:14:03Z | https://github.com/plotly/dash/issues/2224 | [] | qraxiss | 1 |
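The report doesn't pin down where the None comes from, but one generic way a Dash callback ends up sending None is a branch without an explicit return. This is a hypothetical sketch of that failure mode (names and data shapes are mine, not from the app in question):

```python
def update_figure(n_intervals, have_data):
    """Hypothetical Dash callback body illustrating the None symptom."""
    if have_data:
        # Normal path: a figure-like dict reaches the browser.
        return {"data": [], "layout": {"title": "crypto chart"}}
    # No return on this branch: Python implicitly returns None,
    # so the client receives None instead of a figure.

figure = update_figure(1, True)
missing = update_figure(1, False)
print(type(figure).__name__, missing)  # dict None
```

If the symptom appears only under Gunicorn, checking for state that exists in the dev server's single process but not in each Gunicorn worker (caches, globals populated at startup) along such branches is a reasonable first step.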
mwaskom/seaborn | data-science | 3,694 | Geographic Filled KDE Plot | I am plotting lat/lon points on a geographic map, and I would like to underlay a KDE plot beneath scatter points. Because they are lat/lon points, the plotting will use a cartopy transform (in my case PlateCarree). When calling the kdeplot function with fill=False, everything works fine. However, with fill = True it returns the error:
```
AttributeError: 'PlateCarree' object has no attribute 'contains_branch_seperately'
```
Here is reproducible code:
```python
import numpy as np
import seaborn as sns
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import cartopy.feature as cfeature

# Create example lat/lon points
lons = np.random.uniform(low=-100, high=-90, size=20)
lats = np.random.uniform(low=30, high=40, size=20)

# Create the figure
fig = plt.figure(figsize=(6, 4), dpi=300)
ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal())

# Add features
ax.add_feature(cfeature.STATES.with_scale('50m'), zorder=1)
ax.set_extent((-103, -88, 22.5, 42))

# Scatter plot the lat/lon points
ax.scatter(lons, lats, c='black', marker='x', transform=ccrs.PlateCarree(), zorder=2)

# Add a KDE underlay
kde = sns.kdeplot(x=lons, y=lats, fill=True, transform=ccrs.PlateCarree(), zorder=1)
```
| open | 2024-05-10T22:31:09Z | 2024-09-12T23:41:30Z | https://github.com/mwaskom/seaborn/issues/3694 | [] | bschweigert | 5 |
nteract/papermill | jupyter | 800 | error with `numpy` v2 | I get the following error when I run `papermill` with `numpy` version 2.0.0:
```
papermill.exceptions.PapermillExecutionError:
---------------------------------------------------------------------------
Exception encountered at "In [1]":
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
AttributeError: _ARRAY_API not found
```
I think this has to do with a change in the `numpy` API, as described [here](https://stackoverflow.com/a/78650263).
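For context (my reading, not papermill's docs): the `_ARRAY_API not found` AttributeError is the typical sign of a C extension compiled against NumPy 1.x being imported under NumPy 2.x. Until the extension is rebuilt, pinning `numpy<2` in the kernel environment is the common workaround; a hypothetical guard makes the mismatch explicit:

```python
def check_numpy_abi(runtime_version: str, built_against_major: int) -> None:
    """Raise early when an extension built for NumPy 1.x meets NumPy 2.x.

    Hypothetical helper for illustration; not part of papermill.
    """
    runtime_major = int(runtime_version.split(".")[0])
    if runtime_major >= 2 and built_against_major < 2:
        raise RuntimeError(
            f"extension built against NumPy {built_against_major}.x cannot "
            f"load under NumPy {runtime_version}; pin 'numpy<2' or rebuild"
        )

check_numpy_abi("1.26.4", 1)  # matching major versions: no error
```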
| open | 2024-06-21T12:35:22Z | 2024-08-20T17:38:51Z | https://github.com/nteract/papermill/issues/800 | [
"bug",
"help wanted"
] | jbloom | 1 |
OpenBB-finance/OpenBB | python | 7,001 | [Bug] Installer Fails On Windows Because Poetry Can't Downgrade Itself | Chicken-and-egg scenario with Poetry as a dependency: it can't manage itself as a versioned dependency in the project.

| closed | 2025-01-10T17:36:32Z | 2025-01-10T21:23:35Z | https://github.com/OpenBB-finance/OpenBB/issues/7001 | [
"bug",
"installer"
] | deeleeramone | 0 |
thtrieu/darkflow | tensorflow | 873 | tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape | ## tensorflow.python.framework.errors_impl.ResourceExhaustedError:
OOM when allocating tensor with shape[16,19,19,1024] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[Node: 36-leaky = Maximum[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](mul_15, BiasAdd_15)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
## ResourceExhaustedError (see above for traceback):
OOM when allocating tensor with shape[16,19,19,1024] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[Node: 36-leaky = Maximum[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](mul_15, BiasAdd_15)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
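For scale, a back-of-the-envelope computation (mine, not from the log) on the failing allocation shape `[16,19,19,1024]` shows a single such activation is only about 22.6 MiB; memory runs out because many activations plus the cuDNN workspace are held at once, which is why reducing the batch dimension (the leading 16) is the usual first mitigation:

```python
def tensor_bytes(shape, dtype_bytes=4):
    """Bytes needed for a dense tensor of the given shape (float32 by default)."""
    n = 1
    for dim in shape:
        n *= dim
    return n * dtype_bytes

# The allocation that failed: [batch, height, width, channels]
activation = tensor_bytes((16, 19, 19, 1024))
print(activation)                       # 23658496 bytes, ~22.6 MiB
print(tensor_bytes((8, 19, 19, 1024)))  # halving the batch halves it
```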
| open | 2018-08-14T05:12:14Z | 2018-09-14T08:32:06Z | https://github.com/thtrieu/darkflow/issues/873 | [] | leekwunfung817 | 2 |
microsoft/MMdnn | tensorflow | 432 | How to convert a Transformer network model from TensorFlow to PyTorch? | Platform (like ubuntu 16.04/win10):
Python version: python2.7
Source framework with version (like Tensorflow 1.8 with GPU):
Destination framework with version (pytorch with GPU):
Pre-trained model path (transformer network model):
I want to convert a Transformer model, which is for NLP and is not in MMdnn.extract_model. How can I add support for a new network model in MMdnn?
| closed | 2018-09-28T11:09:08Z | 2018-12-26T14:17:21Z | https://github.com/microsoft/MMdnn/issues/432 | [
"enhancement"
] | owenustc | 3 |
yt-dlp/yt-dlp | python | 12,337 | [10play.com.au] HTTP Error 502: Bad Gateway | ### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Australia
### Provide a description that is worded well enough to be understood
```
[TenPlay] Extracting URL: https://10play.com.au/spongebob-squarepants/episodes/season-4/episode-14/tpv240610dykzu
[TenPlay] tpv240610dykzu: Downloading JSON metadata
[TenPlay] tpv240610dykzu: Downloading video JSON
[TenPlay] tpv240610dykzu: Checking stream URL
[TenPlay] tpv240610dykzu: Downloading m3u8 information
ERROR: [TenPlay] tpv240610dykzu: Failed to download m3u8 information: HTTP Error 502: Bad Gateway (caused by <HTTPError 502: Bad Gateway>)
```
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--verbose', '-F', 'https://10play.com.au/spongebob-squarepants/episodes/season-4/episode-14/tpv240610dykzu']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] Python 3.12.8 (CPython x86_64 64bit) - Linux-6.11.0-14-generic-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg n7.1-184-gdc07f98934-20250127 (setts), ffprobe n7.1-184-gdc07f98934-20250127
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1839 extractors
[TenPlay] Extracting URL: https://10play.com.au/spongebob-squarepants/episodes/season-4/episode-14/tpv240610dykzu
[TenPlay] tpv240610dykzu: Downloading JSON metadata
[TenPlay] tpv240610dykzu: Downloading video JSON
[TenPlay] tpv240610dykzu: Checking stream URL
[TenPlay] tpv240610dykzu: Downloading m3u8 information
ERROR: [TenPlay] tpv240610dykzu: Failed to download m3u8 information: HTTP Error 502: Bad Gateway (caused by <HTTPError 502: Bad Gateway>)
File "/app/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/extractor/tenplay.py", line 90, in _real_extract
formats = self._extract_m3u8_formats(m3u8_url, content_id, 'mp4')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 2055, in _extract_m3u8_formats
fmts, subs = self._extract_m3u8_formats_and_subtitles(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 2077, in _extract_m3u8_formats_and_subtitles
res = self._download_webpage_handle(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 962, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 911, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/app/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 898, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query, extensions))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 4175, in urlopen
return self._request_director.send(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
^^^^^^^^^^^^^^^^^^^
File "/app/lib/python3.12/site-packages/yt_dlp/networking/_requests.py", line 365, in _send
raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 502: Bad Gateway
``` | open | 2025-02-11T04:25:44Z | 2025-02-11T14:59:47Z | https://github.com/yt-dlp/yt-dlp/issues/12337 | [
"geo-blocked",
"site-bug",
"triage"
] | nlogozzo | 0 |
microsoft/nni | pytorch | 5,670 | NNI install in development mode on macOS M1 | I installed nni from source code in development mode on an Apple M1 by following the instructions from https://nni.readthedocs.io/en/stable/notes/build_from_source.html
After installing, I can import nni successfully.
However, when I try the simple NAS tutorial at https://nni.readthedocs.io/en/stable/tutorials/hello_nas.html
I fail to import the sublibrary retiarii:
`from nni.retiarii import model_wrapper`
How should I resolve the missing extra dependencies when installing from source code on M1?
Thank you for your help!
| open | 2023-08-24T19:57:07Z | 2023-10-25T14:08:56Z | https://github.com/microsoft/nni/issues/5670 | [] | XiaochunLiu | 1 |
Miserlou/Zappa | django | 1,782 | Django + Xray Issue |
## Context
Deploying with Django + Xray always fails.
## Actual Behavior
I get this error:
> "{'message': 'An uncaught exception happened while servicing this request. You can investigate this with the `zappa tail` command.', 'traceback': ['Traceback (most recent call last):\\n', ' File \"/var/task/handler.py\", line 518, in handler\\n response = Response.from_app(self.wsgi_app, environ)\\n', ' File \"/var/task/werkzeug/wrappers.py\", line 939, in from_app\\n return cls(*_run_wsgi_app(app, environ, buffered))\\n', ' File \"/var/task/werkzeug/test.py\", line 923, in run_wsgi_app\\n app_rv = app(environ, start_response)\\n', \"TypeError: 'NoneType' object is not callable\\n\"]}"
## Steps to Reproduce
1. Enable **xray_tracing** in zappa_settings.json
2. Enable xray in Django settings.py
3. Deploy
## Your Environment
* Zappa version used: 0.47.1
* Operating System and Python version: 3.6
* The output of `pip freeze`:
> aiohttp==3.5.4
> amqp==2.4.1
> anyjson==0.3.3
> appdirs==1.4.3
> argcomplete==1.9.3
> asn1crypto==0.24.0
> async-timeout==3.0.1
> attrs==17.4.0
> autopep8==1.3
> awsebcli==3.14.6
> beautifulsoup4==4.4.1
> billiard==3.5.0.3
> blessed==1.14.2
> blinker==1.3
> boto==2.48.0
> boto3==1.9.30
> botocore==1.12.30
> cachetools==3.1.0
> cement==2.8.2
> cffi==1.11.5
> cfn-flip==1.0.3
> chardet==3.0.4
> click==6.6
> colorama==0.3.9
> construct==2.9.45
> contextlib2==0.5.5
> coreapi==2.3.3
> coreschema==0.0.4
> curlify==2.1.0
> defusedxml==0.5.0
> Django==2.1.2
> django-compat==1.0.15
> django-cors-headers==2.4.0
> django-filter==2.0.0
> django-filters==0.2.1
> django-health-check==3.5.0
> django-hijack==2.1.10
> django-hijack-admin==2.1.10
> django-resized==0.3.8
> django-rest-auth==0.9.1
> django-s3-storage==0.12.4
> django-ses==0.8.5
> django-silk==3.0.1
> django-templated-mail==1.1.0
> djangorestframework==3.9.0
> djangorestframework-jwt==1.9.0
> djoser==1.3.1
> dockerpty==0.4.1
> docopt==0.6.2
> docutils==0.14
> dry-rest-permissions==0.1.10
> durationpy==0.5
> extras==0.0.3
> filelock==3.0.10
> first==2.0.1
> fixtures==2.0.0
> Flask==0.11.1
> Flask-Gravatar==0.4.2
> Flask-HTMLmin==1.2
> Flask-Login==0.3.2
> Flask-Mail==0.9.1
> Flask-Principal==0.4.0
> Flask-Security==1.7.5
> Flask-WTF==0.12
> future==0.16.0
> google-api-python-client==1.6.2
> google-auth==1.6.2
> googleads==10.0.0
> gprof2dot==2016.10.13
> greenlet==0.4.13
> gunicorn==19.7.0
> hjson==3.0.1
> html5lib==1.0b3
> htmlmin==0.1.10
> httplib2==0.10.3
> idna==2.6
> idna-ssl==1.0.1
> ifaddr==0.1.4
> ipaddress==1.0.22
> itsdangerous==0.24
> itypes==1.1.0
> Jinja2==2.7.3
> jmespath==0.9.3
> jsonpickle==1.1
> kappa==0.6.0
> kombu==4.3.0
> lambda-packages==0.20.0
> linecache2==1.0.0
> mailchimp3==3.0.6
> Markdown==2.6.8
> MarkupSafe==0.23
> mock==2.0.0
> multidict==4.1.0
> oauth2client==4.1.3
> oauthlib==3.0.1
> olefile==0.44
> openapi-codec==1.3.2
> packaging==16.8
> passlib==1.6.2
> pathspec==0.5.5
> pbr==1.9.1
> pip-tools==1.8.0
> pipdeptree==0.13.1
> placebo==0.8.2
> pluggy==0.8.1
> pretty-cron==1.2.0
> psycopg2==2.7.6.1
> py==1.8.0
> pyaes==1.6.1
> pyasn1==0.4.2
> pyasn1-modules==0.2.1
> pyaspeller==0.1.0
> pycodestyle==2.3.1
> pycparser==2.19
> pycriteo==0.0.2
> pycurl==7.43.0.1
> Pygments==2.2.0
> PyJWT==1.4.2
> PyMySQL==0.7.10
> pyparsing==2.2.0
> pyrsistent==0.11.13
> PySocks==1.6.8
> python-dateutil==2.6.1
> python-mimeparse==1.5.1
> python-slugify==1.2.4
> python3-openid==3.1.0
> pytz==2018.3
> requests==2.21.0
> rsa==3.4.2
> s3transfer==0.1.13
> semantic-version==2.5.0
> sentry-sdk==0.7.2
> shellescape==3.4.1
> simplejson==3.6.5
> six==1.11.0
> speaklater==1.3
> sqlparse==0.1.19
> suds-jurko==0.6
> tabulate==0.7.5
> termcolor==1.1.0
> testtools==2.2.0
> toml==0.10.0
> tqdm==4.19.1
> traceback2==1.4.0
> transliterate==1.10
> troposphere==2.3.3
> typing-extensions==3.7.2
> unicodecsv==0.14.1
> Unidecode==1.0.22
> unittest2==1.1.0
> uritemplate==3.0.0
> urllib3==1.22
> vine==1.1.4
> virtualenv==16.4.1
> vk==2.0.2
> wcwidth==0.1.7
> websocket-client==0.47.0
> Werkzeug==0.14.1
> wrapt==1.11.1
> wsgi-request-logger==0.4.6
> yarl==1.1.1
> zappa==0.47.1
> zeroconf==0.21.3
* Your `zappa_settings.py`:
```
{
"dev": {
"aws_region": "eu-central-1",
"project_name": "dev",
"runtime": "python3.6",
"s3_bucket": "zappa-sdfdasdasdas",
"timeout_seconds": 900,
"memory_size": 1280,
"async_source": "sns",
"async_resources": true,
"async_response_table": "table",
"async_response_table_read_capacity": 1,
"async_response_table_write_capacity": 1,
"xray_tracing": true
}
}
```
* Django `settings.py`:
```
INSTALLED_APPS = [
....
....
'aws_xray_sdk.ext.django'
]
MIDDLEWARE = [
'aws_xray_sdk.ext.django.middleware.XRayMiddleware',
.....
.....
]
XRAY_RECORDER = {
'AWS_XRAY_TRACING_NAME': 'dev',
'AWS_XRAY_CONTEXT_MISSING': 'LOG_ERROR',
'AUTO_INSTRUMENT': True
}
```
| open | 2019-02-24T10:43:25Z | 2019-08-21T12:29:21Z | https://github.com/Miserlou/Zappa/issues/1782 | [] | makarovstas | 2 |
docarray/docarray | fastapi | 1,430 | docs: index backend implementation docs | closed | 2023-04-21T11:12:08Z | 2023-05-19T11:04:03Z | https://github.com/docarray/docarray/issues/1430 | [
"area/docs"
] | JoanFM | 0 | |
521xueweihan/HelloGitHub | python | 2,670 | [Open-Source Self-Recommendation] AI Group Tabs: use AI to classify and group browser tabs | ## Recommended Project
- Project URL: https://github.com/MichaelYuhe/ai-group-tabs
- Category: JS
- Project title: Use AI to automatically classify and group browser tabs
- Project description: Built on large language models and Chrome's Tab Group API, it automatically classifies and groups your Chrome tabs, so you never again have to worry about too many open tabs crowding together until you can't find anything. Supports custom groups and automatic grouping, with support for both GPT and Gemini.
- Highlights: free to use forever, rich custom rules
- Screenshot:
  
- Upcoming plans:
  - Classify with a local model to save tokens and protect privacy.
  - Tab search
  - Support custom grouping by domain | open | 2024-01-08T05:50:34Z | 2024-01-23T07:48:44Z | https://github.com/521xueweihan/HelloGitHub/issues/2670 | [
"其它"
] | MichaelYuhe | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 349 | Have the 7B and 7B-Plus base models both been compared after fine-tuning with the same 2M instructions? | Mainly I want to understand whether the improvement from 7B to 7B-Plus comes more from the pretraining corpus growing from 20G to 120G, or more from the instruction data growing from 2M to 4M.
Alternatively, compare the 7B and 7B-Plus base models both fine-tuned with the same 4M instructions.
Thanks | closed | 2023-05-16T07:35:51Z | 2023-05-26T23:43:37Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/349 | [
"stale"
] | xinghua3632 | 3 |
sgl-project/sglang | pytorch | 3,912 | [Bug] OpenAI-compatible API returns finish_reason as an empty string instead of null, causing JSON deserialization failure | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
chunk response from streaming api:
```
{
"id": "b2946a2aa5574af6aadf0c0889e8dcef",
"object": "chat.completion.chunk",
"created": 1740634121,
"model": "deepseek-r1:70b",
"choices": [
{
"index": 0,
"delta": {
"role": null,
"content": "*****",
"tool_calls": null
},
"logprobs": null,
"finish_reason": "",
"matched_stop": null
}
],
"usage": null
}
```
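To make the failure concrete, here is a minimal stand-in for a strict typed client (my sketch, not sglang or any real OpenAI client code): JSON `null` maps cleanly to "still streaming", while `""` is neither null nor a known enum member and is rejected:

```python
import json
from enum import Enum
from typing import Optional

class FinishReason(Enum):
    """A strict stand-in for the finish_reason field of a typed client."""
    STOP = "stop"
    LENGTH = "length"
    TOOL_CALLS = "tool_calls"

def parse_finish_reason(raw: Optional[str]) -> Optional[FinishReason]:
    if raw is None:
        # JSON null is the valid "still streaming" marker.
        return None
    # Any non-null value must be a known member; "" is not.
    return FinishReason(raw)

mid_stream = json.loads('{"finish_reason": null}')
assert parse_finish_reason(mid_stream["finish_reason"]) is None

bad_chunk = json.loads('{"finish_reason": ""}')
try:
    parse_finish_reason(bad_chunk["finish_reason"])
except ValueError:
    print("empty string rejected")  # the failure strict clients hit
```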
### Reproduction
stream = true
### Environment
env: sglang 0.4.3 post2, deepseek-r1-distill-llama-70b | open | 2025-02-27T06:16:33Z | 2025-03-18T03:05:17Z | https://github.com/sgl-project/sglang/issues/3912 | [] | jzhouw | 2 |
explosion/spaCy | machine-learning | 12,688 | Incorrect lemmatization for Spanish tokens, even if they are present in es_lemma_lookup.json |
## How to reproduce the behaviour
```python
import spacy
nlp = spacy.load('es_core_news_lg')
examples = [
'oído',
'engrandece',
'detecté',
'detectársele',
'mostrárselo',
'decírselo',
]
for example in examples:
doc = nlp(example)
for token in doc:
print(token.lemma_)
# Output:
# oer
# engrandecir
# detecté
# detectársele
# mostrárselo
# decir él él
```
The forms `oído`, `engrandece` and `detecté` are present in the [es_lemma_lookup.json](https://raw.githubusercontent.com/explosion/spacy-lookups-data/master/spacy_lookups_data/data/es_lemma_lookup.json) file, but those values are not the ones in the output.
In the case of `detectársele` and `mostrárselo`, they are not lemmatized, but `decírselo` is. I will open a separate issue for this case if you prefer.
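For contrast, a pure lookup lemmatizer would return the table entry verbatim. The sketch below uses illustrative entries (not the actual contents of es_lemma_lookup.json); outputs like `oer` suggest the pipeline's statistical lemmatizer is generalizing rather than consulting the lookup table, though that is my guess, not something confirmed here:

```python
def lookup_lemma(form: str, table: dict) -> str:
    """What a pure lookup lemmatizer does: table hit, else the form unchanged."""
    return table.get(form, form)

# Illustrative entries only; the real es_lemma_lookup.json is far larger.
table = {"oído": "oír", "engrandece": "engrandecer", "detecté": "detectar"}

print(lookup_lemma("oído", table))         # oír
print(lookup_lemma("mostrárselo", table))  # mostrárselo (no entry: unchanged)
```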
## Your Environment
* Operating System: Linux desktop 5.19.0-42-generic 43~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Apr 21 16:51:08 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
* Python Version Used: Python 3.10.6
* spaCy Version Used: spaCy version 3.5.3
* Environment Information:
| closed | 2023-05-31T10:25:54Z | 2023-07-07T00:02:24Z | https://github.com/explosion/spaCy/issues/12688 | [
"lang / es",
"feat / lemmatizer"
] | torce | 5 |