repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
vitalik/django-ninja | pydantic | 1,412 | Why is Django ninja silencing AttributeError in resolve_* methods? | Let's consider this example:
```python
class BookOut(Schema):
    authors: list[str] = list()

    @staticmethod
    def resolve_authors(obj):
        authors = []
        for author in obj.authors.all():
            # this will cause AttributeError
            authors.append(author.nonexisting_attribute)
        return authors
```
When there is an AttributeError in the resolver AND the `authors` field has a default value, the error is silenced and `result.authors` ends up as an empty list. Why is that? Personally, I would like to know about any errors occurring in a `resolve_*` method, regardless of whether the field has a default or not. | closed | 2025-02-21T13:00:04Z | 2025-02-24T12:53:31Z | https://github.com/vitalik/django-ninja/issues/1412 | [] | flaiming | 4 |
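The silencing described above can be illustrated with a small stdlib-only sketch (a hypothetical, simplified dispatcher, not django-ninja's actual code): a field with a default swallows AttributeError from its resolver, while a field without one re-raises.

```python
_MISSING = object()

def resolve_field(obj, resolver, default=_MISSING):
    """Call resolver(obj); on AttributeError, fall back to the default if any."""
    try:
        return resolver(obj)
    except AttributeError:
        if default is not _MISSING:
            return default   # error silently swallowed
        raise                # no default: surface the error

class Book:
    pass

def resolve_authors(obj):
    return obj.nonexistent_attribute  # raises AttributeError

# With a default, the error is hidden and the default comes back:
print(resolve_field(Book(), resolve_authors, default=[]))  # → []

# Without a default, the AttributeError propagates:
try:
    resolve_field(Book(), resolve_authors)
except AttributeError:
    print("AttributeError surfaced")
```

The sketch only mimics the behaviour the reporter observed; whether django-ninja should re-raise unconditionally is the question the issue raises.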
graphdeco-inria/gaussian-splatting | computer-vision | 338 | if downloading code (simple-knn) fails, then ... | You can download ./submodules/diff-gaussian-rasterization/ from the web, but simple-knn will fail.
Please use git clone *** --recursive
You can run 'cd ./submodules/diff-gaussian-rasterization/; python -Bu setup.py install; cd ../../', but simple-knn will fail.
Please use 'pip install ./submodules/simple-knn/'
In a word, just follow the official steps. | closed | 2023-10-18T18:06:44Z | 2023-10-21T14:31:18Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/338 | [] | yuedajiong | 0 |
sqlalchemy/alembic | sqlalchemy | 331 | Autogenerated op.drop_constraint(None, ...) fails because name is None | **Migrated issue, originally created by Greg Kempe ([@longhotsummer](https://github.com/longhotsummer))**
I have a table for which Alembic autogenerated this upgrade migration, which works:
```python
op.create_foreign_key(None, 'committee_question', 'minister', ['minister_id'], ['id'], ondelete='SET NULL')
```
however the auto generated downgrade migration:
```python
op.drop_constraint(None, 'committee_question', type_='foreignkey')
```
lacks a name and so it fails.
| closed | 2015-10-07T06:55:54Z | 2015-10-08T13:33:02Z | https://github.com/sqlalchemy/alembic/issues/331 | [
"bug"
] | sqlalchemy-bot | 6 |
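A common fix for the unnamed-constraint problem above (a suggestion, not from the issue thread) is to attach a SQLAlchemy naming convention to the `MetaData`, so autogenerate emits a deterministic name instead of `None`. The convention is just a template dict; a stdlib-only sketch of how such a template expands:

```python
# Sketch: how a SQLAlchemy-style naming convention template expands.
# The "fk" key mirrors a commonly recommended convention; the expansion
# here is plain string formatting, not SQLAlchemy itself.
naming_convention = {
    "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
}

name = naming_convention["fk"] % {
    "table_name": "committee_question",
    "column_0_name": "minister_id",
    "referred_table_name": "minister",
}
print(name)  # → fk_committee_question_minister_id_minister
```

With such a convention attached to the project's `MetaData` (standard Alembic/SQLAlchemy setup assumed), the autogenerated `op.drop_constraint` can reference the constraint by that name rather than `None`.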
davidteather/TikTok-Api | api | 565 | TikTok Sound Summary Count | Is there a way to get the summary statistic for a TikTok sound, i.e. how many total sounds there currently are. This would be mega helpful for tracking changes over time. | closed | 2021-04-16T16:38:36Z | 2021-04-17T11:26:06Z | https://github.com/davidteather/TikTok-Api/issues/565 | [
"feature_request"
] | eddyojb88 | 2 |
TencentARC/GFPGAN | pytorch | 63 | About training with 8 gpus | Hi xintao, thanks for sharing your great work.
I am currently trying to train GFPGAN with 8 GPUs, which means the total batch size will be doubled. Should I modify some hyperparameters in train_gfpgan_v1.yml, such as the learning rate and the total steps? Thanks again, have a nice day~. | closed | 2021-09-15T04:43:09Z | 2021-09-17T03:26:39Z | https://github.com/TencentARC/GFPGAN/issues/63 | [] | NNNNAI | 2 |
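One common heuristic for the question above is the linear scaling rule (an assumption on my part, not maintainer guidance, and the numbers below are purely illustrative): scale the learning rate with the batch size and shrink the iteration count proportionally.

```python
# Linear scaling heuristic (a sketch, not project-specific advice):
# doubling the total batch size doubles the LR and halves the steps.
def scale_schedule(base_lr, base_steps, old_batch, new_batch):
    factor = new_batch / old_batch
    return base_lr * factor, int(base_steps / factor)

lr, steps = scale_schedule(base_lr=1e-4, base_steps=800_000,
                           old_batch=12, new_batch=24)
print(lr, steps)  # → 0.0002 400000
```

Whether this heuristic holds for GAN training specifically is debatable, so a small learning-rate sweep around the scaled value is still advisable.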
huggingface/datasets | pandas | 6,566 | I train controlnet_sdxl in bf16 datatype, got unsupported ERROR in datasets | ### Describe the bug
```
Traceback (most recent call last):
File "train_controlnet_sdxl.py", line 1252, in <module>
main(args)
File "train_controlnet_sdxl.py", line 1013, in main
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
writer.write_batch(batch)
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 557, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 248, in pyarrow.lib.array
File "pyarrow/array.pxi", line 113, in pyarrow.lib._handle_arrow_array_protocol
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 191, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 447, in cast_to_python_objects
return _cast_to_python_objects(
File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 324, in _cast_to_python_objects
for x in obj.detach().cpu().numpy()
TypeError: Got unsupported ScalarType BFloat16
```
### Steps to reproduce the bug
Here is my training script. I use the BF16 type and train my model with diffusers.
```
export MODEL_DIR="/home/mhh/sd_models/stable-diffusion-xl-base-1.0"
export OUTPUT_DIR="./control_net"
export VAE_NAME="/home/mhh/sd_models/sdxl-vae-fp16-fix"
accelerate launch train_controlnet_sdxl.py \
--pretrained_model_name_or_path=$MODEL_DIR \
--output_dir=$OUTPUT_DIR \
--pretrained_vae_model_name_or_path=$VAE_NAME \
--dataset_name=/home/mhh/sd_datasets/fusing/fill50k \
--mixed_precision="bf16" \
--resolution=1024 \
--learning_rate=1e-5 \
--max_train_steps=200 \
--validation_image "/home/mhh/sd_datasets/controlnet_image/conditioning_image_1.png" "/home/mhh/sd_datasets/controlnet_image/conditioning_image_2.png" \
--validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
--validation_steps=50 \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--report_to="wandb" \
--seed=42 \
```
### Expected behavior
When I changed the data type to fp16, it worked.
### Environment info
datasets 2.16.1
numpy 1.24.4 | closed | 2024-01-08T02:37:03Z | 2024-06-02T14:24:39Z | https://github.com/huggingface/datasets/issues/6566 | [
"bug"
] | HelloWorldBeginner | 1 |
ansible/ansible | python | 84,781 | Data Tagging: extending `AnsibleDumper` can result in strange Python errors | ### Fallible Version
2025.3.3
### Summary
community.general's `yaml` plugin does (among other things)
```
from ansible.parsing.yaml.dumper import AnsibleDumper
class MyDumper(AnsibleDumper):
    def represent_scalar(self, tag, value, style=None):
        """Uses block style for multi-line strings"""
        if style is None:
            if should_use_block(value):
                style = '|'
                # we care more about readable than accuracy, so...
                # ...no trailing space
                value = value.rstrip()
                # ...and non-printable characters
                value = ''.join(x for x in value if x in string.printable or ord(x) >= 0xA0)
                # ...tabs prevent blocks from expanding
                value = value.expandtabs()
                # ...and odd bits of whitespace
                value = re.sub(r'[\x0b\x0c\r]', '', value)
                # ...as does trailing space
                value = re.sub(r' +\n', '\n', value)
            else:
                style = self.default_style
        node = yaml.representer.ScalarNode(tag, value, style=style)
        if self.alias_key is not None:
            self.represented_objects[self.alias_key] = node
        return node
```
This causes the `import` sanity tests to fail with:
```
03:54 ERROR: plugins/callback/yaml.py:56:0: traceback: TypeError: function() argument 'code' must be code, not str (0%)
```
Line 56 is the line with `class MyDumper(AnsibleDumper):`.
### <!-- Bot instructions (ignore this) -->
<!--
### Component Name
bin/ansible
### Issue Type
Bug Report
### Ansible Version
2.19.0.dev0
### Configuration
### OS / Environment
-->
| open | 2025-03-05T20:19:49Z | 2025-03-09T16:16:08Z | https://github.com/ansible/ansible/issues/84781 | [
"bug",
"has_pr",
"data_tagging",
"fallible_dt"
] | felixfontein | 3 |
pytorch/pytorch | deep-learning | 149,493 | DISABLED [WORKFLOW_NAME] / [PLATFORM_NAME] / [JOB_NAME] | > For example, DISABLED pull / win-vs2022-cpu-py3 / test (default). Once
> created, the job will be disabled within 15 minutes. You can check the
> list of disabled jobs at https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json
> If you need to get this out ASAP instead of waiting for 15 minutes,
> you can manually trigger the workflow at https://github.com/pytorch/test-infra/actions/workflows/update_disabled_tests.yml
> once the issue is created to update the above JSON list right away.
> Noted: you need to have write access to PyTorch repo to disable CI
> jobs. The issue will be rejected otherwise.
## Reason
*Provide a reason why this is needed and when this can be resolved*.
cc @seemethere @malfet @pytorch/pytorch-dev-infra | closed | 2025-03-19T07:45:37Z | 2025-03-19T07:45:41Z | https://github.com/pytorch/pytorch/issues/149493 | [
"module: ci"
] | Owner-DSH | 1 |
microsoft/MMdnn | tensorflow | 357 | (keras2IR) TypeError: unsupported operand type(s) for +: 'NoneType' and 'int' | Platform : ubuntu 16.04
Python version : 3.6
Source framework with version : keras 2.20 with GPU
Destination framework with version : pytorch with GPU
Pre-trained model path (webpath or webdisk path):
Running scripts: mmtoir -f keras -d vgg16_bangs_pcb -n vgg16_3bangs.json -w vgg16_3bangs.h5
I got following error message:
Using TensorFlow backend.
.
.
.
Network file [vgg16_3bangs.json] and [vgg16_3bangs.h5] is loaded successfully.
Traceback (most recent call last):
File "/home/suzukilab/.pyenv/versions/anaconda3-5.2.0/envs/anaconda3/bin/mmtoir", line 11, in <module>
sys.exit(_main())
File "/home/suzukilab/.pyenv/versions/anaconda3-5.2.0/envs/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 192, in _main
ret = _convert(args)
File "/home/suzukilab/.pyenv/versions/anaconda3-5.2.0/envs/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 115, in _convert
parser.run(args.dstPath)
File "/home/suzukilab/.pyenv/versions/anaconda3-5.2.0/envs/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/common/DataStructure/parser.py", line 22, in run
self.gen_IR()
File "/home/suzukilab/.pyenv/versions/anaconda3-5.2.0/envs/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/keras/keras2_parser.py", line 142, in gen_IR
func(current_node)
File "/home/suzukilab/.pyenv/versions/anaconda3-5.2.0/envs/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/keras/keras2_parser.py", line 419, in rename_Conv2D
self._convert_convolution(source_node, 2)
File "/home/suzukilab/.pyenv/versions/anaconda3-5.2.0/envs/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/keras/keras2_parser.py", line 273, in _convert_convolution
Keras2Parser._convert_padding(source_node, IR_node)
File "/home/suzukilab/.pyenv/versions/anaconda3-5.2.0/envs/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/keras/keras2_parser.py", line 204, in _convert_padding
list(source_node.layer.strides))
File "/home/suzukilab/.pyenv/versions/anaconda3-5.2.0/envs/anaconda3/lib/python3.6/site-packages/mmdnn/conversion/common/utils.py", line 114, in compute_tf_same_padding
output_shape = (input_shape[idx] + strides[idx] - 1) // strides[idx]
**TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'**
I want to convert my own keras model to IR (and then IR to pytorch) with its architecture and weights.
I built my Keras model on top of the pre-trained VGG16, added some layers including a regression layer, and then fine-tuned it.
I save the trained model with
model_json_str = model.to_json()
open('vgg16_3bangs.json', 'w').write(model_json_str)
model.save_weights('vgg16_3bangs.h5')
How can I solve the error message?
BTW, I cannot figure out the meaning of the second argument 'vgg16_bangs_pcb', so I wrote it arbitrarily. What does it mean? | closed | 2018-08-13T05:49:11Z | 2018-12-22T09:56:21Z | https://github.com/microsoft/MMdnn/issues/357 | [] | YusukeO | 5 |
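The failing line in the traceback computes TensorFlow-style SAME padding as `(input + stride - 1) // stride`, and it blows up because the Keras JSON serialized a dimension as `None`. A stdlib-only sketch of the failure mode (the usual remedy, giving the model a fully concrete input shape before export, is an assumption, not a confirmed MMdnn fix):

```python
def same_pad_output(dim, stride):
    # Mirrors the one-axis arithmetic from mmdnn's compute_tf_same_padding.
    return (dim + stride - 1) // stride

try:
    same_pad_output(None, 1)   # a None dimension from the Keras JSON
except TypeError as e:
    print("fails:", e)         # unsupported operand type(s) for +: 'NoneType' and 'int'

print(same_pad_output(224, 1)) # → 224: works once the shape is concrete
```

This is why rebuilding the model with an explicit `input_shape` (so no dimension is `None`) often makes the conversion proceed.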
amidaware/tacticalrmm | django | 1,362 | Option to cache a task script locally on the machine, so it will still run without network | **Is your feature request related to a problem? Please describe.**
Certain scheduled tasks should be able to run even when a pc does not have internet.
e.g., a script to autoconfigure network setup :P We prepare lots of machines locally, and when changing networks
we sometimes forget to change the network settings.
We have prepared a script to autoconfigure network settings based on the client/site name, which is scheduled to run on the deployment day.
**Describe the solution you'd like**
An option to cache the script locally, so it can run even without internet.
This should be an opt-in option, and not set by default.
**Describe alternatives you've considered**
Create another script to create the script and the task so it can run offline.
**Additional context**
Add any other context or screenshots about the feature request here.
| open | 2022-12-02T13:41:32Z | 2023-07-06T05:15:18Z | https://github.com/amidaware/tacticalrmm/issues/1362 | [
"enhancement"
] | stavros-k | 2 |
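The requested behaviour can be sketched in a few lines of stdlib Python (hypothetical, not Tactical RMM code; the script name and contents are made up): cache the script body whenever a fetch succeeds, and fall back to the cached copy when the network is unavailable.

```python
import pathlib
import tempfile

def get_script(fetch, cache_path):
    """fetch() returns the script text or raises OSError when offline."""
    cache = pathlib.Path(cache_path)
    try:
        body = fetch()                 # online: refresh the cache
        cache.write_text(body)
    except OSError:
        if not cache.exists():
            raise                      # offline and never cached
        body = cache.read_text()       # offline: run the cached copy
    return body

cache = pathlib.Path(tempfile.mkdtemp()) / "task.ps1"
print(get_script(lambda: "Set-Network ...", cache))  # online call caches the body

def offline():
    raise OSError("no network")

print(get_script(offline, cache))                    # offline call uses the cache
```

An agent-side version would also need integrity checks (e.g. a hash of the cached body), which is why the issue suggests making this opt-in.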
httpie/cli | python | 1,266 | JSON highlighting corrupted by green background in Windows Terminal | ## Checklist
- [X] I've searched for similar issues.
- [X] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. Request a JSON file using HTTPie in Windows Terminal, e.g. `http -j GET https://raw.githubusercontent.com/httpie/httpie/master/tests/fixtures/test.json`
2. Observe corrupt/incorrect green background in JSON syntax highlighting
Windows Terminal version 1.11.3471.0
Windows 10 21H2 (19044.1415)
HTTPie 2.6.0
Python 3.10.1
JSON highlighting used to work correctly for me, but I'm not sure in exactly which version(s). I tested in Command Prompt, too, with the same result, so I don't think this is specific to Windows Terminal.
---
## Debug output
```txt
>http -j --debug GET https://raw.githubusercontent.com/httpie/httpie/master/tests/fixtures/test.json
HTTPie 2.6.0
Requests 2.27.1
Pygments 2.11.2
Python 3.10.1 (tags/v3.10.1:2cd268a, Dec 6 2021, 19:10:37) [MSC v.1929 64 bit (AMD64)]
C:\Users\will\scoop\apps\python\current\python.exe
Windows 10
<Environment {'colors': 256,
'config': {'default_options': []},
'config_dir': WindowsPath('C:/Users/will/AppData/Roaming/httpie'),
'devnull': <property object at 0x000002A31B077A10>,
'is_windows': True,
'log_error': <function Environment.log_error at 0x000002A31B097010>,
'program_name': 'http',
'stderr': <colorama.ansitowin32.StreamWrapper object at 0x000002A31B089D80>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <colorama.ansitowin32.StreamWrapper object at 0x000002A31B089690>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': '',
'headers': {'User-Agent': b'HTTPie/2.6.0', 'Accept': b'application/json, */*;q=0.5', 'Content-Type': b'application/json'},
'method': 'get',
'params': <generator object MultiValueOrderedDict.items at 0x000002A31B178AC0>,
'url': 'https://raw.githubusercontent.com/httpie/httpie/master/tests/fixtures/test.json'})
HTTP/1.1 200 OK
Accept-Ranges: bytes
Access-Control-Allow-Origin: *
Cache-Control: max-age=300
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 180
Content-Security-Policy: default-src 'none'; style-src 'unsafe-inline'; sandbox
Content-Type: text/plain; charset=utf-8
Date: Tue, 11 Jan 2022 15:00:58 GMT
ETag: W/"020a89035cfd7a956c3a3db63baedb50bec31c5b8516170321eeb60c2f338f55"
Expires: Tue, 11 Jan 2022 15:05:58 GMT
Source-Age: 79
Strict-Transport-Security: max-age=31536000
Vary: Authorization,Accept-Encoding,Origin
Via: 1.1 varnish
X-Cache: HIT
X-Cache-Hits: 1
X-Content-Type-Options: nosniff
X-Fastly-Request-ID: 3f9a95965264c43c85ce6b1c6b891280811ce375
X-Frame-Options: deny
X-GitHub-Request-Id: B0CA:18E9:14CFB7:1AFE2C:61DD9B58
X-Served-By: cache-iad-kcgs7200037-IAD
X-Timer: S1641913258.082057,VS0,VE1
X-XSS-Protection: 1; mode=block
{
"name": "Jakub Roztočil",
"unicode": "χρυσαφὶ 太陽 เลิศ ♜♞♝♛♚♝♞♜ оживлённым तान्यहानि 有朋"
}
```
## Additional information, screenshots, or code examples

| closed | 2022-01-11T15:22:35Z | 2022-01-14T16:47:10Z | https://github.com/httpie/cli/issues/1266 | [
"bug",
"windows"
] | wjrogers | 6 |
labmlai/annotated_deep_learning_paper_implementations | pytorch | 177 | The classifier-free guidance of diffusion models is wrong. | The classifier-free guidance equation of diffusion models [here](https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/b05c9e0c57c6223b8f59dc11be114b97896b0481/labml_nn/diffusion/stable_diffusion/sampler/__init__.py#L50) is wrong, which is
$$\epsilon_\theta(x_t, c) = s\epsilon_\text{cond}(x_t, c) + (s - 1)\epsilon_\text{cond}(x_t, c_u).$$
However, the correct equation is given in [the Imagen paper](https://arxiv.org/pdf/2205.11487.pdf), Section 2.2, Equation (2), as
$$\epsilon_\theta(x_t, c) = s\epsilon_\text{cond}(x_t, c) + (1 - s)\epsilon_\text{cond}(x_t, c_u).$$
The code [here](https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/b05c9e0c57c6223b8f59dc11be114b97896b0481/labml_nn/diffusion/stable_diffusion/sampler/__init__.py#L67) implements the correct equation, though. So there should be no need to fix the code.
I believe all the sampling articles such as [this](https://nn.labml.ai/diffusion/stable_diffusion/sampler/ddpm.html) and [this](https://nn.labml.ai/diffusion/stable_diffusion/sampler/ddpm.html) use the wrong equation, so they should also be corrected.
| open | 2023-04-09T14:07:09Z | 2023-06-30T10:12:32Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/177 | [
"bug"
] | luowyang | 0 |
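The two formulas above can be checked numerically in pure Python (an illustration, not code from the repository): the correct Imagen form collapses to the unconditional prediction at guidance scale s = 0, while the incorrect form flips its sign.

```python
def cfg_wrong(eps_cond, eps_uncond, s):
    # s * eps_cond + (s - 1) * eps_uncond  (the documented, incorrect form)
    return s * eps_cond + (s - 1) * eps_uncond

def cfg_correct(eps_cond, eps_uncond, s):
    # s * eps_cond + (1 - s) * eps_uncond  (Imagen, Eq. (2))
    return s * eps_cond + (1 - s) * eps_uncond

eps_cond, eps_uncond = 0.8, 0.2

# At s = 0, guidance should collapse to the unconditional prediction:
print(cfg_correct(eps_cond, eps_uncond, 0.0))  # → 0.2
print(cfg_wrong(eps_cond, eps_uncond, 0.0))    # → -0.2 (sign flipped)

# The correct form is equivalently eps_uncond + s * (eps_cond - eps_uncond):
s = 7.5
print(abs(cfg_correct(eps_cond, eps_uncond, s)
          - (eps_uncond + s * (eps_cond - eps_uncond))) < 1e-9)  # → True
```

This matches the claim in the issue: the code in the repository implements the correct form, and only the written equation needs fixing.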
dpgaspar/Flask-AppBuilder | flask | 1,626 | created/changed_by_fk issues updating database outside of FAB framework | **PROBLEM:**
We have a database class (call it Collection), which inherits the AuditMixin mixin. This mixin automatically generates the fields "created_by_fk" and "changed_by_fk" for every insert/update to the table.
In our application we have asynchronous tasks that must run outside of the FAB thread. When the tasks finish, they update a column in Collection. However, the AuditMixin does not have access to a valid g.user in order to generate the user_id for these updates.
**ATTEMPTED SOLUTION (FAILED):**
I imported the g global in the task thread and passed it the user id of the user who initiated the call.
Then I tried to spoof the g.user.id for AuditMixin in the following way:
```
class UserSpoof:
    def __init__(self, _id):
        self.id = _id

.
.
.

def run_task(user_id):
    with app.app_context():
        g.user = UserSpoof(user_id)
        .
        .
        .
        Collection.update_status('finished')
```
**QUESTION**
Do you have any suggestions on how we should solve this issue?
| closed | 2021-04-28T15:50:53Z | 2022-04-17T16:24:28Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1626 | [
"stale"
] | cbisaccia78 | 2 |
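One refinement of the spoofing approach above (a suggestion of mine, not an official Flask-AppBuilder API) is to make it reversible with a context manager, so the fake user never leaks past the task. A stdlib-only sketch of the pattern, using a stand-in `g` object instead of flask's:

```python
from contextlib import contextmanager
from types import SimpleNamespace

g = SimpleNamespace(user=None)            # stand-in for flask.g

@contextmanager
def impersonate(user_id):
    previous = getattr(g, "user", None)
    g.user = SimpleNamespace(id=user_id)  # the attribute AuditMixin reads
    try:
        yield g.user
    finally:
        g.user = previous                 # always restore, even on error

with impersonate(42) as user:
    print(user.id)      # → 42  (AuditMixin would pick this up)
print(g.user)           # → None (restored after the task)
```

In the real application the same context manager would wrap the `Collection.update_status(...)` call inside `app.app_context()`.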
Anjok07/ultimatevocalremovergui | pytorch | 793 | AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS' | how to fix? thx a lot | open | 2023-09-12T17:37:45Z | 2024-03-11T18:30:08Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/793 | [] | hyrulelinks | 1 |
aminalaee/sqladmin | asyncio | 93 | Support for registering custom converters | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
There doesn't seem to be an obvious way to register converter functions with `@converts` or subclass `ModelConverter`.
This might also be a bug where `ModelConverterBase.get_converter` is unable to recognize `TypeDecorator` types that extend a type that already has a converter.
### Describe the solution you would like.
Possibly utilizing a global registry for `@converts`.
### Describe alternatives you considered
_No response_
### Additional context
Encountered while trying to create a `ModelAdmin` for a `SQLModel` (related to #57)
`Exception: Could not find field converter for column name (<class 'sqlmodel.sql.sqltypes.AutoString'>).` where `AutoString` extends `String`
EDIT:
Got it to work by setting the `sa_column=` on the SQLModel field:
```python
class MyModel(SQLModel):
    # name: str = Field(..., index=True)  # broken
    name: str = Field(..., sa_column=Column(String(length=512)))  # works
```
I believe the feature request still has value | closed | 2022-03-16T17:18:05Z | 2022-06-15T07:56:20Z | https://github.com/aminalaee/sqladmin/issues/93 | [
"enhancement"
] | lovetoburnswhen | 6 |
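The requested global registry could look roughly like this (a stdlib-only sketch of the pattern, not sqladmin's actual internals): an `@converts` decorator records handlers by type name, and lookup walks the column type's MRO, so a subclass such as `AutoString(String)` falls back to the `String` converter.

```python
_CONVERTERS = {}

def converts(*type_names):
    """Register the decorated function as converter for the given type names."""
    def wrap(fn):
        for name in type_names:
            _CONVERTERS[name] = fn
        return fn
    return wrap

def get_converter(column_type):
    # Walk the MRO so subclasses inherit their base type's converter.
    for cls in type(column_type).__mro__:
        if cls.__name__ in _CONVERTERS:
            return _CONVERTERS[cls.__name__]
    raise LookupError(f"No converter for {type(column_type)!r}")

class String: ...
class AutoString(String): ...      # stand-in for sqlmodel's AutoString

@converts("String")
def convert_string(column_type):
    return "StringField"

print(get_converter(AutoString())(AutoString()))  # → StringField
```

An MRO-based lookup like this would also address the `TypeDecorator` case mentioned above, since any type extending an already-convertible base would resolve automatically.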
scrapy/scrapy | web-scraping | 5,874 | Scrapy does not decode base64 MD5 checksum from GCS | <!--
Thanks for taking an interest in Scrapy!
If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/.
The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.
Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md
The following is a suggested template to structure your issue, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#reporting-bugs
-->
### Description
Incorrect GCS Checksum processing
### Steps to Reproduce
1. Obtain the checksum for an up-to-date file.
**Expected behavior:** [What you expect to happen]
matches the checksum of the file downloaded
**Actual behavior:** [What actually happens]
NOT matches the checksum of the file downloaded
**Reproduces how often:** [What percentage of the time does it reproduce?]
Always
### Versions
current
### Additional context
https://cloud.google.com/storage/docs/json_api/v1/objects
> MD5 hash of the data, encoded using [base64](https://datatracker.ietf.org/doc/html/rfc4648#section-4).
However, Scrapy does not decode the MD5 from GCS.
| closed | 2023-03-27T05:55:22Z | 2023-04-11T16:25:43Z | https://github.com/scrapy/scrapy/issues/5874 | [
"bug",
"good first issue"
] | namelessGonbai | 12 |
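The mismatch comes down to a missing decode step: GCS reports the MD5 as base64 of the raw digest, while local hashing usually produces hex. A stdlib sketch of the comparison (illustrative, not Scrapy's code):

```python
import base64
import hashlib

data = b"hello world"
local_md5 = hashlib.md5(data).hexdigest()

# What the GCS JSON API reports for the same bytes: base64 of the raw digest.
gcs_md5 = base64.b64encode(hashlib.md5(data).digest()).decode()

print(local_md5 == gcs_md5)                          # → False: formats differ
print(base64.b64decode(gcs_md5).hex() == local_md5)  # → True after decoding
```

So the fix on the Scrapy side is to base64-decode the `md5Hash` field before comparing it with the locally computed digest.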
PaddlePaddle/models | nlp | 5,219 | Single-object tracking model fails to convert from a dynamic graph to a static graph | Background:
I want to export the atom_resnet18 model from https://github.com/PaddlePaddle/models/tree/release/2.0-beta/PaddleCV/tracking for deployment and inference verification. Only a dynamic-graph model is provided, so I want to convert it to a static graph.
Code:
![image](https://user-images.githubusercontent.com/46057086/105188585-29469b80-5b6f-11eb-9270-1f101a2e1de6.png)
It fails at the last step, paddle.jit.save(model, 'inference_models/AtomNet'), reporting the problem below (see the attached error log):
[error.txt](https://github.com/PaddlePaddle/models/files/5843038/error.txt)
What could the problem be? Please help take a look, thank you! Is it related to the model having been trained with version 1.8? I am now using version 2.0 for the static-graph conversion. | closed | 2021-01-20T14:24:21Z | 2021-01-22T12:38:45Z | https://github.com/PaddlePaddle/models/issues/5219 | [] | AnBaolei1984 | 1 |
graphql-python/graphene-django | django | 1,219 | CAMELCASE_ERRORS setting breaks __all__ field | **Note: for support questions, please use stackoverflow**. This repository's issues are reserved for feature requests and bug reports.
* **What is the current behavior?**
When `CAMELCASE_ERRORS` is set to `True` the form level all field loses it's first underscore and has a capitalized A. `_All__`
* **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem**
Simply set `CAMELCASE_ERRORS` to `True` and trigger a form level validation error to reproduce the error.
* **What is the expected behavior?**
The field should still remain as `__all__`
* **What is the motivation / use case for changing the behavior?**
This is a bug that will cause unintended behavior.
* **Please tell us about your environment:**
- Version: 2.15.0
- Platform: Python 3.8 | open | 2021-06-27T14:55:21Z | 2021-06-27T14:56:35Z | https://github.com/graphql-python/graphene-django/issues/1219 | [
"🐛bug"
] | pfcodes | 0 |
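The `_All__` mangling above can be reproduced with a simplified snake-to-camel converter (a sketch approximating graphene's behaviour, not its exact source): empty components between consecutive underscores are turned back into literal underscores, which corrupts dunder names like `__all__`.

```python
def to_camel_case(snake):
    # Split on underscores; keep the head, capitalize the rest,
    # and map empty components back to "_".
    head, *rest = snake.split("_")
    return head + "".join(part.capitalize() if part else "_" for part in rest)

print(to_camel_case("field_name"))  # → fieldName
print(to_camel_case("__all__"))     # → _All__  (the reported corruption)
```

A fix would presumably special-case dunder names (leave keys that start and end with `__` untouched) before camelizing.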
ranaroussi/yfinance | pandas | 2,112 | Tests failing | Running `python -m unittest discover -s tests` from #1084 causes 5 failures and 1 error.
======================================================================
ERROR: test_resampling (test_price_repair.TestPriceRepairAssumptions.test_resampling)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dhruvan/yfinance/tests/test_price_repair.py", line 49, in test_resampling
elif dfr.index[0] == df_truth.index[1]:
~~~~~~~~~~~~~~^^^
File "/home/dhruvan/yfinance/.venv/lib/python3.12/site-packages/pandas/core/indexes/base.py", line 5389, in __getitem__
return getitem(key)
^^^^^^^^^^^^
File "/home/dhruvan/yfinance/.venv/lib/python3.12/site-packages/pandas/core/arrays/datetimelike.py", line 381, in __getitem__
result = cast("Union[Self, DTScalarOrNaT]", super().__getitem__(key))
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dhruvan/yfinance/.venv/lib/python3.12/site-packages/pandas/core/arrays/_mixins.py", line 284, in __getitem__
result = self._ndarray[key]
~~~~~~~~~~~~~^^^^^
IndexError: index 1 is out of bounds for axis 0 with size 1
======================================================================
FAIL: test_repair_bad_div_adjusts (test_price_repair.TestPriceRepair.test_repair_bad_div_adjusts)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dhruvan/yfinance/tests/test_price_repair.py", line 668, in test_repair_bad_div_adjusts
self.assertTrue(f_close.all())
AssertionError: np.False_ is not true
======================================================================
FAIL: test_repair_zeroes_daily (test_price_repair.TestPriceRepair.test_repair_zeroes_daily)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dhruvan/yfinance/tests/test_price_repair.py", line 384, in test_repair_zeroes_daily
self.assertTrue(_np.isclose(repaired_df[c], correct_df[c], rtol=1e-8).all())
AssertionError: np.False_ is not true
======================================================================
FAIL: test_setTzCacheLocation (test_utils.TestCache.test_setTzCacheLocation)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dhruvan/yfinance/tests/test_utils.py", line 52, in test_setTzCacheLocation
self.assertTrue(os.path.exists(os.path.join(self.tempCacheDir.name, "tkr-tz.db")))
AssertionError: False is not true
======================================================================
FAIL: test_tzCacheRootLookup (test_utils.TestCacheNoPermission.test_tzCacheRootLookup)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dhruvan/yfinance/tests/test_utils.py", line 81, in test_tzCacheRootLookup
self.assertTrue(cache.dummy)
AssertionError: False is not true
======================================================================
FAIL: test_tzCacheRootStore (test_utils.TestCacheNoPermission.test_tzCacheRootStore)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dhruvan/yfinance/tests/test_utils.py", line 70, in test_tzCacheRootStore
self.assertTrue(cache.dummy)
AssertionError: False is not true
----------------------------------------------------------------------
Ran 109 tests in 308.065s
FAILED (failures=5, errors=1, skipped=2, expected failures=1) | closed | 2024-11-04T10:53:30Z | 2025-01-25T16:31:17Z | https://github.com/ranaroussi/yfinance/issues/2112 | [] | dhruvan2006 | 3 |
OpenInterpreter/open-interpreter | python | 1,390 | Add real terminal support | ### Is your feature request related to a problem? Please describe.
OpenInterpreter is currently unable to interact with common REPL and shell environments asynchronously; it is always blocking.
### Describe the solution you'd like
Introducing a fully capable terminal agent environment. Here are few things it can do.
You can see the position of the cursor, the range of the selected text.

You can also capture a screenshot of the terminal with cursor denoted in red.

Grayscale augmented terminal gives high contrast to the red cursor, making the agent easier to locate it.

Would be great if OpenInterpreter adopts this.
### Describe alternatives you've considered
OpenDevin has a [milestone](https://github.com/OpenDevin/OpenDevin/issues/3031) over this. [Devin](https://cognition.notaku.site/introducing-devin) as shown is already capable of doing this.
### Additional context
You can learn more about my efforts [here](https://github.com/james4ever0/agi_computer_control). | open | 2024-08-10T02:37:46Z | 2024-08-10T02:37:46Z | https://github.com/OpenInterpreter/open-interpreter/issues/1390 | [] | James4Ever0 | 0 |
ultralytics/ultralytics | pytorch | 19,730 | How to get loss value from a middle module of my model? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I've designed a module to process the features, and now I need to calculate a loss value in this module. Is there a way to add this loss to the final loss value calculated in my customized loss class, which will trigger the backward process?
Here is part of my model.yaml
```
.....
- [[19,21], 1, MyModule, [module_args]] # 22
......
- [[28,30,32], 1, Detect, [det_nc]] # 34 Detect(P3,P4,P5)
```
For example, a loss will be calculated in "MyModule" and the output of "Detect" will be used to get another loss value. How can I fuse the two?
### Additional
_No response_ | closed | 2025-03-16T16:21:09Z | 2025-03-20T18:27:53Z | https://github.com/ultralytics/ultralytics/issues/19730 | [
"question"
] | xiyuxx | 4 |
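A common pattern for the question above (an assumption on my part, not an Ultralytics API; `MyModule` and `Criterion` here are hypothetical stand-ins) is to have the middle module stash its auxiliary loss on itself during `forward`, then add it, with a weight, to the detection loss inside the custom criterion. A framework-free sketch of the idea:

```python
class MyModule:
    def __init__(self):
        self.aux_loss = 0.0

    def forward(self, features):
        # ... process features ...
        self.aux_loss = 0.25          # stand-in for the real loss computation
        return features

class Criterion:
    def __init__(self, aux_modules, aux_weight=1.0):
        self.aux_modules = aux_modules
        self.aux_weight = aux_weight

    def __call__(self, det_loss):
        # Fuse: detection loss + weighted auxiliary losses.
        return det_loss + self.aux_weight * sum(m.aux_loss for m in self.aux_modules)

mod = MyModule()
mod.forward(features=None)
crit = Criterion([mod], aux_weight=0.5)
print(crit(det_loss=1.0))   # → 1.125
```

In PyTorch the same idea works because the stored auxiliary loss tensor keeps its autograd graph, so calling `backward()` on the fused total propagates through both branches (standard autograd semantics assumed).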
pyro-ppl/numpyro | numpy | 1,392 | More than 1 `input_shape` when initializing `flax_module` | Some modules require more than 1 input when initializing, which can be passed through `kwargs`. But this doesn't work in some cases. For example:
```python
class RNN(nn.Module):
    @functools.partial(
        nn.transforms.scan,
        variable_broadcast='params',
        split_rngs={'params': False})
    @nn.compact
    def __call__(self, state, x):
        return RNNCell()(state, x)
```
I tried to declare this with the following statement:
```python
rnn = flax_module(
    'rnn',
    RNN(),
    input_shape=(num_hiddens,),
    x=jnp.ones((10, 10))
)
```
But I can't use kwargs because `nn.transforms.scan` does not support them:
```
RuntimeWarning: kwargs are not supported in scan, so "x" is(are) ignored
```
I worked around this by wrapping my `RNN` with another class, after which I could pass `x` as a kwarg. However, I think `input_shape` should allow passing dimensions for more than one input.
https://github.com/pyro-ppl/numpyro/blob/0bff074a4a54a593a7fab7e68b5c10f85dd332a6/numpyro/contrib/module.py#L83 | closed | 2022-04-13T11:20:46Z | 2022-09-10T16:25:30Z | https://github.com/pyro-ppl/numpyro/issues/1392 | [
"enhancement",
"good first issue"
] | UmarJ | 3 |
slackapi/python-slack-sdk | asyncio | 939 | v3.3 document updates | ### The page URLs
- [x] Add RTM v2 in [this page](https://slack.dev/python-slack-sdk/real_time_messaging.html) https://github.com/slackapi/python-slack-sdk/pull/933
- [x] Add Audit Logs API client page https://github.com/slackapi/python-slack-sdk/pull/936
- [x] Add SCIM API client page https://github.com/slackapi/python-slack-sdk/issues/437
~~- [ ] Add retry policy configuration for API clients https://github.com/slackapi/python-slack-sdk/issues/887~~
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2021-02-01T02:36:02Z | 2021-02-05T02:09:43Z | https://github.com/slackapi/python-slack-sdk/issues/939 | [
"docs",
"rtm-client",
"web-client",
"Version: 3x"
] | seratch | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 738 | cuda out of memory | 
I did some research but still couldn't wrap my head around how to deal with this error. Any idea what I need to avoid? | closed | 2021-04-16T21:36:44Z | 2021-04-20T02:57:18Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/738 | [] | jackthenewbie | 3 |
pyg-team/pytorch_geometric | pytorch | 9,041 | Example regression GNN architecture for homogeneous graph and node level prediction | ### 🚀 The feature, motivation and pitch
Hello,
I would like to know if there is any example/tutorial available for building a GNN model (layer architecture) with PyTorch Geometric for a regression task using a homogeneous graph and node-level prediction?
I did not fully understand the suggestion in: https://github.com/pyg-team/pytorch_geometric/issues/3794
Thank you in advance!
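For reference, node-level regression differs from the classification examples mainly in the output head (`out_channels=1`, no softmax) and the objective (mean squared error instead of cross-entropy). A dependency-free sketch of that objective (assuming plain MSE is the desired metric):

```python
def mse_loss(pred, target):
    # Mean squared error over per-node scalar predictions.
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
```

With PyTorch the equivalent is `torch.nn.functional.mse_loss` applied to the output of a final `Linear(hidden_channels, 1)` layer.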
### Alternatives
_No response_
### Additional context
_No response_ | open | 2024-03-10T10:33:36Z | 2024-03-10T13:27:02Z | https://github.com/pyg-team/pytorch_geometric/issues/9041 | [
"feature"
] | MICMTS | 1 |
nvbn/thefuck | python | 1,379 | Using fuck outputs the right correction, but freezes the terminal and doesn't execute or let me input anything else. | <!-- If you have any issue with The Fuck, sorry about that, but we will do what we
can to fix that. Actually, maybe we already have, so first thing to do is to
update The Fuck and see if the bug is still there. -->
<!-- If it is (sorry again), check if the problem has not already been reported and
if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with
the following basic information: -->
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.32 using Python 3.10.10 and Bash 5.2.15(1)-release
Your system (Debian 7, ArchLinux, Windows, etc.):
Windows 11
How to reproduce the bug:
In bash, type an incorrect command, then type `fuck`
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
```
DEBUG: Run with settings: {'alter_history': True,
'debug': True,
'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},
'exclude_rules': [],
'excluded_search_path_prefixes': [],
'history_limit': None,
'instant_mode': False,
'no_colors': False,
'num_close_matches': 3,
'priority': {},
'repeat': False,
'require_confirmation': True,
'rules': [<const: All rules enabled>],
'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],
'user_dir': WindowsPath('C:/Users/rocco/.config/thefuck'),
'wait_command': 3,
'wait_slow_command': 15}
DEBUG: Received output: The system cannot find the path specified.
DEBUG: Call: cd Docmts; with env: {'ACLOCAL_PATH': 'C:\\Program Files\\Git\\mingw64\\share\\aclocal;C:\\Program Files\\Git\\usr\\share\\aclocal', 'ALLUSERSPROFILE': 'C:\\ProgramData', 'APPDATA': 'C:\\Users\\rocco\\AppData\\Roaming', 'COMMONPROGRAMFILES': 'C:\\Program Files\\Common Files', 'COMPUTERNAME': 'LAPTOP-7233C0SF', 'COMSPEC': 'C:\\WINDOWS\\system32\\cmd.exe', 'CONFIG_SITE': 'C:/Program Files/Git/etc/config.site', 'COMMONPROGRAMFILES(X86)': 'C:\\Program Files (x86)\\Common Files', 'COMMONPROGRAMW6432': 'C:\\Program Files\\Common Files', 'DISPLAY': 'needs-to-be-defined', 'DRIVERDATA': 'C:\\Windows\\System32\\Drivers\\DriverData', 'EFC_11820': '1', 'EXEPATH': 'C:\\Program Files\\Git', 'FPS_BROWSER_APP_PROFILE_STRING': 'Internet Explorer', 'FPS_BROWSER_USER_PROFILE_STRING': 'Default', 'HOME': 'C:\\Users\\rocco', 'HOMEDRIVE': 'C:', 'HOMEPATH': '\\Users\\rocco', 'HOSTNAME': 'LAPTOP-7233C0SF', 'INFOPATH': 'C:\\Program Files\\Git\\mingw64\\local\\info;C:\\Program Files\\Git\\mingw64\\share\\info;C:\\Program Files\\Git\\usr\\local\\info;C:\\Program Files\\Git\\usr\\share\\info;C:\\Program Files\\Git\\usr\\info;C:\\Program Files\\Git\\share\\info', 'LC_CTYPE': 'en_US.UTF-8', 'LOCALAPPDATA': 'C:\\Users\\rocco\\AppData\\Local', 'LOGONSERVER': '\\\\LAPTOP-7233C0SF', 'MANPATH': 'C:\\Program Files\\Git\\mingw64\\local\\man;C:\\Program Files\\Git\\mingw64\\share\\man;C:\\Program Files\\Git\\usr\\local\\man;C:\\Program Files\\Git\\usr\\share\\man;C:\\Program Files\\Git\\usr\\man;C:\\Program Files\\Git\\share\\man', 'MINGW_CHOST': 'x86_64-w64-mingw32', 'MINGW_PACKAGE_PREFIX': 'mingw-w64-x86_64', 'MINGW_PREFIX': 'C:/Program Files/Git/mingw64', 'MSYSTEM': 'MINGW64', 'MSYSTEM_CARCH': 'x86_64', 'MSYSTEM_CHOST': 'x86_64-w64-mingw32', 'MSYSTEM_PREFIX': 'C:/Program Files/Git/mingw64', 'NUMBER_OF_PROCESSORS': '24', 'ORIGINAL_PATH': 'C:\\Program Files\\Git\\mingw64\\bin;C:\\Program 
Files\\Git\\usr\\bin;C:\\Users\\rocco\\bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\MATLAB\\R2022b\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0;C:\\WINDOWS\\System32\\OpenSSH;C:\\Program Files\\Git\\cmd;C:\\Users\\rocco\\AppData\\Local\\Microsoft\\WindowsApps', 'ORIGINAL_TEMP': 'C:/Users/rocco/AppData/Local/Temp', 'ORIGINAL_TMP': 'C:/Users/rocco/AppData/Local/Temp', 'OS': 'Windows_NT', 'ONEDRIVE': 'C:\\Users\\rocco\\OneDrive', 'PATH': 'C:\\Users\\rocco\\bin;C:\\Program Files\\Git\\mingw64\\bin;C:\\Program Files\\Git\\usr\\local\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Git\\mingw64\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Users\\rocco\\bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\MATLAB\\R2022b\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0;C:\\WINDOWS\\System32\\OpenSSH;C:\\Program Files\\Git\\cmd;C:\\Users\\rocco\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Program Files\\Git\\usr\\bin\\vendor_perl;C:\\Program Files\\Git\\usr\\bin\\core_perl;C:\\Users\\rocco\\miniconda3\\Scripts', 'PATHEXT': '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC', 'PKG_CONFIG_PATH': 'C:\\Program Files\\Git\\mingw64\\lib\\pkgconfig;C:\\Program Files\\Git\\mingw64\\share\\pkgconfig', 'PKG_CONFIG_SYSTEM_INCLUDE_PATH': 'C:/Program Files/Git/mingw64/include', 'PKG_CONFIG_SYSTEM_LIBRARY_PATH': 'C:/Program Files/Git/mingw64/lib', 'PLINK_PROTOCOL': 
'ssh', 'PROCESSOR_ARCHITECTURE': 'AMD64', 'PROCESSOR_IDENTIFIER': 'Intel64 Family 6 Model 151 Stepping 2, GenuineIntel', 'PROCESSOR_LEVEL': '6', 'PROCESSOR_REVISION': '9702', 'PROGRAMFILES': 'C:\\Program Files', 'PS1': '\\[\\033]0;$TITLEPREFIX:$PWD\\007\\]\\n\\[\\033[32m\\]\\u@\\h \\[\\033[35m\\]$MSYSTEM \\[\\033[33m\\]\\w\\[\\033[36m\\]`__git_ps1`\\[\\033[0m\\]\\n$ ', 'PSMODULEPATH': 'C:\\Program Files\\WindowsPowerShell\\Modules;C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\Modules', 'PUBLIC': 'C:\\Users\\Public', 'PWD': 'C:/Program Files/Git/', 'PYTHONIOENCODING': 'utf-8', 'PROGRAMDATA': 'C:\\ProgramData', 'PROGRAMFILES(X86)': 'C:\\Program Files (x86)', 'PROGRAMW6432': 'C:\\Program Files', 'SESSIONNAME': 'Console', 'SHELL': 'C:\\Program Files\\Git\\usr\\bin\\bash.exe', 'SHLVL': '0', 'SSH_ASKPASS': 'C:/Program Files/Git/mingw64/bin/git-askpass.exe', 'SYSTEMDRIVE': 'C:', 'SYSTEMROOT': 'C:\\WINDOWS', 'TEMP': 'C:\\Users\\rocco\\AppData\\Local\\Temp', 'TERM': 'xterm', 'TERM_PROGRAM': 'mintty', 'TERM_PROGRAM_VERSION': '3.6.3', 'TF_ALIAS': 'fuck', 'TF_HISTORY': '\t thefuck --version\n\t cd Docmtas\n\t fuck\n\t thefuck --version\n\t THEFUCK_DEBUG=true\n\t cd Docmts\n\t fuck\n\t export THEFUCK_DEBUG=true\n\t fuck\n\t cd Docmts', 'TF_SHELL': 'bash', 'TF_SHELL_ALIASES': "alias la='ls --all'\nalias ll='ls -l'\nalias ls='ls -F --color=auto --show-control-chars'\nalias winget='winpty winget.exe'", 'THEFUCK_DEBUG': 'true', 'TMP': 'C:\\Users\\rocco\\AppData\\Local\\Temp', 'TMPDIR': 'C:\\Users\\rocco\\AppData\\Local\\Temp', 'USERDOMAIN': 'LAPTOP-7233C0SF', 'USERDOMAIN_ROAMINGPROFILE': 'LAPTOP-7233C0SF', 'USERNAME': 'rocco', 'USERPROFILE': 'C:\\Users\\rocco', 'WINDIR': 'C:\\WINDOWS', 'ZES_ENABLE_SYSMAN': '1', '_': 'C:/Users/rocco/miniconda3/Scripts/thefuck', 'LC_ALL': 'C', 'LANG': 'C', 'GIT_TRACE': '1'}; is slow: False took: 0:00:00.040878
DEBUG: Importing rule: adb_unknown_command; took: 0:00:00
DEBUG: Importing rule: ag_literal; took: 0:00:00
DEBUG: Importing rule: apt_get; took: 0:00:00
DEBUG: Importing rule: apt_get_search; took: 0:00:00
DEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.008001
DEBUG: Importing rule: apt_list_upgradable; took: 0:00:00
DEBUG: Importing rule: apt_upgrade; took: 0:00:00
DEBUG: Importing rule: aws_cli; took: 0:00:00
DEBUG: Importing rule: az_cli; took: 0:00:00
DEBUG: Importing rule: brew_cask_dependency; took: 0:00:00
DEBUG: Importing rule: brew_install; took: 0:00:00
DEBUG: Importing rule: brew_link; took: 0:00:00
DEBUG: Importing rule: brew_reinstall; took: 0:00:00
DEBUG: Importing rule: brew_uninstall; took: 0:00:00
DEBUG: Importing rule: brew_unknown_command; took: 0:00:00
DEBUG: Importing rule: brew_update_formula; took: 0:00:00
DEBUG: Importing rule: cargo; took: 0:00:00
DEBUG: Importing rule: cargo_no_command; took: 0:00:00
DEBUG: Importing rule: cat_dir; took: 0:00:00
DEBUG: Importing rule: cd_correction; took: 0:00:00.008117
DEBUG: Importing rule: cd_cs; took: 0:00:00
DEBUG: Importing rule: cd_mkdir; took: 0:00:00
DEBUG: Importing rule: cd_parent; took: 0:00:00
DEBUG: Importing rule: chmod_x; took: 0:00:00
DEBUG: Importing rule: choco_install; took: 0:00:00
DEBUG: Importing rule: composer_not_command; took: 0:00:00
DEBUG: Importing rule: conda_mistype; took: 0:00:00
DEBUG: Importing rule: cp_create_destination; took: 0:00:00
DEBUG: Importing rule: cp_omitting_directory; took: 0:00:00
DEBUG: Importing rule: cpp11; took: 0:00:00.008096
DEBUG: Importing rule: dirty_untar; took: 0:00:00
DEBUG: Importing rule: dirty_unzip; took: 0:00:00
DEBUG: Importing rule: django_south_ghost; took: 0:00:00
DEBUG: Importing rule: django_south_merge; took: 0:00:00
DEBUG: Importing rule: dnf_no_such_command; took: 0:00:00
DEBUG: Importing rule: docker_image_being_used_by_container; took: 0:00:00
DEBUG: Importing rule: docker_login; took: 0:00:00
DEBUG: Importing rule: docker_not_command; took: 0:00:00.008269
DEBUG: Importing rule: dry; took: 0:00:00
DEBUG: Importing rule: fab_command_not_found; took: 0:00:00
DEBUG: Importing rule: fix_alt_space; took: 0:00:00
DEBUG: Importing rule: fix_file; took: 0:00:00
DEBUG: Importing rule: gem_unknown_command; took: 0:00:00
DEBUG: Importing rule: git_add; took: 0:00:00
DEBUG: Importing rule: git_add_force; took: 0:00:00
DEBUG: Importing rule: git_bisect_usage; took: 0:00:00
DEBUG: Importing rule: git_branch_0flag; took: 0:00:00
DEBUG: Importing rule: git_branch_delete; took: 0:00:00
DEBUG: Importing rule: git_branch_delete_checked_out; took: 0:00:00
DEBUG: Importing rule: git_branch_exists; took: 0:00:00
DEBUG: Importing rule: git_branch_list; took: 0:00:00.000209
DEBUG: Importing rule: git_checkout; took: 0:00:00
DEBUG: Importing rule: git_clone_git_clone; took: 0:00:00
DEBUG: Importing rule: git_commit_add; took: 0:00:00
DEBUG: Importing rule: git_commit_amend; took: 0:00:00
DEBUG: Importing rule: git_commit_reset; took: 0:00:00
DEBUG: Importing rule: git_diff_no_index; took: 0:00:00
DEBUG: Importing rule: git_diff_staged; took: 0:00:00
DEBUG: Importing rule: git_fix_stash; took: 0:00:00
DEBUG: Importing rule: git_flag_after_filename; took: 0:00:00
DEBUG: Importing rule: git_help_aliased; took: 0:00:00
DEBUG: Importing rule: git_hook_bypass; took: 0:00:00
DEBUG: Importing rule: git_lfs_mistype; took: 0:00:00
DEBUG: Importing rule: git_main_master; took: 0:00:00
DEBUG: Importing rule: git_merge; took: 0:00:00
DEBUG: Importing rule: git_merge_unrelated; took: 0:00:00
DEBUG: Importing rule: git_not_command; took: 0:00:00
DEBUG: Importing rule: git_pull; took: 0:00:00
DEBUG: Importing rule: git_pull_clone; took: 0:00:00
DEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00
DEBUG: Importing rule: git_push; took: 0:00:00
DEBUG: Importing rule: git_push_different_branch_names; took: 0:00:00
DEBUG: Importing rule: git_push_force; took: 0:00:00
DEBUG: Importing rule: git_push_pull; took: 0:00:00
DEBUG: Importing rule: git_push_without_commits; took: 0:00:00
DEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00
DEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00
DEBUG: Importing rule: git_remote_delete; took: 0:00:00
DEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00
DEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00
DEBUG: Importing rule: git_rm_recursive; took: 0:00:00
DEBUG: Importing rule: git_rm_staged; took: 0:00:00
DEBUG: Importing rule: git_stash; took: 0:00:00
DEBUG: Importing rule: git_stash_pop; took: 0:00:00
DEBUG: Importing rule: git_tag_force; took: 0:00:00
DEBUG: Importing rule: git_two_dashes; took: 0:00:00.008000
DEBUG: Importing rule: go_run; took: 0:00:00
DEBUG: Importing rule: go_unknown_command; took: 0:00:00
DEBUG: Importing rule: gradle_no_task; took: 0:00:00
DEBUG: Importing rule: gradle_wrapper; took: 0:00:00
DEBUG: Importing rule: grep_arguments_order; took: 0:00:00
DEBUG: Importing rule: grep_recursive; took: 0:00:00
DEBUG: Importing rule: grunt_task_not_found; took: 0:00:00
DEBUG: Importing rule: gulp_not_task; took: 0:00:00
DEBUG: Importing rule: has_exists_script; took: 0:00:00
DEBUG: Importing rule: heroku_multiple_apps; took: 0:00:00
DEBUG: Importing rule: heroku_not_command; took: 0:00:00
DEBUG: Importing rule: history; took: 0:00:00
DEBUG: Importing rule: hostscli; took: 0:00:00
DEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.008005
DEBUG: Importing rule: java; took: 0:00:00
DEBUG: Importing rule: javac; took: 0:00:00
DEBUG: Importing rule: lein_not_task; took: 0:00:00
DEBUG: Importing rule: ln_no_hard_link; took: 0:00:00
DEBUG: Importing rule: ln_s_order; took: 0:00:00
DEBUG: Importing rule: long_form_help; took: 0:00:00
DEBUG: Importing rule: ls_all; took: 0:00:00
DEBUG: Importing rule: ls_lah; took: 0:00:00
DEBUG: Importing rule: man; took: 0:00:00
DEBUG: Importing rule: man_no_space; took: 0:00:00
DEBUG: Importing rule: mercurial; took: 0:00:00
DEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00
DEBUG: Importing rule: mkdir_p; took: 0:00:00
DEBUG: Importing rule: mvn_no_command; took: 0:00:00
DEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00
DEBUG: Importing rule: nixos_cmd_not_found; took: 0:00:00
DEBUG: Importing rule: no_command; took: 0:00:00
DEBUG: Importing rule: no_such_file; took: 0:00:00
DEBUG: Importing rule: npm_missing_script; took: 0:00:00
DEBUG: Importing rule: npm_run_script; took: 0:00:00
DEBUG: Importing rule: npm_wrong_command; took: 0:00:00
DEBUG: Importing rule: omnienv_no_such_command; took: 0:00:00.008162
DEBUG: Importing rule: open; took: 0:00:00
DEBUG: Importing rule: pacman; took: 0:00:00.008185
DEBUG: Importing rule: pacman_invalid_option; took: 0:00:00
DEBUG: Importing rule: pacman_not_found; took: 0:00:00
DEBUG: Importing rule: path_from_history; took: 0:00:00
DEBUG: Importing rule: php_s; took: 0:00:00
DEBUG: Importing rule: pip_install; took: 0:00:00
DEBUG: Importing rule: pip_unknown_command; took: 0:00:00
DEBUG: Importing rule: port_already_in_use; took: 0:00:00.000186
DEBUG: Importing rule: prove_recursively; took: 0:00:00
DEBUG: Importing rule: python_command; took: 0:00:00
DEBUG: Importing rule: python_execute; took: 0:00:00
DEBUG: Importing rule: python_module_error; took: 0:00:00
DEBUG: Importing rule: quotation_marks; took: 0:00:00
DEBUG: Importing rule: rails_migrations_pending; took: 0:00:00
DEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00
DEBUG: Importing rule: remove_shell_prompt_literal; took: 0:00:00
DEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00
DEBUG: Importing rule: rm_dir; took: 0:00:00
DEBUG: Importing rule: rm_root; took: 0:00:00
DEBUG: Importing rule: scm_correction; took: 0:00:00
DEBUG: Importing rule: sed_unterminated_s; took: 0:00:00
DEBUG: Importing rule: sl_ls; took: 0:00:00
DEBUG: Importing rule: ssh_known_hosts; took: 0:00:00
DEBUG: Importing rule: sudo; took: 0:00:00
DEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.008001
DEBUG: Importing rule: switch_lang; took: 0:00:00
DEBUG: Importing rule: systemctl; took: 0:00:00
DEBUG: Importing rule: terraform_init; took: 0:00:00
DEBUG: Importing rule: test.py; took: 0:00:00
DEBUG: Importing rule: tmux; took: 0:00:00
DEBUG: Importing rule: touch; took: 0:00:00
DEBUG: Importing rule: tsuru_login; took: 0:00:00
DEBUG: Importing rule: tsuru_not_command; took: 0:00:00
DEBUG: Importing rule: unknown_command; took: 0:00:00
DEBUG: Importing rule: unsudo; took: 0:00:00
DEBUG: Importing rule: vagrant_up; took: 0:00:00
DEBUG: Importing rule: whois; took: 0:00:00
DEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00
DEBUG: Importing rule: wrong_hyphen_before_subcommand; took: 0:00:00
DEBUG: Importing rule: yarn_alias; took: 0:00:00
DEBUG: Importing rule: yarn_command_not_found; took: 0:00:00
DEBUG: Importing rule: yarn_command_replaced; took: 0:00:00
DEBUG: Importing rule: yarn_help; took: 0:00:00
DEBUG: Importing rule: yum_invalid_operation; took: 0:00:00.008232
DEBUG: Trying rule: path_from_history; took: 0:00:00
DEBUG: Trying rule: cd_cs; took: 0:00:00
DEBUG: Trying rule: dry; took: 0:00:00
DEBUG: Trying rule: git_stash_pop; took: 0:00:00
DEBUG: Trying rule: test.py; took: 0:00:00
DEBUG: Trying rule: adb_unknown_command; took: 0:00:00
DEBUG: Trying rule: ag_literal; took: 0:00:00
DEBUG: Trying rule: aws_cli; took: 0:00:00
DEBUG: Trying rule: az_cli; took: 0:00:00
DEBUG: Trying rule: brew_link; took: 0:00:00
DEBUG: Trying rule: brew_reinstall; took: 0:00:00
DEBUG: Trying rule: brew_uninstall; took: 0:00:00
DEBUG: Trying rule: brew_update_formula; took: 0:00:00
DEBUG: Trying rule: cargo; took: 0:00:00
DEBUG: Trying rule: cargo_no_command; took: 0:00:00
DEBUG: Trying rule: cat_dir; took: 0:00:00
DEBUG: Trying rule: cd_correction; took: 0:00:00
DEBUG: Trying rule: cd_mkdir; took: 0:00:00
DEBUG: Trying rule: cd_parent; took: 0:00:00
DEBUG: Trying rule: chmod_x; took: 0:00:00
DEBUG: Trying rule: composer_not_command; took: 0:00:00
DEBUG: Trying rule: conda_mistype; took: 0:00:00
DEBUG: Trying rule: cp_create_destination; took: 0:00:00
DEBUG: Trying rule: cp_omitting_directory; took: 0:00:00
DEBUG: Trying rule: cpp11; took: 0:00:00
DEBUG: Trying rule: dirty_untar; took: 0:00:00
DEBUG: Trying rule: dirty_unzip; took: 0:00:00
DEBUG: Trying rule: django_south_ghost; took: 0:00:00
DEBUG: Trying rule: django_south_merge; took: 0:00:00
DEBUG: Trying rule: docker_image_being_used_by_container; took: 0:00:00
DEBUG: Trying rule: docker_login; took: 0:00:00
DEBUG: Trying rule: docker_not_command; took: 0:00:00
DEBUG: Trying rule: fab_command_not_found; took: 0:00:00
DEBUG: Trying rule: fix_alt_space; took: 0:00:00
DEBUG: Trying rule: fix_file; took: 0:00:00
DEBUG: Trying rule: gem_unknown_command; took: 0:00:00
DEBUG: Trying rule: git_add; took: 0:00:00
DEBUG: Trying rule: git_add_force; took: 0:00:00
DEBUG: Trying rule: git_bisect_usage; took: 0:00:00
DEBUG: Trying rule: git_branch_0flag; took: 0:00:00
DEBUG: Trying rule: git_branch_delete; took: 0:00:00
DEBUG: Trying rule: git_branch_delete_checked_out; took: 0:00:00
DEBUG: Trying rule: git_branch_exists; took: 0:00:00
DEBUG: Trying rule: git_branch_list; took: 0:00:00
DEBUG: Trying rule: git_checkout; took: 0:00:00
DEBUG: Trying rule: git_clone_git_clone; took: 0:00:00
DEBUG: Trying rule: git_commit_add; took: 0:00:00
DEBUG: Trying rule: git_commit_amend; took: 0:00:00
DEBUG: Trying rule: git_commit_reset; took: 0:00:00
DEBUG: Trying rule: git_diff_no_index; took: 0:00:00
DEBUG: Trying rule: git_diff_staged; took: 0:00:00
DEBUG: Trying rule: git_fix_stash; took: 0:00:00
DEBUG: Trying rule: git_flag_after_filename; took: 0:00:00
DEBUG: Trying rule: git_help_aliased; took: 0:00:00
DEBUG: Trying rule: git_lfs_mistype; took: 0:00:00
DEBUG: Trying rule: git_merge; took: 0:00:00
DEBUG: Trying rule: git_merge_unrelated; took: 0:00:00
DEBUG: Trying rule: git_not_command; took: 0:00:00
DEBUG: Trying rule: git_pull; took: 0:00:00
DEBUG: Trying rule: git_pull_clone; took: 0:00:00
DEBUG: Trying rule: git_pull_uncommitted_changes; took: 0:00:00
DEBUG: Trying rule: git_push; took: 0:00:00
DEBUG: Trying rule: git_push_different_branch_names; took: 0:00:00
DEBUG: Trying rule: git_push_pull; took: 0:00:00
DEBUG: Trying rule: git_push_without_commits; took: 0:00:00
DEBUG: Trying rule: git_rebase_merge_dir; took: 0:00:00
DEBUG: Trying rule: git_rebase_no_changes; took: 0:00:00
DEBUG: Trying rule: git_remote_delete; took: 0:00:00
DEBUG: Trying rule: git_remote_seturl_add; took: 0:00:00
DEBUG: Trying rule: git_rm_local_modifications; took: 0:00:00
DEBUG: Trying rule: git_rm_recursive; took: 0:00:00
DEBUG: Trying rule: git_rm_staged; took: 0:00:00
DEBUG: Trying rule: git_stash; took: 0:00:00
DEBUG: Trying rule: git_tag_force; took: 0:00:00
DEBUG: Trying rule: git_two_dashes; took: 0:00:00
DEBUG: Trying rule: go_run; took: 0:00:00
DEBUG: Trying rule: go_unknown_command; took: 0:00:00
DEBUG: Trying rule: gradle_no_task; took: 0:00:00
DEBUG: Trying rule: gradle_wrapper; took: 0:00:00
DEBUG: Trying rule: grep_arguments_order; took: 0:00:00
DEBUG: Trying rule: grep_recursive; took: 0:00:00
DEBUG: Trying rule: grunt_task_not_found; took: 0:00:00
DEBUG: Trying rule: gulp_not_task; took: 0:00:00
DEBUG: Trying rule: has_exists_script; took: 0:00:00
DEBUG: Trying rule: heroku_multiple_apps; took: 0:00:00
DEBUG: Trying rule: heroku_not_command; took: 0:00:00
DEBUG: Trying rule: hostscli; took: 0:00:00
DEBUG: Trying rule: ifconfig_device_not_found; took: 0:00:00
DEBUG: Trying rule: java; took: 0:00:00
DEBUG: Trying rule: javac; took: 0:00:00
DEBUG: Trying rule: lein_not_task; took: 0:00:00
DEBUG: Trying rule: ln_no_hard_link; took: 0:00:00
DEBUG: Trying rule: ln_s_order; took: 0:00:00
DEBUG: Trying rule: ls_all; took: 0:00:00
DEBUG: Trying rule: ls_lah; took: 0:00:00
DEBUG: Trying rule: man; took: 0:00:00
DEBUG: Trying rule: mercurial; took: 0:00:00
DEBUG: Trying rule: mkdir_p; took: 0:00:00
DEBUG: Trying rule: mvn_no_command; took: 0:00:00
DEBUG: Trying rule: mvn_unknown_lifecycle_phase; took: 0:00:00
DEBUG: Trying rule: no_such_file; took: 0:00:00
DEBUG: Trying rule: open; took: 0:00:00
DEBUG: Trying rule: pacman_invalid_option; took: 0:00:00
DEBUG: Trying rule: php_s; took: 0:00:00
DEBUG: Trying rule: pip_install; took: 0:00:00
DEBUG: Trying rule: pip_unknown_command; took: 0:00:00
DEBUG: Trying rule: prove_recursively; took: 0:00:00
DEBUG: Trying rule: python_command; took: 0:00:00
DEBUG: Trying rule: python_execute; took: 0:00:00
DEBUG: Trying rule: python_module_error; took: 0:00:00
DEBUG: Trying rule: quotation_marks; took: 0:00:00
DEBUG: Trying rule: rails_migrations_pending; took: 0:00:00
DEBUG: Trying rule: react_native_command_unrecognized; took: 0:00:00
DEBUG: Trying rule: remove_shell_prompt_literal; took: 0:00:00
DEBUG: Trying rule: remove_trailing_cedilla; took: 0:00:00
DEBUG: Trying rule: rm_dir; took: 0:00:00
DEBUG: Trying rule: scm_correction; took: 0:00:00
DEBUG: Trying rule: sed_unterminated_s; took: 0:00:00
DEBUG: Trying rule: sl_ls; took: 0:00:00
DEBUG: Trying rule: ssh_known_hosts; took: 0:00:00
DEBUG: Trying rule: sudo; took: 0:00:00
DEBUG: Trying rule: sudo_command_from_user_path; took: 0:00:00
DEBUG: Trying rule: switch_lang; took: 0:00:00
DEBUG: Trying rule: systemctl; took: 0:00:00
DEBUG: Trying rule: terraform_init; took: 0:00:00
DEBUG: Trying rule: tmux; took: 0:00:00
DEBUG: Trying rule: touch; took: 0:00:00
DEBUG: Trying rule: tsuru_login; took: 0:00:00
DEBUG: Trying rule: tsuru_not_command; took: 0:00:00
DEBUG: Trying rule: unknown_command; took: 0:00:00
DEBUG: Trying rule: unsudo; took: 0:00:00
DEBUG: Trying rule: vagrant_up; took: 0:00:00
DEBUG: Trying rule: whois; took: 0:00:00
DEBUG: Trying rule: workon_doesnt_exists; took: 0:00:00
DEBUG: Trying rule: yarn_alias; took: 0:00:00
DEBUG: Trying rule: yarn_command_not_found; took: 0:00:00
DEBUG: Trying rule: yarn_command_replaced; took: 0:00:00
DEBUG: Trying rule: yarn_help; took: 0:00:00
DEBUG: Trying rule: git_hook_bypass; took: 0:00:00
DEBUG: Trying rule: git_main_master; took: 0:00:00
DEBUG: Trying rule: man_no_space; took: 0:00:00
DEBUG: Trying rule: no_command; took: 0:00:00.008259
DEBUG: Trying rule: missing_space_before_subcommand; took: 0:00:00
DEBUG: Trying rule: wrong_hyphen_before_subcommand; took: 0:00:00
DEBUG: Trying rule: long_form_help; took: 0:00:00
DEBUG: Trying rule: history; took: 0:00:00
cd Documents [enter/↑/↓/ctrl+c]
```
Anything else you think is relevant:
The terminal just freezes like this after outputting the correct command. Pressing enter just makes a newline, the arrow keys move around the space, and ctrl+c does nothing.
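Possibly relevant: the `cd Documents [enter/↑/↓/ctrl+c]` prompt waits for raw key input, which mintty (the `TERM_PROGRAM` in the log above) may not deliver without winpty, leaving the selector stuck. An untested workaround is to disable the interactive prompt via `~/.config/thefuck/settings.py`:

```python
# ~/.config/thefuck/settings.py
# Documented thefuck setting: apply the top suggestion immediately,
# skipping the [enter/↑/↓/ctrl+c] selector entirely.
require_confirmation = False
```

The same effect should be reachable through the `THEFUCK_REQUIRE_CONFIRMATION=false` environment variable.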

<!-- It's only with enough information that we can do something to fix the problem. -->
| open | 2023-06-08T17:52:12Z | 2024-09-26T13:35:47Z | https://github.com/nvbn/thefuck/issues/1379 | [] | scharney | 2 |
docarray/docarray | fastapi | 1,677 | `tensor_type` argument for all DocVec deserializations | `DocVec.from_protobuf(tensor_type=...)` already exists, but this needs to be the case for all deserializations:
- proto
- json
- pandas
- bytes
- binary
- base64
Otherwise there is no way of knowing if the deserialized DocVec should use torch, np, or tf | closed | 2023-06-28T13:58:41Z | 2023-07-26T02:48:42Z | https://github.com/docarray/docarray/issues/1677 | [] | JohannesMessner | 3 |
ijl/orjson | numpy | 494 | Support for CPython 3.13 | PyO3 recently made a release and is testing for 3.13:
- https://github.com/PyO3/pyo3/commit/388d1760b5d6545c94925dafe0d640200b9fded2
Any suggestions on how to fork and test `orjson` with this newer version? | closed | 2024-06-04T19:07:26Z | 2024-06-07T15:44:35Z | https://github.com/ijl/orjson/issues/494 | [
"invalid"
] | jm-nab | 0 |
huggingface/datasets | machine-learning | 6,489 | load_dataset imagefolder for aws s3 path | ### Feature request
I would like to load a dataset from S3 using the imagefolder option
something like
`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True) `
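Until something like this is supported natively, one workaround is to list the bucket yourself (e.g. `S3FileSystem().glob(...)` via s3fs) and pass the resulting paths as `data_files`; the only local logic needed is an extension filter. A sketch (the extension list is an assumption):

```python
def list_image_files(paths, exts=(".jpg", ".jpeg", ".png", ".webp")):
    # Keep only paths that look like image files, case-insensitively.
    return [p for p in paths if p.lower().endswith(exts)]
```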
### Motivation
no need of data_files
### Your contribution
no experience with this | open | 2023-12-12T00:08:43Z | 2023-12-12T00:09:27Z | https://github.com/huggingface/datasets/issues/6489 | [
"enhancement"
] | segalinc | 0 |
open-mmlab/mmdetection | pytorch | 11,542 | train_dataloader | I want to combine images from two datasets into a single batch to feed into the network. I referred to the way the configuration file is written for semi-supervised detection and used the GroupMultiSourceSampler method. The specific configuration is shown in the figures below. However, training stays stuck in the first epoch, never proceeds to the second, and validation is never run. I would like to ask how to solve this problem.


| open | 2024-03-11T09:00:11Z | 2024-03-11T09:00:29Z | https://github.com/open-mmlab/mmdetection/issues/11542 | [] | monster1129 | 0 |
babysor/MockingBird | deep-learning | 290 | Error when running pre.py | Full output:
PS D:\MockingBird> python pre.py D:\
Using data from:
D:\aidatatang_200zh\corpus\train
aidatatang_200zh: 6%|███▎ | 50/840 [23:39<6:13:51, 28.39s/speakers]
Traceback (most recent call last):
File "D:\MockingBird\pre.py", line 74, in <module>
preprocess_dataset(**vars(args))
File "D:\MockingBird\synthesizer\preprocess.py", line 74, in preprocess_dataset
for speaker_metadata in tqdm(job, dataset, len(speaker_dirs), unit="speakers"):
File "E:\python\lib\site-packages\tqdm\std.py", line 1180, in __iter__
Process SpawnPoolWorker-1:
Traceback (most recent call last):
File "E:\python\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "E:\python\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "E:\python\lib\multiprocessing\pool.py", line 114, in worker
task = get()
File "E:\python\lib\multiprocessing\queues.py", line 368, in get
return _ForkingPickler.loads(res)
MemoryError
for obj in iterable:
File "E:\python\lib\multiprocessing\pool.py", line 870, in next
raise value
multiprocessing.pool.MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x000002734060DD30>'. Reason: 'PicklingError("Can't pickle <class 'MemoryError'>: it's not the same object as builtins.MemoryError")'
Traceback (most recent call last):
File "E:\python\lib\multiprocessing\util.py", line 300, in _run_finalizers
finalizer()
File "E:\python\lib\multiprocessing\util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "E:\python\lib\multiprocessing\pool.py", line 692, in _terminate_pool
cls._help_stuff_finish(inqueue, task_handler, len(pool))
File "E:\python\lib\multiprocessing\pool.py", line 674, in _help_stuff_finish
inqueue._reader.recv()
File "E:\python\lib\multiprocessing\connection.py", line 256, in recv
return _ForkingPickler.loads(buf.getbuffer())
MemoryError
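The `MemoryError` fires while worker results are being unpickled, so bounding how much is in flight at once usually helps: fewer worker processes, or splitting the speaker list into small batches so each pool round stays small. A generic batching helper (a sketch, not part of the MockingBird codebase):

```python
def chunked(items, size):
    # Yield successive fixed-size batches from a list.
    for start in range(0, len(items), size):
        yield items[start:start + size]
```

Mapping each batch through a fresh, small worker pool bounds peak memory per round at the cost of some throughput.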
16 GB of RAM, 200 GB of free disk space, CPU: i7-6700HQ, GPU: NVIDIA GeForce GTX 960M; the repo was freshly cloned yesterday and PyTorch is the latest version | closed | 2021-12-23T14:05:47Z | 2021-12-24T08:43:19Z | https://github.com/babysor/MockingBird/issues/290 | [] | hutianyu2006 | 2 |
ethanopp/fitly | plotly | 16 | Performance view not rendering | Hi! After successfully refreshing my data, when attempting to view the `performance` view, the page does not render and the following is logged:
> {"loglevel": "info", "workers": 8, "bind": "0.0.0.0:80", "workers_per_core": 2.0, "host": "0.0.0.0", "port": "80"}
Exception on /_dash-update-component [POST]
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.7/site-packages/dash/dash.py", line 1076, in dispatch
response.set_data(func(*args, outputs_list=outputs_list))
File "/usr/local/lib/python3.7/site-packages/dash/dash.py", line 1007, in add_context
output_value = func(*args, **kwargs) # %% callback invoked %%
File "/app/src/fitly/utils.py", line 87, in router_callback
layout = page(**kwargs)
File "/app/src/fitly/pages/performance.py", line 46, in get_layout
pmc_switch_settings = json.loads(athlete_info.pmc_switch_settings)
File "/usr/local/lib/python3.7/json/__init__.py", line 341, in loads
raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not NoneType
As far as I can tell, the other views I'd want to use (Home, Power) seem to be working as expected.
Appreciate the assistance to date, and as always, happy to provide any additional details.
Thanks! | closed | 2021-01-10T18:25:35Z | 2021-01-10T19:44:49Z | https://github.com/ethanopp/fitly/issues/16 | [] | spawn-github | 2 |
fastapi-users/fastapi-users | fastapi | 338 | Get all users route | Love the project and the ease that it can be implemented.
It would be great if there were an endpoint on the Users router that returned a list of the users. This would just return the list of users that could otherwise be fetched one by one from GET /{user_id} if you knew all the IDs.
| closed | 2020-09-23T11:35:31Z | 2020-09-25T13:31:27Z | https://github.com/fastapi-users/fastapi-users/issues/338 | [
"question"
] | lockieRichter | 3 |
jazzband/django-oauth-toolkit | django | 1,186 | 'oauth2_provider' is not a registered namespace | I have a problem with oauth2_provider recently, it used to work, but suddenly it doesn't work anymore.
here is the code:
- api_partner/oauth2_urls.py (api_partner folder)
```
from django.conf.urls import url, include
from django.contrib.auth.decorators import login_required
import oauth2_provider.views as oauth2_views
# OAuth2 provider endpoints
oauth2_endpoint_views = [
url(r'^token/$', oauth2_views.TokenView.as_view(), name="token"),
url(r'^authorize/$', oauth2_views.AuthorizationView.as_view(), name="authorize"),
url(r'^revoke-token/$', oauth2_views.RevokeTokenView.as_view(), name="revoke-token"),
]
# if settings.DEBUG:
# OAuth2 Application Management endpoints
oauth2_endpoint_views += [
# {{ URL }}/api/partner/oauth2/applications/
url(r'^applications/$', login_required(oauth2_views.ApplicationList.as_view()), name="list"),
url(r'^applications/register/$', login_required(oauth2_views.ApplicationRegistration.as_view()), name="register"),
url(r'^applications/(?P<pk>\d+)/$', login_required(oauth2_views.ApplicationDetail.as_view()), name="detail"),
url(r'^applications/(?P<pk>\d+)/delete/$', login_required(oauth2_views.ApplicationDelete.as_view()), name="delete"),
url(r'^applications/(?P<pk>\d+)/update/$', login_required(oauth2_views.ApplicationUpdate.as_view()), name="update"),
]
# OAuth2 Token Management endpoints
oauth2_endpoint_views += [
url(r'^authorized-tokens/$', login_required(oauth2_views.AuthorizedTokensListView.as_view()), name="authorized-token-list"),
url(r'^authorized-tokens/(?P<pk>\d+)/delete/$', login_required(oauth2_views.AuthorizedTokenDeleteView.as_view()),
name="authorized-token-delete"),
]
urlpatterns = [
# OAuth 2 endpoints:
url(r'^', include((oauth2_endpoint_views, 'oauth2_provider'))),
]
```
- api_partner/urls.py
```
app_name = 'api-partner'
urlpatterns = [
path('oauth2/', include('api_partner.oauth2_urls')),
```
- app/urls.py
```
urlpatterns = [
path('admin/', admin.site.urls),
path('', home.DashboardIndex.as_view(), name="home"),
path('api/', include('api.urls')),
path('api/agent/', include('api_agent.urls')),
path('api/partner/', include('api_partner.urls', namespace='api-partner')),
```
Things I already tried that still give the same NoReverseMatch error in Django ('oauth2_provider' is not a registered namespace):
- add `app_name = 'oauth2_provider'` in api_partner/oauth2_urls.py
- adding namespace in both api_partner/urls.py and api_partner/oauth2_urls.py
**api_partner/urls.py**
```
path('oauth2/', include('api_partner.oauth2_urls', namespace='oauth2_provider')),
```
**oauth2_urls.py**
```
app_name = 'oauth2_provider'
url(r'^', include((oauth2_endpoint_views, 'oauth2_provider'), namespace='oauth2_provider')),
```
| closed | 2022-07-21T02:02:37Z | 2023-10-04T14:50:46Z | https://github.com/jazzband/django-oauth-toolkit/issues/1186 | [
"question"
] | ashasanm | 1 |
babysor/MockingBird | pytorch | 144 | 自定义训练音频相关 | 首先很感谢作者的付出,在这里,我想问下,如果我想训练自己的音频,是不是只能到你已经定义好的文件侠里面把原有的音频和对应的TXT替换?但这样操作起来真的很不方便啊,要是只要按照指定格式,然后自己随便指定文件名就好了。不知道这个作者能优化下吗?感激不尽啊! | closed | 2021-10-13T13:45:57Z | 2021-10-16T00:18:42Z | https://github.com/babysor/MockingBird/issues/144 | [] | fangg2021 | 2 |
pallets-eco/flask-sqlalchemy | flask | 929 | Getting `sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgres` since SQLAlchemy has released 1.4 | Getting `sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:postgres` since SQLAlchemy has released [1.4](https://docs.sqlalchemy.org/en/14/index.html)
I'd freeze the **SQLAlchemy** version for now
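Not part of the report, but a common workaround worth noting here (an assumption, not the maintainers' fix): SQLAlchemy 1.4 dropped the legacy `postgres` dialect name, so rewriting the URI scheme to `postgresql://` sidesteps the error:

```python
def fix_pg_uri(uri: str) -> str:
    # SQLAlchemy 1.4 only recognizes "postgresql://", not "postgres://"
    if uri.startswith("postgres://"):
        uri = "postgresql://" + uri[len("postgres://"):]
    return uri

print(fix_pg_uri("postgres://user:pw@host/db"))
# postgresql://user:pw@host/db
```

Heroku-style `DATABASE_URL` values often still use the old `postgres://` scheme, which is why code like this started failing with 1.4.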
https://github.com/pallets/flask-sqlalchemy/blob/222059e200e6b2e3b0ac57028b08290a648ae8ea/setup.py#L12 | closed | 2021-03-16T10:26:52Z | 2021-04-01T00:13:41Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/929 | [] | tbarda | 9 |
iperov/DeepFaceLab | deep-learning | 5,597 | OptimiSation | Not a bug or problem, I'd like more information - if possible?...
Supposing you have a very high quality faceset, but start training with a low quality (jpg quality 15) set.
Does the low quality degrade initial training? (It seems to speed it up!)
When the set looks good, upgrade the source faceset to a higher quality, and repeat....
After sufficient training, it seems that lower quality src and dst give a better likeness.
I want to train more efficiently without degrading the overall quality.
Does the overall model degrade if the src quality is lowered?
Equally it's unclear if the dst quality affects the final result.
I would assume that the src material should be as high quality as possible, and that the dst is not as much of a factor.
Your thoughts on these questions would be well appreciated.
Cheers! | open | 2022-12-10T03:18:21Z | 2023-06-10T05:02:36Z | https://github.com/iperov/DeepFaceLab/issues/5597 | [] | robtoll | 2 |
aio-libs/aiomysql | sqlalchemy | 669 | Review all code examples before 1.0 | open | 2022-01-16T18:22:19Z | 2022-02-18T00:01:10Z | https://github.com/aio-libs/aiomysql/issues/669 | [
"docs"
] | Nothing4You | 0 | |
plotly/dash | jupyter | 3,016 | [BUG] Make a minor release updating plotly bundle to 2.35.2 or newer to fix maplibre | I got the pip package of dash, version 2.18.1.
Would it be possible to make a new release that updated plotly from 2.35.0 to 2.35.2? We have an offline application, and the bundled plotly (v2.35.0) is trying to get maplibre-gl.js from some CDN, instead of having it bundled, and they fixed that on plotly 2.35.2, but the latest stable dash release has not been updated accordingly.
Best regards,
Arturo | closed | 2024-09-24T23:57:28Z | 2024-09-25T19:37:44Z | https://github.com/plotly/dash/issues/3016 | [] | pupitetris | 2 |
horovod/horovod | deep-learning | 3,884 | Reporting a vulnerability | Hello!
I hope you are doing well!
We are a security research team. Our tool automatically detected a vulnerability in this repository. We want to disclose it responsibly. GitHub has a feature called **Private vulnerability reporting**, which enables security research to privately disclose a vulnerability. Unfortunately, it is not enabled for this repository.
Can you enable it, so that we can report it?
Thanks in advance!
PS: you can read about how to enable private vulnerability reporting here: https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository | closed | 2023-04-10T11:49:29Z | 2023-12-15T04:10:49Z | https://github.com/horovod/horovod/issues/3884 | [
"wontfix"
] | igibek | 2 |
unionai-oss/pandera | pandas | 1,360 | Design Data Types Library That Supports Both PySpark & Pandas | #### Design Data Types Library That Supports Both PySpark & Pandas
Hi, I have multiple data types I commonly work with, sometimes in pandas and sometimes in pyspark.
I don't want to create 2 pandera DataFrameModels for each type, that seems like a really bad practice.
What's the best way to currently do this?
Is there also a way to write code that will work both on pyspark & pandas?
| open | 2023-10-01T10:01:38Z | 2025-01-11T08:20:12Z | https://github.com/unionai-oss/pandera/issues/1360 | [
"question"
] | lior5654 | 11 |
mitmproxy/mitmproxy | python | 6,237 | Can't clone Git repository over `mitmproxy` | #### Problem Description
When cloning a Git repository via HTTP over `mitmproxy`, it just hangs. Works for small repositories but seems not to work for bigger repositories. The entry in the `mitmproxy` UI shows "content missing".
<table><tr><td>

</td><td>

</td><td>

</td><td>

</tr></tr></table>
<p align="center">

</p>
#### Steps to reproduce the behavior:
1. `mitmproxy`
2. `git config --global http.proxy=http://localhost:8080 && git config --global http.sslCAInfo=$HOME/.mitmproxy/mitmproxy-ca-cert.cer`
3. `git clone https://gitlab.freedesktop.org/gstreamer/gstreamer-rs.git`
#### System Information
```
Mitmproxy: 9.0.1
Python: 3.11.4
OpenSSL: OpenSSL 3.1.1 30 May 2023
Platform: macOS-13.4.1-arm64-arm-64bit
``` | closed | 2023-07-10T10:15:03Z | 2023-07-11T14:37:03Z | https://github.com/mitmproxy/mitmproxy/issues/6237 | [
"kind/ux"
] | NiklasRosenstein | 3 |
facebookresearch/fairseq | pytorch | 5,532 | Overflow issue with Fairseq Preprocess for large datasets | ## 🐛 Bug
I realise no one is maintaining this anymore, but just for anyone who might come across a similar issue which was hard to debug:
With the default binarized dataset type in fairseq preprocess (mmap), it is possible to get integer overflow errors when processing big datasets. The key snippet of code is in `fairseq/data/indexed_dataset.py`:
```
@staticmethod
def _get_pointers(sizes):
dtype_size = dtype().itemsize
address = 0
pointers = []
for size in sizes:
pointers.append(address)
address += size * dtype_size
return pointers
```
For some reason, when using multiple workers it is possible for some of the values in `sizes` to be `np.int32` rather than `int`. I have not worked out why this is. However, for large enough datasets this can lead to integer overflow (as `address` becomes type `np.int32` rather than `int`).
The fix is just to change the accumulation to:

```python
address += int(size * dtype_size)
```
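A minimal plain-Python sketch of the corrected accumulation (a simplified stand-in for fairseq's `_get_pointers`, with `dtype_size` passed as a plain argument):

```python
def get_pointers(sizes, dtype_size=2):
    """Byte offsets of each item, mirroring fairseq's _get_pointers."""
    address = 0
    pointers = []
    for size in sizes:
        pointers.append(address)
        # int(...) guards against `size` being a fixed-width numpy scalar,
        # which would otherwise make `address` wrap around on large datasets
        address += int(size * dtype_size)
    return pointers

print(get_pointers([3, 5, 2]))  # [0, 6, 16]
```

Casting each increment with `int(...)` keeps `address` an arbitrary-precision Python int even when a `size` arrives as a fixed-width numpy scalar.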
| open | 2024-08-07T09:06:18Z | 2025-02-04T10:22:31Z | https://github.com/facebookresearch/fairseq/issues/5532 | [
"bug",
"needs triage"
] | henrycharlesworth | 2 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,539 | [Feature Request]: Add Custom Notifications for All Tabs (Not Just Text2Img) | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
It would be helpful to have customizable notification sounds across all tabs in the WebUI, not just for Text2Img. This would allow users to set different sounds for processes like img2img, inpainting, or LoRA training, enhancing workflow by making it easier to identify when a task is done, even if they are multitasking or working in other tabs. This builds on the existing notification feature but adds more flexibility and customization.
### Proposed workflow
### How to Access and Use Customizable Notification Sounds Feature:
1. **Settings Menu:**
- Navigate to the **Settings** tab in the WebUI.
- Find a new section labeled **Notifications**.
2. **Enable Custom Sounds:**
- Toggle **Enable Custom Notification Sounds** to activate custom sounds for all tabs.
3. **Select Sounds for Each Tab:**
- Assign different sounds for **txt2img**, **img2img**, **inpainting**, **LoRA training**, etc.
- Choose or upload an audio file (e.g., .mp3, .wav) from a dropdown or upload option.
4. **Volume and Notification Options:**
- Adjust the volume for each sound.
- Option to play sounds even when the tab is not focused.
5. **Save Preferences:**
- Click **Save** to apply your custom settings across all tabs.
### Additional information
_No response_ | open | 2024-10-08T04:58:37Z | 2024-10-08T04:58:37Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16539 | [
"enhancement"
] | seantamturk | 0 |
BeanieODM/beanie | asyncio | 124 | Nested object field do not use Field.alias value in ExpressionField | When executing an expression of a nested object field of a Document class, the value of the aliases is not used to create the expression string for the mongo query.
E.g.
```python
class HeaderObject(BaseModel):
header_id: str = Field(alias="headerId")
class Header(Document):
class Collection:
name = "test-header"
header: HeaderObject
print(Header.header.header_id == 1) # actual => {'header.header_id': 1}; expected => {'header.headerId': 1}
```
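For illustration only (a hypothetical helper, not Beanie's actual API), the alias-aware expression building the report expects can be sketched in plain Python:

```python
# Map each model field path to its declared alias, if any (hypothetical data):
aliases = {"header.header_id": "header.headerId"}

def expression(field_path, value, aliases):
    # Prefer the declared alias when one exists, falling back to the raw path
    return {aliases.get(field_path, field_path): value}

print(expression("header.header_id", 1, aliases))
# {'header.headerId': 1}
```

With such a lookup in place, `Header.header.header_id == 1` would produce `{'header.headerId': 1}` as expected.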
So the solution would be for Beanie, during class init, to check the type of each field and, if it is an object, descend into it to pick up the alias. | closed | 2021-10-04T16:04:31Z | 2023-04-02T02:20:38Z | https://github.com/BeanieODM/beanie/issues/124 | [
"Stale"
] | KiraPC | 9 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 360 | How to handle 4-channel images | line 75, in normalize
return (image - mean[:, None, None]) / std[:, None, None]
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0 | closed | 2021-10-12T01:23:56Z | 2021-10-14T11:00:09Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/360 | [] | Lu2017828292 | 2 |
dpgaspar/Flask-AppBuilder | rest-api | 1,647 | Overriding Chart template in view gives error ```TypeError: Object of type Undefined is not JSON serializable``` |
### Environment
Flask-Appbuilder version: 3.3.0
pip freeze output:
```
alembic==1.6.5
apispec==3.3.2
attrs==21.2.0
Babel==2.9.1
click==7.1.2
colorama==0.4.4
defusedxml==0.7.1
dnspython==2.1.0
email-validator==1.1.2
Flask==1.1.4
Flask-AppBuilder==3.3.0
Flask-Babel==1.0.0
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-Migrate==3.0.0
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
idna==3.1
itsdangerous==1.1.0
Jinja2==2.11.3
jsonschema==3.2.0
Mako==1.1.4
MarkupSafe==2.0.1
marshmallow==3.12.1
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
numpy==1.20.3
pandas==1.2.4
prison==0.1.3
PyJWT==1.7.1
pyrsistent==0.17.3
python-dateutil==2.8.1
python-dotenv==0.15.0
python-editor==1.0.4
python3-openid==3.2.0
pytz==2021.1
PyYAML==5.4.1
six==1.16.0
SQLAlchemy==1.3.24
SQLAlchemy-Utils==0.37.4
Werkzeug==1.0.1
WTForms==2.3.3
```
### Describe the expected results
I want to use a customised chart template so I can change the styling etc. but get error ```TypeError: Object of type Undefined is not JSON serializable```when overriding the template in the view.
1. I copied the default chart template ```appbuilder/general/widgets/direct_chart.html``` to app/templates and renamed to```my_direct_chart_html```
2. In ```views.py``` I override the chart template ```chart_template = 'my_direct_chart.html'```
3. I would expect the chart to render normally (as no modifications have yet been made)
```python
class ChartBalancesView(DirectByChartView):
datamodel = SQLAInterface(ChartBalances)
chart_template = 'my_direct_chart.html'
chart_title = 'Bank Acc Balances'
definitions = [
{
'label': 'total',
'group': 'acc_name',
'series': ['balance']
}
]
```
### Describe the actual results
Get error when requesting the view.
```
TypeError: Object of type Undefined is not JSON serializable
2021-05-31 10:01:18,658:INFO:werkzeug:127.0.0.1 - - [31/May/2021 10:01:18] "GET /chartbalancesview/chart/ HTTP/1.1" 500 -
```
```pytb
2021-05-31 10:43:58,982:ERROR:app:Exception on /chartbalancesview/chart/ [GET]
Traceback (most recent call last):
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask_appbuilder/security/decorators.py", line 109, in wraps
return f(self, *args, **kwargs)
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask_appbuilder/charts/views.py", line 209, in chart
return self.render_template(
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask_appbuilder/baseviews.py", line 287, in render_template
return render_template(
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/templating.py", line 137, in render_template
return _render(
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/jinja2/environment.py", line 1090, in render
self.environment.handle_exception()
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/jinja2/environment.py", line 832, in handle_exception
reraise(*rewrite_traceback_stack(source=source))
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/jinja2/_compat.py", line 28, in reraise
raise value.with_traceback(tb)
File "/home/tnporter/gl/tnporter/applications/budgetiq/biq-gui/app/templates/my_direct_chart.html", line 12, in top-level template code
var jsonData{{ modelview_name }} = {{ value_columns | tojson }}
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/json/__init__.py", line 376, in tojson_filter
return Markup(htmlsafe_dumps(obj, **kwargs))
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/json/__init__.py", line 290, in htmlsafe_dumps
dumps(obj, **kwargs)
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/json/__init__.py", line 211, in dumps
rv = _json.dumps(obj, **kwargs)
File "/usr/lib/python3.9/json/__init__.py", line 234, in dumps
return cls(
File "/usr/lib/python3.9/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.9/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/home/tnporter/.virtualenvs/budgetiq/lib/python3.9/site-packages/flask/json/__init__.py", line 100, in default
return _json.JSONEncoder.default(self, o)
File "/usr/lib/python3.9/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Undefined is not JSON serializable
2021-05-31 10:43:58,988:INFO:werkzeug:127.0.0.1 - - [31/May/2021 10:43:58] "GET /chartbalancesview/chart/ HTTP/1.1" 500 -
```
### Steps to reproduce
As above
| closed | 2021-05-31T09:49:12Z | 2021-06-23T16:21:27Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1647 | [] | tobyporter | 0 |
jonaswinkler/paperless-ng | django | 195 | Missing search results | Hi,
i am currently using the version 0.9.9 docker-compose version.
I added some documents and they were correctly processed. I can see the content of the documents.
But if i search with the top bar the results are not appearing:

But the auto completion is working:

The container log doesn't show any hint:
```
webserver_1 | WARNING 2020-12-27 14:30:16,799 log Bad Request: /api/search/
webserver_1 | 172.23.0.3 - - [27/Dec/2020:14:30:19 +0000] "GET /api/search/autocomplete/?term=leben HTTP/1.1" 200 164 "https://pl.xxx.de/search?query=lebenslauf%20" "Mozilla/5.0 (X11; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
webserver_1 | 172.23.0.3 - - [27/Dec/2020:14:30:20 +0000] "GET /api/search/autocomplete/?term=lebens HTTP/1.1" 200 164 "https://pl.xxx.de/search?query=lebenslauf%20" "Mozilla/5.0 (X11; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
webserver_1 | 172.23.0.3 - - [27/Dec/2020:14:30:20 +0000] "GET /api/search/autocomplete/?term=lebensl HTTP/1.1" 200 14 "https://pl.xxx.de/search?query=lebenslauf%20" "Mozilla/5.0 (X11; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
webserver_1 | 172.23.0.3 - - [27/Dec/2020:14:30:20 +0000] "GET /api/search/autocomplete/?term=lebensla HTTP/1.1" 200 14 "https://pl.xxx.de/search?query=lebenslauf%20" "Mozilla/5.0 (X11; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
webserver_1 | 172.23.0.3 - - [27/Dec/2020:14:30:21 +0000] "GET /api/search/autocomplete/?term=lebenslau HTTP/1.1" 200 14 "https://pl.xxx.de/search?query=lebenslauf%20" "Mozilla/5.0 (X11; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
webserver_1 | 172.23.0.3 - - [27/Dec/2020:14:30:22 +0000] "GET /api/search/autocomplete/?term=lebenslauf HTTP/1.1" 200 14 "https://pl.xxx.de/search?query=lebenslauf%20" "Mozilla/5.0 (X11; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"
webserver_1 | 127.0.0.1 - - [27/Dec/2020:14:30:31 +0000] "GET / HTTP/1.1" 302 0 "-" "curl/7.64.0"
```
| closed | 2020-12-27T14:32:58Z | 2020-12-31T01:33:37Z | https://github.com/jonaswinkler/paperless-ng/issues/195 | [
"bug",
"fixed in next release"
] | Perry3D | 6 |
deepinsight/insightface | pytorch | 2,364 | Impact of input scale on the output in Retinaface | I have been utilizing the `retinaface_r50_v1` model and inputting images of size (640, 640), and the results have been consistently impressive. However, I'm curious to explore whether the outcome would improve if I were to omit resizing the images to (640, 640) and directly input them into the model. I'm uncertain about the potential impact of input scale on the output. | open | 2023-07-11T01:14:37Z | 2024-02-29T14:00:47Z | https://github.com/deepinsight/insightface/issues/2364 | [] | Younghyo | 1 |
unit8co/darts | data-science | 2,706 | Is It Possible to Deploy Darts Model in a TinyML Setup? | **Use Case**
Time series forecasting with LSTM and transformer but hosted on an edge device with limited battery power, memory and compute resource.
**Question**
Any framework or specific model format conversion available for DARTS model to help such a scenario? I am looking for something equivalent to
* LiteRT from Google (for tensorflow models)
Does Darts offer any such framework or easy integration with LiteRT?
| closed | 2025-03-02T01:46:22Z | 2025-03-04T09:31:24Z | https://github.com/unit8co/darts/issues/2706 | [
"question"
] | barmanroys | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,176 | melspectrogram() error | On ubuntu 22.04
when I run the demo_cli.py, I got this:
melspectrogram() takes 0 positional arguments but 2 positional arguments.
| open | 2023-03-18T02:09:25Z | 2023-09-23T12:14:29Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1176 | [] | tony2023 | 7 |
DistrictDataLabs/yellowbrick | matplotlib | 1,101 | yellowbrick dataset load functions are not able to download data | **Describe the bug**
Calling one of the yellowbrick.datasets.load_() functions for a dataset that is not locally cached fails.
**To Reproduce**
```python
from yellowbrick.datasets import load_energy
X, y = load_energy()
```
**Expected behavior**
The energy dataset is not cached locally and should be downloaded first, then loaded. It fails, instructing to contact yellowbrick maintainers.
**Traceback**
```
---------------------------------------------------------------------------
DatasetsError Traceback (most recent call last)
<ipython-input-19-af4150160b82> in <module>
2 from yellowbrick.datasets import load_credit
3
----> 4 X, y = load_credit()
5
6 # X, y = load_energy()
~/miniconda3/envs/ensf-ml/lib/python3.8/site-packages/yellowbrick/datasets/loaders.py in load_credit(data_home, return_dataset)
191 data in a variety of formats as well as associated metadata and content.
192 """
--> 193 return _load_dataset("credit", data_home, return_dataset)
194
195
~/miniconda3/envs/ensf-ml/lib/python3.8/site-packages/yellowbrick/datasets/loaders.py in _load_dataset(name, data_home, return_dataset)
60 if return_dataset:
61 return data
---> 62 return data.to_data()
63
64
~/miniconda3/envs/ensf-ml/lib/python3.8/site-packages/yellowbrick/datasets/base.py in to_data(self)
175 """
176 if pd is not None:
--> 177 return self.to_pandas()
178 return self.to_numpy()
179
~/miniconda3/envs/ensf-ml/lib/python3.8/site-packages/yellowbrick/datasets/base.py in to_pandas(self)
218 # Ensure the metadata is valid before continuing
219 if self.meta is None:
--> 220 raise DatasetsError(
221 (
222 "the downloaded dataset was improperly packaged without meta.json "
DatasetsError: the downloaded dataset was improperly packaged without meta.json - please report this bug to the Yellowbrick maintainers!
```
**Desktop (please complete the following information):**
- OS: macOs and Windows
- Python Version Anaconda Python 3.8
- Yellowbrick Version 1.1
**Many thanks!** | closed | 2020-10-02T02:11:48Z | 2020-10-02T15:18:16Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1101 | [
"type: task"
] | ypauchard | 6 |
aio-libs-abandoned/aioredis-py | asyncio | 1,443 | TypeError: duplicate base class TimeoutError | ### Describe the bug
When trying to import `aioredis` in Python 3.11 an error is raised
### To Reproduce
1- use python 3.11
2- try to import aioredis
3- this error is raised:
```python
>>> import aioredis
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".venv/lib64/python3.11/site-packages/aioredis/__init__.py", line 1, in <module>
from aioredis.client import Redis, StrictRedis
File ".venv/lib64/python3.11/site-packages/aioredis/client.py", line 32, in <module>
from aioredis.connection import (
File ".venv/lib64/python3.11/site-packages/aioredis/connection.py", line 33, in <module>
from .exceptions import (
File ".venv/lib64/python3.11/site-packages/aioredis/exceptions.py", line 14, in <module>
class TimeoutError(asyncio.TimeoutError, builtins.TimeoutError, RedisError):
TypeError: duplicate base class TimeoutError
```
### Expected behavior
It should import correctly without errors. I think the problem is in [aioredis/exceptions.py#L14](https://github.com/aio-libs/aioredis-py/blob/master/aioredis/exceptions.py#L14)
```python
class TimeoutError(asyncio.TimeoutError, builtins.TimeoutError, RedisError):
pass
```
`asyncio.TimeoutError` inherits from (and since Python 3.11 is an alias of) `builtins.TimeoutError`, and both are used as base classes, which Python rejects as duplicates.
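For illustration (a plain-Python stand-in, not the library's code): any `class` statement whose base list names the same class twice raises exactly this `TypeError`, which is what happens on 3.11 where the two names refer to one class:

```python
class RedisError(Exception):
    pass

# Stand-in for the Python 3.11 situation, where asyncio.TimeoutError and
# builtins.TimeoutError are two names for the same class:
AsyncTimeout = TimeoutError

try:
    class MyTimeoutError(AsyncTimeout, TimeoutError, RedisError):
        pass
    msg = "no error"
except TypeError as e:
    msg = str(e)

print(msg)  # duplicate base class TimeoutError
```

Dropping one of the redundant bases (e.g. keeping only `asyncio.TimeoutError` and `RedisError`) avoids the duplication on 3.11 while staying equivalent on older versions.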
### Logs/tracebacks
```python-traceback
already added above
```
### Python Version
```console
$ python --version
3.11.0
```
### aioredis Version
```console
$ python -m pip show aioredis
Name: aioredis
Version: 2.0.1
Summary: asyncio (PEP 3156) Redis support
Home-page: https://github.com/aio-libs/aioredis-py
Author:
Author-email:
License: MIT
Location: /mnt/d/dev/python/smsarko/smsarko-fastapi/.venvl/lib64/python3.11/site-packages
Requires: async-timeout, typing-extensions
Required-by:
```
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct | open | 2022-11-02T16:28:43Z | 2022-11-07T05:10:08Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/1443 | [
"bug"
] | farahats9 | 2 |
benbusby/whoogle-search | flask | 279 | [BUG] WHOOGLE_CONFIG_STYLE & WHOOGLE_CONFIG_SAFE doesn't work | **Describe the bug**
Docker's WHOOGLE_CONFIG_STYLE and WHOOGLE_CONFIG_SAFE doesn't work. Doesn't change the css and safe search settings.
**To Reproduce**
Steps to reproduce the behavior:
1. Using the default docker compose but changed the css. I used my own cusom css but for simplicity sake, the following also didn't work:
```
environment:
- PUID=102
- PGID=102
- WHOOGLE_DOTENV=0
- WHOOGLE_CONFIG_URL=*secret*
- WHOOGLE_CONFIG_COUNTRY=*secret*
- WHOOGLE_CONFIG_LANGUAGE=*secret*
- WHOOGLE_CONFIG_SAFE=0
- WHOOGLE_CONFIG_DARK=1
- WHOOGLE_CONFIG_STYLE=":root{--whoogle-dark-logo:#fff;}"
```
2. Go to whoogle and the css doesn't change. Safe search is on.
**Deployment Method**
- Docker
**Version of Whoogle Search**
- Latest from docker
**Desktop (please complete the following information):**
- OS: Windows
- Browser Firefox, chrome
**Smartphone (please complete the following information):**
- Tested also on iphone safari
**Additional context**
Not really any error in the docker logs, maybe this:
```
I learned some more directory information, but not enough to build a circuit: We need more microdescriptors: we have 0/6649, and can only build 0% of likely paths. (We have 0% of guards bw, 0% of midpoint bw, and 0% of end bw (no exits in consensus, using mid) = 0% of path bw.)
```
I can set the css using the "Custom CSS" field on the home page, which only persist for the particular browser that I'm using.
Maybe it'd be easier if there is a way to mount config.json?
| closed | 2021-04-10T10:16:46Z | 2021-04-12T20:59:36Z | https://github.com/benbusby/whoogle-search/issues/279 | [
"bug"
] | ghost | 1 |
microsoft/nni | deep-learning | 5,273 | TypeError: forward() missing 1 required positional argument: 'input' | **Describe the issue**:
Facing this error in ModelSpeedup(model, torch.rand(1, 2, 512, 512).to(device), masks).speedup_model()
This is a UNet model
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
[2022-12-07 21:08:58] infer module masks...
2022-12-07 21:08:58,875 infer module masks...
[2022-12-07 21:08:58] Update mask for .prim::TupleUnpack.63
2022-12-07 21:08:58,895 Update mask for .prim::TupleUnpack.63
[2022-12-07 21:08:58] Update mask for module.down_convs.0.block_beforepool.0
2022-12-07 21:08:58,895 Update mask for module.down_convs.0.block_beforepool.0
Traceback (most recent call last):
  File "prune_oneshot_nni.py", line 447, in <module>
    main()
  File "prune_oneshot_nni.py", line 414, in main
    ModelSpeedup(model, torch.rand(1, 2, 512, 512).to(device), masks).speedup_model()
**How to reproduce it?**: | closed | 2022-12-08T05:37:09Z | 2023-02-24T02:37:27Z | https://github.com/microsoft/nni/issues/5273 | [] | nralka2007 | 2 |
keras-team/keras | python | 20,568 | Keras 3.7 Broke My Code | Hello Devs,
I am trying to Impliment the Keras Deel:abV3 Segmentation https://keras.io/keras_hub/guides/semantic_segmentation_deeplab_v3/ on Custom Dataset
With Following Changes:
1. Classes: 2
2. Image Size (1024,1024)
In Keras 3.6 there were no issues while training, but since the latest release, i.e. Keras 3.7, after 107 steps in the first epoch I started getting
**loss: nan**; as soon as I reverted the version to 3.6, all was good.
To resolve the issue with 3.7 I tried multiple approaches:
1. Exploding Gradients
2. NaN Data Points
3. Different Optimisers
But the issue still remains. Also, I noted a new warning in the new version:
```
UserWarning: The structure of `inputs` doesn't match the expected structure: ['keras_tensor_265']. Received: the structure of inputs=(2,1024,1024,3)
  warnings.warn(
```
I am a novice, but it would be great if anyone could guide me through this and how to resolve it. Following is the code snippet that creates the model:
```python
INITIAL_LR = 0.007 * BATCH_SIZE / 16
EPOCHS = 20
learning_rate = keras.optimizers.schedules.CosineDecay(
    INITIAL_LR,
    decay_steps=EPOCHS * 2124,
)
IMAGE_SIZE = 1024

strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))

with strategy.scope():
    image_converter = keras_hub.layers.DeepLabV3ImageConverter(
        image_size=(IMAGE_SIZE, IMAGE_SIZE), interpolation="bilinear", data_format='channels_last')
    preprocessor = keras_hub.models.DeepLabV3ImageSegmenterPreprocessor(image_converter)
    image_encoder = keras_hub.models.ResNetBackbone.from_preset("resnet_50_imagenet")
    deeplab_backbone = keras_hub.models.DeepLabV3Backbone(
        image_encoder=image_encoder,
        low_level_feature_key="P2",
        spatial_pyramid_pooling_key="P5",
        dilation_rates=[6, 12, 18],
        upsampling_size=8,
    )
    model = keras_hub.models.DeepLabV3ImageSegmenter(
        backbone=deeplab_backbone,
        num_classes=NUM_CLASSES,
        activation="sigmoid",
        # activation="relu",
        preprocessor=preprocessor,
    )
    model.load_weights("/kaggle/working/DeepLab.weights.h5")

    loss = keras.losses.CategoricalCrossentropy(from_logits=False)  ## Requires One Hot Encoding
    # loss = keras.losses.SparseCategoricalCrossentropy(from_logits=False)  ## Does Not Require One Hot Encoding
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=learning_rate, weight_decay=0.0001, global_clipnorm=1.0),
        loss=loss,
        metrics=[
            keras.metrics.MeanIoU(num_classes=2, name="iou"),
            # keras.metrics.IoU(num_classes=2, target_class_ids=(0, 1), sparse_y_pred=True, name="iou"),
            # keras.metrics.CategoricalAccuracy(name="cat_acc", dtype=None)
        ],
    )

early_sasving = keras.callbacks.ModelCheckpoint('/kaggle/working/DeepLab.weights.h5', verbose=1,
                                                save_weights_only=True, monitor='iou',
                                                save_best_only=True, mode='auto')
early_stopping = keras.callbacks.EarlyStopping(monitor='iou', patience=10)

history = model.fit(train_dataset, callbacks=[early_sasving, early_stopping], shuffle=True,
                    validation_data=val_dataset, epochs=EPOCHS)
```
I am running this notebook on Kaggle using 2 x T4GPU | open | 2024-11-30T07:28:52Z | 2025-01-24T20:44:04Z | https://github.com/keras-team/keras/issues/20568 | [
"type:support"
] | das-apratim | 12 |
holoviz/colorcet | matplotlib | 69 | Some categorical colormaps are given as list of numerical RGB instead of list of hex strings | colorcet 2.0.6
The colorcet user guide specifically mentions that it provides 'Bokeh-style' palettes as lists of hex strings, which is handy when working with Bokeh.
However, I realised this was not the case for some of the categorical palettes, including `cc.glasbey_bw` and `cc.glasbey_hv`. These return lists of RGB triplets which don't work with Bokeh.
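As a stop-gap until this is made consistent, the numeric palettes can be converted to Bokeh-friendly hex strings by hand. A sketch assuming the triplets are floats in [0, 1]:

```python
def rgb_to_hex(palette):
    """Convert (r, g, b) float triplets in [0, 1] to '#rrggbb' strings."""
    return [
        "#{:02x}{:02x}{:02x}".format(*(int(round(c * 255)) for c in rgb))
        for rgb in palette
    ]

# e.g. hex_palette = rgb_to_hex(cc.glasbey_bw)
print(rgb_to_hex([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]))  # → ['#000000', '#ffffff']
```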
Accessing these palettes by string name (_e.g._ `cc.palette['glasbey_hv']`) does yield a list of hex strings... so this is only an issue with regard to consistency. | closed | 2021-09-08T14:01:54Z | 2021-11-27T02:29:42Z | https://github.com/holoviz/colorcet/issues/69 | [] | TheoMathurin | 2 |
nalepae/pandarallel | pandas | 223 | `from time import time_ns` raise ImportError when using python3.6 (same with #38) | ## General
- **Operating System**:
- **Python version**: 3.6
- **Pandas version**: 1.1.5
- **Pandarallel version**: 1.6.4
## Acknowledgement
- [x] My issue is **NOT** present when using `pandas` alone (without `pandarallel`)
- [x] If I am on **Windows**, I read the [Troubleshooting page](https://nalepae.github.io/pandarallel/troubleshooting/)
before writing a new bug report
## Bug description
### Observed behavior
```
Traceback (most recent call last):
File "...", line 273, in <module>
robot.validate()
File "...", line 7, in validate
if not validator.validate():
File "...", line 76, in validate
from pandarallel import pandarallel
File "/opt/conda/envs/python3.6/lib/python3.6/site-packages/pandarallel/__init__.py", line 1, in <module>
from .core import pandarallel
File "/opt/conda/envs/python3.6/lib/python3.6/site-packages/pandarallel/core.py", line 26, in <module>
from .progress_bars import ProgressBarsType, get_progress_bars, progress_wrapper
File "/opt/conda/envs/python3.6/lib/python3.6/site-packages/pandarallel/progress_bars.py", line 8, in <module>
from time import time_ns
ImportError: cannot import name 'time_ns'
```
### Expected behavior
Returns nothing (the import succeeds).
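While stuck on Python 3.6 (`time.time_ns` was only added in 3.7), one possible local workaround is to install a shim before importing `pandarallel`. A monkey-patch sketch, not a supported fix:

```python
import time

if not hasattr(time, "time_ns"):  # Python < 3.7
    # approximate time_ns from the float-seconds clock
    time.time_ns = lambda: int(time.time() * 1_000_000_000)

# `from time import time_ns` inside pandarallel now resolves:
# from pandarallel import pandarallel
```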
## Minimal but working code sample to ease bug fix for `pandarallel` team
```python
from pandarallel import pandarallel
``` | closed | 2023-02-07T13:00:55Z | 2023-02-12T12:07:45Z | https://github.com/nalepae/pandarallel/issues/223 | [] | tongyifan | 1 |
babysor/MockingBird | pytorch | 37 | Running with the model provided here raises this RuntimeError: Error(s) in loading state_dict for Tacotron: size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([70, 512]) from checkpoint, the shape in current model is torch.Size([75, 512]). | closed | 2021-08-23T04:49:00Z | 2025-03-07T22:47:57Z | https://github.com/babysor/MockingBird/issues/37 | [
"bug",
"wontfix"
] | wangkewk | 80 | |
albumentations-team/albumentations | machine-learning | 1,526 | [Bug] On the website search shows old transforms, but does not show those that were added recently | closed | 2024-02-20T03:20:05Z | 2024-03-26T03:31:12Z | https://github.com/albumentations-team/albumentations/issues/1526 | [
"bug"
] | ternaus | 1 | |
jonaswinkler/paperless-ng | django | 1,261 | [Other] Creating Tags based on Correspondent or Regex input | I have an interesting use-case. All of my documents automatically have a numbering sequence on them, e.g. 22-44421, which is easy to regex-match and assign a tag.
I'm wondering if paperless-ng has the capability to consume the document, match the regex pattern, take the found number sequence (e.g. 22-44421), create a tag, and assign it.
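For what it's worth, the matching step itself is simple; the pattern below is an assumption based on the `22-44421` example, and actually creating/assigning the tag would still need support inside paperless-ng:

```python
import re

# two digits, a dash, five digits; inferred from the 22-44421 example
SEQUENCE = re.compile(r"\b(\d{2}-\d{5})\b")

def extract_sequence(document_text):
    """Return the first numbering sequence found, or None."""
    match = SEQUENCE.search(document_text)
    return match.group(1) if match else None

print(extract_sequence("Invoice 22-44421, filed 2021"))  # → 22-44421
```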
I reviewed the documentation and didn't see a place where this could be done.
I can try to poke at the code myself to make this work but wanted to know if it's already built in. | open | 2021-08-26T18:38:14Z | 2021-09-07T15:28:58Z | https://github.com/jonaswinkler/paperless-ng/issues/1261 | [] | engineeringsys | 2 |
plotly/dash | data-visualization | 2,312 | [BUG] page modules not imported correctly | **Describe your context**
Dash pages for example see https://dash.plotly.com/urls
build from source using dev branch to get the latest
```
git checkout 756562bdbb5a3b7ef48197a4f9c6bfc803fb63e6
```
**Describe the bug**
In my code
```dash.page_registry[module_name]```
gives me access to many useful attributes of my page, including the `layout()` function; however, I cannot access any custom methods or classes because the page modules are not loaded properly.
```sys.modules[module_name]```
Does not contain my module.
**Expected behavior**
One simple addition to dash.py will solve this and allow me to use page modules that have been loaded
```diff
diff --git a/dash/dash.py b/dash/dash.py
index bb8327e6..bd43df8f 100644
--- a/dash/dash.py
+++ b/dash/dash.py
@@ -2024,6 +2024,7 @@ class Dash:
_pages.PAGE_REGISTRY[module_name]["layout"] = getattr(
page_module, "layout"
)
+ sys.modules[module_name] = page_module
@staticmethod
def _path_to_page(path_id):
```
See for example this discussion https://stackoverflow.com/questions/73060129/how-are-changes-to-sys-modules-propagated
With this fix I can then use getmembers to access my custom methods and classes
```print("functions", getmembers(sys.modules[module_name], isfunction))```
| closed | 2022-11-11T20:10:31Z | 2023-03-15T22:27:44Z | https://github.com/plotly/dash/issues/2312 | [] | peteasa | 1 |
allenai/allennlp | data-science | 4,718 | Support for transformers 3.1.0 | Are there any plans to support transformers 3.1 and above?
Currently, `pip install allennlp` will uninstall any transformers version later than 3.0.2.
"Feature request"
] | javierabosch2 | 1 |
xonsh/xonsh | data-science | 5,299 | NotADirectoryError: [Errno 20] Not a directory: 'dircolors' | Running on Mac without `dircolors` installed.
```xsh
python3 -m xonsh
```
```python
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/xonsh/main.py", line 470, in main
sys.exit(main_xonsh(args))
File "/usr/local/lib/python3.10/site-packages/xonsh/main.py", line 511, in main_xonsh
print_welcome_screen()
File "/usr/local/lib/python3.10/site-packages/xonsh/xonfig.py", line 826, in print_welcome_screen
print_color(line)
File "/usr/local/lib/python3.10/site-packages/xonsh/tools.py", line 2055, in print_color
xsh.shell.shell.print_color(string, **kwargs)
File "/usr/local/lib/python3.10/site-packages/xonsh/ptk_shell/shell.py", line 542, in print_color
tokens = partial_color_tokenize(string)
File "/usr/local/lib/python3.10/site-packages/xonsh/style_tools.py", line 70, in partial_color_tokenize
styles = XSH.shell.shell.styler.styles
File "/usr/local/lib/python3.10/site-packages/xonsh/base_shell.py", line 337, in styler
self._styler = XonshStyle(env.get("XONSH_COLOR_STYLE"))
File "/usr/local/lib/python3.10/site-packages/xonsh/pyghooks.py", line 372, in __init__
self.style_name = style_name
File "/usr/local/lib/python3.10/site-packages/xonsh/pyghooks.py", line 413, in style_name
for file_type, xonsh_color in XSH.env.get("LS_COLORS", {}).items():
File "/usr/local/lib/python3.10/site-packages/xonsh/environ.py", line 2234, in get
return self[key]
File "/usr/local/lib/python3.10/site-packages/xonsh/environ.py", line 2171, in __getitem__
val = self._d[key] = val(self)
File "/usr/local/lib/python3.10/site-packages/xonsh/environ.py", line 693, in default_lscolors
lsc = LsColors.fromdircolors()
File "/usr/local/lib/python3.10/site-packages/xonsh/environ.py", line 485, in fromdircolors
out = subprocess.check_output(
File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 503, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
NotADirectoryError: [Errno 20] Not a directory: 'dircolors'
Xonsh encountered an issue during launch
Failback to /bin/zsh
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| closed | 2024-03-11T20:32:48Z | 2024-03-11T20:53:52Z | https://github.com/xonsh/xonsh/issues/5299 | [
"mac osx",
"environ"
] | anki-code | 0 |
seleniumbase/SeleniumBase | pytest | 2,403 | Add YAML support for processing capabilities files when using remote Selenium Grids | ## Add YAML support for processing capabilities files when using remote Selenium Grids
Currently, there's only support for Python and JSON formats.
Resolving this will also resolve https://github.com/seleniumbase/SeleniumBase/issues/2401
For more information on capabilities files, see: [SeleniumBase/examples/capabilities/ReadMe.md](https://github.com/seleniumbase/SeleniumBase/blob/master/examples/capabilities/ReadMe.md)
Here's an example run command that's expected to work once this new feature is added:
```bash
pytest --browser=remote --server=USERNAME:KEY@hub.browserstack.com --port=80 --cap-file=capabilities/sample_cap_file_BS.yml
```
Here's an example `.yml` file that was generated from https://www.browserstack.com/docs/automate/capabilities:
```yml
platforms:
- browserName: safari
osVersion: 17
deviceName: iPhone 15 Pro Max
buildIdentifier: ${BUILD_NUMBER}
parallelsPerPlatform: 1
projectName: My Project
browserstackLocal: true
debug: true
networkLogs: true
```
| closed | 2023-12-31T19:22:54Z | 2023-12-31T23:46:47Z | https://github.com/seleniumbase/SeleniumBase/issues/2403 | [
"enhancement"
] | mdmintz | 1 |
Guovin/iptv-api | api | 357 | Error when running in Docker | It fails to start when run in a Docker environment; the log reports: exec format error | closed | 2024-09-30T00:19:41Z | 2024-10-01T23:17:32Z | https://github.com/Guovin/iptv-api/issues/357 | [
"question"
] | aweder | 3 |
manrajgrover/halo | jupyter | 182 | is_supported is always False on Windows | I removed the support check and now the symbols are working. | open | 2024-03-10T14:29:33Z | 2024-03-10T14:32:25Z | https://github.com/manrajgrover/halo/issues/182 | [] | sushantshah-dev | 1 |
stitchfix/hamilton | numpy | 248 | Usage telemetry of Hamilton features | **Is your feature request related to a problem? Please describe.**
To be able to better serve the Hamilton community, finer grained usage metrics would be very helpful.
In the project's current state, we don't know anything about usage of the feature set that Hamilton offers, other than what people ask in the Slack help channel.
It would be great to know what is really being used, e.g. which decorators, which experimental modules, etc.
That way when deciding on future improvements and adjustments we could:
1. Make an informed decision as to how likely a change is to impact the community.
2. Understand the impact of new feature additions and adoption.
3. Understand when features should move on from being experimental.
4. Understand how quickly people adjust and upgrade their Hamilton versions.
5. Understand where people encounter the most errors -- and help improve documentation/and or error messages.
**Describe the solution you'd like**
It would be great to know in an anonymous fashion:
1. Provide the ability to opt out of sending any tracking information.
2. What decorators are used in a Hamilton DAG definition.
3. What graph adapters are used.
4. How many functions comprise a DAG & what are the in/out edge counts.
5. Python version
6. Operating system type
7. Operating system version
8. Source of errors at DAG construction time, i.e. which part of the Hamilton code base is throwing it. Ideally we know which line of Hamilton code caused it.
9. Source of errors at DAG execution time -- is it user code, or Hamilton code.
Of course we'd have an explicit policy on its usage, and make it clear to users how to opt-out.
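For point 1, the opt-out can be a small gate that every telemetry call checks before building a payload. A sketch with a hypothetical `HAMILTON_TELEMETRY_ENABLED` environment variable (the variable name is illustrative, not an actual Hamilton setting):

```python
import os

def telemetry_enabled(environ=None):
    """Opt-out gate: any explicit falsy value disables tracking entirely."""
    environ = os.environ if environ is None else environ
    flag = str(environ.get("HAMILTON_TELEMETRY_ENABLED", "true")).strip().lower()
    return flag not in {"0", "false", "no", "off"}

def maybe_track(event, environ=None):
    if not telemetry_enabled(environ):
        return None  # never build or send a payload
    # ... anonymize and send `event` here ...
    return event
```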
**Describe alternatives you've considered**
N/A
**Additional context**
Telemetry usage tracking is becoming more standard in open source. It helps the maintainers to better serve the community.
E.g. data diff does this -- see their tracking code and privacy policy:
* https://github.com/datafold/data-diff/blob/master/data_diff/tracking.py
* https://docs.datafold.com/os_diff/usage_analytics_data_privacy/
| closed | 2022-12-16T20:05:28Z | 2023-01-02T15:24:59Z | https://github.com/stitchfix/hamilton/issues/248 | [
"product idea",
"repo hygiene"
] | skrawcz | 6 |
pallets-eco/flask-sqlalchemy | flask | 553 | Mention error message in app context docs | When using `init_app`, all operations have to be in a view function or application context. The error message was updated to explain this more clearly (I hope), but the docs mention neither the old nor new error message, so people probably aren't finding them through search.
Docs should mention the error message as well as that the error happens with `init_app`, even if you're not using the factory pattern. | closed | 2017-10-03T13:06:46Z | 2020-12-05T20:55:32Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/553 | [
"docs"
] | davidism | 0 |
mlfoundations/open_clip | computer-vision | 536 | This machine is not connected to the Internet; how can the code be adapted so the pretrained model is not downloaded online? | What I do:
model, _, _ = open_clip.create_model_and_transforms("ViT-H-14", device="cpu", pretrained="laion2b_s32b_b79k", cache_dir="/data/work/StableSR-main/")
error:
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/opt/conda/lib/python3.8/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/opt/conda/lib/python3.8/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.8/site-packages/open_clip/factory.py", line 151, in create_model_and_transforms
model = create_model(
File "/opt/conda/lib/python3.8/site-packages/open_clip/factory.py", line 113, in create_model
checkpoint_path = download_pretrained(pretrained_cfg, cache_dir=cache_dir)
File "/opt/conda/lib/python3.8/site-packages/open_clip/pretrained.py", line 295, in download_pretrained
target = download_pretrained_from_hf(model_id, cache_dir=cache_dir)
File "/opt/conda/lib/python3.8/site-packages/open_clip/pretrained.py", line 265, in download_pretrained_from_hf
cached_file = hf_hub_download(model_id, filename, revision=revision, cache_dir=cache_dir)
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1166, in hf_hub_download
metadata = get_hf_file_metadata(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1498, in get_hf_file_metadata
r = _request_wrapper(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 407, in _request_wrapper
response = _request_wrapper(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 442, in _request_wrapper
return http_backoff(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 129, in http_backoff
response = requests.request(method=method, url=url, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/lib/python3.8/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)')))
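If the checkpoint file can be copied onto the machine out-of-band, `open_clip`'s `pretrained` argument can reportedly point at a local file instead of a hub tag. A hedged sketch (the path below is hypothetical, and the helper is just illustration):

```python
import os

def resolve_pretrained(local_path, fallback_tag="laion2b_s32b_b79k"):
    """Prefer an already-downloaded checkpoint file over the hub tag."""
    return local_path if os.path.exists(local_path) else fallback_tag

ckpt = resolve_pretrained("/data/ckpts/open_clip_pytorch_model.bin")
# model, _, preprocess = open_clip.create_model_and_transforms(
#     "ViT-H-14", device="cpu", pretrained=ckpt)
```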
| closed | 2023-05-19T05:59:47Z | 2025-01-04T23:32:24Z | https://github.com/mlfoundations/open_clip/issues/536 | [] | ouyangjiacs | 7 |
geex-arts/django-jet | django | 451 | Does not support Django 3 | I checked out the project, ran `python manage.py makemigrations`, and got this:
```
File "/Users/sarit/study/django-jet/jet/dashboard/models.py", line 4, in <module>
from django.utils.encoding import python_2_unicode_compatible
ImportError: cannot import name 'python_2_unicode_compatible' from 'django.utils.encoding' (/Users/sarit/.pyenv/versions/django-jet/lib/python3.8/site-packages/django/utils/encoding.py)
```
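Until the package is updated for Django 3, one local stop-gap is shimming the removed helper; on Python 3 the decorator was effectively a no-op, so a pass-through works (a sketch, not an upstream fix):

```python
try:
    from django.utils.encoding import python_2_unicode_compatible
except ImportError:  # removed in Django 3.0
    def python_2_unicode_compatible(cls):
        """No-op replacement: on Python 3, __str__ already returns text."""
        return cls

@python_2_unicode_compatible
class Widget:
    def __str__(self):
        return "widget"

print(str(Widget()))  # → widget
```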
| closed | 2020-05-27T05:14:15Z | 2020-05-28T02:39:13Z | https://github.com/geex-arts/django-jet/issues/451 | [] | elcolie | 2 |
Textualize/rich | python | 2,473 | support for creating scrolling within a layout | I am using Layout in order to print `rich.table` output, which is quite nice; however, the tables I have are sometimes long and the layout causes them to be cut off. Shown below is an example of a layout where lines are not cut off:
```
(buildtest) ~/Documents/github/buildtest/ [fix_tables_wrapping_bc_summary*] buildtest bc summary
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ Reading Buildspec Cache File: /Users/siddiq90/Documents/github/buildtest/var/buildspecs/cache.json │
│ Total Valid Buildspecs: 48 │
│ Total Invalid Buildspecs: 3 │
│ Total Unique Tags: 15 │
│ Total Maintainers: 3 │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Tag Breakdown Executor Breakdown Maintainers Breakdown
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓ ┏━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓ ┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ tag ┃ total tests ┃ ┃ executor ┃ total tests ┃ ┃ maintainers ┃ total buildspecs ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩ ┡━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩ ┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ tutorials │ 35 │ │ generic.local.sh │ 21 │ │ @johndoe │ 1 │
│ python │ 2 │ │ generic.local.bash │ 82 │ │ @bobsmith │ 1 │
│ fail │ 3 │ │ generic.local.csh │ 3 │ │ @shahzebsiddiqui │ 2 │
│ network │ 2 │ │ badexecutor │ 1 │ └──────────────────┴──────────────────┘
│ ping │ 1 │ │ generic.local.(bash|sh) │ 4 │
│ pass │ 2 │ │ generic.pbs.workq │ 1 │
│ system │ 9 │ └─────────────────────────┴─────────────┘
│ filesystem │ 1 │
│ storage │ 1 │
│ configuration │ 1 │
│ slurm │ 17 │
│ cobalt │ 7 │
│ lsf │ 12 │
│ containers │ 8 │
│ singularity │ 8 │
└───────────────┴─────────────┘
Invalid Buildspecs
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Buildspecs ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ /Users/siddiq90/Documents/github/buildtest/tutorials/invalid_buildspec_section.yml │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ /Users/siddiq90/Documents/github/buildtest/tutorials/invalid_tags.yml │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ /Users/siddiq90/Documents/github/buildtest/tutorials/burstbuffer_datawarp_executors.yml │
└─────────────────────────────────────────────────────────────────────────────────────────┘
```
Currently my terminal is 36 lines with a 2:1 ratio at the top vs bottom
```
(buildtest) ~/Documents/github/buildtest/ [fix_tables_wrapping_bc_summary*] echo $LINES
36
```
Now if I rerun the same output with a smaller terminal size, let's say 20 lines, then I get into a situation where the tables are printed but not all of their content can be shown. It would be nice to have some scrolling capability:
```
(buildtest) ~/Documents/github/buildtest/ [fix_tables_wrapping_bc_summary*] LINES=20 buildtest bc summary
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ Reading Buildspec Cache File: /Users/siddiq90/Documents/github/buildtest/var/buildspecs/cache.json │
│ Total Valid Buildspecs: 48 │
│ Total Invalid Buildspecs: 3 │
│ Total Unique Tags: 15 │
│ Total Maintainers: 3 │
│ │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Tag Breakdown Executor Breakdown Maintainers Breakdown
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓ ┏━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓ ┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ tag ┃ total tests ┃ ┃ executor ┃ total tests ┃ ┃ maintainers ┃ total buildspecs ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩ ┡━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩ ┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ tutorials │ 35 │ │ generic.local.sh │ 21 │ │ @johndoe │ 1 │
│ python │ 2 │ │ generic.local.bash │ 82 │ │ @bobsmith │ 1 │
│ fail │ 3 │ │ generic.local.csh │ 3 │ │ @shahzebsiddiqui │ 2 │
│ network │ 2 │ │ badexecutor │ 1 │ └──────────────────┴──────────────────┘
│ ping │ 1 │ │ generic.local.(bash|sh) │ 4 │
│ pass │ 2 │ │ generic.pbs.workq │ 1 │
│ system │ 9 │ └─────────────────────────┴─────────────┘
│ filesystem │ 1 │
│ storage │ 1 │
Invalid Buildspecs
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Buildspecs ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ /Users/siddiq90/Documents/github/buildtest/tutorials/invalid_buildspec_section.yml │
├─────────────────────────────────────────────────────────────────────────────────────────┤
│ /Users/siddiq90/Documents/github/buildtest/tutorials/invalid_tags.yml │
```
Note: I already have pager support built into the code, where pagination is done for the entire output, but that is not exactly what I am looking for. I want to scroll within a layout, if it's possible, by clicking on the layout. I am not sure whether the https://github.com/Textualize/textual project is supposed to address this problem. I have not tried it out yet. | closed | 2022-08-17T17:32:53Z | 2022-09-23T13:15:29Z | https://github.com/Textualize/rich/issues/2473 | [] | shahzebsiddiqui | 2 |
hzwer/ECCV2022-RIFE | computer-vision | 67 | Add installation for Windows | Add installation instructions for Windows to the description. This repository works perfectly with these instructions:
```
git clone git@github.com:hzwer/arXiv2020-RIFE.git
cd arXiv2020-RIFE
1 pip install torch===1.7.1 torchvision===0.8.2 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
2 pip install -r requirements_win10.txt
```
**numpy-1.19.4 raises `ModuleNotFoundError: No module named 'cv2'` on Windows, so it was replaced with numpy-1.19.2**
Easiest way: Anaconda (GPU)
```
git clone git@github.com:hzwer/arXiv2020-RIFE.git
cd arXiv2020-RIFE
1 conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
2 conda install -c conda-forge ffmpeg
3 pip install -r requirements_win10.txt
```
requirements_win10.txt
```
numpy==1.19.2
tqdm
sk-video
torch
opencv-python
moviepy
``` | closed | 2020-12-14T02:41:09Z | 2021-03-19T10:00:38Z | https://github.com/hzwer/ECCV2022-RIFE/issues/67 | [] | Anafeyka | 1 |
yihong0618/running_page | data-visualization | 339 | GPX does not include heart rate or cadence | The GPX files exported from Keep and Joyrun in 1.60 currently have no heart rate or cadence data; could you add them? | closed | 2022-11-10T11:02:54Z | 2023-10-29T08:23:44Z | https://github.com/yihong0618/running_page/issues/339 | [
"bug"
] | lbp0 | 5 |
nonebot/nonebot2 | fastapi | 2,475 | Bug: adapters using forward connections handle events before NoneBot finishes starting up | ### Operating System
Other
### Python Version
*N/A*
### NoneBot Version
2.1.2
### Adapters
OneBot v11/v12 2.2.4, Console 0.4.0, Discord 0.1.1, DoDo 0.1.4, QQ 1.3.2, Satori 0.8.0, Telegram 0.1.0b14
### Protocol Implementation
*N/A*
### Problem Description
Adapters using forward connections handle events (i.e., call `nonebot.message.handle_event()`) before NoneBot has finished starting up (i.e., before `Lifespan.startup()` has completed).
### Steps to Reproduce
Simply handle an event with any adapter that uses a forward connection.
The code below triggers this bug **reliably**.
```python
from asyncio import sleep
from nonebot import get_driver, logger, on
startup = False
driver = get_driver()
matcher = on()
@driver.on_startup
async def _():
global startup
    await sleep(5)  # a sufficiently long time
logger.success("startup")
startup = True
@matcher.handle()
async def _():
if not startup:
logger.critical("handle event before startup!")
```
### Expected Result
```
> nb run
使用 Python: /home/nixos/adapter-handle-before-startup/.venv/bin/python
12-01 20:19:47 [SUCCESS] nonebot | NoneBot is initializing...
12-01 20:19:47 [INFO] nonebot | Current Env: prod
12-01 20:19:47 [SUCCESS] nonebot | Running NoneBot...
12-01 20:19:52 [SUCCESS] __main__ | startup
12-01 20:19:52 [INFO] nonebot | Application startup completed.
12-01 20:19:53 [INFO] nonebot | OneBot V11 | Bot ********** connected
12-01 20:19:53 [INFO] nonebot | Event will be handled by Matcher(type='', module=__main__, lineno=13)
12-01 20:19:53 [INFO] nonebot | Matcher(type='', module=__main__, lineno=13) running complete
12-01 20:19:53 [INFO] nonebot | Event will be handled by Matcher(type='', module=__main__, lineno=13)
12-01 20:19:53 [INFO] nonebot | Matcher(type='', module=__main__, lineno=13) running complete
```
### Screenshots or Logs
```
> nb run
使用 Python: /home/nixos/adapter-handle-before-startup/.venv/bin/python
12-01 20:17:57 [SUCCESS] nonebot | NoneBot is initializing...
12-01 20:17:57 [INFO] nonebot | Current Env: prod
12-01 20:17:58 [SUCCESS] nonebot | Running NoneBot...
12-01 20:17:58 [INFO] nonebot | OneBot V11 | Bot ********** connected
12-01 20:17:58 [INFO] nonebot | Event will be handled by Matcher(type='', module=__main__, lineno=13)
12-01 20:17:58 [CRITICAL] __main__ | handle event before startup!
12-01 20:17:58 [INFO] nonebot | Matcher(type='', module=__main__, lineno=13) running complete
12-01 20:17:58 [INFO] nonebot | Event will be handled by Matcher(type='', module=__main__, lineno=13)
12-01 20:17:58 [CRITICAL] __main__ | handle event before startup!
12-01 20:17:58 [INFO] nonebot | Matcher(type='', module=__main__, lineno=13) running complete
12-01 20:18:03 [SUCCESS] __main__ | startup
12-01 20:18:03 [INFO] nonebot | Application startup completed.
```
```[tasklist]
### Tasks
- [ ] #2483
- [ ] https://github.com/nonebot/adapter-onebot/pull/85
- [ ] Console
- [ ] Discord
- [ ] DoDo
- [ ] QQ
- [ ] Satori
- [ ] Telegram
```
| closed | 2023-12-01T20:30:24Z | 2024-01-24T17:20:19Z | https://github.com/nonebot/nonebot2/issues/2475 | [
"bug"
] | ProgramRipper | 11 |
apify/crawlee-python | web-scraping | 700 | Add an option for JSON-compatible logs | ### Description
Currently, Crawlee "statistics" logs are formatted as tables, which are human-readable but problematic when using JSON logs.
### Solution
Introduce a crawler flag that outputs logs in a JSON-compatible format. This would allow users to toggle between "table" and JSON-compatible logs.
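A minimal shape for such a flag could be swapping the statistics logger's formatter for one that emits a JSON object per record. A sketch of the output format only (the logger name and field names are illustrative, not Crawlee's actual internals):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("crawlee.statistics")
logger.addHandler(handler)
logger.warning("requests_finished=%s", 42)
# emits: {"level": "WARNING", "logger": "crawlee.statistics", "message": "requests_finished=42"}
```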
| closed | 2024-11-15T11:53:03Z | 2025-03-18T10:13:54Z | https://github.com/apify/crawlee-python/issues/700 | [
"enhancement",
"t-tooling"
] | vdusek | 1 |
sepandhaghighi/samila | matplotlib | 43 | README Bugs | #### Description
There are some bugs in README.md.
This issue addresses these bugs and tracks them.
## Bugs
- [x] PyPI Counter
The link to pypi counter should be `https://pepy.tech/project/samila` instead of `https://pepy.tech/count/samila` | closed | 2021-10-03T11:47:59Z | 2021-10-14T08:50:14Z | https://github.com/sepandhaghighi/samila/issues/43 | [
"bug"
] | sadrasabouri | 1 |
Miserlou/Zappa | django | 1,954 | Can't update due to Segmentation fault (core dumped) | When trying to update a stage (in my case, dev), just after uploading the zip a message appears saying literally the title of this issue.
I made sure I was running zappa from the venv and using Python 3.7 (it had been working with no problems up until now).
## Expected Behavior
The workflow should go on like normal, updating the existing API
## Actual Behavior
Crashes and burns with no further explanation.
## Steps to Reproduce
1. get into a virtual env
2. run zappa update <branch>
3. wait until it segfaults
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.48.2
* Operating System and Python version: Ubuntu 19.10 Python 3.7
* The output of `pip freeze`:
```
argcomplete==1.9.3
boto3==1.9.173
botocore==1.12.173
certifi==2019.6.16
cfn-flip==1.2.1
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
Flask==1.0.3
future==0.16.0
hjson==3.0.1
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10.1
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.1
marshmallow==3.2.2
placebo==0.9.0
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==5.1.1
requests==2.22.0
s3transfer==0.2.1
six==1.12.0
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.7
Unidecode==1.1.0
urllib3==1.25.3
Werkzeug==0.15.4
wsgi-request-logger==0.4.6
zappa==0.48.2
```
* Link to your project (optional): https://github.com/loscil06/random_names_api
* Your `zappa_settings.py`:
```
{
"dev":
{
"s3_bucket": "lmbda2randomnames",
"app_function": "app.app",
"aws_region": "us-east-2",
"parameter_depth": 1,
"environment_variables":
{}
}
}
```
| closed | 2019-11-05T07:16:15Z | 2020-06-22T20:37:58Z | https://github.com/Miserlou/Zappa/issues/1954 | [] | loscil06 | 8 |
ipyflow/ipyflow | jupyter | 61 | handle nested symbols in dynamic slicer | If a cell references `lst[1]`, we need to include both the slice that defines `lst[1]`, as well as the slice that defines the symbol `lst`. This is hard because `lst` could have multiple aliases, and we need to pick the right one, and the code is not currently structured in a way that makes it easy to do so. | closed | 2021-04-17T02:54:49Z | 2021-05-05T00:01:29Z | https://github.com/ipyflow/ipyflow/issues/61 | [] | smacke | 2 |
mirumee/ariadne-codegen | graphql | 302 | Enhancements for Custom Operation Builder | The initial version of the feature for building custom queries/mutations has been released. However, there are several improvements and additional functionalities needed to complete the feature. The tasks outlined below will address these enhancements.
- [ ] Support for Introspection Fields
- [ ] Support for Directives
- [ ] Support for Query/Mutation as a Return Type
- [ ] Add Possibility to Select Queries/Mutations from Schema
- [x] https://github.com/mirumee/ariadne-codegen/issues/303
### Contribution
If anyone is interested in contributing, feel free to submit pull requests. We welcome any help to improve this feature and make it more robust and user-friendly. Thank you for your support! | open | 2024-07-17T13:34:58Z | 2024-07-30T09:59:14Z | https://github.com/mirumee/ariadne-codegen/issues/302 | [] | DamianCzajkowski | 0 |
Yorko/mlcourse.ai | matplotlib | 78 | week 3 workbooks / hw dot not found | Possibly can be fixed by
```RUN apt-get install graphviz``` | closed | 2017-09-21T07:50:51Z | 2017-09-25T09:39:03Z | https://github.com/Yorko/mlcourse.ai/issues/78 | [
"enhancement"
] | sudodoki | 4 |
ploomber/ploomber | jupyter | 357 | Improve error message when failing to initialize Metaproduct | Tasks may generate more than one product like this:
```python
from ploomber.products import File
from ploomber.tasks import PythonCallable
from ploomber import DAG
def _do_stuff():
    pass
# note we are calling File
PythonCallable(_do_stuff, {'a': File('something')}, dag=DAG())
```
But if the user forgets that:
```python
# forgot to call File!
PythonCallable(_do_stuff, {'a': 'something'}, dag=DAG())
```
We get this error:
```pytb
~/dev/ploomber/src/ploomber/tasks/tasks.py in __init__(self, source, product, dag, name, params, unserializer, serializer)
100 self._source = type(self)._init_source(source, kwargs)
101 self._unserializer = unserializer or dag.unserializer
--> 102 super().__init__(product, dag, name, params)
103
104 @staticmethod
~/dev/ploomber/src/ploomber/tasks/abc.py in __init__(self, product, dag, name, params)
192 type(self).__name__))
193
--> 194 self.product.task = self
195 self._client = None
196
~/dev/ploomber/src/ploomber/products/metaproduct.py in task(self, value)
113 def task(self, value):
114 for p in self.products:
--> 115 p.task = value
116
117 def exists(self):
AttributeError: 'str' object has no attribute 'task'
```
Better: check whether `p` has a `task` attribute; if it doesn't, raise a more helpful error like "doesn't look like a Product instance, got object of type {type}"
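A minimal sketch of that check (the function name and message wording are illustrative, not the actual ploomber patch):

```python
def validate_products(products):
    """Fail fast with a helpful message when a value isn't Product-like."""
    for key, value in products.items():
        # Real products (e.g. File) expose a ``task`` attribute; a plain
        # string passed by mistake does not.
        if not hasattr(value, "task"):
            raise TypeError(
                f"value for {key!r} doesn't look like a Product instance, "
                f"got object of type {type(value).__name__}"
            )

try:
    validate_products({"a": "something"})  # user forgot File('something')
except TypeError as e:
    print(e)  # ... got object of type str
```

Raising `TypeError` is one choice among several; pointing at the offending key and its type is the part that matters.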
| closed | 2021-10-14T20:02:05Z | 2022-01-01T14:12:46Z | https://github.com/ploomber/ploomber/issues/357 | [
"good first issue"
] | edublancas | 4 |
keras-team/keras | data-science | 20,449 | Deep Learning Model building error | InvalidArgumentError Traceback (most recent call last)
Cell In[88], line 1
----> 1 history = model.fit(
2 train_ds,
3 epochs=EPOCHS,
4 batch_size=BATCH_SIZE,
5 verbose=1,
6 validation_data=val_ds
7 ) | closed | 2024-11-05T07:18:28Z | 2024-12-05T02:09:06Z | https://github.com/keras-team/keras/issues/20449 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | bankarrohan09 | 4 |
huggingface/datasets | machine-learning | 6,641 | unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte | ### Describe the bug
unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
### Steps to reproduce the bug
```
import sys
sys.getdefaultencoding()
'utf-8'
from datasets import load_dataset
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test dataset size: {len(dataset['test'])}")
Resolving data files: 100%
159/159 [00:00<00:00, 9909.28it/s]
Using custom data configuration samsum-0b1209637541c9e6
Downloading and preparing dataset json/samsum to C:/Users/Administrator/.cache/huggingface/datasets/json/samsum-0b1209637541c9e6/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%
3/3 [00:00<00:00, 119.99it/s]
Extracting data files: 100%
3/3 [00:00<00:00, 9.54it/s]
Generating train split:
88392/0 [00:15<00:00, 86848.17 examples/s]
Generating test split:
0/0 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:132, in Json._generate_tables(self, files)
131 try:
--> 132 pa_table = paj.read_json(
133 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
134 )
135 break
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\_json.pyx:290, in pyarrow._json.read_json()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: JSON parse error: Invalid value. in row 0
During handling of the above exception, another exception occurred:
UnicodeDecodeError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1819, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1818 _time = time.time()
-> 1819 for _, table in generator:
1820 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:153, in Json._generate_tables(self, files)
152 with open(file, encoding="utf-8") as f:
--> 153 dataset = json.load(f)
154 except json.JSONDecodeError:
File ~\AppData\Local\Programs\Python\Python310\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
276 """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
277 a JSON document) to a Python object.
278
(...)
291 kwarg; otherwise ``JSONDecoder`` is used.
292 """
--> 293 return loads(fp.read(),
294 cls=cls, object_hook=object_hook,
295 parse_float=parse_float, parse_int=parse_int,
296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File ~\AppData\Local\Programs\Python\Python310\lib\codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[81], line 5
1 from datasets import load_dataset
3 # Load dataset from the hub
4 #dataset = load_dataset("json",data_files="C:/Users/Administrator/Desktop/samsum/samsum/data/corpus/train.json",field="data")
----> 5 dataset = load_dataset('json',"samsum")
6 #dataset = load_dataset("samsum")
7 print(f"Train dataset size: {len(dataset['train'])}")
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py:1758, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1755 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1757 # Download and prepare data
-> 1758 builder_instance.download_and_prepare(
1759 download_config=download_config,
1760 download_mode=download_mode,
1761 ignore_verifications=ignore_verifications,
1762 try_from_hf_gcs=try_from_hf_gcs,
1763 num_proc=num_proc,
1764 )
1766 # Build dataset for splits
1767 keep_in_memory = (
1768 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1769 )
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:860, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
858 if num_proc is not None:
859 prepare_split_kwargs["num_proc"] = num_proc
--> 860 self._download_and_prepare(
861 dl_manager=dl_manager,
862 verify_infos=verify_infos,
863 **prepare_split_kwargs,
864 **download_and_prepare_kwargs,
865 )
866 # Sync info
867 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:953, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
949 split_dict.add(split_generator.split_info)
951 try:
952 # Prepare split will record examples associated to the split
--> 953 self._prepare_split(split_generator, **prepare_split_kwargs)
954 except OSError as e:
955 raise OSError(
956 "Cannot find data file. "
957 + (self.manual_download_instructions or "")
958 + "\nOriginal error:\n"
959 + str(e)
960 ) from None
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1708, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1706 gen_kwargs = split_generator.gen_kwargs
1707 job_id = 0
-> 1708 for job_id, done, content in self._prepare_split_single(
1709 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1710 ):
1711 if done:
1712 result = content
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1851, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1849 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1850 e = e.__context__
-> 1851 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1853 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
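For what it's worth, `0xac` is a UTF-8 continuation byte, so it can never start a character; that matches a file on disk that isn't actually UTF-8 (for example a locale-encoded or UTF-16 export). A standalone illustration of the failure mode, not using the samsum files:

```python
# Decoding a byte sequence that starts with 0xac fails exactly like the
# traceback above, because 0xac (0b10101100) is a continuation byte.
sample = b"\xac\x25"

try:
    sample.decode("utf-8")
except UnicodeDecodeError as err:
    print(err.reason)  # invalid start byte

# A quick sanity check before loading is to look at the file's first
# bytes; a UTF-16 file, for instance, usually starts with a BOM
# (b'\xff\xfe' or b'\xfe\xff'). The path below is hypothetical:
# with open("data/corpus/train.json", "rb") as f:
#     print(f.read(8))
```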
### Expected behavior
can't load dataset
### Environment info
dataset:samsum
system :win10
gpu:m40 24G | closed | 2024-02-04T08:49:31Z | 2024-02-06T09:26:07Z | https://github.com/huggingface/datasets/issues/6641 | [] | Hughhuh | 1 |
sgl-project/sglang | pytorch | 4,629 | [Bug] ValueError: '<class 'sglang.srt.configs.qwen2_5_vl_config.Qwen2_5_VLConfig'>' is already used by a Transformers model. | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
I use the latest docker image.
```
Singularity> pip list|grep sglang
sglang 0.4.4.post1 /sgl-workspace/sglang/python
```
```
Singularity> python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V3 --tp 32 --dist-init-addr 10.168.16.121:5000 --nnodes 4 --node-rank 0 --trust-remote-code --host 0.0.0.0 --port 30000
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/sgl-workspace/sglang/python/sglang/launch_server.py", line 6, in <module>
from sglang.srt.entrypoints.http_server import launch_server
File "/sgl-workspace/sglang/python/sglang/srt/entrypoints/http_server.py", line 44, in <module>
from sglang.srt.entrypoints.engine import _launch_subprocesses
File "/sgl-workspace/sglang/python/sglang/srt/entrypoints/engine.py", line 36, in <module>
from sglang.srt.managers.data_parallel_controller import (
File "/sgl-workspace/sglang/python/sglang/srt/managers/data_parallel_controller.py", line 27, in <module>
from sglang.srt.managers.io_struct import (
File "/sgl-workspace/sglang/python/sglang/srt/managers/io_struct.py", line 25, in <module>
from sglang.srt.managers.schedule_batch import BaseFinishReason
File "/sgl-workspace/sglang/python/sglang/srt/managers/schedule_batch.py", line 43, in <module>
from sglang.srt.configs.model_config import ModelConfig
File "/sgl-workspace/sglang/python/sglang/srt/configs/__init__.py", line 5, in <module>
from sglang.srt.configs.qwen2_5_vl_config import (
File "/sgl-workspace/sglang/python/sglang/srt/configs/qwen2_5_vl_config.py", line 1005, in <module>
AutoImageProcessor.register(Qwen2_5_VLConfig, None, Qwen2_5_VLImageProcessor, None)
File "/home/liyumin/.local/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py", line 628, in register
IMAGE_PROCESSOR_MAPPING.register(
File "/home/liyumin/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 833, in register
raise ValueError(f"'{key}' is already used by a Transformers model.")
ValueError: '<class 'sglang.srt.configs.qwen2_5_vl_config.Qwen2_5_VLConfig'>' is already used by a Transformers model.
```
### Reproduction
```
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V3 --tp 32 --dist-init-addr 10.168.16.121:5000 --nnodes 4 --node-rank 0 --trust-remote-code --host 0.0.0.0 --port 30000
```
### Environment
```
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/sgl-workspace/sglang/python/sglang/check_env.py", line 306, in <module>
check_env()
File "/sgl-workspace/sglang/python/sglang/check_env.py", line 285, in check_env
env_info.update(get_package_versions(PACKAGE_LIST))
File "/sgl-workspace/sglang/python/sglang/check_env.py", line 62, in get_package_versions
module = importlib.import_module(package_name)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/liyumin/.local/lib/python3.10/site-packages/sgl_kernel/__init__.py", line 12, in <module>
from sgl_kernel import common_ops
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
```
I am using ROCm. It seems `python3 -m sglang.check_env` has a bug too... | closed | 2025-03-20T13:47:28Z | 2025-03-20T20:39:07Z | https://github.com/sgl-project/sglang/issues/4629 | [] | chn-lee-yumi | 2 |
huggingface/transformers | tensorflow | 36,806 | Logic Errors in Image_processing_gemma3_fast.py | ### System Info
- `transformers` version: 4.50.0.dev0
- Platform: macOS-15.3.2-arm64-arm-64bit
- Python version: 3.12.9
- Huggingface_hub version: 0.29.3
- Safetensors version: 0.5.3
- Accelerate version: 1.5.2
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0 (False)
- Tensorflow version (GPU?): 2.19.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.10.4 (cpu)
- Jax version: 0.5.2
- JaxLib version: 0.5.1
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@amyeroberts
@qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Load the Gemma 3 model locally using a pipeline with an image as input.
2. Ensure the do_pan_and_scan option is set to False.
3. Run the script — the error appears when the model tries to process the image input.
### Expected behavior
It tries to process the image but encounters some logic errors; they are not major, but they are errors nonetheless:
`image_processing_gemma3_fast.py`
Line 357: The code references `images_list`, but this variable is defined only inside the `if do_pan_and_scan:` condition. When `do_pan_and_scan == False`, `images_list` is never initialized, resulting in an `UnboundLocalError`.
`image_text_to_text.py`
Line 84: Inside the `retrieve_images_in_messages()` function, the variable `idx_images` must be incremented even when the first `if` condition is met. Otherwise, the final check at line 105 throws an `IndexError` due to a mismatch in the expected number of images.
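The first error is the classic branch-only-assignment pitfall and can be reproduced in isolation (a standalone sketch, independent of transformers):

```python
def count_crops(images, do_pan_and_scan):
    # Mirrors the reported bug: images_list is bound only inside the
    # branch, so the do_pan_and_scan=False path raises UnboundLocalError.
    if do_pan_and_scan:
        images_list = [img for img in images]
    return [[0] for _ in images_list]

print(count_crops(["img0", "img1"], do_pan_and_scan=True))  # [[0], [0]]

try:
    count_crops(["img0"], do_pan_and_scan=False)
except UnboundLocalError as exc:
    print(type(exc).__name__)  # UnboundLocalError
```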
I implemented the following changes, which resolved the issues:
In `image_processing_gemma3_fast.py`, replace:
```python
num_crops = [[0] for images in images_list]
```
With:
```python
num_crops = [[0] for _ in image_list]
```
In the same file, replace all references to `images_list` with `image_list` after the `if do_pan_and_scan:` condition to ensure consistency.
In `image_text_to_text.py`, modify line 84 to increment `idx_images` inside the first `if` block:
```python
if key in content:
    retrieved_images.append(content[key])
    idx_images += 1  # Fix to ensure alignment in the list of images
```
| open | 2025-03-19T01:27:59Z | 2025-03-19T16:50:11Z | https://github.com/huggingface/transformers/issues/36806 | [
"bug",
"Vision",
"Processing"
] | javierchacon262 | 3 |
biolab/orange3 | pandas | 6,455 | how to rename the name of the widget icon on the canvas? | **What's your use case?**
I want to rename the name of the widget icon on the canvas programmlly.But I canot find a method for doing that.

| closed | 2023-05-26T07:21:50Z | 2023-05-26T08:28:56Z | https://github.com/biolab/orange3/issues/6455 | [] | leaf918 | 3 |
kornia/kornia | computer-vision | 2,928 | RandomMosaic not working with masks? | ### Describe the bug
/.conda/lib/python3.10/site-packages/kornia/augmentation/_2d/mix/base.py", line 124, in apply_non_transform_mask
raise NotImplementedError
NotImplementedError
### Reproduction steps
```bash
1. mosaic_mixup = kornia.augmentation.RandomMosaic(data_keys=['input','mask','mask'])
2. input_shape = [4, 640, 640], mask_shape = [4,640, 640], [4, 640, 640]
```
### Expected behavior
Cut out masks to match image crops and compose into mosaic augmentation
### Environment
```shell
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.31
```
### Additional context
_No response_ | open | 2024-06-14T11:11:12Z | 2024-06-19T23:58:01Z | https://github.com/kornia/kornia/issues/2928 | [
"help wanted"
] | Sapf3ar | 2 |
ludwig-ai/ludwig | computer-vision | 3,827 | Softmax missing from Torchvision models | **Describe the bug**
I'm training an image classifier with Ludwig's TorchVision models.
The original models have a softmax operator in the last layer but they are [removed](https://github.com/ludwig-ai/ludwig/blob/master/ludwig/encoders/image/torchvision.py#L123) because it doesn't belong in the encoder. However, the softmax layer is [never put back in the decoder](https://github.com/ludwig-ai/ludwig/blob/master/ludwig/decoders/generic_decoders.py#L177). Is this done intentionally?
I need to calculate the softmax of the output. There are 3 ways I can do this going forward:
- Add the softmax layer to the decoder
- Add the softmax layer when exporting the model to Torchscript, ONNX, or CoreML
- Leave things as is and calculate the softmax in the application
Here is the debug print statement of the model architecture. I removed most of it for conciseness.
```
ECD(
(input_features): LudwigFeatureDict(
(module_dict): ModuleDict(
(image_path__ludwig): ImageInputFeature(
(encoder_obj): TVEfficientNetEncoder(
(model): EfficientNet(
(features): Sequential(
(0): Conv2dNormActivation(
(0): Conv2d(3, 24, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
// --- removed for conciseness ---
(7): Conv2dNormActivation(
(0): Conv2d(256, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1280, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=1)
(classifier): Sequential(
(0): Dropout(p=0.2, inplace=True)
(1): Identity()
)
)
)
)
)
)
(output_features): LudwigFeatureDict(
(module_dict): ModuleDict(
(label__ludwig): CategoryOutputFeature(
(fc_stack): FCStack(
(stack): ModuleList()
)
(reduce_sequence_input): SequenceReducer(
(_reduce_obj): ReduceSum()
)
(decoder_obj): Classifier(
(dense): Dense(
(dense): Linear(in_features=1280, out_features=4, bias=True)
)
)
(train_loss_function): SoftmaxCrossEntropyLoss(
(loss_fn): CrossEntropyLoss()
)
)
)
)
(combiner): ConcatCombiner(
(fc_stack): FCStack(
(stack): ModuleList()
)
)
)
```
**To Reproduce**
Python file:
```
import logging
from ludwig.api import LudwigModel
CONFIG = "/auto-ml/ludwig.yaml"
def train_classifier_ludwig(df, save_dir, model_name):
    model = LudwigModel(CONFIG, logging_level=logging.INFO)
    model.train(
        dataset=df,
        output_directory=save_dir,
        experiment_name="ludwig",
        model_name=model_name,
        skip_save_processed_input=True,
    )
```
YAML file:
```
trainer:
  epochs: 100
  early_stop: 10
  use_mixed_precision: false
input_features:
  - name: image_path
    type: image
    preprocessing:
      num_processes: 4
    encoder:
      type: efficientnet
      use_pretrained: True
      trainable: True
      model_cache_dir: null
      model_variant: v2_m
      fc_layers:
        - output_size: 128
          dropout: 0.4
output_features:
  - name: label
    type: category
```
**Expected behavior**
When inferencing on an image classifier, the output probabilities should add to 1.
Example values I'm getting from an image classifier with 4 classes:
```
[-1.0383801 -1.1289184 3.9636617 -0.988309 ]
```
However, it should be:
```
[0.00659277 0.0060221 0.98045385 0.00693128]
```
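For the third option (computing the softmax in the application), a minimal standalone sketch that maps the logits above to the expected probabilities:

```python
import math

# Raw logits reported above for the 4-class classifier.
logits = [-1.0383801, -1.1289184, 3.9636617, -0.988309]

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
print(probs)  # roughly [0.00659, 0.00602, 0.98045, 0.00693], sums to 1
```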
**Environment:**
- OS: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.2
- Python 3.10.9
- Ludwig version: latest from master, sha=890f261fa947ed9485065844fe1bd5a35460f6f4
**Additional context**
I'm not sure if this is related, but there is a [SoftmaxCrossEntropyLoss](https://github.com/ludwig-ai/ludwig/blob/master/ludwig/modules/loss_modules.py#L154) module but it has no softmax operator in it. Is that intentional? Am I missing something here?
@skanjila, @ethanreidel, @arnavgarg1 | closed | 2023-12-13T04:33:24Z | 2024-10-20T02:45:26Z | https://github.com/ludwig-ai/ludwig/issues/3827 | [] | saad-palapa | 3 |
JaidedAI/EasyOCR | pytorch | 1,224 | AttributeError: module 'PIL.Image' has no attribute 'Resampling' | Hello,
I'm encountering an issue while using EasyOCR on my system. Despite updating Pillow to the recommended version, I still receive the following error when executing my script:
`Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
Traceback (most recent call last):
File "/Users/pierreburianne/Desktop/import easyocr.py", line 3, in <module>
result = reader.readtext('/Users/pierreburianne/Downloads/similar_frame_0.jpg')
File "/Users/pierreburianne/Library/Python/3.9/lib/python/site-packages/easyocr/easyocr.py", line 468, in readtext
result = self.recognize(img_cv_grey, horizontal_list, free_list,\
File "/Users/pierreburianne/Library/Python/3.9/lib/python/site-packages/easyocr/easyocr.py", line 383, in recognize
image_list, max_width = get_image_list(h_list, f_list, img_cv_grey, model_height = imgH)
File "/Users/pierreburianne/Library/Python/3.9/lib/python/site-packages/easyocr/utils.py", line 613, in get_image_list
crop_img,ratio = compute_ratio_and_resize(crop_img,width,height,model_height)
File "/Users/pierreburianne/Library/Python/3.9/lib/python/site-packages/easyocr/utils.py", line 576, in compute_ratio_and_resize
img = cv2.resize(img,(int(model_height*ratio),model_height),interpolation=Image.Resampling.LANCZOS)
File "/Users/pierreburianne/Library/Python/3.9/lib/python/site-packages/PIL/Image.py", line 77, in __getattr__
raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
AttributeError: module 'PIL.Image' has no attribute 'Resampling'`
Environment:
Python version: 3.9.6
EasyOCR version: 1.7.1
Pillow version: 9.5.0
I've tried updating Pillow, reinstalling libraries, and even recreating my virtual environment, but the issue persists.
Could you please assist me in resolving this issue?
Thank you very much for your help.
| open | 2024-03-09T15:44:19Z | 2024-04-05T05:29:10Z | https://github.com/JaidedAI/EasyOCR/issues/1224 | [] | pierreburnn | 2 |
hatchet-dev/hatchet | fastapi | 1,147 | Workflow continues to run after cancelled on dashboard | Description:
After cancelling a workflow on the dashboard, there is a warning: 'Thread 6330920960 with run id 3be4c09f-7ca1-482d-aeee-30fd50f9eb1c is still running after cancellation'. The code still continues to run until it finishes.
Logs:
```
[DEBUG] 🪓 -- 2024-12-23 15:34:48,817 - sending heartbeat
[INFO] 🪓 -- 2024-12-23 15:34:51,904 - rx: start step run: 3be4c09f-7ca1-482d-aeee-30fd50f9eb1c/first-python-workflow:step1
[DEBUG] 🪓 -- 2024-12-23 15:34:51,905 - tx: event: first-python-workflow:step1/1
[INFO] 🪓 -- 2024-12-23 15:34:51,905 - run: start step: first-python-workflow:step1/3be4c09f-7ca1-482d-aeee-30fd50f9eb1c
[DEBUG] 🪓 -- 2024-12-23 15:34:51,906 - tx: event: first-python-workflow:step1/1
INFO:root:executed step1
[DEBUG] 🪓 -- 2024-12-23 15:34:51,906 - start time: 0.0013298988342285156
0
[DEBUG] 🪓 -- 2024-12-23 15:34:52,826 - sending heartbeat
1
2
3
4
[DEBUG] 🪓 -- 2024-12-23 15:34:56,871 - sending heartbeat
5
[INFO] 🪓 -- 2024-12-23 15:34:57,469 - rx: cancel step run: 3be4c09f-7ca1-482d-aeee-30fd50f9eb1c
[INFO] 🪓 -- 2024-12-23 15:34:57,469 - cancel: step run: /3be4c09f-7ca1-482d-aeee-30fd50f9eb1c
[DEBUG] 🪓 -- 2024-12-23 15:34:57,470 - cancelling step...
6
[WARNING] 🪓 -- 2024-12-23 15:34:58,471 - Thread 6330920960 with run id 3be4c09f-7ca1-482d-aeee-30fd50f9eb1c is still running after cancellation. This could cause the thread pool to get blocked and prevent new tasks from running.
7
8
[DEBUG] 🪓 -- 2024-12-23 15:35:00,933 - sending heartbeat
9
10
11
12
[DEBUG] 🪓 -- 2024-12-23 15:35:04,938 - sending heartbeat
13
14
15
16
[DEBUG] 🪓 -- 2024-12-23 15:35:08,940 - sending heartbeat
17
18
19
[DEBUG] 🪓 -- 2024-12-23 15:35:12,944 - sending heartbeat
[DEBUG] 🪓 -- 2024-12-23 15:35:16,949 - sending heartbeat
[DEBUG] 🪓 -- 2024-12-23 15:35:21,017 - sending heartbeat
```
Expected behaviour:
It should stop logging new numbers after the workflow is cancelled.
Worker code for reproduction:
```
from hatchet_sdk import Context, Hatchet, ClientConfig
from dotenv import load_dotenv
import logging
logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger()
load_dotenv()
hatchet = Hatchet(
    debug=True,
    config=ClientConfig(
        logger=LOG,
    ),
)

@hatchet.workflow(name="first-python-workflow")
class MyWorkflow:
    @hatchet.step(retries=3)
    def step1(self, context: Context):
        LOG.info("executed step1")
        import time
        i = 0
        while i < 20:
            print(i)
            i += 1
            time.sleep(1)
        return {
            "result": "success"
        }

if __name__ == "__main__":
    worker = hatchet.worker('first-worker')
    worker.register_workflow(MyWorkflow())
    worker.start()
```
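The `while` loop in `step1` never looks at the cancellation signal, so once started it always runs to completion. Cooperative cancellation means checking a flag on every iteration; a standalone sketch using `threading.Event` as a stand-in for whatever signal the SDK exposes:

```python
import threading

def step1(cancel: threading.Event, n: int = 20) -> int:
    i = 0
    while i < n:
        if cancel.is_set():  # cooperative cancellation point
            break
        i += 1
    return i

cancel = threading.Event()
print(step1(cancel))  # 20: flag never set, loop runs to completion
cancel.set()
print(step1(cancel))  # 0: flag already set, loop exits immediately
```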
Configuration Details:
- Self hosted with https://docs.hatchet.run/self-hosting/docker-compose
- Hatched SDK version 0.42.5
- Python version: 3.11.9
- macOS version: 15.1.1
| closed | 2024-12-23T07:51:20Z | 2025-03-17T03:07:45Z | https://github.com/hatchet-dev/hatchet/issues/1147 | [] | kahkeong | 2 |
napari/napari | numpy | 6,761 | New labels annotation tool and tensorstore | ### 🐛 Bug Report
Using the new annotation tool in the labels layer with tensorstore doesn't provide any feedback when the saving operation is unsuccessful.
### 💡 Steps to Reproduce
A test showing interaction with a tensorstore array (numpy array for comparison).
The array has two slices to easily demonstrate re-reading of the annotation.
```
import napari
import zarr
import tensorstore as ts
import numpy as np
array_size = (2,2000, 2000)
np_array = np.zeros(array_size, dtype='uint32')
zarr_path = r'd://example.zarr'
z = zarr.zeros(array_size, chunks=(1,1000, 1000), dtype='uint32')
zarr.save(zarr_path, z)
spec = {
'driver': 'zarr',
'kvstore': {
'driver': 'file',
'path': zarr_path,
},
}
ts_array = ts.open(spec).result()
viewer = napari.Viewer()
viewer.add_labels(ts_array,name='ts')
viewer.add_labels(np_array,name='np')
```
https://github.com/napari/napari/assets/7549583/e30eb1d6-e657-403f-9584-c590c601597f
### 💡 Expected Behavior
Maybe an error message that saving to the zarr array failed in this situation?
### 🌎 Environment
napari: 0.5.0a2.dev606+gb3e15c51
Platform: Windows-10-10.0.19045-SP0
Python: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:27:34) [MSC v.1937 64 bit (AMD64)]
Qt: 5.15.2
PyQt5: 5.15.10
NumPy: 1.26.4
SciPy: 1.12.0
Dask: 2024.3.1
VisPy: 0.14.2
magicgui: 0.8.2
superqt: 0.6.2
in-n-out: 0.2.0
app-model: 0.2.5
npe2: 0.7.4
### 💡 Additional Context
It's related to performance: when the classical drawing tool (brush) is responsive, the new tool works correctly too. Yet this performance issue seems unavoidable with an HDD drive (in the example that I provide, a small 2k x 2k px array is not performant), and when the brush is used the problem is obvious. In contrast, with the new tool the annotation is displayed correctly but stored incorrectly in the underlying zarr.
| open | 2024-03-20T18:59:19Z | 2024-03-25T03:52:40Z | https://github.com/napari/napari/issues/6761 | [
"bug"
] | fjorka | 6 |
dpgaspar/Flask-AppBuilder | flask | 2,172 | Make Google OAuth login work for users created using `create-user` | Hey folks, it looks like there is no good way to restrict access to the app to a subset of users when using Google OAuth. What we are trying to achieve is to either restrict users to a particular domain (`@example.com`) or manually add new users using the `flask fab create-user` command.
The issue is that during OAuth, FAB set the `userinfo` for Google as:
```
return {
    "username": "google_" + data.get("id", ""),
    "first_name": data.get("given_name", ""),
    "last_name": data.get("family_name", ""),
    "email": data.get("email", ""),
}
```
and then when validating, it checks whether the username `google_<id>` exists in the database. If we create users manually, we only know the email address, not Google's user id. Typically we are doing:
```
flask fab create-user --username helloworld --email hello@example.com --firstname hello --lastname world
```
If we switch the database lookup to be based on both username and email, this issue can be resolved:
```
def auth_user_oauth(self, userinfo):
    username = None
    email = None
    user = None
    if "username" in userinfo:
        username = userinfo["username"]
        if username:
            user = self.find_user(username=username)
        if user is None and "email" in userinfo:
            email = userinfo["email"]
            if email:
                user = self.find_user(email=email)
    else:
        log.error("OAUTH userinfo does not have username or email %s", userinfo)
        return None
    # If username and email is empty, go away
    if not username and not email:
        return None
```
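To illustrate the proposed fallback outside of FAB, here is a minimal, self-contained sketch; `lookup_oauth_user` and `find_user` are illustrative stand-ins for the SecurityManager API, not the real implementation:

```python
def lookup_oauth_user(userinfo, find_user):
    """Look up a user by OAuth username first, then fall back to email."""
    username = userinfo.get("username")
    email = userinfo.get("email")
    if not username and not email:
        return None  # nothing usable in the OAuth payload
    # Primary lookup: the provider-prefixed username, e.g. "google_<id>"
    user = find_user(username=username) if username else None
    # Fallback: match on the email address known at create-user time
    if user is None and email:
        user = find_user(email=email)
    return user
```

With this fallback, a user created via `flask fab create-user --email hello@example.com` would still match even though their stored username is not `google_<id>`.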
### Environment
Flask-Appbuilder version: v4.3.10
### Describe the expected results
We should be able to let users created using `create-user` to login via OAuth
### Describe the actual results
The user is not able to log in; authentication fails because there is a conflict with the existing email address associated with the user we created manually.
### Steps to reproduce
Set up Google OAuth, and create the user using `flask fab create-user` before logging in.
PS: I can also send out a fix for this if the issue is accepted.
| open | 2023-11-25T23:15:36Z | 2023-11-25T23:54:10Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2172 | [] | jainankit | 0 |
marshmallow-code/flask-smorest | rest-api | 290 | Two tests failing on FreeBSD | The failures start at [line 520 in the log](https://pastebin.com/x1VMBnk1), but I think the cause of those errors is the same. Can you help me identify what's causing it, please? Thank you! | closed | 2021-10-11T07:19:29Z | 2021-10-11T13:56:50Z | https://github.com/marshmallow-code/flask-smorest/issues/290 | [] | mekanix | 2 |
bigscience-workshop/petals | nlp | 322 | How to specify lora parameters | When running an entire BLOOM model in a local environment, I can view the information of all layers and specify the `query_key_value` module in LoRA. But in Petals, the `(h)` layer becomes a RemoteSequential. How should I specify the target module in LoRA, like this:
```python
config = LoraConfig(
r=16,
lora_alpha=16,
target_modules=["query_key_value"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM"
)
```
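For context, PEFT resolves `target_modules` by matching module *names* (roughly by suffix), so the remote wrapper only matters insofar as it changes the names reported by `named_modules()`. A simplified, illustrative sketch of that matching rule (not PEFT's actual code):

```python
def matches_target(module_name, target_modules):
    """Simplified suffix matching, similar in spirit to PEFT's lookup."""
    return any(
        module_name == target or module_name.endswith("." + target)
        for target in target_modules
    )
```

So one way to debug this is to print `[n for n, _ in model.named_modules()]` and check whether any names still end in `query_key_value` once the model is wrapped by Petals.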
## print bloom in device:
<img width="668" alt="" src="https://github.com/bigscience-workshop/petals/assets/63391992/fa1ea5ea-329c-4945-b6a7-f03e58ff027c">
## print bloom in petals:
<img width="823" alt="" src="https://github.com/bigscience-workshop/petals/assets/63391992/f83ace4e-3bce-4d50-b4db-4649a9823918">
| open | 2023-06-04T04:31:18Z | 2023-08-30T04:12:44Z | https://github.com/bigscience-workshop/petals/issues/322 | [] | 01miaom | 1 |
huggingface/datasets | pytorch | 6,541 | Dataset not loading successfully. | ### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also filed this issue in the transformers library; please check it out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to reproduce the bug
## Reproduction
Hi, please check this code; when I run it, it shows the attribute error below.
```python
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration
# Select an audio file and read it:
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[0]["audio"]
waveform = audio_sample["array"]
sampling_rate = audio_sample["sampling_rate"]
# Load the Whisper model in Hugging Face format:
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
# Use the model and processor to transcribe the audio:
input_features = processor(
waveform, sampling_rate=sampling_rate, return_tensors="pt"
).input_features
# Generate token ids
predicted_ids = model.generate(input_features)
# Decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
transcription[0]
```
**Attribute Error**
```
AttributeError Traceback (most recent call last)
Cell In[9], line 6
4 # Select an audio file and read it:
5 ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
----> 6 audio_sample = ds[0]["audio"]
7 waveform = audio_sample["array"]
8 sampling_rate = audio_sample["sampling_rate"]
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2795, in Dataset.__getitem__(self, key)
2793 def __getitem__(self, key): # noqa: F811
2794 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2795 return self._getitem(key)
File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2780, in Dataset._getitem(self, key, **kwargs)
2778 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs)
2779 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2780 formatted_output = format_table(
2781 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2782 )
2783 return formatted_output
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:629, in format_table(table, key, formatter, format_columns, output_all_columns)
627 python_formatter = PythonFormatter(features=formatter.features)
628 if format_columns is None:
--> 629 return formatter(pa_table, query_type=query_type)
630 elif query_type == "column":
631 if key in format_columns:
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:396, in Formatter.__call__(self, pa_table, query_type)
394 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
395 if query_type == "row":
--> 396 return self.format_row(pa_table)
397 elif query_type == "column":
398 return self.format_column(pa_table)
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:437, in PythonFormatter.format_row(self, pa_table)
435 return LazyRow(pa_table, self)
436 row = self.python_arrow_extractor().extract_row(pa_table)
--> 437 row = self.python_features_decoder.decode_row(row)
438 return row
File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:215, in PythonFeaturesDecoder.decode_row(self, row)
214 def decode_row(self, row: dict) -> dict:
--> 215 return self.features.decode_example(row) if self.features else row
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1917, in Features.decode_example(self, example, token_per_repo_id)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
-> 1917 return {
1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1918, in <dictcomp>(.0)
1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1904 """Decode example with custom feature decoding.
1905
1906 Args:
(...)
1914 `dict[str, Any]`
1915 """
1917 return {
-> 1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1919 if self._column_requires_decoding[column_name]
1920 else value
1921 for column_name, (feature, value) in zip_dict(
1922 {key: value for key, value in self.items() if key in example}, example
1923 )
1924 }
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
1336 elif isinstance(schema, (Audio, Image)):
1337 # we pass the token to read and decode files from private repositories in streaming mode
1338 if obj is not None and schema.decode:
-> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1340 return obj
File /opt/pytorch/lib/python3.8/site-packages/datasets/features/audio.py:191, in Audio.decode_example(self, value, token_per_repo_id)
189 array = array.T
190 if self.mono:
--> 191 array = librosa.to_mono(array)
192 if self.sampling_rate and self.sampling_rate != sampling_rate:
193 array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:78, in attach.<locals>.__getattr__(name)
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
77 submod = importlib.import_module(submod_path)
---> 78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
83 if name == attr_to_modules[name]:
File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:77, in attach.<locals>.__getattr__(name)
75 elif name in attr_to_modules:
76 submod_path = f"{package_name}.{attr_to_modules[name]}"
---> 77 submod = importlib.import_module(submod_path)
78 attr = getattr(submod, name)
80 # If the attribute lives in a file (module) with the same
81 # name as the attribute, ensure that the attribute and *not*
82 # the module is accessible on the package.
File /usr/lib/python3.8/importlib/__init__.py:127, in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1014, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:991, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:975, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:671, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:848, in exec_module(self, module)
File <frozen importlib._bootstrap>:219, in _call_with_frames_removed(f, *args, **kwds)
File /opt/pytorch/lib/python3.8/site-packages/librosa/core/audio.py:13
11 import audioread
12 import numpy as np
---> 13 import scipy.signal
14 import soxr
15 import lazy_loader as lazy
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/__init__.py:323
314 from ._spline import ( # noqa: F401
315 cspline2d,
316 qspline2d,
(...)
319 symiirorder2,
320 )
322 from ._bsplines import *
--> 323 from ._filter_design import *
324 from ._fir_filter_design import *
325 from ._ltisys import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/_filter_design.py:16
13 from numpy.polynomial.polynomial import polyval as npp_polyval
14 from numpy.polynomial.polynomial import polyvalfromroots
---> 16 from scipy import special, optimize, fft as sp_fft
17 from scipy.special import comb
18 from scipy._lib._util import float_factorial
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/__init__.py:405
1 """
2 =====================================================
3 Optimization and root finding (:mod:`scipy.optimize`)
(...)
401
402 """
404 from ._optimize import *
--> 405 from ._minimize import *
406 from ._root import *
407 from ._root_scalar import *
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_minimize.py:26
24 from ._trustregion_krylov import _minimize_trust_krylov
25 from ._trustregion_exact import _minimize_trustregion_exact
---> 26 from ._trustregion_constr import _minimize_trustregion_constr
28 # constrained minimization
29 from ._lbfgsb_py import _minimize_lbfgsb
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/__init__.py:4
1 """This module contains the equality constrained SQP solver."""
----> 4 from .minimize_trustregion_constr import _minimize_trustregion_constr
6 __all__ = ['_minimize_trustregion_constr']
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/minimize_trustregion_constr.py:5
3 from scipy.sparse.linalg import LinearOperator
4 from .._differentiable_functions import VectorFunction
----> 5 from .._constraints import (
6 NonlinearConstraint, LinearConstraint, PreparedConstraint, strict_bounds)
7 from .._hessian_update_strategy import BFGS
8 from .._optimize import OptimizeResult
File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_constraints.py:8
6 from ._optimize import OptimizeWarning
7 from warnings import warn, catch_warnings, simplefilter
----> 8 from numpy.testing import suppress_warnings
9 from scipy.sparse import issparse
12 def _arr_to_scalar(x):
13 # If x is a numpy array, return x.item(). This will
14 # fail if the array has more than one element.
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/__init__.py:11
8 from unittest import TestCase
10 from . import _private
---> 11 from ._private.utils import *
12 from ._private.utils import (_assert_valid_refcount, _gen_alignment_data)
13 from ._private import extbuild, decorators as dec
File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/_private/utils.py:480
476 pprint.pprint(desired, msg)
477 raise AssertionError(msg.getvalue())
--> 480 @np._no_nep50_warning()
481 def assert_almost_equal(actual,desired,decimal=7,err_msg='',verbose=True):
482 """
483 Raises an AssertionError if two items are not equal up to desired
484 precision.
(...)
548
549 """
550 __tracebackhide__ = True # Hide traceback for py.test
File /opt/pytorch/lib/python3.8/site-packages/numpy/__init__.py:313, in __getattr__(attr)
305 raise AttributeError(__former_attrs__[attr])
307 # Importing Tester requires importing all of UnitTest which is not a
308 # cheap import Since it is mainly used in test suits, we lazy import it
309 # here to save on the order of 10 ms of import time for most users
310 #
311 # The previous way Tester was imported also had a side effect of adding
312 # the full `numpy.testing` namespace
--> 313 if attr == 'testing':
314 import numpy.testing as testing
315 return testing
AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
```
### Expected behavior
``` ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ```
Also, this script comes from your official documentation, so please update it there:
[script](https://huggingface.co/docs/transformers/model_doc/whisper)
### Environment info
**System Info**
* transformers -> 4.36.1
* datasets -> 2.15.0
* huggingface_hub -> 0.19.4
* python -> 3.8.10
* accelerate -> 0.25.0
* pytorch -> 2.0.1+cpu
* Using GPU in Script -> No
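For what it's worth, `np._no_nep50_warning` is private NumPy API that only exists in newer releases (roughly 1.24+; the exact bound here is an assumption), so this usually means the installed SciPy was built against a newer NumPy than the one actually present. A tiny version-comparison sketch:

```python
def version_tuple(v):
    """Parse a 'X.Y.Z' version string into a comparable tuple of ints."""
    parts = []
    for p in v.split("."):
        if not p.isdigit():
            break  # stop at pre-release suffixes like 'rc1'
        parts.append(int(p))
    return tuple(parts)

def has_nep50_machinery(numpy_version):
    # Assumption: the NEP 50 helpers appeared around NumPy 1.24.
    return version_tuple(numpy_version) >= (1, 24)
```

Upgrading NumPy, or pinning SciPy to a version that matches the installed NumPy, should make the attribute error go away.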
| closed | 2023-12-29T01:35:47Z | 2024-01-17T00:40:46Z | https://github.com/huggingface/datasets/issues/6541 | [] | hisushanta | 4 |
babysor/MockingBird | deep-learning | 933 | RuntimeError: Error(s) in loading state_dict for Tacotron: size mismatch for encoder_proj.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([128, 1024]). size mismatch for decoder.attn_rnn.weight_ih: copying a param with shape torch.Size([384, 768]) from checkpoint, the shape in current model is torch.Size([384, 1280]). size mismatch for decoder.rnn_input.weight: copying a param with shape torch.Size([1024, 640]) from checkpoint, the shape in current model is torch.Size([1024, 1152]). size mismatch for decoder.stop_proj.weight: copying a param with shape torch.Size([1, 1536]) from checkpoint, the shape in current model is torch.Size([1, 2048]). | **Summary (one sentence)**
A clear and concise description of what the issue is.
**Env & To Reproduce**
Describe the environment you used, the code version, and the model.
**Screenshots (if any)**
If applicable, add screenshots to help explain the issue.
| closed | 2023-07-05T09:07:58Z | 2023-07-10T08:16:15Z | https://github.com/babysor/MockingBird/issues/933 | [] | Adolph3671 | 1 |
gevent/gevent | asyncio | 1,221 | gevent 1.3.2 fail to install on centos:7 docker image | * gevent version: 1.3.2
* Python version: python 2.7.5 from centos:7 docker image
* Operating System: docker image
### Description:
`pip install` fails on the dockerized version of CentOS 7. More information and steps to reproduce the problem are below.
```
[root@94b6a831e82b tmp]# pip install gevent==1.3.2
Collecting gevent==1.3.2
Using cached https://files.pythonhosted.org/packages/62/85/3a75fa15a5375506a6617c1ce706ea800f016ca2be1a87165f1ab5aff3a2/gevent-1.3.2.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-n5GstP/gevent/setup.py", line 417, in <module>
run_setup(EXT_MODULES, run_make=_BUILDING)
File "/tmp/pip-build-n5GstP/gevent/setup.py", line 401, in run_setup
"signal_os_incompat = gevent.monkey:_subscribe_signal_os",
File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 601, in resolve
requirements = list(requirements)[::-1] # set up the stack
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2839, in parse_requirements
line, p, specs = scan_list(VERSION,LINE_END,line,p,(1,2),"version spec")
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2817, in scan_list
"Expected ',' or end-of-list in",line,"at",line[p:]
ValueError: ("Expected ',' or end-of-list in", "cffi >= 1.11.5 ; sys_platform == 'win32' and platform_python_implementation == 'CPython'", 'at', " ; sys_platform == 'win32' and platform_python_implementation == 'CPython'")
```
### What I've run:
```
From the host:
docker pull centos:7
docker run -ti --rm centos:7
From the container:
yum install -y epel-release && yum install -y python-pip
pip install gevent==1.3.2
```
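For anyone hitting this: the string setuptools chokes on is a PEP 508 requirement with an environment marker after the `;`, which the ancient setuptools shipped by CentOS 7 predates (so `pip install --upgrade pip setuptools` is the usual workaround). A minimal, hand-rolled illustration of the split it fails to perform (not the real parser):

```python
def split_requirement(req):
    """Split 'name specifier ; marker' into (requirement, marker-or-None)."""
    requirement, _, marker = req.partition(";")
    return requirement.strip(), (marker.strip() or None)
```

A pre-PEP-508 setuptools treats the whole string as one version spec and raises the `Expected ',' or end-of-list` error seen above.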
| closed | 2018-05-29T20:45:21Z | 2018-05-30T11:46:03Z | https://github.com/gevent/gevent/issues/1221 | [] | dantonpimentel | 2 |
fastapi/sqlmodel | fastapi | 909 | Add an overload to the `exec` method with `_Executable` statement for update and delete statements | I think we should add an overload to the `exec` method to still have the possibility of passing an `_Executable` statement:
```python
@overload
def exec(
self,
statement: _Executable,
*,
params: Optional[Union[Mapping[str, Any], Sequence[Mapping[str, Any]]]] = None,
execution_options: Mapping[str, Any] = util.EMPTY_DICT,
bind_arguments: Optional[Dict[str, Any]] = None,
_parent_execute_state: Optional[Any] = None,
_add_event: Optional[Any] = None,
) -> TupleResult[_TSelectParam]:
...
```
_Originally posted by @joachimhuet in https://github.com/tiangolo/sqlmodel/discussions/831#discussioncomment-9234181_ | open | 2024-04-26T19:00:18Z | 2025-02-26T20:10:57Z | https://github.com/fastapi/sqlmodel/issues/909 | [] | joachimhuet | 11 |
microsoft/nni | pytorch | 4,905 | aten::upsample_nearest2d is not Supported! | **Describe the issue**:
[2022-06-01 15:25:48] INFO (FixMaskConflict/MainThread) dim0 sparsity: 0.794792
[2022-06-01 15:25:48] INFO (FixMaskConflict/MainThread) dim1 sparsity: 0.000000
[2022-06-01 15:25:48] INFO (FixMaskConflict/MainThread) Dectected conv prune dim" 0
[2022-06-01 15:25:49] INFO (nni.compression.pytorch.speedup.compressor/MainThread) infer module masks...
[2022-06-01 15:25:49] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for .aten::upsample_nearest2d.126
[2022-06-01 15:25:49] ERROR (nni.compression.pytorch.speedup.jit_translate/MainThread) aten::upsample_nearest2d is not Supported! Please report an issue at https://github.com/microsoft/nni. Thanks~
[2022-06-01 15:25:49] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for fc
**Environment**: Ubuntu 18.04
- NNI version: 2.7
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version: 3.9.7
- PyTorch/TensorFlow version: PyTorch 1.7.1
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [11], in <cell line: 6>()
4 # speedup the model
5 from nni.compression.pytorch.speedup import ModelSpeedup
----> 6 ModelSpeedup(m, torch.rand(1, 3, 256, 256).to('cuda:0'), masks).speedup_model()
File /AN/lib/python3.9/site-packages/nni/compression/pytorch/speedup/compressor.py:512, in ModelSpeedup.speedup_model(self)
509 fix_mask_conflict(self.masks, self.bound_model, self.dummy_input)
511 _logger.info("infer module masks...")
--> 512 self.infer_modules_masks()
513 _logger.info('resolve the mask conflict')
515 # load the original stat dict before replace the model
File /AN/lib/python3.9/site-packages/nni/compression/pytorch/speedup/compressor.py:355, in ModelSpeedup.infer_modules_masks(self)
353 curnode = visit_queue.get()
354 # forward mask inference for curnode
--> 355 self.update_direct_sparsity(curnode)
356 successors = self.torch_graph.find_successors(curnode.unique_name)
357 for successor in successors:
File /AN/lib/python3.9/site-packages/nni/compression/pytorch/speedup/compressor.py:223, in ModelSpeedup.update_direct_sparsity(self, node)
221 weight_mask = self.masks[module_name]
222 _, module = get_module_by_name(self.bound_model, module_name)
--> 223 _auto_infer = AutoMaskInference(
224 module, dummy_input, in_masks, weight_mask, in_constants=in_constants,
225 state_dict=copy.deepcopy(module.state_dict()), batch_dim=self.batch_dim)
226 self.auto_inferences[unique_name] = _auto_infer
227 _auto_infer.name = node.unique_name
File /AN/lib/python3.9/site-packages/nni/compression/pytorch/speedup/infer_mask.py:80, in AutoMaskInference.__init__(self, module, dummy_input, in_masks, weight_mask, output_mask, name, in_constants, state_dict, batch_dim)
76 self.in_masks[in_id] = torch.ones_like(self.dummy_input[in_id])
77 # ones_like will put the created mask on the same device with the dummy_input
78
79 # Initialize the mask for output tensors
---> 80 self.output = self.module(*dummy_input)
81 # self.output.requires_grad_()
82 if output_mask is not None:
83 # assume the given output mask is right
File /AN/lib/python3.9/site-packages/torch/nn/modules/module.py:727, in Module._call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
730 self._forward_hooks.values()):
731 hook_result = hook(self, input, result)
TypeError: forward() missing 1 required positional argument: 'input'
**How to reproduce it?**: | closed | 2022-06-01T07:30:43Z | 2022-06-10T09:24:59Z | https://github.com/microsoft/nni/issues/4905 | [] | TomatoBoy90 | 2 |