| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ray-project/ray | python | 51,402 | Two IPs for Ray worker nodes: one for in-cluster communication and another for communication within the node itself? | ### Description
I will use a Tailscale network to provide connectivity for my Ray cluster.
The Ray worker nodes run inside containers (not in the privileged mode), utilizing Tailscale Userspace Networking Mode with a SOCKS5 proxy for connectivity. Each Ray worker node can communicate with other nodes using their Tailscale IPs but cannot access itself via its own Tailscale IP.
Can we configure a worker node to have two different IPs when joining the cluster: one (Tailscale IP) for in-cluster communication and another (localhost) for communication within the node itself?

### Use case
_No response_ | open | 2025-03-15T23:04:30Z | 2025-03-19T22:17:51Z | https://github.com/ray-project/ray/issues/51402 | [
"enhancement",
"P2",
"core"
] | rxsalad | 1 |
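A minimal sketch of the address-selection behavior being requested (the function name and shape are hypothetical, not part of Ray's API): traffic addressed to the node itself would use loopback, while traffic to peer nodes would use their Tailscale IPs.

```python
def pick_peer_address(self_node_id, peer_node_id, peer_tailscale_ip):
    """Sketch: prefer loopback for same-node traffic, Tailscale IP otherwise."""
    if peer_node_id == self_node_id:
        return "127.0.0.1"  # in-node communication bypasses the SOCKS5 proxy
    return peer_tailscale_ip  # in-cluster communication goes over Tailscale
```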
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 237 | NameError: name 'GlobalGenerator_v2' is not defined | Hello, when I run train_domain_B.py, this error occurs:
```
Traceback (most recent call last):
  File "train_domain_B.py", line 48, in <module>
    model = create_model(opt)
  File "D:\pythonProject\6_18\BringOldBack\Global\models\models.py", line 17, in create_model
    model.initialize(opt)
  File "D:\pythonProject\6_18\BringOldBack\Global\models\pix2pixHD_model.py", line 40, in initialize
    opt.n_blocks_local, opt.norm, gpu_ids=self.gpu_ids, opt=opt)
  File "D:\pythonProject\6_18\BringOldBack\Global\models\networks.py", line 60, in define_G
    netG = GlobalGenerator_v2(input_nc, output_nc, ngf, k_size, n_downsample_global, n_blocks_global, norm_layer, opt=opt)
NameError: name 'GlobalGenerator_v2' is not defined
```
How did you solve this error? Could you please share the fix with me? Thank you. | closed | 2022-06-18T08:35:12Z | 2022-06-22T01:05:05Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/237 | [] | CodeMadUser | 0 |
paperless-ngx/paperless-ngx | machine-learning | 7,713 | [BUG] User is able to delete pages even with read-only permissions | ### Description
I have a user who basically only has view permissions. This user can open and view a page. The delete button is not visible, which is correct.
**But!** It is possible to remove individual pages from the original PDF document using the Actions menu.
The correct behavior should be that even file manipulations are not possible.
### Steps to reproduce
- Assign view-only rights to a user
- Login with this user
- use the action to delete pages from an existing document
### Webserver logs
```bash
none
```
### Browser logs
_No response_
### Paperless-ngx version
2.12.0
### Host OS
Alpine Linux
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.12.0",
"server_os": "Linux-6.6.51-0-virt-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 2274262908928,
"available": 2270932762624
},
"database": {
"type": "sqlite",
"url": "/usr/src/paperless/data/db.sqlite3",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0001_initial_squashed_0009_mailrule_assign_tags",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-09-15T17:11:24.354256+02:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2024-09-15T22:05:05.321619Z",
"classifier_error": null
}
}
```
### Browser
Firefox
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-09-15T22:33:06Z | 2024-10-17T03:08:26Z | https://github.com/paperless-ngx/paperless-ngx/issues/7713 | [
"not a bug"
] | marzlberger | 9 |
vastsa/FileCodeBox | fastapi | 187 | Hope for support of Cloudflare R2 storage | Since AWS S3 is already supported, I hope Cloudflare R2 can be supported as well; the two are basically compatible. | closed | 2024-07-27T02:46:30Z | 2024-09-04T12:25:52Z | https://github.com/vastsa/FileCodeBox/issues/187 | [] | hbestm | 3 |
giotto-ai/giotto-tda | scikit-learn | 678 | simplex index 9223716361969802160 in filtration is larger than maximum index 36028797018963967 | **Describe the bug**
```
  File "/home/server/.local/lib/python3.10/site-packages/gtda/homology/simplicial.py", line 1126, in _weak_alpha_diagram
    Xdgms = ripser(dm, maxdim=self._max_homology_dimension,
  File "/home/server/.local/lib/python3.10/site-packages/gph/python/ripser_interface.py", line 603, in ripser_parallel
    res = _compute_ph_vr_sparse(
  File "/home/server/.local/lib/python3.10/site-packages/gph/python/ripser_interface.py", line 51, in _compute_ph_vr_sparse
    ret = gph_ripser.rips_dm_sparse(I, J, V, I.size, N, coeff,
OverflowError: simplex index 9223716361969802160 in filtration is larger than maximum index 36028797018963967
```
**To reproduce**
Steps to reproduce the behavior:
A point cloud with ~1 million points triggers this.
| open | 2023-09-07T09:02:43Z | 2024-09-19T13:22:34Z | https://github.com/giotto-ai/giotto-tda/issues/678 | [
"bug"
] | jbeuria | 3 |
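A common workaround for filtration-index overflows on very large point clouds is to subsample before computing persistence. This is only a sketch under the assumption that a uniform random subsample preserves enough topology for the application; it is not a fix inside giotto-tda itself.

```python
import random

def subsample_points(points, max_points=100_000, seed=0):
    """Return at most max_points points, sampled uniformly without replacement."""
    points = list(points)
    if len(points) <= max_points:
        return points
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    return rng.sample(points, max_points)
```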
httpie/cli | api | 514 | Sending a POST request whose data includes `&` | #### When the request data includes `&`, piping through `pp_json` does not work

| closed | 2016-08-26T13:38:02Z | 2016-08-26T14:32:58Z | https://github.com/httpie/cli/issues/514 | [] | linglingqi007 | 2 |
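The `&` character is special both to the shell and inside URL-encoded form data, so it typically needs quoting or percent-encoding. A hedged illustration with Python's standard library (not httpie itself) of how a literal `&` inside a value is encoded:

```python
from urllib.parse import quote, urlencode

# '&' separates form fields, so a literal '&' inside a value must be encoded.
encoded_value = quote("a&b", safe="")
encoded_form = urlencode({"q": "a&b"})
```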
FactoryBoy/factory_boy | django | 1,046 | post_generation hook appears to clash with Trait if they override the same attribute and both are called on create(). | #### Description
When a post_generation hook wraps a field name that is also overriden by a Trait, and both are called on .create(), then none of them appear to have their desired effect.
#### To Reproduce
Should happen just by running the provided code.
##### Model / Factory code
```python
# -------------------------- models.py --------------------------
class Category(models.Model):
name = models.CharField(max_length=255, unique=True)
slug = models.SlugField(max_length=255, unique=True)
available = models.BooleanField(default=True)
class Meta:
verbose_name = 'category'
verbose_name_plural = 'categories'
indexes = [models.Index(fields=['name'])]
def __str__(self):
return self.name
class Product(models.Model):
categories = models.ManyToManyField(
Category,
through='ProductInCategory',
related_name='products'
)
name = models.CharField(max_length=150)
slug = models.SlugField(max_length=255)
description = models.TextField(blank=True)
image = models.ImageField(upload_to='images/products/')
price = models.DecimalField(max_digits=7, decimal_places=2)
is_active = models.BooleanField(default=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
ordering = ('-created_at',)
indexes = [
models.Index(fields=['name']),
]
def __str__(self):
return f'{self.name}'
class ProductInCategory(models.Model):
'''
Intermediate model for many-to-many relationship between Category and Product.
'''
category = models.ForeignKey(Category, null=True, on_delete=models.SET_NULL)
product = models.ForeignKey(Product, null=True, on_delete=models.SET_NULL)
class Meta:
unique_together = ('category', 'product')
# -------------------------- factories.py --------------------------
class CategoryFactory(DjangoModelFactory):
'''
Category model factory.
Allows associated models at build time through the following keywords:
- Product:
- `products:` as an override.
- `with_products:` trait for automatic creation.
'''
class Meta:
model = 'shop.Category'
django_get_or_create = ('name',)
class Params:
with_products = factory.Trait(
products=factory.RelatedFactoryList(
'apps.shop.tests_v2.factories.ProductInCategoryFactory',
'category',
size=lambda: random.randint(1, 3),
))
name = factory.Sequence(lambda n: f'Category {n}')
slug = factory.LazyAttribute(lambda o: slugify(o.name))
available = True
@factory.post_generation
def products(self, create, extracted, **kwargs):
'''
Catch `products` keyword override at build/creation time.
'''
if not (create and extracted):
return
self.products.add(*extracted)
class ProductFactory(DjangoModelFactory):
'''
Product model factory.
'''
class Meta:
model = 'shop.Product'
django_get_or_create = ('name',)
name = factory.Sequence(lambda n: f'Product {n}')
slug = factory.LazyAttribute(lambda o: slugify(o.name))
description = factory.Faker('text')
image = factory.django.ImageField()
price = factory.Faker('pydecimal', left_digits=2, right_digits=2, positive=True)
is_active = True
class ProductInCategoryFactory(DjangoModelFactory):
'''
Product <--> Category relationship intermediate model factory.
'''
class Meta:
model = 'shop.ProductInCategory'
django_get_or_create = ('category', 'product')
category = factory.SubFactory(CategoryFactory)
product = factory.SubFactory(ProductFactory)
```
##### The issue
When used in isolation, `CategoryFactory.create(products=[...])` or `CategoryFactory.create(with_products=True)` work as expected, that is to say: the first one uses the provided products list and sets them to the Category model object, and the second one creates new products for the Category model object. But when used together as `CategoryFactory.create(with_products=True, products=[...])` then the resulting category object has no related products at all. I would understand if one were to override the other and the result was only one of the previous examples, but this seemed like a bug. Am I doing something wrong?
```python
n_products = random.randint(1, 10)
some_products = ProductFactory.create_batch(n_products)
category = CategoryFactory.create(with_products=True, products=some_products)
assert category.products.exists() # Fails.
assert category.products.count() > 0 # Also fails.
```
Edit 1: Apologies, I forgot to add the Category <-> Product intermediate model from models.py, it's there now.
Edit 2: Please note, I don't think it's because of where the M2M field is placed, since I tried it the other way around (setting the post_generation hook and the Trait on ProductFactory) and it happened that way as well. | open | 2023-09-29T23:22:27Z | 2023-09-29T23:30:14Z | https://github.com/FactoryBoy/factory_boy/issues/1046 | [] | kvothe9991 | 0 |
microsoft/unilm | nlp | 1,019 | Any timelines about Kosmos-1? | Hi, just came across https://arxiv.org/abs/2302.14045 and the paper references this repository. Wondering if Kosmos-1 will eventually make its way here? Any timelines would be much appreciated. That being said, thank you so much for your hard work and making this repository public in the first place! | open | 2023-03-10T21:19:07Z | 2024-02-02T01:30:10Z | https://github.com/microsoft/unilm/issues/1019 | [] | nikhilweee | 3 |
plotly/dash-table | plotly | 761 | Regression: dash-loading is no longer set, CSS spinners cannot be used | In release 4.6.1 (with dash 1.10), the div with class name 'dash-spreadsheet-inner' would add the class name "dash-loading" while the table was loading. Release 4.6.2 (with dash 1.11) no longer sets this class property.
This property is useful because dcc.Loading does not work well with dash-tables / DataTables. The dcc.Loading spinner is triggered when any callback affecting the table is triggered, which is often not the desired behavior (e.g. spinner is desired when table data is updated via a database call, but not when a cell is clicked).
I can't make sense of the organization of typescript projects, so I haven't been able to determine what change caused this regression.
| open | 2020-04-22T22:56:32Z | 2020-04-22T22:56:32Z | https://github.com/plotly/dash-table/issues/761 | [] | noisycomputation | 0 |
tfranzel/drf-spectacular | rest-api | 1,088 | There should be a separate description like responses. | when we write a description it's override the docstring, for response there is separate description available but for request it's not. | closed | 2023-10-17T18:32:07Z | 2023-11-01T08:34:54Z | https://github.com/tfranzel/drf-spectacular/issues/1088 | [] | bheemnitd | 1 |
ExpDev07/coronavirus-tracker-api | fastapi | 37 | Using your api for a simple visualization | Hi,
I used your API to create a simple visualization [here](https://github.com/majidkorai/corona-viz). | closed | 2020-03-13T20:21:10Z | 2020-04-21T03:40:24Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/37 | [
"user-created"
] | majidkorai | 1 |
graphistry/pygraphistry | pandas | 42 | Visualizing DNA variation graphs | @lmeyerov
[vg](https://github.com/ekg/vg) is a system for working with sequence graphs that represent populations of genomes. There is wide support for this idea in genomics, and it is on the cusp of use in production contexts that are not well served by exisiting approaches based around a single reference genome sequence (such as the human MHC or in species with high diversity, like mosquitoes).
I have developed [techniques to visualize variation graphs](https://github.com/ekg/vg/wiki/visualization), but these rely on graph subsetting operations to visualize larger graphs, and eventually meaningful examination of an entire graph breaks down. I'd be interested in seeing what Graphistry is doing to handle this kind of use!
| closed | 2015-11-16T09:39:33Z | 2017-02-22T04:12:49Z | https://github.com/graphistry/pygraphistry/issues/42 | [
"question"
] | ekg | 1 |
jina-ai/clip-as-service | pytorch | 5 | Undefined names: 'ident' and 'start' | [flake8](http://flake8.pycqa.org) testing of https://github.com/hanxiao/bert-as-service on Python 3.7.1
$ __flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics__
```
./service/server.py:176:44: F821 undefined name 'ident'
worker.send_multipart([ident, b'', pickle.dumps(self.result)])
^
./service/server.py:178:55: F821 undefined name 'start'
time_used = time.perf_counter() - start
^
./service/server.py:180:46: F821 undefined name 'ident'
(num_result, ident, time_used, int(num_result / time_used)))
^
./bert/tokenization.py:40:31: F821 undefined name 'unicode'
elif isinstance(text, unicode):
^
./bert/tokenization.py:63:31: F821 undefined name 'unicode'
elif isinstance(text, unicode):
^
5 F821 undefined name 'unicode'
5
``` | closed | 2018-11-13T19:13:36Z | 2018-11-14T11:49:35Z | https://github.com/jina-ai/clip-as-service/issues/5 | [] | cclauss | 4 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 341 | [BUG] get_tiktok_video_data returns unrelated video information | ***Platform where the error occurred:***
Douyin/TikTok
***Endpoint where the error occurred:***
API-V2
***Submitted input value:***
Certain TikTok video links
***Did you retry multiple times?***
Yes; the error still existed X amount of time after it first occurred.
***Have you read this project's README or the API documentation?***
Yes, and I am fairly certain the problem is caused by the program.

The code in the red box in the screenshot should add a check on `video_id`, because the video may no longer exist, which causes the API to return other, unrelated data.
Code similar to the following could be added:
```python
for video in json_data['aweme_list']:
    if video['aweme_id'] == video_id:
        return video
```
| closed | 2024-03-26T12:05:33Z | 2024-03-28T05:40:06Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/341 | [
"BUG"
] | guihui123 | 2 |
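A self-contained version of the loop sketched in the issue body above (the `None` fallback is an assumption about how a missing/deleted video should be reported; it is not part of the project's code):

```python
def find_video(aweme_list, video_id):
    """Return the entry whose aweme_id matches video_id, or None if absent."""
    for video in aweme_list:
        if video.get("aweme_id") == video_id:
            return video
    return None  # video deleted or never existed; the caller can surface an error
```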
microsoft/nni | tensorflow | 5,433 | count_flops_params in NAS returns 0 | **Describe the issue**:
When I use `count_flops_params` or `thop.profile` in NAS with
`import nni.retiarii.nn.pytorch as nn`,
the FLOPs/parameters returned are 0.
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**: | closed | 2023-03-11T12:14:48Z | 2023-04-06T12:40:58Z | https://github.com/microsoft/nni/issues/5433 | [] | kingxp | 4 |
deepinsight/insightface | pytorch | 1,858 | RuntimeError: mat1 dim 1 must match mat2 dim 0 | Traceback (most recent call last):
File "train.py", line 141, in <module>
main(parser.parse_args())
File "train.py", line 110, in main
features = F.normalize(backbone(img))
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 619, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/bahy/GIABAO/insightface_1/recognition/arcface_torch/backbones/iresnet.py", line 152, in forward
x = self.fc(x.float() if self.fp16 else x)
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 93, in forward
return F.linear(input, self.weight, self.bias)
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/site-packages/torch/nn/functional.py", line 1690, in linear
ret = torch.addmm(bias, input, weight.t())
RuntimeError: mat1 dim 1 must match mat2 dim 0
Traceback (most recent call last):
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/home/bahy/anaconda3/envs/fs_pytorch/lib/python3.6/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/home/bahy/anaconda3/envs/fs_pytorch/bin/python', '-u', 'train.py', '--local_rank=0', 'configs/ms1mv3_r50']' returned non-zero exit status 1.
Hi, I ran training and got this error.
Can anyone help me? :( | closed | 2021-12-14T01:37:44Z | 2021-12-14T08:18:53Z | https://github.com/deepinsight/insightface/issues/1858 | [] | hai19991331 | 0 |
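The error comes from the matrix-multiplication shape rule behind `torch.addmm`: for `A @ B`, the inner dimensions must agree. A small generic illustration (not insightface-specific code):

```python
def matmul_shape(shape_a, shape_b):
    """Return the output shape of A @ B, or raise like the reported error."""
    if shape_a[1] != shape_b[0]:
        raise ValueError(
            f"mat1 dim 1 ({shape_a[1]}) must match mat2 dim 0 ({shape_b[0]})"
        )
    return (shape_a[0], shape_b[1])
```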
deepset-ai/haystack | nlp | 9,076 | In SuperComponent utils update `_is_compatible(type1, type2)` to return the common type | In SuperComponents during the validation of an `input_mapping` provided by a user we check if the types of the combined inputs are compatible using the `_is_compatible` utility function. `_is_compatible` works by checking if the two types have some overlapping common type rather than using strict type validation.
This is helpful because it can quickly alert a user if a mapping is not possible due to an incompatible type.
However, after this compatibility check we then assign one of the types (e.g. `type1` or `type2`) as the overall type of the input socket to the SuperComponent. This isn't 100% accurate because we should use the overlapping type of the two types.
So my suggestion would be to expand on `_is_compatible` to also return the detected overlapping type between type1 and type2 which we could use to assign as the overall type for that input socket. | open | 2025-03-20T07:15:04Z | 2025-03-21T14:29:14Z | https://github.com/deepset-ai/haystack/issues/9076 | [
"P2"
] | sjrl | 0 |
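A sketch of what an extended `_is_compatible` might return (a hypothetical implementation — the real utility would need to handle more cases): treat each type as a set of alternatives, intersect them, and rebuild a `Union` when more than one alternative overlaps.

```python
from typing import Any, Union, get_args, get_origin

def _alternatives(t):
    """View a type as its set of alternatives (Union members, or itself)."""
    return set(get_args(t)) if get_origin(t) is Union else {t}

def common_type(t1, t2):
    """Return the overlapping type of t1 and t2, or None if incompatible."""
    if t1 is Any:
        return t2
    if t2 is Any:
        return t1
    overlap = _alternatives(t1) & _alternatives(t2)
    if not overlap:
        return None  # incompatible: no shared alternative
    if len(overlap) == 1:
        return next(iter(overlap))
    return Union[tuple(overlap)]
```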
microsoft/nni | machine-learning | 5,354 | Could not view any information on Trial details. | **Describe the issue**:
I am using NNI for hyperparameter optimization. In the **Overview** tab, I can see information like `Duration`, etc. But one strange thing is that I could not see any update in the **# Trial numbers** section even though my experiments have been running for the last `sixteen` hours. Second, the **Trial details** tab is still blank. Moreover, in the `dispatcher.log` I can see the following error:
```
[2023-02-14 19:57:14] INFO (nni.tuner.tpe/MainThread) Using random seed 2064954602
[2023-02-14 19:57:14] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started
[2023-02-14 19:57:14] ERROR (nni.runtime.msg_dispatcher_base/Thread-1) '_type'
Traceback (most recent call last):
File "/home/anafees/.local/lib/python3.9/site-packages/nni/runtime/msg_dispatcher_base.py", line 108, in command_queue_worker
self.process_command(command, data)
File "/home/anafees/.local/lib/python3.9/site-packages/nni/runtime/msg_dispatcher_base.py", line 154, in process_command
command_handlers[command](data)
File "/home/anafees/.local/lib/python3.9/site-packages/nni/runtime/msg_dispatcher.py", line 90, in handle_initialize
self.tuner.update_search_space(data)
File "/home/anafees/.local/lib/python3.9/site-packages/nni/algorithms/hpo/tpe_tuner.py", line 169, in update_search_space
self.space = format_search_space(space)
File "/home/anafees/.local/lib/python3.9/site-packages/nni/common/hpo_utils/formatting.py", line 99, in format_search_space
formatted = _format_search_space(tuple(), search_space)
File "/home/anafees/.local/lib/python3.9/site-packages/nni/common/hpo_utils/formatting.py", line 177, in _format_search_space
formatted.append(_format_parameter(key, spec['_type'], spec['_value']))
KeyError: '_type'
```
I am very new to using `NNI`, and I am not sure whether I should ask these questions or not. Thanks for your help.
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: Linux
- Python version: 3.9
- PyTorch version: 1.11.0+cu113
- Is conda used?: yes
**Configuration**:
- Experiment config:
```
experimentName: Abc_D # An optional name to distinguish the experiments
searchSpaceFile: search_space.yaml # Specify the Search Space file path
useAnnotation: false # If it is true, searchSpaceFile will be ignore. default: false
trialCommand: python3.9 main.py # NOTE: change "python3" to "python" if you are using Windows
trialCodeDirectory: . # Specify the Trial file path
trialGpuNumber: 1 # Each trial needs 1 gpu
trialConcurrency: 30 # Run 30 trials concurrently
maxExperimentDuration: 24h # Stop generating all trials after 24 hour
maxTrialNumber: 1000 # Generate at most 1000 trials
tuner: # Configure the tuning algorithm
name: TPE
classArgs: # Algorithm specific arguments
optimize_mode: maximize # maximize or minimize the needed metrics
trainingService: # Configure the training platform
platform: local # Include local, remote, pai, etc.
gpuIndices: 0, 1, 2 # The gpu-id 2 and 3 will be used
useActiveGpu: True # Whether to use the gpu that has been used by other processes.
  maxTrialNumberPerGpu: 10         # Default: 1. Specify how many trials can share one GPU.
```
**Search space:**
```
searchSpace:
batch_size:
_type: choice
_value: [20, 40, 60]
lr:
_type: choice
_value: [0.001,0.000001]
first_dim:
_type: choice
_value: [64, 128, 256]
last_dim:
_type: choice
_value: [16, 32, 64, 128]
epochs:
_type: choice
_value: [80]
dropout_prob:
_type: uniform
_value: [0.5, 0.7]
```
 | open | 2023-02-15T10:07:28Z | 2023-02-24T02:51:08Z | https://github.com/microsoft/nni/issues/5354 | [] | Nafees-060 | 7 |
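The `KeyError: '_type'` is raised while the tuner formats the search space, which suggests that some entry reaching the tuner lacks the `_type`/`_value` keys. A small pre-flight validator sketch (a hypothetical helper, not part of NNI) that fails with a clearer message before launching an experiment:

```python
def validate_search_space(space):
    """Raise if any parameter spec is missing the '_type' or '_value' key."""
    for name, spec in space.items():
        if not isinstance(spec, dict) or "_type" not in spec or "_value" not in spec:
            raise KeyError(f"search-space entry {name!r} is missing '_type'/'_value'")
    return True
```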
microsoft/unilm | nlp | 936 | Some questions about the tokenizer of BeiT v2 | **A really nice work!**
But I have some questions about practical experience.
(1). **Can a new vqkd-based tokenizer be trained on a small dataset, such as 30k images?** (The pretrained tokenizer in the paper encodes every image into 256 codes, which is too long for me.) I have tried using CLIP-B/32 as the teacher, with a tokenizer patch size of 32 (so the new tokenizer encodes every image into 49 codes). However, with the original config (12 encoder layers and 3 decoder layers), most codes are never used. So I use only 6 encoder layers and 1 decoder layer, with codebook_n_emd set to 1000. Still, about 50% of the tokens are never used. So another question:
(2). **How can the usage of the codebook be increased? By continuing to reduce the number of layers or the dimension of the hidden states?**
(3). To take it a step further: **with few codes (49) per image and without a teacher, can this tokenizer be trained?** Since the patch size of the teacher limits the number of codes per image. I just need the tokenizer to converge; the quality of the reconstructed images is not very important for me. If you haven't tried this situation, can you give me some **intuitive suggestions**?
@pengzhiliang
| closed | 2022-12-01T07:35:41Z | 2025-01-14T05:52:22Z | https://github.com/microsoft/unilm/issues/936 | [] | lizhiustc | 4 |
netbox-community/netbox | django | 17,796 | Custom Field Choices -> Create & Add Another causes IndexError | ### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.1.4
### Python Version
3.12
### Steps to Reproduce
1. In Custom Field Choice Sets choose + Add
2. Provide name: test and add extra choice test
3. Click Create & Add Another
### Expected Behavior
Back to Add a new custom field choice set page.
### Observed Behavior
The complete exception is provided below:
<class 'IndexError'>
string index out of range
Python version: 3.12.3
NetBox version: 4.1.4
Plugins:
netbox_prometheus_sd: 0.6
| closed | 2024-10-17T06:40:33Z | 2025-02-25T22:44:11Z | https://github.com/netbox-community/netbox/issues/17796 | [
"type: bug",
"status: accepted",
"severity: low",
"netbox"
] | jacobw | 2 |
jina-ai/clip-as-service | pytorch | 617 | TypeError: cannot unpack non-iterable NoneType object | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 2019 server 10.0.17763
- TensorFlow installed from (source or binary): source
- TensorFlow version: 1.14.0rc1
- Python version: 3.7.9
- `bert-as-service` version: 1.10.0
- GPU model and memory: None
- CPU model and memory: 8 GB RAM
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
bert-serving-start -model_dir C:\Users\sportsaiuser\Downloads\VideosProject\sportsBERT -num_worker=1 -max_seq_len=256 -cpu
Then this issue shows up:
WARNING:tensorflow:From C:\Users\sportsaiuser\AppData\Local\Programs\Python\Python37\lib\site-packages\bert_serving\server\helper.py:186: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
WARNING:tensorflow:From C:\Users\sportsaiuser\AppData\Local\Programs\Python\Python37\lib\site-packages\bert_serving\server\helper.py:186: The name tf.logging.ERROR is deprecated. Please use tf.compat.v1.logging.ERROR instead.
I:[36mGRAPHOPT[0m:model config: C:\Users\sportsaiuser\Downloads\VideosProject\sportsBERT\bert_config.json
I:[36mGRAPHOPT[0m:checkpoint: C:\Users\sportsaiuser\Downloads\VideosProject\sportsBERT\bert_model.ckpt
E:[36mGRAPHOPT[0m:fail to optimize the graph!
Traceback (most recent call last):
File "C:\Users\sportsaiuser\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\sportsaiuser\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\sportsaiuser\AppData\Local\Programs\Python\Python37\Scripts\bert-serving-start.exe\__main__.py", line 7, in <module>
File "C:\Users\sportsaiuser\AppData\Local\Programs\Python\Python37\lib\site-packages\bert_serving\server\cli\__init__.py", line 4, in main
with BertServer(get_run_args()) as server:
File "C:\Users\sportsaiuser\AppData\Local\Programs\Python\Python37\lib\site-packages\bert_serving\server\__init__.py", line 71, in __init__
self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,))
TypeError: cannot unpack non-iterable NoneType object
... | closed | 2021-01-23T01:21:40Z | 2021-01-26T00:42:26Z | https://github.com/jina-ai/clip-as-service/issues/617 | [] | Prithvi103 | 2 |
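The crash is the unpacking of the `None` that `optimize_graph` returns after "fail to optimize the graph!". A hedged sketch of a guard that turns it into a clearer error (a hypothetical wrapper, not bert-as-service code):

```python
def unpack_graph_result(result):
    """Fail loudly instead of 'cannot unpack non-iterable NoneType object'."""
    if result is None:
        raise RuntimeError(
            "graph optimization failed; check model_dir and the GRAPHOPT log above"
        )
    graph_path, bert_config = result
    return graph_path, bert_config
```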
pandas-dev/pandas | pandas | 60,254 | BUG: Challenges with Nested Metadata Extraction Using pandas.json_normalize( | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
data = {
"level1": {
"rows": [
{"col1": 1, "col2": 2},
]
},
"meta1": {
"meta_sub1": 1,
},
}
df = pd.json_normalize(data, record_path=["level1", "rows"], meta=["meta1"])
print(df)
df = pd.json_normalize(
data,
record_path=["level1", "rows"],
meta=[["meta1", "meta_sub1"]], # Trying to access sub-fields within meta1
)
```
### Issue Description
# Description of the Issue
This reproducible example demonstrates the challenges and potential pitfalls when using `pandas.json_normalize()` to extract and flatten hierarchical data structures with nested metadata:
### Data Structure
The `data` dictionary is multi-layered, with nested dictionaries and a list of dictionaries (`rows`) under `level1`. Additionally, `meta1` is structured as a dictionary containing subfields.
### Successful Normalization
The first call to `pd.json_normalize()` extracts the data from `rows` under `level1` and includes `meta1` as a top-level metadata field. This works as intended because `meta1` is accessed directly as a single key.
#### Output:
```markdown
col1 col2 meta1
0 1 2 {'meta_sub1': 1}
```
### KeyError with Nested Meta Fields
The second `pd.json_normalize()` call attempts to extract subfields from `meta1` using a nested path (`meta=[["meta1", "meta_sub1"]]`). This results in a `KeyError` because `json_normalize()` does not natively support nested lists for specifying paths within the `meta` parameter.
### Expected Behavior
```python
df = pd.json_normalize(
data,
record_path=["level1", "rows"],
meta=[["meta1", "meta_sub1"]], # Trying to access sub-fields within meta1
)
```
```markdown
col1 col2 meta1
0 1 2 1
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.1
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.22631
machine : AMD64
processor : Intel64 Family 6 Model 186 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : de_DE.cp1252
pandas : 2.2.3
numpy : 1.26.2
pytz : 2024.1
dateutil : 2.8.2
pip : 24.3.1
Cython : None
sphinx : 8.1.3
IPython : 8.17.2
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.9.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.3
lxml.etree : 5.2.2
matplotlib : 3.8.3
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 15.0.0
pyreadstat : None
pytest : 8.1.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : None
scipy : 1.11.4
sqlalchemy : 2.0.28
tables : None
tabulate : 0.9.0
xarray : None
xlrd : 2.0.1
xlsxwriter : 3.2.0
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
| open | 2024-11-08T20:06:24Z | 2024-11-10T14:46:34Z | https://github.com/pandas-dev/pandas/issues/60254 | [
"Bug",
"IO JSON",
"Needs Info"
] | DavidNaizheZhou | 4 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 2,954 | Is it possible to rewrite URLs for the single user instance? | I am trying to get [Polynote](https://polynote.org/latest/) to work.
I created a container which is configured accordingly, so it should be picked up by JupyterHub. However, the proxy forwards `/user/username` to the path `/user/username` on the single-user pod, while Polynote is served from `/` (though it can be configured to put `/user/username` in its `<base>` tag).
Is it possible to configure the proxy, the spawner, or something else on a per-session basis to forward requests for `/user/username` to the path `/` on the single-user instance?
"support"
] | rabejens | 2 |
gradio-app/gradio | data-visualization | 10,104 | Improve API docs for `gr.MultimodalTextbox` | - [x] I have searched to see if a similar issue already exists.
The API docs for `gr.MultimodalTextbox` look like the following, but it's not clear to users that they need to use `handle_file` for the file paths.

```py
import gradio as gr
def fn(message):
print(message)
with gr.Blocks() as demo:
text = gr.MultimodalTextbox()
text.submit(fn=fn, inputs=text)
demo.launch()
```
It would be nice if the API docs for `gr.MultimodalTextbox` showed info about `handle_file`, like the API docs for `gr.Image` do.

```py
import gradio as gr
def fn(image):
print(image)
with gr.Blocks() as demo:
image = gr.Image()
btn = gr.Button()
btn.click(fn=fn, inputs=image)
demo.launch()
``` | closed | 2024-12-03T07:59:33Z | 2024-12-05T17:20:21Z | https://github.com/gradio-app/gradio/issues/10104 | [
"bug"
] | hysts | 0 |
gto76/python-cheatsheet | python | 8 | Add floating table of contents | Navigating the HTML version of this cheat sheet would be easier if there was a floating table of contents. | open | 2018-12-28T14:54:52Z | 2019-03-19T15:44:51Z | https://github.com/gto76/python-cheatsheet/issues/8 | [] | brianly | 2 |
tensorflow/tensor2tensor | deep-learning | 1,587 | Error:Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. | ### Description
I want to use a small data set to fine-tune my en-zh model. First I used the commands below to generate the data, then I ran the training command, but I got the error: `NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint.` Can anyone help me?
### Environment information
```
OS: ubuntu
$ pip freeze | grep tensor
tensorflow-gpu==1.12.0
$ python -V
python=2.7
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
mkdir -p $DATA_DIR $TRAIN_DIR
# (1) Generate data
t2t-datagen \
  --data_dir=$DATA_DIR \
  --tmp_dir=$TMP_DIR \
  --problem=$PROBLEM
# (2) Train
# * If you run out of memory, add --hparams='batch_size=1024'.
t2t-trainer \
--data_dir=$DATA_DIR --worker_gpu=1 \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--hparams='batch_size=2048' \
--output_dir=$TRAIN_DIR \
--train_steps=700000
```
```
# Error logs:
NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Key training/transformer/symbol_modality_16203_512/input_emb/weights_0/Adam not found in checkpoint
[[node save/RestoreV2_1 (defined at /home/anaconda2/lib/python2.7/site-packages/tensor2tensor/utils/trainer_lib.py:438) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_1/tensor_names, save/RestoreV2_1/shape_and_slices)]]
```
| open | 2019-05-29T12:50:42Z | 2019-05-29T12:50:42Z | https://github.com/tensorflow/tensor2tensor/issues/1587 | [] | shiny1022 | 0 |
kizniche/Mycodo | automation | 749 | Add ability to set number of color ranges for dashboard gauges | The functionality exists to easily change the number of gauge stops. The option only needs to be implemented. This is a reminder for myself to do this. | closed | 2020-02-19T04:35:54Z | 2020-02-21T04:37:58Z | https://github.com/kizniche/Mycodo/issues/749 | [
"enhancement"
] | kizniche | 1 |
influxdata/influxdb-client-python | jupyter | 172 | How to get the most recent timestamp for a Measurement via Flux | I need to get the highest (aka most recent) timestamp for a specific Measurement in InfluxDB 2.0 via the Flux query language using the Python API. I have used the below query to get a timestamp (so I believe it's close to done), but I'm unsure how to ensure that the timestamp I extract via this method is indeed the most recent one.
In particular, the indexing I use, `last_data[0]`, is arbitrary as `last_data` is a list of objects like `<influxdb_client.client.flux_table.FluxTable object at 0x00000193FA906AC8>`, which I am unsure how to interpret. The timestamps do not seem to be sorted.
```
from influxdb_client import InfluxDBClient, WriteOptions
client = InfluxDBClient(url=self.influx_url, token=self.token, org=self.org_id, debug=False)
last_data = client.query_api().query(
f'from(bucket:"{self.influx_bucket}") |> range(start: -100d) |> filter(fn: (r) => r["_measurement"] == "{devices[0]}") |> last()'
)
timestamp = last_data[0].records[0]["_stop"]
print(timestamp)
``` | closed | 2020-11-26T20:56:24Z | 2024-06-07T14:58:53Z | https://github.com/influxdata/influxdb-client-python/issues/172 | [] | MatinF | 6 |
nteract/papermill | jupyter | 635 | The batch script runs on pressing Ctrl +C | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
Usually the batch script with papermill runs automatically when triggered, but sometimes we need to press CTRL+C for the process to continue, or else it remains stuck. CTRL+C is normally used to terminate a process, but sometimes I am facing this issue.
Please let me know if there is a stable version which excludes this bug.
Thanks | open | 2021-10-20T13:57:05Z | 2021-10-20T13:57:05Z | https://github.com/nteract/papermill/issues/635 | [
"bug",
"help wanted"
] | intekhab025 | 0 |
robinhood/faust | asyncio | 36 | Changelog topics not being created with correct configuration. | Changelog topics should be created as log compacted topics. | closed | 2017-10-23T17:41:49Z | 2018-07-31T14:39:11Z | https://github.com/robinhood/faust/issues/36 | [
"Issue Type: Bug"
] | vineetgoel | 0 |
napari/napari | numpy | 7236 | It looks like a separate thread for reporting layer status is causing crashes in benchmarks and tests of other packages | ### 🐛 Bug Report
Sample failure:
https://github.com/scverse/napari-spatialdata/actions/runs/10651735292/job/29524660994
[75.00%] ···· /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
Reported here:
https://github.com/scverse/napari-spatialdata/issues/308
I have seen this in other places but did not save the links.
### 💡 Steps to Reproduce
see linked workflow
### 💡 Expected Behavior
Do not crash
### 🌎 Environment
0.5.3
### 💡 Additional Context
_No response_ | closed | 2024-09-02T12:39:50Z | 2024-09-07T17:02:28Z | https://github.com/napari/napari/issues/7236 | [
"bug"
] | Czaki | 1 |
stanfordnlp/stanza | nlp | 1,332 | Unable to download or create pipeline; mismatching md5 | **Describe the bug**
Whether I try and create a pipeline or download a model, I get mismatching md5 values
**To Reproduce**
```py
import stanza
stanza.download('en') # alternatively, stanza.Pipeline('en')
```
```
ValueError: md5 for C:\Users\vaavew\stanza_resources\en\default.zip is d788b1276f5eaa65c584543e2906db5f, expected d42b2d71cf57acd04ee9c4ef5b66a98f
# alternatively,
# ValueError: md5 for C:\Users\vaavew\stanza_resources\en\tokenize\combined.pt is d788b1276f5eaa65c584543e2906db5f, expected 10308f3db6c36e7c27aee30dea92c786
```
**Expected behavior**
I would have expected the models to download appropriately
**Environment (please complete the following information):**
- OS: Windows
- Python version: 3.11
- Stanza version: 1.7.0
**Additional context**
I'm working through a work VPN, and I turn the proxy on to download. The downloads reach 100%.
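A mismatching md5 usually means the file on disk is a truncated or proxy-mangled download rather than a wrong model. A quick stdlib-only way to verify that is to hash the file yourself and compare it with the value in the error message, then delete and re-download it if they differ:

```python
import hashlib
from pathlib import Path


def file_md5(path: Path) -> str:
    """Stream the file so large model archives don't need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


# e.g. with the path from the traceback above:
# print(file_md5(Path(r"C:\Users\vaavew\stanza_resources\en\default.zip")))
```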
| closed | 2024-01-17T19:18:40Z | 2024-02-28T23:06:48Z | https://github.com/stanfordnlp/stanza/issues/1332 | [
"bug"
] | KyzEver | 2 |
tflearn/tflearn | tensorflow | 374 | LSTM not predicting time series correctly | So I am trying to predict the stock market, and before you say anything about the stock market being impossible to predict, I predicted Twitter's stock on Friday down to a cent. But I was looking to move my network over to Tflearn from Keras. Here is both Keras and Tflearn network code:
Keras
```
model = Sequential()
model.add(LSTM(layers[0], input_dim=look_back, return_sequences = True))
model.add(Dropout(0.2))
model.add(LSTM(layers[1], return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(layers[2], return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(output_dim=1))
model.add(Activation("linear"))
start = time.time()
model.compile(loss="mse", optimizer="adam")
end = time.time()
print ("Build Time: ", end - start)
```
Tflearn
```
start = time.time()
model = input_data(shape=[None, trainX.shape[1]])
model = tflearn.embedding(model, input_dim=trainX.shape[1], output_dim=look_back)
model = lstm(model, layers[0], return_seq=True, activation="sigmoid")
model = dropout(model, 0.2)
model = lstm(model, layers[1], return_seq=True)
model = dropout(model, 0.2)
model = lstm(model, layers[2], return_seq=False)
model = dropout(model, 0.2)
model = fully_connected(model, 1)
model = activation(model, activation="linear")
model = regression(model, optimizer = "adam", learning_rate=0.01, loss='mean_square')
model = tflearn.DNN(model, tensorboard_verbose=0)
end = time.time()
print ("Build Time: ", end - start)
```
With Keras, I was able to get great results, it was just very slow. But tflearn seems to operate faster, but my predictions range between 150, and -25, when changing around different parameters. But I kept 100 neurons on layer 1, 200 on layer 2, 300 on layer 3, and changed around things like activation, optimizer, and learning rate. But I can't get results anything near as close to keras.
I am happy to provide more code if necessary, but the code is the same between libraries, and the only change is the different code needed for each library. What should I do?
| open | 2016-10-03T03:38:03Z | 2016-10-07T22:37:50Z | https://github.com/tflearn/tflearn/issues/374 | [] | tgs266 | 4 |
fastapi-users/fastapi-users | fastapi | 358 | Body content type for /login and /register | Hi there!
I was looking into this project today. Why does `/login` require `Content-Type: application/x-www-form-urlencoded` (and not `Content-Type: application/json`), while `/register` requires `Content-Type: application/json`?
Is it due to being modelled after OAuth 2.0 specs? https://tools.ietf.org/html/rfc6749
However, some other routes require `Content-Type: application/x-www-form-urlencoded` as per the above document, while in `fastapi-users` the route `/login` is the only route with `Content-Type: application/x-www-form-urlencoded`.
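If the OAuth2 modelling is indeed the reason (which would explain the asymmetry), the difference is easy to see client-side. A sketch with the `requests` library; the URL and credentials are placeholders, and the requests are only prepared, never sent:

```python
import requests

# Login follows the OAuth2 password flow: note `data=`, which produces a
# form-encoded body.
login = requests.Request(
    "POST", "http://localhost:8000/auth/login",
    data={"username": "user@example.com", "password": "secret"},
).prepare()

# Register is a plain JSON endpoint: note `json=`.
register = requests.Request(
    "POST", "http://localhost:8000/auth/register",
    json={"email": "user@example.com", "password": "secret"},
).prepare()

print(login.headers["Content-Type"])     # application/x-www-form-urlencoded
print(register.headers["Content-Type"])  # application/json
```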
Thanks for the clarification! | closed | 2020-10-08T12:24:07Z | 2020-10-09T20:09:15Z | https://github.com/fastapi-users/fastapi-users/issues/358 | [
"question"
] | visini | 2 |
dropbox/PyHive | sqlalchemy | 288 | Release 0.6.2 | Hello,
The latest release (0.6.1) has been made in September 2018, yet there has been some significant additions since then, for instance kerberos support for Presto (#229). Would it be possible to consider releasing version 0.6.2?
Thanks! | closed | 2019-06-07T13:05:40Z | 2020-03-16T19:51:45Z | https://github.com/dropbox/PyHive/issues/288 | [] | BenoitHanotte | 11 |
scikit-learn/scikit-learn | python | 30,249 | OrdinalEncoder not transforming nans as expected. | ### Describe the bug
When fitting an OrdinalEncoder with a pandas Series that contains a nan, transforming an array containing only nans fails, even though nan is one of the OrdinalEncoder classes.
This seems similar to this issue https://github.com/scikit-learn/scikit-learn/issues/22628
### Steps/Code to Reproduce
```python
from sklearn import preprocessing
import numpy as np
encoder = preprocessing.OrdinalEncoder()
data = np.array(['cat', 'dog', np.nan, 'fish', 'dog']).reshape(-1, 1)
encoder.fit(data)
only_nan = np.array([np.nan]).reshape(-1, 1)
encoder.transform(only_nan)
```
### Expected Results
Instead of the error, I'd expect the output to be `array([3])`.
### Actual Results
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/rafaelascensao/work/scikit-learn-test/.venv/lib/python3.10/site-packages/sklearn/utils/_set_output.py", line 316, in wrapped
data_to_wrap = f(self, X, *args, **kwargs)
File "/Users/rafaelascensao/work/scikit-learn-test/.venv/lib/python3.10/site-packages/sklearn/preprocessing/_encoders.py", line 1578, in transform
X_int, X_mask = self._transform(
File "/Users/rafaelascensao/work/scikit-learn-test/.venv/lib/python3.10/site-packages/sklearn/preprocessing/_encoders.py", line 206, in _transform
diff, valid_mask = _check_unknown(Xi, self.categories_[i], return_mask=True)
File "/Users/rafaelascensao/work/scikit-learn-test/.venv/lib/python3.10/site-packages/sklearn/utils/_encode.py", line 304, in _check_unknown
if np.isnan(known_values).any():
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
### Versions
```shell
System:
python: 3.10.11 (main, Aug 22 2024, 14:00:26) [Clang 15.0.0 (clang-1500.3.9.4)]
executable: /Users/rafaelascensao/work/scikit-learn-test/.venv/bin/python
machine: macOS-15.0.1-arm64-arm-64bit
Python dependencies:
sklearn: 1.5.2
pip: 24.2
setuptools: 69.5.1
numpy: 2.1.3
scipy: 1.14.1
Cython: None
pandas: 2.2.3
matplotlib: 3.9.2
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libomp
filepath: /Users/rafaelascensao/work/scikit-learn-test/.venv/lib/python3.10/site-packages/sklearn/.dylibs/libomp.dylib
version: None
```
| closed | 2024-11-08T13:03:08Z | 2024-11-08T16:00:21Z | https://github.com/scikit-learn/scikit-learn/issues/30249 | [
"Bug",
"Needs Triage"
] | rafaascensao | 4 |
jupyter-incubator/sparkmagic | jupyter | 623 | Prettier HTML tables? | **Is your feature request related to a problem? Please describe.**
In JupyterHub using Sparkmagic, the Pandas tables show up in plain text rather than as HTML tables. Is there a way to make tables show up like they would in a regular Jupyter kernel?
**Describe the solution you'd like**
Make tables show up with better formatting
**Describe alternatives you've considered**
Maybe local mode might fix this?
**Additional context** | closed | 2020-01-29T00:27:42Z | 2022-05-17T18:56:10Z | https://github.com/jupyter-incubator/sparkmagic/issues/623 | [] | sid-kap | 2 |
Kludex/mangum | fastapi | 177 | Allow user provided handlers | Hello there!
While investigating #176 I found it could be useful to add user provided handler factories to Mangum. It could look like
```python
def __init__(
self,
app: ASGIApp,
lifespan: str = "auto",
        additional_handler_factories: Optional[List[Callable[..., Optional[AbstractHandler]]]] = None,
        **handler_kwargs: Any,
) -> None:
self.app = app
self.lifespan = lifespan
        self.additional_handler_factories = additional_handler_factories or []
self.additional_handler_factories.append(AbstractHandler.from_trigger)
self.handler_kwargs = handler_kwargs
if self.lifespan not in ("auto", "on", "off"):
raise ConfigurationError(
"Invalid argument supplied for `lifespan`. Choices are: auto|on|off"
)
def __call__(self, event: dict, context: "LambdaContext") -> dict:
logger.debug("Event received.")
with ExitStack() as stack:
if self.lifespan != "off":
lifespan_cycle: ContextManager = LifespanCycle(self.app, self.lifespan)
stack.enter_context(lifespan_cycle)
for handler_factory in self.additional_handler_factories:
handler = handler_factory(event, context, **self.handler_kwargs)
if handler:
break
else:
raise TypeError("Unable to determine handler from trigger event")
http_cycle = HTTPCycle(handler.request)
response = http_cycle(self.app, handler.body)
return handler.transform_response(response)
```
In place of:
https://github.com/jordaneremieff/mangum/blob/3f312acb67aac30c07dfb305d3cd1881c59755c4/mangum/adapter.py#L47-L73
In order to accept user-provider handler factories. Would you be interested in this? | closed | 2021-04-09T02:46:14Z | 2022-07-07T18:38:56Z | https://github.com/Kludex/mangum/issues/177 | [
"feature"
] | cblegare | 3 |
pytorch/vision | machine-learning | 8141 | `to_pil_image` rounds down | ### 🐛 Describe the bug
The current code is: `pic = pic.mul(255).byte()`.
If the input is, say, 0.9999, this will be rounded down to 254 instead of 255.
```python
>>> torch.as_tensor([0.9999]).mul(255).byte().item()
254
```
I would suggest that the code be: `pic = pic.mul(255).round().byte()`:
```python
>>> torch.as_tensor([0.9999]).mul(255).round().byte().item()
255
```
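The same floor-vs-round effect can be reproduced with plain Python, no torch needed (stdlib-only illustration of the rounding argument above):

```python
# byte() truncates toward zero, so values just under a code point drop down:
x = 0.9999
print(int(x * 255))          # 254 - truncation, as pic.mul(255).byte() does
print(int(round(x * 255)))   # 255 - with the proposed .round() step
```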
### Versions
N/A | open | 2023-12-04T16:54:27Z | 2023-12-04T16:54:27Z | https://github.com/pytorch/vision/issues/8141 | [] | rb-synth | 0 |
pytest-dev/pytest-xdist | pytest | 716 | -n auto doesn't scale workers properly in AWS Codebuild | I'm running tests with `pytest -n auto` in AWS CodeBuild and my build instance size is `BUILD_GENERAL1_MEDIUM`, which has 4 vCPUs (https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-compute-types.html), however xdist seem to spawn only two worker processes.
I also tried upgrading build instance size to `BUILD_GENERAL1_LARGE` (8 vCPUs) but xdist still spawns only 2 workers.
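For what it's worth, comparing what Python actually sees inside the build container can narrow this down, since `-n auto` can only ever use what the runtime reports (stdlib-only sketch; `sched_getaffinity` is Linux-only):

```python
import os

# What the container reports to Python:
print("os.cpu_count():", os.cpu_count())

# On Linux, scheduler affinity reflects cgroup/cpuset limits, which can be
# smaller than cpu_count() inside build containers:
if hasattr(os, "sched_getaffinity"):
    print("sched_getaffinity:", len(os.sched_getaffinity(0)))
```

If the second number is 2 while the first is 4 or 8, the container is restricting CPUs and the observed worker count would match what xdist sees.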
I am at a loss on how to debug this. Any ideas on how I can make `pytest -n auto` scale workers to the number of vCPUs in CodeBuild, and is this a bug in xdist or am I doing something wrong? Thanks! | closed | 2021-10-15T03:10:12Z | 2021-10-20T17:40:32Z | https://github.com/pytest-dev/pytest-xdist/issues/716 | [] | paunovic | 4 |
pallets/flask | python | 4,422 | Tutorial Blog Blueprint document SQL code issue | Hi team,
Flask is really a great tool that I have ever used:) much thanks for your work!
After I follow the tutorial for building a blog website, I found that there is a issue for session: [Blog Blueprint Index](https://flask.palletsprojects.com/en/2.0.x/tutorial/blog/#index).
The issue is that for query SQL that if I register many users, but when to login with single user, SQL output will be a full list of posts with other users' posts.
So I think it should add `user_id` check from current session, so that could see only exactly user's posts.
```python
def index():
db = get_db()
user_id = session.get('user_id')
posts = db.execute(
'SELECT p.id, title, body, created, author_id, username'
' FROM post p JOIN user u ON p.author_id = u.id where u.id={} ORDER BY created DESC'.format(user_id)
).fetchall()
``` | closed | 2022-01-19T01:59:55Z | 2022-02-03T00:04:03Z | https://github.com/pallets/flask/issues/4422 | [] | lugq1990 | 1 |
strawberry-graphql/strawberry | fastapi | 3,250 | optional input fields are not allowed to be strawberry.UNSET | Here's my code.
Type definition:
```python
@strawberry.input
class UpdatePlayerInput:
v1: Optional[int]
v2: Optional[int]
```
Mutation Resolver
```python
@strawberry.mutation
@sync_to_async
def update_player(self, info: Info, data: UpdatePlayerInput) -> Player:
if info.context['user'] is None:
raise UnauthorizedError
player = models.Player.objects.get(user_id=info.context['user'].id)
if data.v1 is not strawberry.UNSET:
player.v1 = data.v1
if data.v2 is not strawberry.UNSET:
player.v2 = data.v2
player.save()
return player
```
Query and Parameter
```graphql
mutation UpdatePlayer ($data: UpdatePlayerInput!) {
updatePlayer (data: $data) {
name
user {
id
email
}
}
}
```
```json
{
"data": {
"v1": 10
}
}
```
Result
```
__init__() missing 1 required keyword-only arguments: 'v2'"
```
When I pass null as the v2 parameter it works, but it doesn't work when v2 is omitted entirely (i.e. strawberry.UNSET).
Please consider making this work. | closed | 2023-11-23T09:58:17Z | 2025-03-20T15:56:29Z | https://github.com/strawberry-graphql/strawberry/issues/3250 | [] | goale-company | 2 |
deepfakes/faceswap | machine-learning | 1017 | Check failed: vec.size() == NDIMS | 05/03/2020 20:04:39 INFO No existing state file found. Generating.
05/03/2020 20:04:42 INFO Creating new 'original' model in folder: 'E:\AiLearning\project\model'
05/03/2020 20:04:42 INFO Loading Trainer from Original plugin...
05/03/2020 20:04:42 INFO Enabled TensorBoard Logging
2020-05-03 20:05:01.916652: F .\tensorflow/core/util/bcast.h:111] Check failed: vec.size() == NDIMS (1 vs. 2)
Process exited.
why?
help me | closed | 2020-05-03T12:08:43Z | 2020-08-03T07:42:08Z | https://github.com/deepfakes/faceswap/issues/1017 | [] | chunxingque | 1 |
psf/black | python | 4,180 | additional newline added to docstring when the previous line length is less than the line length limit minus 1 | # before
```py
"""
87 characters ............................................................................
"""
```
# after
```py
"""
87 characters ............................................................................

"""
```
# playground
https://black.vercel.app/?version=stable&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4ADOAFldAD2IimZxl1N_Wk8JgdvQ8tSq0OhVZoEptWlmdC__omMouK3ooOylgkoYGVUTpRL_A_26nUoIFOzsQcUzX_N1N5WllahS-T5qlleGaKsl1U08Mc7wSjEUdEm8AAAAADdi2bh7NF1_AAF1zwEAAADAUENLscRn-wIAAAAABFla | closed | 2024-01-27T01:27:38Z | 2024-02-05T22:36:48Z | https://github.com/psf/black/issues/4180 | [
"T: bug"
] | DetachHead | 2 |
FlareSolverr/FlareSolverr | api | 445 | Sharemania does not give information | ### Environment
* **FlareSolverr version**:2.2.5
* **Last working FlareSolverr version**:
* **Operating system**:centos7
* **Are you using Docker**: [no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [no]
* **Are you using Captcha Solver:** [no]
* **If using captcha solver, which one:**
* **URL to test this issue:** https://www.sharemania.us
### Description
https://www.sharemania.us
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-07-31T06:46:30Z | 2022-07-31T14:37:54Z | https://github.com/FlareSolverr/FlareSolverr/issues/445 | [
"more information needed"
] | 362227 | 1 |
BayesWitnesses/m2cgen | scikit-learn | 580 | LightGBM models | Hello.
(great project!)
I'm a bit lost on getting LightGBM (regression) models to work.
Are they supported? The main README seems to indicate they are, and the LightGBM project explicitly redirects here in its documentation when talking about generating code from a model.
I tried using the command line tool but I get this:
```
$ m2cgen -l c misc/models/test.txt
Traceback (most recent call last):
File "/home/arthur/.local/bin/m2cgen", line 8, in <module>
sys.exit(main())
File "/home/arthur/.local/lib/python3.10/site-packages/m2cgen/cli.py", line 137, in main
print(generate_code(args))
File "/home/arthur/.local/lib/python3.10/site-packages/m2cgen/cli.py", line 112, in generate_code
model = pickle.load(f)
_pickle.UnpicklingError: could not find MARK
```
So I tried the Python script I found in issue 99: https://github.com/BayesWitnesses/m2cgen/issues/99
```
import lightgbm as lgb
import m2cgen as m2c
model = lgb.Booster(model_file='misc/models/test.txt')
# This works but is awkward
from lightgbm.sklearn import LGBMRegressor
r = LGBMRegressor()
r._Booster = model
code = m2c.export_to_java(r)
```
But that also fails:
```
$ python3 m2.py
Traceback (most recent call last):
File "/home/arthur/dev/btc/champs/m2.py", line 11, in <module>
code = m2c.export_to_java(r)
File "/home/arthur/.local/lib/python3.10/site-packages/m2cgen/exporters.py", line 33, in export_to_java
return _export(model, interpreter)
File "/home/arthur/.local/lib/python3.10/site-packages/m2cgen/exporters.py", line 459, in _export
model_ast = assembler_cls(model).assemble()
File "/home/arthur/.local/lib/python3.10/site-packages/m2cgen/assemblers/boosting.py", line 222, in __init__
model_dump = model.booster_.dump_model()
File "/home/arthur/.local/lib/python3.10/site-packages/lightgbm/sklearn.py", line 854, in booster_
raise LGBMNotFittedError('No booster found. Need to call fit beforehand.')
sklearn.exceptions.NotFittedError: No booster found. Need to call fit beforehand.
```
With another version of the script I did myself I get:
```
$ python3 m2.py
Traceback (most recent call last):
File "/home/arthur/dev/btc/champs/m2.py", line 8, in <module>
code = m2c.export_to_java(bst);
File "/home/arthur/.local/lib/python3.10/site-packages/m2cgen/exporters.py", line 33, in export_to_java
return _export(model, interpreter)
File "/home/arthur/.local/lib/python3.10/site-packages/m2cgen/exporters.py", line 458, in _export
assembler_cls = get_assembler_cls(model)
File "/home/arthur/.local/lib/python3.10/site-packages/m2cgen/assemblers/__init__.py", line 147, in get_assembler_cls
raise NotImplementedError(f"Model '{model_name}' is not supported")
NotImplementedError: Model 'lightgbm_Booster' is not supported
```
Any idea what I'm doing wrong and how to get this to work?
Thanks a lot in advance!
| open | 2023-05-14T20:06:27Z | 2024-11-19T18:13:38Z | https://github.com/BayesWitnesses/m2cgen/issues/580 | [] | arthurwolf | 1 |
aio-libs/aiomysql | asyncio | 4 | Release 0.0.1 | Following features are not implemented or not finished:
1) documentation (#3)
2) ssl support
3) more examples
4) ???
So my question, should I make initial release without docs and ssl support?
| closed | 2015-02-03T18:56:40Z | 2015-02-18T22:13:59Z | https://github.com/aio-libs/aiomysql/issues/4 | [
"question"
] | jettify | 2 |
JoeanAmier/TikTokDownloader | api | 347 | Improve the file-existence check method | If the directory is not small, it may hold tens of thousands of files, and checking them one by one becomes very slow.
For example, if you have 100 files to check, the code will query the directory 100 times, which hammers disk I/O.
The current implementation is [`def is_exists(path: Path) -> bool: return path.exists()`](https://github.com/JoeanAmier/TikTokDownloader/blob/develop/src/downloader/download.py#L313).
Consider reading all the files in the directory once and doing the check in memory, instead of querying the filesystem repeatedly. For example, use os.listdir() to list everything in the directory and then check each file against that set:
```python
import os
from pathlib import Path
directory_path = "your_directory_path"
files_to_check = ["file1.txt", "file2.txt", "file3.txt"]  # example file names

# list all files in the directory once
existing_files = set(os.listdir(directory_path))

# check whether each file exists
for file in files_to_check:
    if file in existing_files:
        print(f"{file} exists.")
    else:
        print(f"{file} does not exist.")
``` | open | 2024-12-08T13:44:53Z | 2024-12-08T14:57:53Z | https://github.com/JoeanAmier/TikTokDownloader/issues/347 | [
"Needs more info (Incomplete)"
] | sansi98h | 1 |
marimo-team/marimo | data-science | 4,226 | mo.ui.experimental_data_editor breaks in v0.11.25+ with JSON parsing error | ### Describe the bug
### Description
Starting with Marimo version 0.11.25, `mo.ui.experimental_data_editor` fails to render even with simple valid tabular inputs. In version 0.11.24, the following code worked as expected:
```python
import marimo as mo
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": ["a", "b", "c"]})
df_editor = mo.ui.experimental_data_editor(data=df, label="Edit Data")
df_editor
```
However, in v0.11.25 and v0.11.26, this code now throws the following error:
```
Unexpected token 'A', "A,B 1,a 2,b 3,c " is not valid JSON
```
The same error occurs even if the input is a dictionary:
```python
data = {"A": [1, 2, 3], "B": ["a", "b", "c"]}
mo.ui.experimental_data_editor(data=data)
```
It seems that a regression may have been introduced in PR [#4122](https://github.com/marimo-team/marimo/pull/4122), possibly related to changes in how JSON serialization is handled internally, but I'm not entirely certain.
### Environment
<details>
```
{
"marimo": "0.11.26",
"OS": "Darwin",
"OS Version": "24.3.0",
"Processor": "i386",
"Python Version": "3.12.6",
"Binaries": {
"Browser": "134.0.6998.118",
"Node": "v20.5.1"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.31.0",
"packaging": "24.2",
"psutil": "7.0.0",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.11.2",
"starlette": "0.46.1",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "15.0.1"
},
"Optional Dependencies": {},
"Experimental Flags": {
"chat_sidebar": true,
"inline_ai_tooltip": true,
"rtc": false
}
}
```
</details>
### Code to reproduce
```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "marimo",
# "pandas==2.2.3",
# ]
# ///
import marimo
__generated_with = "0.11.26"
app = marimo.App(width="medium")
@app.cell
def _():
import marimo as mo
data = {"A": [1, 2, 3], "B": ["a", "b", "c"]}
editor = mo.ui.experimental_data_editor(data=data, label="Edit Data")
editor
return data, editor, mo
@app.cell
def _(mo):
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": ["a", "b", "c"]})
editor_df = mo.ui.experimental_data_editor(data=df, label="Edit Data")
editor_df
return df, editor_df, pd
@app.cell
def _():
return
if __name__ == "__main__":
app.run()
``` | closed | 2025-03-24T03:57:55Z | 2025-03-24T13:30:01Z | https://github.com/marimo-team/marimo/issues/4226 | [
"bug"
] | t-edzuka | 1 |
saulpw/visidata | pandas | 2,208 | [aggregators] add histograms to aggregators other than count | closed | 2023-12-31T07:59:05Z | 2024-09-23T05:30:07Z | https://github.com/saulpw/visidata/issues/2208 | [
"wishlist",
"wish granted"
] | saulpw | 1 | |
vitalik/django-ninja | pydantic | 1,129 | [BUG] ModelSchema with ManyToManyField won't work under async views | **Describe the bug**
Say I have the following models.
```python
class Tag(models.Model):
text = models.CharField(max_length=32, unique=True)
class Doc(models.Model):
title = models.CharField(max_length=512)
content = models.TextField()
tags = models.ManyToManyField(Tag)
```
And the schema.
```python
class DocSchema(ModelSchema):
class Meta:
model = Doc
fields = ["title", "content", "tags"]
```
Then use it in an async view.
```python
@doc_router.get("/", response=List[DocSchema])
async def get_docs(request):
qs = Doc.objects.filter(user=request.auth)
return [doc async for doc in qs]
```
This will yield an error.
```log
response.0.tags
Error extracting attribute: SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async. [type=get_attribute_error, input_value=<DjangoGetter: <Doc: Basi...th Django and Postgres>>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.6/v/get_attribute_error
```
Looking into `DjangoGetter._convert_result`, it's returning `list(result.all())` since `result` is an instance of `Manager`.
It seems that async model validation is not taken into account now, so this may not be a bug, but rather a feature request?
If currently no plan to support this scenario, is there any workaround to address the problem?
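One workaround that is often suggested (not from this issue thread — treat it as an assumption) is to call `prefetch_related("tags")` on the queryset, so that `result.all()` inside `DjangoGetter._convert_result` is served from the prefetch cache instead of issuing a new synchronous query. The mechanism can be sketched with a toy model (all class and attribute names below are hypothetical stand-ins, not Django's real internals):

```python
from typing import Any, List


class FakeManager:
    """Toy stand-in for a Django related manager (hypothetical)."""

    def __init__(self, items: List[str], prefetched: bool = False):
        self._items = items
        self.prefetched = prefetched

    def all(self) -> List[str]:
        if not self.prefetched:
            # Mirrors SynchronousOnlyOperation: a lazy query from async code.
            raise RuntimeError("You cannot call this from an async context")
        return list(self._items)


def convert_result(result: Any) -> Any:
    # Mirrors DjangoGetter._convert_result: managers become list(result.all()).
    if isinstance(result, FakeManager):
        return list(result.all())
    return result


lazy = FakeManager(["news", "tech"])                    # doc.tags, no prefetch
eager = FakeManager(["news", "tech"], prefetched=True)  # prefetch_related("tags")

try:
    convert_result(lazy)
except RuntimeError as exc:
    print("lazy manager:", exc)
print("prefetched manager:", convert_result(eager))
```

In the real view this would mean `Doc.objects.filter(user=request.auth).prefetch_related("tags")`, under the assumption that the prefetch cache satisfies the later `.all()` call.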
**Versions (please complete the following information):**
- Python version: 3.12
- Django version: [5.0.3]
- Django-Ninja version: [1.1.0]
- Pydantic version: [2.6.4] | closed | 2024-04-13T13:40:58Z | 2024-04-25T11:55:01Z | https://github.com/vitalik/django-ninja/issues/1129 | [] | sunhs | 5 |
mwaskom/seaborn | matplotlib | 3,408 | Violin plot x-axis alignment issue | Dear Seaborn team,
Thank you for this amazing library! It makes my life so much simpler!
I encountered an issue using violinplot: the violins are not aligned with the labels on the x-axis, see below:
```
plot = sns.violinplot(
data=df,
x='prediction', y='Mean(Nuclei_MeanIntensity)',
hue='prediction',
order = ("G1","eS","S","LS","G2")
)
```
<img width="385" alt="image" src="https://github.com/mwaskom/seaborn/assets/8309560/7d50b0e3-ff96-4750-bc86-71f901307b16">
```
plot = sns.violinplot(
data=df,
x='prediction', y='Mean(Nuclei_MeanIntensity)',
hue='prediction',
)
```
<img width="391" alt="image" src="https://github.com/mwaskom/seaborn/assets/8309560/d42d9e68-97ab-4700-962f-7db7d6653b16">
Removing the "hue" option is a current workaround, but I would like to use the parameter "scale_hue", which requires "hue" to be set.
```
plot = sns.violinplot(
data=df,
x='prediction', y='Mean(Nuclei_MeanIntensity)'
)
```
<img width="382" alt="image" src="https://github.com/mwaskom/seaborn/assets/8309560/e4c0fb16-6898-40f6-8237-693d01936268">
Thank you for your help !
Cheers,
Romain
| closed | 2023-06-28T11:03:53Z | 2024-01-17T18:57:38Z | https://github.com/mwaskom/seaborn/issues/3408 | [] | romainGuiet | 4 |
biolab/orange3 | pandas | 6,206 | exception with Load Model widget | 1) When you double-click the Load Model widget to select a pkcls file to load, sometimes (not always) an exception is raised immediately.
2) if it isn't and you get to select the pkcls file, the attached model (saved from a previous neural network training session, see the DB link) consistently raises an exception and fails to load. Also, this model fails to load directly from python, so maybe there are 2 problems, with the Save Model and Load Model?
Orange v3.33
macOS 10.14.6 (happens on different Macs, all running Mojave)
How you installed Orange: from your dmg (but also happens when I installed via Anaconda)
The models are too large to upload to github so you can find them and the ows file here:
https://www.dropbox.com/s/2cwcthf7uuqs3ui/orangeBug.zip?dl=0
The smaller model loads OK, the larger one generated by a more complex NN fails every time. | closed | 2022-11-16T20:58:12Z | 2023-01-27T19:37:32Z | https://github.com/biolab/orange3/issues/6206 | [
"bug",
"follow up"
] | pkstys | 10 |
PeterL1n/BackgroundMattingV2 | computer-vision | 50 | Question about the function "compute_pixel_indices" | Hi Peter,
Thank you for sharing the source code which is well written and self-explanatory. However, could you explain the following lines, 278-279? I have pasted it below for your convenience. I know it would output the indices for the patch location corresponding to the original input x. However, could you explain the logic behind the following lines of code?
idx_pat = (c * H * W).view(C, 1, 1).expand([C, O, O]) + (o * W).view(1, O, 1).expand([C, O, O]) + o.view(1, 1, O).expand([C, O, O])
idx_loc = b * W * H + y * W * S + x * S
idx = idx_loc.view(-1, 1, 1, 1).expand([n, C, O, O]) + idx_pat.view(1, C, O, O).expand([n, C, O, O])
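At its core, that arithmetic is row-major flat indexing: `c * H * W + y * W + x` locates element `(c, y, x)` in a flattened `(C, H, W)` tensor, which is what `idx_pat` (per-channel patch offsets) and `idx_loc` (patch top-left locations) build up before being broadcast together. A tiny NumPy check with simplified shapes (illustrative only, not the repository's code):

```python
import numpy as np

# For a flattened (C, H, W) tensor, the flat index of element (c, y, x)
# is c*H*W + y*W + x -- the same row-major arithmetic as idx_pat/idx_loc.
C, H, W = 2, 4, 5
img = np.arange(C * H * W)

c, y, x = 1, 2, 3
flat = c * H * W + y * W + x
print(img.reshape(C, H, W)[c, y, x] == img[flat])
```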
Kind regards, | closed | 2021-01-15T10:02:21Z | 2021-01-18T11:12:57Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/50 | [] | yasar-rehman | 2 |
xlwings/xlwings | automation | 1,962 | Password protect at different levels | Versions of xlwings, Excel and Python (e.g. 0.24,Excel 2019, Python 3.7)
I see that excel can be password protected at different levels such as below
```
Workbook
Sheet
Cells
```
I managed to find xlwings approach for the Sheet level
```
import xlwings as xw
from xlwings.constants import DeleteShiftDirection
sales = xw.Book('BC_IND.xlsx', corrupt_load=1)
ws = sales.sheets[0]
ws.api.Protect(Password='test')
```
This doesn't allow me to make any changes to rows/cells/columns etc. I guess it is working fine.
But how can I apply password protection to the workbook?
`sales.api.SaveAs(r'C:\path\to\withpassword.xlsx', Password='password')`
When protecting worksheet prevents us from deleting data, what is the use of protecting workbook?
Is protecting the workbook same as protecting the file? Meaning, users will be prompted for password to open the file | closed | 2022-07-20T13:27:26Z | 2022-07-21T08:43:01Z | https://github.com/xlwings/xlwings/issues/1962 | [] | SSMK-wq | 1 |
521xueweihan/HelloGitHub | python | 2,116 | [Open-source self-recommendation] awesome-cybersecurity -- a project that crawls every cybersecurity resource-collection project on GitHub, then classifies and curates them | ## Project recommendation
- Project URL: https://github.com/liyansong2018/awesome-cybersecurity
- Categories: Python, Other, Books
- Follow-up update plan: keep crawling and filtering valuable cybersecurity-related resources
- Project description:
  - A project that uses a Python crawler to collect every cybersecurity resource collection on GitHub
  - Automatically crawls the matching repositories and reports how many were updated in the past half year, along with their star rankings
  - Security projects are then manually curated
- Why recommend it:
  - By aggregating and classifying the awesome-security repositories, cybersecurity practitioners can quickly find the topics and resources they want to study
- Example code (optional):
```python
for i in range(20):
    url = 'https://github.com/search?p={}&q=awesome-security&type=Repositories'.format(i + 1)
    # github-error-rate-limit-exceeded
    response = requests.get(url=url, headers=headers, verify=False)
    soup = BeautifulSoup(response.text, 'lxml')
    date = soup.findAll('relative-time')
```
- Screenshot:

| closed | 2022-03-01T10:05:49Z | 2022-03-23T10:21:56Z | https://github.com/521xueweihan/HelloGitHub/issues/2116 | [] | liyansong2018 | 1 |
deepinsight/insightface | pytorch | 1,867 | Failed to achieve the reported verification performance in Table 2 in your paper with your suggestion configuration. | I download CASIA-webface dataset from the dataset wiki in this project and convert face images to record file via im2rec.py.
According to your paper, the max step is set to 32,000, the starting lr is 0.1, and the lr decay steps are 20,000 and 28,000. The batch size is 128*4.
The training environment is V100 *4. network is resnet50. training dataset is casia-webface. val target is lfw and so on.
But the training loss always turned to NaN after a few epochs. Is there any suggestion?
@nttstar | closed | 2021-12-26T08:55:26Z | 2021-12-27T06:26:44Z | https://github.com/deepinsight/insightface/issues/1867 | [] | tianxianhao | 1 |
allenai/allennlp | data-science | 4,757 | PretrainedTransformerTokenizer fails when disabling "fast" tokenizer |
## Checklist
- [X] I have verified that the issue exists against the `master` branch of AllenNLP.
- [X] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [X] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [X] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [X] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [X] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [X] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [X] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [X] I have included in the "Environment" section below the output of `pip freeze`.
- [X] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
**Tested on 1.2.0rc1 and master**
Intra work tokenizer doesn't work when we deliberately set use fast tokenizer to false (not sure if it's new transformers change).
I think that setting return_token_type_ids to None instead of False is the solution here.
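To illustrate why `None` works where `False` does not, here is a simplified model of the guard raised in the traceback (an assumption about the check's shape, not the actual transformers source):

```python
def prepare_for_model(add_special_tokens, return_token_type_ids):
    # Simplified stand-in for the check at tokenization_utils_base.py:2617.
    # Any explicit value -- True *or* False -- counts as "asking to return
    # token_type_ids"; only None opts out of the check.
    if return_token_type_ids is not None and not add_special_tokens:
        raise ValueError(
            "Asking to return token_type_ids while setting add_special_tokens "
            "to False results in an undefined behavior."
        )
    return "ok"


print(prepare_for_model(add_special_tokens=False, return_token_type_ids=None))
```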
<details>
<summary><b>Python traceback:</b></summary>
<p>
```
Traceback (most recent call last):
File "bug_example.py", line 4, in <module>
tokenizer_kwargs={"use_fast": False}).intra_word_tokenize(["My", "text", "will"])
File "X/venv/lib/python3.6/site-packages/allennlp-1.2.0rc1-py3.6.egg/allennlp/data/tokenizers/pretrained_transformer_tokenizer.py", line 387, in intra_word_tokenize
tokens, offsets = self._intra_word_tokenize(string_tokens)
File "X/venv/lib/python3.6/site-packages/allennlp-1.2.0rc1-py3.6.egg/allennlp/data/tokenizers/pretrained_transformer_tokenizer.py", line 354, in _intra_word_tokenize
return_token_type_ids=False,
File "X/venv/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/tokenization_utils_base.py", line 2229, in encode_plus
**kwargs,
File "X/venv/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/tokenization_utils.py", line 490, in _encode_plus
verbose=verbose,
File "X/venv/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/tokenization_utils_base.py", line 2617, in prepare_for_model
"Asking to return token_type_ids while setting add_special_tokens to False "
ValueError: Asking to return token_type_ids while setting add_special_tokens to False results in an undefined behavior. Please set add_special_tokens to True or set return_token_type_ids to None.
```
</p>
</details>
## Related issues or possible duplicates
Not to my knowledge
## Environment
OS:
Ubuntu 18.04 LTS
Python version:
3.6.9 and 3.8.0
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
```
absl-py==0.9.0
allennlp==1.2.0rc1
attrs==20.2.0
blis==0.4.1
boto3==1.16.5
botocore==1.19.5
cached-property==1.5.2
cachetools==4.1.1
catalogue==1.0.0
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
conllu==2.3.2
cymem==2.0.3
dataclasses==0.5
dataclasses-json==0.5.2
en-core-web-sm==2.3.1
filelock==3.0.12
future==0.18.2
google-auth==1.22.1
google-auth-oauthlib==0.4.1
grpcio==1.33.1
h5py==3.0.0rc1
idna==2.10
importlib-metadata==2.0.0
iniconfig==1.1.1
jmespath==0.10.0
joblib==0.14.1
jsonnet==0.15.0
jsonpickle==1.4.1
Markdown==3.3.3
marshmallow==3.8.0
marshmallow-enum==1.5.1
murmurhash==1.0.2
mypy-extensions==0.4.3
nltk==3.5
numpy==1.19.2
oauthlib==3.1.0
overrides==3.1.0
packaging==20.4
pkg-resources==0.0.0
plac==1.1.3
pluggy==0.13.1
preshed==3.0.2
protobuf==4.0.0rc2
py==1.9.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==3.0.0a2
pytest==6.1.1
python-dateutil==2.8.1
regex==2020.10.23
requests==2.23.0
requests-oauthlib==1.3.0
rsa==4.6
s3transfer==0.3.3
sacremoses==0.0.43
scikit-learn==0.23.2
scipy==1.5.3
sentencepiece==0.1.94
six==1.15.0
spacy==2.3.2
srsly==1.0.2
stringcase==1.2.0
tensorboard==2.1.0
tensorboardX==2.1
thinc==7.4.1
threadpoolctl==2.1.0
tokenizers==0.9.2
toml==0.10.1
torch==1.6.0
tqdm==4.43.0
transformers==3.4.0
typing-extensions==3.7.4.3
typing-inspect==0.6.0
urllib3==1.25.11
wasabi==0.8.0
Werkzeug==1.0.1
zipp==3.4.0
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
```
from allennlp.data.tokenizers import PretrainedTransformerTokenizer
PretrainedTransformerTokenizer("bert-base-cased",
tokenizer_kwargs={"use_fast": False}).intra_word_tokenize(["My", "text", "will"])
```
</p>
</details>
| closed | 2020-10-28T07:46:07Z | 2020-10-28T19:38:48Z | https://github.com/allenai/allennlp/issues/4757 | [
"bug"
] | mklimasz | 1 |
tflearn/tflearn | data-science | 546 | Python3 UnicodeDecodeError in oxflower17.load_data() | In python3, running the code:
```
>>> import tflearn.datasets.oxflower17 as oxflower17
>>> X, Y = oxflower17.load_data(one_hot=True, resize_pics=(227, 227))
```
Gives the error:
> File "/usr/local/lib/python3.4/dist-packages/tflearn/data_utils.py", line 339, in build_image_dataset_from_dir
> X, Y = pickle.load(open(dataset_file, 'rb'))
> **UnicodeDecodeError**: 'ascii' codec can't decode byte 0xf1 in position 0: ordinal not in range(128)
According to the solution suggested in [this reported issue](https://github.com/tflearn/tflearn/issues/57), I used:
`pickle.load(open(dataset_file, 'rb'), encoding='latin1')`
which solved the problem.
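For reference, the failure and the fix can be reproduced without the dataset at all, using a hand-crafted byte stream that mimics a Python 2 `str` pickle (the opcodes below are chosen purely for illustration):

```python
import pickle

# PROTO 2, SHORT_BINSTRING of the two bytes b"\xf1a", STOP -- the opcode
# Python 2's pickler emits for its native str type, which is what the
# cached 17flowers dataset file contains.
py2_style = b"\x80\x02U\x02\xf1a."

try:
    pickle.loads(py2_style)  # default encoding="ASCII"
except UnicodeDecodeError as exc:
    print("default load fails:", type(exc).__name__)

# Mapping every byte 1:1 onto a character sidesteps the decode error.
print("latin1 load gives:", pickle.loads(py2_style, encoding="latin1"))
```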
| open | 2017-01-04T21:21:55Z | 2017-01-04T21:21:55Z | https://github.com/tflearn/tflearn/issues/546 | [] | rotemmairon | 0 |
litestar-org/litestar | asyncio | 3,130 | Enhancement: Have OpenAPI Schema generation respect route handler ordering | ### Summary
Opening a new issue as discussed [here](https://github.com/litestar-org/litestar/issues/3059#issuecomment-1960609250).
From a DX perspective I personally find very helpful to have ordered routes in the `Controller`, that follow a certain logic:
- Progressive nesting (`GET /` comes before `GET /:id` and before `GET /:id/nested`)
- Logical actions ordering (`GET`, `POST`, `GET id`, `PATCH id`, `DELETE id`)
Example:
```python
class MyController(Controller):
tags = ["..."]
path = "/"
dependencies = {...}
@routes.get()
async def get_many(self):
...
@routes.post()
async def create(self, data: ...):
...
@routes.get("/{resource_id:uuid}")
async def get(self, resource_id: UUID):
...
@routes.patch("/{resource_id:uuid}")
async def update(self, resource_id: UUID):
...
@routes.delete("/{resource_id:uuid}")
async def delete(self, resource_id: UUID):
...
@routes.get("/{resource_id:uuid}/nested")
async def get_nested(self, resource_id: UUID):
...
```
Currently the ordering of the route definition at the Controller is not respected by the docs, so I end up having a Swagger that looks like this:
```
- GET /:id/nested/:nest_id/another
- POST /:id/nested
- GET /
- DELETE /:id
- PATCH /
```
Which I personally find very confusing since:
(1) ~It doesn't seem to follow a pre-defined logic (couldn't find any pattern for that when looking at the docs)~ it does seem to follow the ordering of the methods as defined [here](https://github.com/litestar-org/litestar/blob/3e5c179e714bb074bae13e02d10e2f3f51e24d5c/litestar/openapi/spec/path_item.py#L44) as shared by @guacs [here](https://github.com/litestar-org/litestar/issues/3059#issuecomment-1960609250)
(2) It doesn't respect the logic that was defined on the controller (specifically the nesting).
Was having a quick look and it seems to be related to logic when registering the route [here](https://github.com/litestar-org/litestar/blob/3e5c179e714bb074bae13e02d10e2f3f51e24d5c/litestar/app.py#L647), and then the handler:method map on the `HTTPRoute` [here](https://github.com/litestar-org/litestar/blob/3e5c179e714bb074bae13e02d10e2f3f51e24d5c/litestar/routes/http.py#L94).
It seems that by the time it reaches the plugin the nesting is already out of order - so it might make sense to handle when registering the routes maybe?
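For illustration, the nesting-then-method ordering described at the top of the issue can be sketched as a plain sort key (a hypothetical helper, not Litestar code):

```python
METHOD_ORDER = {"GET": 0, "POST": 1, "PATCH": 2, "PUT": 3, "DELETE": 4}


def sort_routes(routes):
    # Sort by nesting depth first, then by path, then by the logical
    # method ordering sketched above.
    def key(route):
        method, path = route
        depth = sum(1 for seg in path.strip("/").split("/") if seg)
        return (depth, path, METHOD_ORDER.get(method, len(METHOD_ORDER)))

    return sorted(routes, key=key)


routes = [
    ("GET", "/:id/nested/:nest_id/another"),
    ("POST", "/:id/nested"),
    ("GET", "/"),
    ("DELETE", "/:id"),
    ("PATCH", "/"),
]
for method, path in sort_routes(routes):
    print(method, path)
```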
Anyways, if this is something of interest I'd help to contribute and take a deeper look into it. | open | 2024-02-23T02:21:14Z | 2025-03-20T15:54:26Z | https://github.com/litestar-org/litestar/issues/3130 | [
"Enhancement"
] | ccrvlh | 8 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 215 | sub_loss_names should be _sub_loss_names in IntraPairVarianceLoss and MarginLoss | I believe overriding `sub_loss_names` makes the `embedding_regularizer` useless | closed | 2020-10-11T12:03:01Z | 2020-11-06T22:53:34Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/215 | [
"bug",
"fixed in dev branch"
] | KevinMusgrave | 0 |
microsoft/MMdnn | tensorflow | 839 | How to add a build file? | Platform: win10
Python version: 3.7 (Anaconda)
Source framework: Tensorflow 2.0
Running scripts:
I tried to run `bazel build -c opt //tensorflow/contrib/android:libtensorflow_inference.so`
and I got the following error `ERROR: Skipping '//tensorflow/contrib/android:libtensorflow_inference.so': no such package 'tensorflow/contrib/android': BUILD file not found in any of the following directories. Add a BUILD file to a directory to mark it as a package.
- C:/users/aacjp/tensorflow/tensorflow/contrib/android` I think this means that I'm supposed to create an empty file named `BUILD` and put it in tensorflow/contrib/android. I am going to try this, but I'm curious whether anyone else has had a similar problem and, if so, how they solved it. | closed | 2020-05-20T15:29:55Z | 2020-05-28T02:40:32Z | https://github.com/microsoft/MMdnn/issues/839 | [] | spe301 | 1 |
TencentARC/GFPGAN | deep-learning | 475 | Mintu | open | 2023-12-29T06:26:12Z | 2023-12-29T06:26:12Z | https://github.com/TencentARC/GFPGAN/issues/475 | [] | amanul5 | 0 | |
modelscope/data-juicer | data-visualization | 451 | [Feat]: Unified LLM Calling Management | ### Search before continuing
- [X] I have searched the Data-Juicer issues and found no similar feature requests.
### Description
Currently, some LLM-dependent operators support `vllm`, while others utilize Hugging Face or the OpenAI API for model calling. It is necessary to review and unify these calling capabilities across the codebase.
Furthermore, could we abstract these calling mechanisms, rather than repeating similar code? This would enable unified management and ease the addition of support for more inference engines, such as custom Post APIs, TensorRT, and ONNX.
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR for this feature?
- [X] Yes I'd like to help by submitting a PR! | open | 2024-10-16T03:23:38Z | 2024-10-16T08:47:01Z | https://github.com/modelscope/data-juicer/issues/451 | [
"enhancement"
] | drcege | 0 |
robusta-dev/robusta | automation | 1,146 | krr_scan pdf report does not include memory limits suggestions | **Describe the bug**
PDF report generated by krr_scan doesn't include memory limits suggestions.
**To Reproduce**
Steps to reproduce the behavior:
1. Create playbook with krr_scan action
2. Create slack sink
3. Trigger krr_scan
4. Check pdf report. It won't have memory limits suggestions
**Expected behavior**
A pdf report should include memory limits suggestions.
**Screenshots**

**Desktop (please complete the following information):**
N/A
**Smartphone (please complete the following information):**
N/A
**Additional context**
https://github.com/robusta-dev/robusta/blob/master/playbooks/robusta_playbooks/krr.py#L170-L177
[Slack discussion thread](https://robustacommunity.slack.com/archives/C054QUA4NHE/p1698997781635909) | closed | 2023-11-03T08:10:42Z | 2023-11-10T08:44:41Z | https://github.com/robusta-dev/robusta/issues/1146 | [
"bug",
"krr"
] | Disss | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 506 | Use --extend-ignore in linter rule | closed | 2022-07-28T10:06:41Z | 2022-09-03T19:43:48Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/506 | [
"enhancement"
] | KevinMusgrave | 0 | |
robotframework/robotframework | automation | 5,089 | Ignore empty log messages (including messages cleared by listeners) | Listeners using the v3 API can modify the logged message, but they cannot remove it altogether. Even if they set the message to an empty string, that empty string is still logged. This adds unnecessary noise to the log file and increases the size of output.xml.
This is easiest to implement by ignoring log messages having an empty string as the value. Not writing them to output.xml is easy and that automatically takes care of removing them from the log file as well. They should not be included in the result model created during execution either, but I actually believe that model doesn't get log messages at all at the moment.
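The filtering itself is trivial — a sketch of the intended behavior with hypothetical helper names (not Robot Framework's actual internals):

```python
def write_message(destination, message):
    # A listener that wants to drop a message sets it to "";
    # treat that as "do not log" instead of writing an empty entry,
    # so nothing reaches output.xml (and hence nothing reaches log.html).
    if message == "":
        return False
    destination.append(message)
    return True


output_xml = []
write_message(output_xml, "Keyword passed")
write_message(output_xml, "")  # cleared by a v3 listener -> ignored
print(output_xml)
```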
---
**UPDATE:** This change will obviously mean that all empty messages are ignored. Updated the title accordingly. | closed | 2024-03-22T21:23:42Z | 2024-08-21T21:25:21Z | https://github.com/robotframework/robotframework/issues/5089 | [
"enhancement",
"wont fix",
"priority: medium",
"effort: small"
] | pekkaklarck | 8 |
raphaelvallat/pingouin | pandas | 98 | Problems in results from mixed ANOVAs | I was performing a set of 11 mixed ANOVAs and in two of them a negative F was reported with a p = 1. I re-analysed this data in JASP and got different results with a regular F and a very high p (<1).
The error can be replicated using the files I've attached. Both are csv files. Each include the set of data I am referring to. All other results were identical.
[file_1.txt](https://github.com/raphaelvallat/pingouin/files/4720527/file_1.txt)
[file_2.txt](https://github.com/raphaelvallat/pingouin/files/4720528/file_2.txt)
| closed | 2020-06-03T01:36:42Z | 2020-06-03T15:05:13Z | https://github.com/raphaelvallat/pingouin/issues/98 | [
"invalid :triangular_flag_on_post:"
] | josea-rodas | 2 |
mars-project/mars | pandas | 2,713 | Support reading parquet from partitioned datasets for fastparquet engine |
Now we support reading parquet from partitioned datasets for `pyarrow` engine, the `fastparquet` engine for partitioned datasets is needed as well. | closed | 2022-02-15T06:28:17Z | 2022-02-20T14:26:42Z | https://github.com/mars-project/mars/issues/2713 | [
"type: feature",
"mod: dataframe"
] | hekaisheng | 0 |
tortoise/tortoise-orm | asyncio | 1,553 | How can I print SQL with register_tortoise() in FastAPI? | How can I print the SQL queries when using register_tortoise() in FastAPI? | open | 2024-02-01T03:07:27Z | 2024-05-07T10:28:44Z | https://github.com/tortoise/tortoise-orm/issues/1553 | [] | zyk-miao | 3 |
coqui-ai/TTS | deep-learning | 2,538 | [Bug] Pre-computing takes so long | ### Describe the bug
Hello,
I am running into some issues when trying to train FastSpeech2.
I'm using the following code:
```
import os
import sys
from trainer import Trainer, TrainerArgs
from TTS.config.shared_configs import BaseAudioConfig, BaseDatasetConfig
from TTS.tts.configs.fastspeech2_config import Fastspeech2Config
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.forward_tts import ForwardTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor
from TTS.utils.manage import ModelManager
output_path = os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"output"
)
print(sys.path)
os.environ["CUDA_VISIBLE_DEVICES"] = "8"
# init configs
dataset_config = BaseDatasetConfig(
formatter="ljspeech",
meta_file_train="metadata.csv",
# meta_file_attn_mask=os.path.join(output_path, "../LJSpeech-1.1/metadata_attn_mask.txt"),
path="/cs/labs/adiyoss/amitroth/tts_train_pipeline/LJSpeech-1.1",
)
audio_config = BaseAudioConfig(
sample_rate=22050,
do_trim_silence=True,
trim_db=60.0,
signal_norm=False,
mel_fmin=0.0,
mel_fmax=8000,
spec_gain=1.0,
log_func="np.log",
ref_level_db=20,
preemphasis=0.0,
)
config = Fastspeech2Config(
run_name="fastspeech2_ljspeech",
audio=audio_config,
batch_size=32,
eval_batch_size=16,
num_loader_workers=8,
num_eval_loader_workers=4,
compute_input_seq_cache=True,
compute_f0=True,
f0_cache_path=os.path.join(output_path, "f0_cache"),
compute_energy=True,
energy_cache_path=os.path.join(output_path, "energy_cache"),
run_eval=True,
test_delay_epochs=-1,
epochs=1000,
text_cleaner="english_cleaners",
use_phonemes=True,
phoneme_language="en-us",
phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
precompute_num_workers=4,
print_step=50,
print_eval=False,
mixed_precision=False,
max_seq_len=500000,
output_path=output_path,
datasets=[dataset_config],
)
# compute alignments
if not config.model_args.use_aligner:
manager = ModelManager()
model_path, config_path, _ = manager.download_model("tts_models/en/ljspeech/tacotron2-DCA")
# TODO: make compute_attention python callable
os.system(
f"python TTS/bin/compute_attention_masks.py --model_path {model_path} --config_path {config_path} --dataset ljspeech --dataset_metafile metadata.csv --data_path ./recipes/ljspeech/LJSpeech-1.1/ --use_cuda true"
)
# INITIALIZE THE AUDIO PROCESSOR
# Audio processor is used for feature extraction and audio I/O.
# It mainly serves to the dataloader and the training loggers.
ap = AudioProcessor.init_from_config(config)
# INITIALIZE THE TOKENIZER
# Tokenizer is used to convert text to sequences of token IDs.
# If characters are not defined in the config, default characters are passed to the config
tokenizer, config = TTSTokenizer.init_from_config(config)
# LOAD DATA SAMPLES
# Each sample is a list of ```[text, audio_file_path, speaker_name]```
# You can define your custom sample loader returning the list of samples.
# Or define your custom formatter and pass it to the `load_tts_samples`.
# Check `TTS.tts.datasets.load_tts_samples` for more details.
train_samples, eval_samples = load_tts_samples(
dataset_config,
eval_split=True,
eval_split_max_size=config.eval_split_max_size,
eval_split_size=config.eval_split_size,
)
# init the model
model = ForwardTTS(config, ap, tokenizer, speaker_manager=None)
# init the trainer and 🚀
trainer = Trainer(
TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples
)
trainer.fit()
```
and I'm running it on a machine with a GPU, 40 cores, and 40 GB of RAM, and still the run does not reach the training epochs in a reasonable time.
It only pre-computes the phonemes, f0s, and energies.
If you need additional information, I will upload it.
Thank you!
### To Reproduce
python3 train_fp2.py
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA RTX A5000"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "1.13.1+cu117",
"TTS": "0.12.0",
"numpy": "1.21.6"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "",
"python": "3.9.2",
"version": "#1 SMP Mon Jul 18 13:04:02 IDT 2022"
}
}
```
### Additional context
_No response_ | closed | 2023-04-18T16:16:52Z | 2025-01-17T12:10:29Z | https://github.com/coqui-ai/TTS/issues/2538 | [
"bug",
"wontfix"
] | MajoRoth | 3 |
microsoft/nni | tensorflow | 5,533 | RuntimeError: max(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument | **Describe the bug**:
- scheduler: AGP
- pruner: TaylorFO
- mode: global
- using evaluator (new api)
- torchvision resnet 18 model
- iterations: 10
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Python version: Python 3.9.12
- PyTorch version: 1.12.0 py3.9_cuda11.3_cudnn8.3.2_0 pytorch
- torchvision: 0.13.0 py39_cu113 pytorch
- Cpu or cuda version: cuda
**Reproduce the problem**
- Code|Example:
| closed | 2023-04-26T23:05:45Z | 2023-05-10T10:44:34Z | https://github.com/microsoft/nni/issues/5533 | [] | kriteshg | 9 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 892 | Is random cropping helpful for generative models? | I understand that random cropping is a technique often used in classification networks, and for good reasons. However, is it also helpful for generative models? If so, it would be great if you could point me to a paper that discusses the effect of random cropping on generative models in more detail.
Thanks | closed | 2020-01-05T04:40:37Z | 2020-01-07T20:29:31Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/892 | [] | ThisIsIsaac | 5 |
gradio-app/gradio | data-science | 10,473 | Streaming text output does not work | ### Describe the bug
I am trying to develop a streaming text chatbot, but I have met with an error:
```
app-1 | Traceback (most recent call last):
app-1 | File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 625, in process_events
app-1 | response = await route_utils.call_process_api(
app-1 | File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
app-1 | output = await app.get_blocks().process_api(
app-1 | File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2044, in process_api
app-1 | result = await self.call_function(
app-1 | File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1589, in call_function
app-1 | prediction = await fn(*processed_input)
app-1 | File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 850, in async_wrapper
app-1 | response = await f(*args, **kwargs)
app-1 | File "/usr/local/lib/python3.10/site-packages/gradio/chat_interface.py", line 869, in _submit_fn
app-1 | history = self._append_message_to_history(response, history, "assistant")
app-1 | File "/usr/local/lib/python3.10/site-packages/gradio/chat_interface.py", line 800, in _append_message_to_history
app-1 | message_dicts = self._message_as_message_dict(message, role)
app-1 | File "/usr/local/lib/python3.10/site-packages/gradio/chat_interface.py", line 838, in _message_as_message_dict
app-1 | for x in msg.get("files", []):
app-1 | AttributeError: 'generator' object has no attribute 'get'
```
I believe the issue occurs because Gradio tries to get the latest message from the streaming generator function but is unable to.
### Have you searched existing issues? ๐
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def generate_answer(message, history):
for chunk in chain.stream(
{"input": message['text']},
config={"configurable": {"session_id": cookies['session_id']}}
):
yield chunk
chatbot = gr.ChatInterface(
fn=generate_answer,
chatbot=gr.Chatbot(height=1000, render=False),
multimodal=True
)
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
gradio==5.13.1
```
### Severity
Blocking usage of gradio | closed | 2025-01-31T01:42:09Z | 2025-02-03T17:08:19Z | https://github.com/gradio-app/gradio/issues/10473 | [
"bug",
"needs repro"
] | jasonngap1 | 5 |
KaiyangZhou/deep-person-reid | computer-vision | 192 | Change training args when resuming from a checkpoint | Is the above supported, so that we can change training params (say, decreasing the lr) when resuming from a checkpoint? | closed | 2019-06-20T04:53:23Z | 2019-10-22T21:36:43Z | https://github.com/KaiyangZhou/deep-person-reid/issues/192 | [] | johnzhang1999 | 1 |
browser-use/browser-use | python | 287 | [Bug] Error executing action send_keys: Keyboard.press: Unknown key: "some text" | Can we improve send_keys so that it can also send normal text? This is useful, e.g. in Google Docs where the cursor is already active and we could send keys directly. | open | 2025-01-16T18:50:25Z | 2025-01-28T20:14:53Z | https://github.com/browser-use/browser-use/issues/287 | [] | MagMueller | 1 |
itamarst/eliot | numpy | 407 | New message writing API, designed for performance | The current abstraction is Action creates a Message, and the Message generates the dict to pass to the destination.
This adds a bunch of overhead (e.g. the Message has code to look up the current Action even when it was created by the Action!) and in general it's not clear that having a first-class Message abstraction is useful. E.g. only `Message.bind()` makes having a class at all relevant, and it's not clear anyone uses that.
So:
1. Prototype and benchmark adding `.log()` method as an alternative (there would also be a standalone logging function similar to what `Message.log` does where it doesn't have to be in Action context, for use in e.g. logging bridges or other code where there isn't necessarily current action).
2. If it's faster, implement and deprecate `Message` class. | closed | 2019-05-06T17:17:48Z | 2019-12-07T19:22:42Z | https://github.com/itamarst/eliot/issues/407 | [
"API enhancement"
] | itamarst | 2 |
horovod/horovod | tensorflow | 4,048 | A fatal error has been detected by the Java Runtime Environment | **Task Description:**
Training a simple classifier using Keras + Horovod Spark and getting the error below.
**Error:**
```
[3]<stderr>:Error in sys.excepthook:
[3]<stderr>:
[3]<stderr>:Original exception was:
[3]<stdout>:#
[3]<stdout>:# A fatal error has been detected by the Java Runtime Environment:
[3]<stdout>:#
[3]<stdout>:# SIGSEGV (0xb) at pc=0x000000000055d91b, pid=1592523, tid=1592523
[3]<stdout>:#
[3]<stdout>:# JRE version: OpenJDK Runtime Environment (11.0.4+11) (build 11.0.4+11)
[3]<stdout>:# Java VM: OpenJDK 64-Bit Server VM (11.0.4+11, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
[3]<stdout>:# Problematic frame:
[3]<stdout>:# C [python3.10+0x15d91b] PyObject_GC_UnTrack+0x1b
[3]<stdout>:#
[3]<stdout>:# Core dump will be written. Default location: Core dumps may be processed with "/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e" (or dumping to /dfs/12/yarn/nm/usercache/userid/appcache/application_1718265469723_0399/container_e733_1718265469723_0399_01_000002/core.1592523)
[3]<stdout>:#
[3]<stdout>:# An error report file with more information is saved as:
[3]<stdout>:# /dfs/12/yarn/nm/usercache/userid/appcache/application_1718265469723_0399/container_e733_1718265469723_0399_01_000002/hs_err_pid1592523.log
[3]<stdout>:#
[3]<stdout>:# If you would like to submit a bug report, please visit:
[3]<stdout>:# http://bugreport.java.com/bugreport/crash.jsp
[3]<stdout>:# The crash happened outside the Java Virtual Machine in native code.
[3]<stdout>:# See problematic frame for where to report the bug.
[3]<stdout>:#
```
**Environment:**
1. Framework: (Keras)
2. Framework version: 2.15.0
3. Horovod version: 0.28.1
4. MPI version: NA
5. CUDA version: NA
6. NCCL version: NA
7. Python version: 3.10.10
8. Spark / PySpark version: 3.3.3
9. Ray version:
10. OS and version: Oracle Linux 8.8
11. GCC version: 9.2
12. CMake version: 3.26.4
13. Compute: Using CPU only
**Code Snippet:**
```
from tensorflow import keras
import horovod.spark.keras as hvd
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from horovod.spark.common.store import HDFSStore
import os
spark = SparkSession.builder. \
    appName("HorovodTest"). \
    config("spark.dynamicAllocation.enabled", False). \
    config('spark.yarn.queue', 'queue_name'). \
    config('spark.executor.instances', '4').\
    config('spark.executor.cores', '8').\
    config('spark.executor.memory', '20g').\
    config('spark.driver.memory', '40g').\
    config('spark.shuffle.io.maxRetries', '10').\
    config('spark.shuffle.io.retryWait', '360s').\
    config('spark.executor.memoryOverhead', '10g').\
    config('spark.yarn.appMasterEnv.HOROVOD_GLOO_TIMEOUT_SECONDS', '3600').\
    config('spark.executorEnv.HOROVOD_GLOO_TIMEOUT_SECONDS', '3600').\
    config('spark.network.timeout', '18000s').master("yarn").getOrCreate()
file_path = "/user/userid/hvd_data/" # hdfs path
df = spark.read.parquet(file_path)
df = df.repartition(4)
feature_columns = df.columns[:-1] # all columns except the last one
label_column = df.columns[-1] # the last column
# Assemble the features into a single vector column
assembler = VectorAssembler(inputCols=feature_columns, outputCol="features")
df = assembler.transform(df)
# Define the Keras model
model = keras.Sequential([
    keras.layers.Dense(8, input_dim=len(feature_columns)),
    keras.layers.Activation('tanh'),
    keras.layers.Dense(1),
    keras.layers.Activation('sigmoid')
])
# Unscaled learning rate
optimizer = keras.optimizers.SGD(learning_rate=0.1)
loss = 'binary_crossentropy'
# Define the HDFS store for checkpointing
store = HDFSStore('/user/userid/tmp')
# Create Horovod Keras Estimator
keras_estimator = hvd.KerasEstimator(
    num_proc=4,
    store=store,
    model=model,
    optimizer=optimizer,
    loss=loss,
    feature_cols=['features'],
    label_cols=['Fraud'],
    batch_size=32,
    epochs=4,
    partitions_per_process=1
)
# Fit the model
keras_model = keras_estimator.fit(df)
print(">>>>>>>>>> Model Training Completed Successfully <<<<<<<<<<<<<<<<<<<<")
print(keras_model)
```
Note: It works fine if I run with num_proc=1, but I run into this error when num_proc>1.
| open | 2024-06-13T12:29:09Z | 2024-06-19T14:48:11Z | https://github.com/horovod/horovod/issues/4048 | [
"bug"
] | Parvez-Khan-1 | 3 |
praw-dev/praw | api | 1,717 | using inbox.stream() results in erroneus unread message count indicator | **Describe the bug**
when using inbox.stream() the streamed messages are NOT marked as read (correct), but unfortunately the unread messages counter (on reddit.com) is decreased by one.
**To Reproduce**
Steps to reproduce the behavior:
1. go to reddit.com/message/unread
2. observe the number of messages shown is equal to the count shown in the little letter indicator
3. use some code that uses inbox.stream()
4. reload the unread page from above
5. observe the unread message indicator is now 0, yet unread messages are shown on the page
**Expected behavior**
inbox.stream() should not change the unread messages count on reddit page
**Other info**
I made a post on r/bugs: https://www.reddit.com/r/bugs/comments/n9re1a/message_indicator_count_of_unread_messages_shows/ because I actually think this is a Reddit bug.
**System Info**
- OS: archlinux
- Python: Python 3.9.4
- PRAW Version: 7.2.0
| closed | 2021-05-13T09:27:51Z | 2021-05-16T14:47:49Z | https://github.com/praw-dev/praw/issues/1717 | [] | molecular | 10 |
man-group/arctic | pandas | 744 | Empty DataFrame blow up in get_info function | In arctic/arctic/chunkstore/chunkstore.py, the get_info function will throw key error value, if the data in the node of the library is empty | closed | 2019-04-04T16:30:48Z | 2019-04-11T00:19:17Z | https://github.com/man-group/arctic/issues/744 | [] | jasonlocal | 2 |
xinntao/Real-ESRGAN | pytorch | 824 | KeyError: "No object named 'r2net' found in 'model' registry!" | ๆๆณๅบไบ่ฏฅๆกๆถๆทปๅ ไธไธชCNN็่ถ
ๅ็ฝ็ป๏ผ่ฎญ็ป่ชๅทฑ็่ถ
ๅๆจกๅ๏ผๆๆrealesrganๆไปถๅคนๅฎๆดๆท่ดไธไปฝ๏ผๆจกๅๆขๆ่ชๅทฑ็๏ผๆณจๅๅจๅจๅฏนๅบ็ๆจกๅ็ปๆไธไน้ฝๆทปๅ ไบ๏ผไธบไปไนๆ็คบๆพไธๅฐๅข๏ผ่ฏท้ฎๆไน่งฃๅณๅข๏ผ่ฐข่ฐข๏ผ | open | 2024-07-11T01:31:08Z | 2024-07-11T01:33:28Z | https://github.com/xinntao/Real-ESRGAN/issues/824 | [] | 1343464520 | 1 |
JaidedAI/EasyOCR | machine-learning | 1,058 | Can I get the result of negative number? | Hello, I hope you are doing well.
In my code, I cannot get results for negative numbers.
I didn't train a custom model.
```
import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('directory')
```
Other characters are detected well (English letters and digits), but the negative symbol "-" alone is not detected.
How can I solve this?
Thank you.
| open | 2023-06-20T08:27:59Z | 2024-11-26T06:46:47Z | https://github.com/JaidedAI/EasyOCR/issues/1058 | [] | chungminho1 | 1 |
autogluon/autogluon | scikit-learn | 4,553 | [Tabular] Add Support for Loading Excel Files | We might want to add support for excel format here: https://github.com/autogluon/autogluon/blob/a2ad006bf12f9cde018d17c17eade192e6c69859/common/src/autogluon/common/loaders/load_pd.py#L20
For more details, we may discuss offline. @Innixma @AnirudhDagar
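One possible shape for the dispatch, as a sketch — the mapping below is an assumption about what `load_pd` could accept, not AutoGluon's current API:

```python
from pathlib import Path

# Hypothetical extension -> pandas reader-name table.
_READERS = {
    ".csv": "read_csv",
    ".parquet": "read_parquet",
    ".xlsx": "read_excel",
    ".xls": "read_excel",
}

def pick_pandas_reader(path):
    """Return the pandas reader name for a file path, by extension."""
    suffix = Path(path).suffix.lower()
    if suffix not in _READERS:
        raise ValueError(f"unsupported file extension: {suffix!r}")
    return _READERS[suffix]
```

Excel support would then mostly be a matter of adding the `.xlsx`/`.xls` entries plus a dependency check for an Excel engine such as `openpyxl`.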
| open | 2024-10-17T20:29:18Z | 2024-11-11T15:59:42Z | https://github.com/autogluon/autogluon/issues/4553 | [
"enhancement",
"module: tabular",
"module: common"
] | FANGAreNotGnu | 3 |
hyperspy/hyperspy | data-visualization | 2,712 | Improve RectangularROI error message | When initializing a `RectangularROI` using bad values for the `top`, `bottom`, `right` or `left` parameters, the error message is not very helpful. For example, when `top` and `bottom` have the same value:
```python
import hyperspy.api as hs
roi = hs.roi.RectangularROI(left=1.0, right=1.5, top=2.0, bottom=2.0)
```
The error message is very long (not shown here), and not very intuitive:
```python
hyperspy/hyperspy/roi.py in _bottom_changed(self, old, new)
923 def _bottom_changed(self, old, new):
924 if self._bounds_check and \
--> 925 self.top is not t.Undefined and new <= self.top:
926 self.bottom = old
927 else:
TypeError: '<=' not supported between instances of '_Undefined' and 'float'
```
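An early check in `__init__` could fail fast with a clearer message; an illustrative sketch (not HyperSpy's actual implementation):

```python
class RectangularROI:
    def __init__(self, left, right, top, bottom):
        # Validate bounds up front instead of failing deep in traits code.
        if left >= right:
            raise ValueError(
                f"left ({left}) must be smaller than right ({right})")
        if top >= bottom:
            raise ValueError(
                f"top ({top}) must be smaller than bottom ({bottom})")
        self.left, self.right = left, right
        self.top, self.bottom = top, bottom
```

With a check like this, the `top=2.0, bottom=2.0` example above would raise a one-line `ValueError` instead of the traits traceback.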
There should be a check early in the initialization, to see if the `top`, `bottom`, `right` and `left` values are good. If they aren't, a more intuitive error message should be shown. | open | 2021-04-19T15:26:51Z | 2021-04-21T19:22:22Z | https://github.com/hyperspy/hyperspy/issues/2712 | [
"type: bug",
"good first PR",
"release: next patch"
] | magnunor | 0 |
MagicStack/asyncpg | asyncio | 250 | incorrect PostgreSQL version, failed to connect to PG10/Debian | * **asyncpg version**: 0.14.0
* **PostgreSQL version**: 10.1
* **Python version**: 3.6.4
* **Platform**: Linux Debian sid
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: yes
* **If you built asyncpg locally, which version of Cython did you use?**:
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: not related to event loop
I experience an issue when trying to connect to my local postgresql cluster. The error is :
```
File "/home/ludo/.virtualenvs/pyvt/lib/python3.6/site-packages/asyncpg/serverversion.py", line 49, in <listcomp>
versions = [int(p) for p in parts][:3]
ValueError: invalid literal for int() with base 10: '1 (Debian 10'
```
PostgreSQL version given by the server itself :
```
[ludo@postgres] # select version();
version
---------------------------------------------------------------------------------------------------------
PostgreSQL 10.1 (Debian 10.1-3) on x86_64-pc-linux-gnu, compiled by gcc (Debian 7.2.0-19) 7.2.0, 64-bit
(1 ligne)
```
PostgreSQL was installed from the main debian repositories :
```
apt-cache policy postgresql-10
postgresql-10:
Installรฉย : 10.1-3
Candidatย : 10.1-3
Table de versionย :
*** 10.1-3 500
500 http://httpredir.debian.org/debian sid/main amd64 Packages
100 /var/lib/dpkg/status
```
I looked into the code but failed to find where the version is fetched from PG.
The `version_string` received in the `split_server_version_string` function is equal to `"10.1 (Debian 10.1-3)"` but it should be only `"10.1"` according to the rest of the function.
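A tolerant parse would take only the leading dotted-number prefix before splitting; a sketch (hypothetical helper, not asyncpg's actual code):

```python
import re

def parse_server_version(version_string):
    # "10.1 (Debian 10.1-3)" -> (10, 1); "9.6.8" -> (9, 6, 8)
    match = re.match(r"(\d+(?:\.\d+)*)", version_string.strip())
    if match is None:
        raise ValueError(f"unparsable version: {version_string!r}")
    return tuple(int(p) for p in match.group(1).split("."))[:3]
```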
Any ideas to fix it? | closed | 2018-01-31T17:13:54Z | 2018-02-14T21:04:52Z | https://github.com/MagicStack/asyncpg/issues/250 | [] | ldgeo | 4 |
bigscience-workshop/petals | nlp | 541 | Prepull model data on private swarm | Hello, is it possible to pre-pull all data for a model on a private swarm node? To reduce download traffic, I would like to build pre-populated images.
Regards,
Wolfgang | closed | 2023-11-10T16:47:26Z | 2023-11-15T15:30:09Z | https://github.com/bigscience-workshop/petals/issues/541 | [] | wolfganghuse | 5 |
pbugnion/gmaps | jupyter | 343 | TraitError when creating a WeightedHeatmap with sample code | Hi,
I'm trying to create a weighted heatmap, and Python is throwing an error. I've actually replicated the error using the sample code [here](https://jupyter-gmaps.readthedocs.io/en/v0.3.3/gmaps.html). (I have a valid API key that otherwise works for pulling data from Google). I'm running the code in Jupyter notebook (v. 6.0.3) using Python 3.7.6
```
import gmaps
import gmaps.datasets
gmaps.configure(api_key="AI...") # I've filled this with my own key
earthquake_data = gmaps.datasets.load_dataset("earthquakes")
print(earthquake_data[:4]) # first four rows
m = gmaps.Map()
m.add_layer(gmaps.WeightedHeatmap(data=earthquake_data))
m
```
This results in the following output:
```
[(65.1933, -149.0725, 1.7), (38.791832, -122.7808304, 2.1), (38.8180008, -122.79216770000001, 0.48), (33.6016667, -116.72766670000001, 0.78)]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/opt/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py in get(self, obj, cls)
527 try:
--> 528 value = obj._trait_values[self.name]
529 except KeyError:
KeyError: 'locations'
During handling of the above exception, another exception occurred:
TraitError Traceback (most recent call last)
<ipython-input-23-46911e438658> in <module>
9
10 m = gmaps.Map()
---> 11 m.add_layer(gmaps.WeightedHeatmap(data=earthquake_data))
12 m
~/opt/anaconda3/lib/python3.7/site-packages/ipywidgets/widgets/widget.py in __init__(self, **kwargs)
413
414 Widget._call_widget_constructed(self)
--> 415 self.open()
416
417 def __del__(self):
~/opt/anaconda3/lib/python3.7/site-packages/ipywidgets/widgets/widget.py in open(self)
426 """Open a comm to the frontend if one isn't already open."""
427 if self.comm is None:
--> 428 state, buffer_paths, buffers = _remove_buffers(self.get_state())
429
430 args = dict(target_name='jupyter.widget',
~/opt/anaconda3/lib/python3.7/site-packages/ipywidgets/widgets/widget.py in get_state(self, key, drop_defaults)
516 for k in keys:
517 to_json = self.trait_metadata(k, 'to_json', self._trait_to_json)
--> 518 value = to_json(getattr(self, k), self)
519 if not PY3 and isinstance(traits[k], Bytes) and isinstance(value, bytes):
520 value = memoryview(value)
~/opt/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py in __get__(self, obj, cls)
554 return self
555 else:
--> 556 return self.get(obj, cls)
557
558 def set(self, obj, value):
~/opt/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py in get(self, obj, cls)
533 raise TraitError("No default value found for %s trait of %r"
534 % (self.name, obj))
--> 535 value = self._validate(obj, dynamic_default())
536 obj._trait_values[self.name] = value
537 return value
~/opt/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py in _validate(self, obj, value)
589 return value
590 if hasattr(self, 'validate'):
--> 591 value = self.validate(obj, value)
592 if obj._cross_validation_lock is False:
593 value = self._cross_validate(obj, value)
~/opt/anaconda3/lib/python3.7/site-packages/gmaps/geotraitlets.py in validate(self, obj, value)
27 _validate_latitude(latitude)
28 _validate_longitude(longitude)
---> 29 return super(LocationArray, self).validate(obj, locations_as_list)
30
31
~/opt/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py in validate(self, obj, value)
2322
2323 def validate(self, obj, value):
-> 2324 value = super(List, self).validate(obj, value)
2325 value = self.validate_elements(obj, value)
2326 return value
~/opt/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py in validate(self, obj, value)
2240 return value
2241
-> 2242 value = self.validate_elements(obj, value)
2243
2244 return value
~/opt/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py in validate_elements(self, obj, value)
2317 length = len(value)
2318 if length < self._minlen or length > self._maxlen:
-> 2319 self.length_error(obj, value)
2320
2321 return super(List, self).validate_elements(obj, value)
~/opt/anaconda3/lib/python3.7/site-packages/traitlets/traitlets.py in length_error(self, obj, value)
2312 e = "The '%s' trait of %s instance must be of length %i <= L <= %i, but a value of %s was specified." \
2313 % (self.name, class_of(obj), self._minlen, self._maxlen, value)
-> 2314 raise TraitError(e)
2315
2316 def validate_elements(self, obj, value):
TraitError: The 'locations' trait of a WeightedHeatmap instance must be of length 1 <= L <= 9223372036854775807, but a value of [] was specified.
```
Welcome any thoughts and happy to provide additional detail. | open | 2020-06-27T22:46:40Z | 2021-12-28T17:11:49Z | https://github.com/pbugnion/gmaps/issues/343 | [] | gerzhoy | 2 |
cleanlab/cleanlab | data-science | 882 | loosen the docs requirements | In [docs/requirements.txt](https://github.com/cleanlab/cleanlab/blob/master/docs/requirements.txt), we have many packages pinned to only a specific version.
Goal: figure out as many of these packages where we can replace the `==` with `>=` to ensure we're running the docs build with the latest versions in our CI.
Why? Because the docs build auto-runs all of our [tutorial notebooks](https://github.com/cleanlab/cleanlab/tree/master/docs/source/tutorials) and we'd like to automatically make sure the tutorials run with the latest versions of the dependency packages (and be aware when a dependency has released a new version that breaks things).
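The mechanical part of the edit could be scripted; a rough sketch (the `keep_pinned` escape hatch is an assumption — some packages may genuinely need exact pins):

```python
import re

def loosen_pins(requirements_text, keep_pinned=()):
    """Rewrite '==' pins to '>=', except for packages listed in keep_pinned."""
    out = []
    for line in requirements_text.splitlines():
        name = re.split(r"[=<>!~\[]", line.strip(), maxsplit=1)[0]
        if "==" in line and name not in keep_pinned:
            line = line.replace("==", ">=", 1)
        out.append(line)
    return "\n".join(out)
```

The real work is then re-running the docs build per loosened dependency to see which ones actually tolerate their latest versions.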
If working on this, you can feel free to only focus on a specific dependency and test the docs build if you replace `==` with `>=` for that one dependency. | open | 2023-11-02T21:08:42Z | 2023-11-02T21:08:42Z | https://github.com/cleanlab/cleanlab/issues/882 | [
"enhancement",
"good first issue",
"dependencies",
"help-wanted"
] | jwmueller | 0 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 427 | max_seq_length | Is max_seq_length the input length or the output length?
| closed | 2023-11-29T06:45:40Z | 2023-12-21T23:49:03Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/427 | [
"stale"
] | Go4miii | 5 |
graphql-python/graphene | graphql | 1,485 | Thanks for Thanks for GraphQL Python Roadmap | Hello, thanks for the GraphQL Python Roadmap.
I see that this project is the most alive in https://github.com/graphql-python
I use python as GraphQL client and server in my projects and
I created the package for pydantic data-model generation from some GraphQL schema and
for queries and mutations generation: https://github.com/denisart/graphql2python
Docs: https://denisart.github.io/graphql2python/
Queries generation examples: https://github.com/denisart/graphql-query
Data-model generation examples: https://denisart.github.io/graphql2python/model.html
Using with gql: https://denisart.github.io/graphql2python/gql.html
So far this package has only a few settings; more will be added in future releases. For example:
- custom templates for render of data-model (your pydantic style or other packages (https://github.com/lidatong/dataclasses-json, ...));
- custom input for output file;
- load GraphQL as like in https://the-guild.dev/graphql/codegen/docs/config-reference/schema-field;
- custom base class;
- and more;
I hope that graphql2python can be useful to you and the python GraphQL community.
| closed | 2022-12-12T16:53:58Z | 2024-06-22T10:13:38Z | https://github.com/graphql-python/graphene/issues/1485 | [
"โจ enhancement"
] | denisart | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 502 | Error | ModuleNotFoundError: No module named 'tensorflow.contrib'
What does this mean?
miguelgrinberg/Flask-SocketIO | flask | 1,517 | about start_background_task | **How many start_background_task calls can run at the same time?**
I am a noob when it comes to server config and running things in parallel.
This is my server config, running on Heroku.
_Procfile_
`web: uwsgi uwsgi.ini`
_Uwsgi_
```
http-socket = :$(PORT)
master = true
processes = 4
die-on-term = true
wsgi-file = run.py
callable = app
memory-report = false
gevent-monkey-patch = true
gevent-early-monkey-patch = true
gevent = 1000
http-websockets = true
```
I need to return a response within 30 seconds, so this is how I am running the long task, then manually sending the necessary data to the API from the background task.
```
@app.route('/api')
def hello():
    socketio.start_background_task(about_five_minute_task)
    return f'demand response in 30 second'
```
**Everything works**. I just want to know how many background tasks I can keep running or start at the same time. Also, will they all keep running, or will some wait for other background tasks to finish?
**I am thinking of increasing my usage of start_background_task, so I'm afraid there may be some limit that I will hit.**
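To make the question concrete, here is the mental model I'm using — plain threads standing in for socketio's background tasks, so this only illustrates the pattern, not gevent's actual limits:

```python
import threading
import time

def long_task(results, i):
    time.sleep(0.01)  # stand-in for the ~5 minute job
    results.append(i)

def start_background_task(target, *args):
    t = threading.Thread(target=target, args=args, daemon=True)
    t.start()
    return t

results = []
threads = [start_background_task(long_task, results, i) for i in range(20)]
for t in threads:
    t.join()
# All 20 tasks ran concurrently instead of queueing behind each other.
```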
| closed | 2021-04-08T14:21:24Z | 2021-04-09T06:49:47Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1517 | [
"question"
] | 4n1qz5skwv | 2 |
liangliangyy/DjangoBlog | django | 255 | ๅฆไฝๆทปๅ ๅพ็ๅจๅๅฎขไธญ๏ผ | ๅฆไฝๆทปๅ ๅพ็ๅจๅๅฎขไธญ๏ผ | closed | 2019-04-29T07:20:25Z | 2019-05-06T07:20:31Z | https://github.com/liangliangyy/DjangoBlog/issues/255 | [] | GeorgeJunj | 1 |
lepture/authlib | django | 198 | cannot run authlib | Sorry for the stupid question (i am not used to python). How can I run the programm after the installation? Witch file to run with python command? | closed | 2020-02-25T22:03:34Z | 2020-02-26T00:06:40Z | https://github.com/lepture/authlib/issues/198 | [] | LionelHoudelier | 0 |
biolab/orange3 | data-visualization | 6,693 | Orange does not launch | Orange does not launch.
Orange has been added to a VMware App Volume it does not launch, no error, nothing.
Any suggestions for troubleshooting?
<!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
**What's your environment?**
<!-- To find your Orange version, see "Help โ About โ Version" or `Orange.version.full_version` in code -->
- Operating system:
- Orange version:
- How you installed Orange:
| open | 2024-01-04T19:47:33Z | 2024-01-05T17:31:22Z | https://github.com/biolab/orange3/issues/6693 | [
"bug report"
] | pbastiaans | 9 |
FactoryBoy/factory_boy | sqlalchemy | 342 | Django ArrayField property referencing other instance list | While writing unit tests for a Django app I'm building I noticed that if I followed these steps: 1) create an instance of a model through a factory using the default values; 2) replace one of the inner lists in a 2D list property; 3) create another instance with the same conditions as step 1; the second instance's 2D list wouldn't be the default value specified in the factory but the modified version of the first instance. Example:
```python
>>> p1 = SudokuPuzzleFactory.create()
>>> p1.unsolved_puzzle[0]
[0, 0, 0, 7, 0, 0, 4, 2, 8]
>>> lst = [0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> p1.unsolved_puzzle[0] = list(lst)
>>> p1.unsolved_puzzle[0]
[0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> p2 = SudokuPuzzleFactory.create()
>>> p2.unsolved_puzzle[0]
[0, 0, 0, 0, 0, 0, 0, 0, 0]
```
Is this expected behavior? Or am I missing something? If it is expected behavior, how do I override it? I've already tried overriding the _create method and verified that this does not happen when I create the 2 instances directly through the model manager's create method. Model and factory source code included below:
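Side note on what I suspect: this looks like the usual shared-mutable-class-attribute behavior in Python — the declaration is evaluated once at class-definition time, so every build shares one nested list. A stdlib sketch of the difference, with `factory.LazyFunction(lambda: ...)` presumably being the analogous per-build fix in factory_boy (an assumption worth verifying):

```python
import copy

EASY = [[0, 0, 0], [7, 0, 0]]

class EagerFactory:
    # Evaluated once when the class body runs: shared by every "build".
    default = copy.deepcopy(EASY)

class LazyFactory:
    # Re-evaluated per build, like a lazy factory declaration would be.
    @staticmethod
    def build():
        return copy.deepcopy(EASY)

a = EagerFactory.default
a[0] = [1, 1, 1]
assert EagerFactory.default[0] == [1, 1, 1]  # mutation leaks to later builds

b = LazyFactory.build()
b[0] = [1, 1, 1]
assert LazyFactory.build()[0] == [0, 0, 0]   # fresh copy each time
```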
Model
```python
class SudokuPuzzle(models.Model):
    unsolved_puzzle = ArrayField(
        base_field=ArrayField(
            base_field=models.IntegerField(), size=9),
        size=9)
    ... other properties ...
```
Factory
```python
EASY_PUZZLE = (
    (0, 0, 0, 7, 0, 0, 4, 2, 8),
    ... other tuples ...
)

class SudokuPuzzleFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.SudokuPuzzle

    unsolved_puzzle = utils.clone_list(list(EASY_PUZZLE))
    solved_puzzle = factory.LazyAttribute(
        lambda obj: utils.clone_list(obj.unsolved_puzzle))
"Q&A",
"Doc"
] | aledejesus | 3 |
modin-project/modin | pandas | 6,865 | BUG: Issue #6864 Concat and sort_value slow, to_feather has Memory bug | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
import modin.pandas as pd
import os
import ray
from tqdm import tqdm
def merge_feather_files(folder_path, suffix):
    # Collect all file names matching the suffix
    files = [f for f in os.listdir(folder_path) if f.endswith(suffix + '.feather')]
    # Read all files and concatenate them
    df_list = [pd.read_feather(os.path.join(folder_path, file)) for file in tqdm(files)]
    merged_df = pd.concat(df_list)
    # Sort by date
    merged_df.sort_values(by='date', inplace=True)
    merged_df.to_feather('factors_date/factors' + suffix + '.feather')

# Usage example
folder_path = 'factors_batch'  # set the folder path
for i in range(1, 25):
    merged_df = merge_feather_files(folder_path, f'_{i}')
```
### Issue Description
At first, I did not think it was a bug, so I reported it as issue #6864. Then I used `import pandas as pd` instead and it worked fine, so it must be a Modin problem. BTW, Modin's read_feather is also slower than pandas's.
I want to concatenate 5100 dataframes into one big dataframe. The memory usage of these 5100 dataframes is about 160G. However, the processing speed is really slow: I have been waiting for about 40 minutes, but it still hasn't finished. My machine has 128 cores, and I am certain that they are not all being utilized.
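For reference, the memory-friendly shape I'd expect for the concat-then-sort step is a k-way merge of batches that are already sorted; a stdlib sketch of the idea (plain sorted lists standing in for per-file DataFrames):

```python
import heapq

def merge_sorted_batches(batches):
    # Each batch is already sorted by the key; heapq.merge streams a
    # k-way merge instead of materialising one giant unsorted concat.
    return list(heapq.merge(*batches))

batches = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
merged = merge_sorted_batches(batches)
# merged == [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

I mention it only to describe the access pattern I expected — not as a claim about how Modin implements `concat`/`sort_values`.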
### Expected Behavior
```
100%|██████████| 5096/5096 [31:45<00:00, 2.67it/s]
100%|██████████| 5096/5096 [33:31<00:00, 2.53it/s]
100%|██████████| 5096/5096 [33:13<00:00, 2.56it/s]
100%|██████████| 5096/5096 [33:09<00:00, 2.56it/s]
100%|██████████| 5096/5096 [33:08<00:00, 2.56it/s]
```
It should work fine and return this.
### Error Logs
<details>
```python-traceback
UserWarning: `DataFrame.to_feather` is not currently supported by PandasOnRay, defaulting to pandas implementation.
Please refer to https://modin.readthedocs.io/en/stable/supported_apis/defaulting_to_pandas.html for explanation.
(raylet) [2024-01-18 12:25:56,935 E 1383634 1383634] (raylet) node_manager.cc:3022: 6 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
(raylet) [2024-01-18 12:26:59,006 E 1383634 1383634] (raylet) node_manager.cc:3022: 8 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
(raylet) [2024-01-18 12:28:01,160 E 1383634 1383634] (raylet) node_manager.cc:3022: 8 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
(raylet) [2024-01-18 12:29:07,013 E 1383634 1383634] (raylet) node_manager.cc:3022: 5 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
```
</details>
### Installed Versions
<details>
UserWarning: Setuptools is replacing distutils.
INSTALLED VERSIONS
------------------
commit : 47a9a4a294c75cd7b67f0fd7f95f846ed53fbafa
python : 3.10.13.final.0
python-bits : 64
OS : Linux
OS-release : 6.2.0-39-generic
Version : #40~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 16 10:53:04 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : zh_CN.UTF-8
LOCALE : zh_CN.UTF-8
Modin dependencies
------------------
modin : 0.26.0
ray : 2.9.0
dask : 2024.1.0
distributed : 2024.1.0
hdk : None
pandas dependencies
-------------------
pandas : 2.1.4
numpy : 1.26.2
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : 1.4.6
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.20.0
pandas_datareader : None
bs4 : 4.12.2
bottleneck : None
dataframe-api-compat: None
fastparquet : None
fsspec : 2023.12.2
gcsfs : None
matplotlib : 3.8.2
numba : 0.58.1
numexpr : 2.8.8
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 14.0.2
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
sqlalchemy : 1.4.50
tables : 3.9.2
tabulate : 0.9.0
xarray : 2023.12.0
xlrd : 1.2.0
zstandard : 0.22.0
tzdata : 2023.4
qtpy : None
pyqt5 : None
</details>
| open | 2024-01-18T12:40:20Z | 2024-01-21T11:14:09Z | https://github.com/modin-project/modin/issues/6865 | [
"bug ๐ฆ",
"Memory ๐พ",
"External",
"P3"
] | river7816 | 4 |
wiseodd/generative-models | tensorflow | 8 | ValueError: "Only call `sigmoid_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...) in vanilla_gan gan_tensorflow.py | closed | 2017-03-13T01:36:15Z | 2017-03-13T03:01:19Z | https://github.com/wiseodd/generative-models/issues/8 | [] | dleybz | 1 | |
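This error comes from a well-documented TF ≥1.0 API change: `labels` and `logits` must be passed by keyword. A self-contained sketch of both the guard and the fix (the `_sentinel` first argument mirrors TF's approach to rejecting positional calls; the math is the standard numerically stable formulation, and this is not TensorFlow's actual code):

```python
import math

def sigmoid_cross_entropy_with_logits(_sentinel=None, labels=None, logits=None):
    # Mimic TF 1.x's guard: the first positional slot is a sentinel, so
    # positional calls trip the error from the issue title.
    if _sentinel is not None:
        raise ValueError(
            "Only call `sigmoid_cross_entropy_with_logits` with named "
            "arguments (labels=..., logits=..., ...)")
    # numerically stable elementwise sigmoid cross-entropy
    return [max(x, 0.0) - x * z + math.log(1.0 + math.exp(-abs(x)))
            for z, x in zip(labels, logits)]

try:
    sigmoid_cross_entropy_with_logits([1.0], [0.5])   # old positional style
except ValueError as e:
    print("rejected:", e)

print(sigmoid_cross_entropy_with_logits(labels=[1.0], logits=[0.5]))
```

The fix in `gan_tensorflow.py` is therefore to rewrite each call as `tf.nn.sigmoid_cross_entropy_with_logits(labels=..., logits=...)`.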
joke2k/django-environ | django | 519 | Double quotes no longer work for including the `#` in a string | After installing a new version of `django-environ` it appears that double quotes no longer work for including `#` as a character in a string. This appears to be a breaking change in how django-environ parses the env file. As far as I can tell it treats everything after # as a comment, and considers the string to just be the starting double quote `"`
For example in version `0.10.0` this worked fine:
```python
>>> import environ, io
>>> environ.__version__
'0.10.0'
>>> env = environ.Env()
>>> env.read_env(io.StringIO('COLOR="#fff"'))
>>> env.str("COLOR")
'#fff'
```
However in the latest version `0.11.2` this no longer works:
```python
>>> import environ, io
>>> environ.__version__
'0.11.2'
>>> env = environ.Env()
>>> env.read_env(io.StringIO('COLOR="#fff"'))
>>> env.str("COLOR")
'"'
```
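The reported behaviour is consistent with a parser that strips comments before honouring quotes. A minimal stdlib sketch (not django-environ's actual code) reproduces the symptom:

```python
def naive_parse(line):
    # Strip everything after '#' first, THEN split on '='. Quotes are
    # never consulted, so '#' inside a double-quoted value is lost.
    line = line.split("#", 1)[0]
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

print(naive_parse('COLOR="#fff"'))   # yields ('COLOR', '"'), matching the bug
```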
As a workaround, single quotes do still work:
```python
>>> import environ, io
>>> env = environ.Env()
>>> env.read_env(io.StringIO("COLOR='#fff'"))
>>> env.str("COLOR")
'#fff'
```
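For comparison, a quote-aware tokenizer from the stdlib keeps the `#` when it sits inside quotes — a sketch of the behaviour one might expect from the parser (again, not django-environ's implementation):

```python
import shlex

def quote_aware_parse(line):
    # shlex honours quoting: '#' inside quotes is not a comment start.
    lexer = shlex.shlex(line, posix=True)
    lexer.whitespace_split = True
    token = next(iter(lexer))
    key, _, value = token.partition("=")
    return key, value

print(quote_aware_parse('COLOR="#fff"'))   # ('COLOR', '#fff')
```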
---
If this is an intentional change, I suggest adding a note about it in the [changelog](https://django-environ.readthedocs.io/en/latest/changelog.html#v0-11-0-30-august-2023). I couldn't find anything about it at this point.
There do seem to be some related changes in #475, so maybe that's where this change occurred. | open | 2024-02-09T11:59:46Z | 2024-03-11T22:00:37Z | https://github.com/joke2k/django-environ/issues/519 | [] | alexkiro | 5 |
pallets-eco/flask-sqlalchemy | flask | 1,102 | model.query isn't available when using custom declarative base | Discussed in https://github.com/pallets-eco/flask-sqlalchemy/discussions/1101 by @jrast.
When using a custom declarative base, the extension doesn't set `model.query`. | closed | 2022-09-26T15:17:21Z | 2022-10-11T00:22:07Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1102 | [] | davidism | 0 |
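An illustrative sketch of the mechanism (not Flask-SQLAlchemy's real internals): the extension attaches a `query` helper to the base class it builds itself, so a user-supplied declarative base never receives it and `Model.query` is unavailable.

```python
class QueryProperty:
    # Stand-in for the descriptor the extension installs on its own base.
    def __get__(self, obj, cls):
        return f"query over {cls.__name__}"

class ExtensionBase:       # base the extension creates and instruments
    query = QueryProperty()

class CustomBase:          # user-supplied base, never instrumented
    pass

class User(ExtensionBase):
    pass

class Post(CustomBase):
    pass

print(User.query)               # works: the instrumented base provides it
print(hasattr(Post, "query"))   # False: the custom base was skipped
```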