| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | deep-learning | 36,594 | Lora_B weight becomes 0 when using AutoModel | ### System Info
transformers version: 4.49.0
peft version: 0.14.0
### Who can help?
_No response_
### Information
When using `AutoModel` to load the base model and passing it to `PeftModel`, the LoRA weights become 0. This is not a problem when using `AutoModelForCausalLM`.
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModel, AutoModelForCausalLM
from peft import PeftModel
base_model_id = "meta-llama/Llama-3.2-1B"
adapter_id = "makcedward/Llama-3.2-1B-Instruct-LoRA-Adapter"
auto_model = PeftModel.from_pretrained(
    AutoModel.from_pretrained(
        base_model_id,
    ),
    adapter_id
)
auto_causal_model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(
        base_model_id,
    ),
    adapter_id
)
print("Auto Model")
print(auto_model.base_model.model.layers[0].self_attn.q_proj.lora_A.default.weight)
# tensor([[-0.0168, 0.0056, -0.0009, ..., 0.0149, -0.0161, -0.0064],
print(auto_model.base_model.model.layers[0].self_attn.q_proj.lora_B.default.weight)
# tensor([[0., 0., 0., ..., 0., 0., 0.],
print("AutoModelForCausalLM")
print(auto_causal_model.base_model.model.model.layers[0].self_attn.q_proj.lora_A.default.weight)
# tensor([[ 1.5867e-02, 2.7307e-02, -1.8503e-02, ..., -1.2035e-02,
print(auto_causal_model.base_model.model.model.layers[0].self_attn.q_proj.lora_B.default.weight)
# tensor([[-7.1123e-04, -4.3834e-03, -1.7415e-03, ..., 4.3514e-03,
```
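One plausible explanation for the symptom (an assumption on my part, not confirmed by the maintainers) is a state-dict key-prefix mismatch: the adapter was saved against a causal-LM wrapper whose submodules live under a `model.` prefix, so when the bare `AutoModel` exposes different key names, the `lora_B` tensors are never copied in and keep their zero initialization. A toy, PEFT-free sketch of that failure mode:

```python
# Toy sketch of a prefix mismatch between adapter keys and base-model keys.
# Names are illustrative only -- this is not PEFT's actual loading code.

def load_adapter(base_keys, adapter_state):
    """Copy adapter tensors whose target key exists in the base model."""
    loaded, missed = {}, []
    for key, tensor in adapter_state.items():
        if key in base_keys:
            loaded[key] = tensor
        else:
            missed.append(key)  # silently skipped -> tensor keeps its init value
    return loaded, missed

# Adapter trained on a causal-LM wrapper: modules live under "model."
adapter = {"model.layers.0.self_attn.q_proj.lora_B.weight": [0.1, 0.2]}

# AutoModelForCausalLM exposes the same prefix, so the weight is found.
causal_keys = {"model.layers.0.self_attn.q_proj.lora_B.weight"}
assert load_adapter(causal_keys, adapter)[1] == []

# AutoModel drops the wrapper, so the key never matches; lora_B stays at its
# zero init, which would match an all-zero printed tensor.
bare_keys = {"layers.0.self_attn.q_proj.lora_B.weight"}
assert load_adapter(bare_keys, adapter)[1] != []
```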
### Expected behavior
Able to load LoRA weights by using AutoModel | closed | 2025-03-06T19:36:11Z | 2025-03-06T19:45:41Z | https://github.com/huggingface/transformers/issues/36594 | [
"bug"
] | makcedward | 1 |
man-group/arctic | pandas | 859 | Behaviour of tickstore.delete | #### Arctic Version
```
1.79.4
```
#### Arctic Store
```
TickStore
```
#### Platform and version
MacOS Catalina 10.15.4
#### Description of problem and/or code sample that reproduces the issue
I tried to clean up ticks that were accidentally duplicated via tickstore.delete(). I expected it to delete all entries within the DateRange, but one entry is always left over regardless of my delete script.
Could you please tell me if it's expected design or a bug?
Regards,
Steve
Sample code:
```python
from datetime import datetime as dt

from pandas import DataFrame

from arctic import Arctic, TICK_STORE
from arctic.date import DateRange, mktz

arctic = Arctic('localhost')  # adjust the MongoDB host as needed

ts_name = 'test_tickstore'
arctic.delete_library(ts_name)
arctic.initialize_library(ts_name, TICK_STORE)
tickstore = arctic[ts_name]
duplicate_ts = dt(2020, 4, 24, 15, 30, 39, tzinfo=mktz('UTC'))
df = DataFrame(
    data={'price': [108.193, 110.193, 111.193, 112.193]},
    index=[duplicate_ts,
           dt(2020, 4, 24, 15, 30, 41, tzinfo=mktz('UTC')),
           dt(2020, 4, 24, 15, 30, 43, tzinfo=mktz('UTC')),
           dt(2020, 4, 24, 15, 30, 45, tzinfo=mktz('UTC'))])
df.index.name = "datetime"
tickstore.write('testsym',df)
print(f"\n\nwrite some price\n{df}")
df_dup = DataFrame(
    data={'price': [108.193]},
    index=[duplicate_ts])
df_dup.index.name = "datetime"
tickstore.write('testsym',df_dup)
tickstore.write('testsym',df_dup)
df_read = tickstore.read('testsym')
print(f"\n\nmake duplicates, expect arctic returns error: TimeSeries data is out of order\n{df_read}")
tickstore.delete('testsym',DateRange(duplicate_ts,duplicate_ts))
df_read = tickstore.read('testsym')
print(f"\n\nexpect delete all data with the same timestamp{duplicate_ts} but remain one\n{df_read}")
rng = DateRange(duplicate_ts,dt(2020, 4, 24, 15, 30, 42, tzinfo=mktz('UTC')))
print(f"\n\npass a date range to delete function {rng}")
tickstore.delete('testsym',rng)
df_read = tickstore.read('testsym')
print(f"\n\nexpect delete two rows but nothing was touched\n{df_read}")
```
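As a possible client-side workaround (my suggestion, not an official arctic recipe), the duplicates can be dropped before rewriting the symbol — in pandas that would be `df[~df.index.duplicated(keep='first')]`. A pure-Python sketch of the same keep-first rule:

```python
# Keep the first row seen for each timestamp; later duplicates are dropped.
def dedup_keep_first(rows):
    seen, out = set(), []
    for ts, price in rows:
        if ts not in seen:
            seen.add(ts)
            out.append((ts, price))
    return out

rows = [('15:30:39', 108.193), ('15:30:39', 108.193), ('15:30:41', 110.193)]
assert dedup_keep_first(rows) == [('15:30:39', 108.193), ('15:30:41', 110.193)]
```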
Output logs:
```
WARNING:arctic.tickstore.tickstore:NB treating all values as 'exists' - no longer sparse
WARNING:arctic.tickstore.tickstore:NB treating all values as 'exists' - no longer sparse
write some price
price
datetime
2020-04-24 15:30:39+00:00 108.193
2020-04-24 15:30:41+00:00 110.193
2020-04-24 15:30:43+00:00 111.193
2020-04-24 15:30:45+00:00 112.193
WARNING:arctic.tickstore.tickstore:NB treating all values as 'exists' - no longer sparse
ERROR:arctic.tickstore.tickstore:TimeSeries data is out of order, sorting!
make duplicates, expect arctic returns error: TimeSeries data is out of order
price
2020-04-24 23:30:39+08:00 108.193
2020-04-24 23:30:39+08:00 108.193
2020-04-24 23:30:39+08:00 108.193
2020-04-24 23:30:41+08:00 110.193
2020-04-24 23:30:43+08:00 111.193
2020-04-24 23:30:45+08:00 112.193
expect delete all data with the same timestamp2020-04-24 15:30:39+00:00 but remain one
price
2020-04-24 23:30:39+08:00 108.193
2020-04-24 23:30:41+08:00 110.193
2020-04-24 23:30:43+08:00 111.193
2020-04-24 23:30:45+08:00 112.193
pass a date range to delete function [2020-04-24 15:30:39+00:00, 2020-04-24 15:30:42+00:00]
expect delete two rows but nothing was touched
price
2020-04-24 23:30:39+08:00 108.193
2020-04-24 23:30:41+08:00 110.193
2020-04-24 23:30:43+08:00 111.193
2020-04-24 23:30:45+08:00 112.193
```
| open | 2020-05-26T16:49:22Z | 2020-05-26T16:49:22Z | https://github.com/man-group/arctic/issues/859 | [] | soulaw-mkii | 0 |
microsoft/qlib | machine-learning | 1,324 | cn_index: fetching index constituents data |

As I understand it, the date in the CSI 500 index constituents data fetched via baostock should be the start date. Why is it the end date here?
| closed | 2022-10-20T14:16:36Z | 2023-01-23T15:02:06Z | https://github.com/microsoft/qlib/issues/1324 | [
"question",
"stale"
] | louis-xuy | 1 |
DistrictDataLabs/yellowbrick | matplotlib | 498 | Allow ModelVisualizers to wrap Pipeline objects | **Describe the solution you'd like**
Our model visualizers expect to wrap classifiers, regressors, or clusterers in order to visualize the model under the hood; they even perform checks to ensure the right kind of estimator is passed in. Unfortunately, in many cases passing a Pipeline object as the model does not allow the visualizer to work, even though the model is acceptable as a pipeline, e.g. it is a classifier for classification score visualizers (more on this below). This is primarily because the Pipeline wrapper masks the attributes needed by the visualizer.
I propose that we modify the [`ModelVisualizer `](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/base.py#L274) to change the `ModelVisualizer.estimator` attribute to a `@property` - when setting the estimator property, we can perform a check to ensure that the Pipeline has a `final_estimator` attribute (e.g. that it is not a transformer pipeline). When getting the estimator property, we can return the final estimator instead of the entire Pipeline. This should ensure that we can use pipelines in our model visualizers.
**NOTE** however that we will still have to `fit()`, `predict()`, and `score()` on the entire pipeline, so this is a bit more nuanced than it seems on first glance. There will probably have to be `is_pipeline()` checking and other estimator access utilities.
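A minimal sketch of the proposed property (the stub classes below stand in for scikit-learn's; only the `_final_estimator` attribute name mirrors the real sklearn Pipeline):

```python
# Stub pipeline/estimator classes illustrating the proposed behavior:
# fit()/predict()/score() should go through the whole pipeline, while
# attribute lookups such as classes_ should reach the final estimator.

class Pipeline:
    def __init__(self, steps):
        self.steps = steps

    @property
    def _final_estimator(self):          # same name as sklearn's attribute
        return self.steps[-1][1]

class MLPClassifier:
    classes_ = ["neg", "pos"]

def is_pipeline(model):
    return hasattr(model, "_final_estimator")

class ModelVisualizer:
    def __init__(self, model):
        self._wrapped = model            # fit()/predict()/score() use this

    @property
    def estimator(self):
        # Unwrap a pipeline so checks and attribute lookups see the real estimator.
        if is_pipeline(self._wrapped):
            return self._wrapped._final_estimator
        return self._wrapped

viz = ModelVisualizer(Pipeline([("tfidf", object()), ("mlp", MLPClassifier())]))
assert viz.estimator.classes_ == ["neg", "pos"]
```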
**Is your feature request related to a problem? Please describe.**
Consider the following, fairly common code:
```python
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from yellowbrick.classifier import ClassificationReport
model = Pipeline([
('tfidf', TfidfVectorizer()),
('mlp', MLPClassifier()),
])
oz = ClassificationReport(model)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.poof()
```
This seems to be a valid model for a classification report, unfortunately the classification report is not able to access the MLPClassiifer's `classes_` attribute since the Pipeline doesn't know how to pass that on to the final estimator.
I think the original idea for the `ScoreVisualizers` was that they would be inside of Pipelines, e.g.
```python
model = Pipeline([
('tfidf', TfidfVectorizer()),
('clf', ClassificationReport(MLPClassifier())),
])
model.fit(X, y)
model.score(X_test, y_test)
model.named_steps['clf'].poof()
```
But this makes it difficult to use more than one visualizer; e.g. ROCAUC visualizer and CR visualizer.
**Definition of Done**
- [ ] Update `ModelVisualizer` class with pipeline helpers
- [ ] Ensure current tests pass
- [ ] Add test to all model visualizer subclasses to pass in a pipeline as the estimator
- [ ] Add documentation about using visualizers with pipelines | open | 2018-07-13T13:06:32Z | 2020-01-08T15:58:39Z | https://github.com/DistrictDataLabs/yellowbrick/issues/498 | [
"type: feature",
"priority: medium",
"level: intermediate"
] | bbengfort | 4 |
chaos-genius/chaos_genius | data-visualization | 724 | [BUG] Email alert formatting breaks when forwarded | ## Describe the bug
When forwarded, email alerts and alert reports do not keep all of their formatting. Colors and button styles make some of the text unreadable.
## Explain the environment
- **Chaos Genius version**: 0.4.0-rc
- **OS Version / Instance**: -
- **Deployment type**: -
## Current behavior
Before forwarding, the alert renders with its intended colors and button styles (screenshot not preserved).
After forwarding, the formatting is lost and parts of the text are unreadable (screenshot not preserved).
## Expected behavior
Formatting should remain the same after forwarding.
## Possible solution
All of the styles in the HTML email templates can be made inline. | closed | 2022-02-15T11:44:57Z | 2022-04-11T06:18:38Z | https://github.com/chaos-genius/chaos_genius/issues/724 | [
"❗alerts"
] | Samyak2 | 0 |
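For the chaos-genius email-alert issue above, the inlining could be done at template-render time with an existing tool such as `premailer`. The toy inliner below only illustrates the transformation (it handles a single-class selector and is not the project's actual fix):

```python
# Toy CSS inliner: move a <style> rule onto the element that uses its class,
# the way tools like premailer do for email HTML.
import re

def inline_styles(html):
    styles = dict(re.findall(r"\.([\w-]+)\s*\{([^}]*)\}", html))
    html = re.sub(r"<style>.*?</style>", "", html, flags=re.S)
    return re.sub(
        r'class="([\w-]+)"',
        lambda m: 'style="%s"' % styles.get(m.group(1), "").strip(),
        html,
    )

src = '<style>.alert{color: red;}</style><div class="alert">KPI moved</div>'
assert inline_styles(src) == '<div style="color: red;">KPI moved</div>'
```

Inlined styles survive forwarding because mail clients strip `<style>` blocks but keep `style=` attributes.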
onnx/onnx | machine-learning | 6,284 | ImportError: DLL load failed while importing onnx_cpp2py_export: the dynamic link library (DLL) initialization routine failed. | # Bug Report
### Is the issue related to model conversion?
1.16.2

<img width="952" alt="onnx_bug" src="https://github.com/user-attachments/assets/4f0d6581-a62e-4fbb-931b-65eb844a7aae">
| closed | 2024-08-07T08:05:20Z | 2024-08-07T14:09:33Z | https://github.com/onnx/onnx/issues/6284 | [
"bug"
] | LHSSHL001 | 2 |
pinry/pinry | django | 159 | Python Imaging Library | To get this to work after running "make bootstrap" I had do manually rename the PIL folder in
C:\Users\username\.virtualenvs\pinry-master-GvTIGkmg\Lib\site-packages
to "pil" re-run "make bootstrap" and then re-rename the folder "PIL" before running "make serve."
Now with it up and running I do not have anything other than that (my memory) to go off. | closed | 2019-12-07T14:25:30Z | 2019-12-08T19:17:08Z | https://github.com/pinry/pinry/issues/159 | [
"bug",
"python"
] | brett-jpy | 2 |
wandb/wandb | data-science | 8,770 | [Feature]: Night mode follows system theme | ### Description
The night mode preview feature is wonderful, however it would be great if it could follow the system theme (as many websites do).
I switch between light and dark mode a lot, and it's a pain to do Ctrl + M on every W&B tab every time I switch.
### Suggested Solution
An option for night mode to follow the system theme, e.g. by checking:
```javascript
window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches
```
or watching:
```javascript
window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', ...)
```
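The two snippets can be combined into one small helper (the names are mine, not W&B's); `media` can be the real `window.matchMedia(...)` result in the browser or a stub elsewhere:

```javascript
// Apply the theme once from the current OS preference, then keep following it.
// `media` is anything with `matches` and `addEventListener`, e.g.
// window.matchMedia('(prefers-color-scheme: dark)').
function followSystemTheme(media, applyNightMode) {
  applyNightMode(media.matches);                                       // initial state
  media.addEventListener('change', (e) => applyNightMode(e.matches));  // live updates
}
```

In the app this would be wired up roughly as `followSystemTheme(window.matchMedia('(prefers-color-scheme: dark)'), setNightMode)`, with `setNightMode` being whatever Ctrl + M toggles today.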
| open | 2024-11-05T12:43:42Z | 2024-11-05T19:01:00Z | https://github.com/wandb/wandb/issues/8770 | [
"ty:feature",
"a:app"
] | SamAdamDay | 1 |
jina-ai/serve | deep-learning | 6,060 | Bi-directional Streaming | **Describe the feature**
<!-- A clear and concise description of what the feature is. -->
As I understand it, the current API only supports response streaming. Is there a way to support [bi-directional streaming](https://grpc.io/docs/languages/python/basics/#bidirectional-streaming-rpc)? I imagine it would look like the current streaming API, except the input would be a generator. This would be useful for applications such as chat-bots.
**Your proposal**
```python
# then define the Executor
class MyExecutor(Executor):
@requests(on='/hello')
async def task(self, docs: Generator[MyDocument], **kwargs) -> MyDocument:
for doc in docs:
yield MyDocument(text=f'{doc.text} output')
```
| closed | 2023-09-26T17:08:12Z | 2024-05-10T00:18:33Z | https://github.com/jina-ai/serve/issues/6060 | [
"Stale"
] | NarekA | 12 |
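The request/response shape proposed in the jina-ai/serve issue above can be sketched with plain asyncio (illustrative only; no Jina APIs involved):

```python
import asyncio

# Toy bi-directional stream: the handler consumes an async iterator of
# requests and yields one response per request, interleaved.
async def handler(requests):
    async for text in requests:
        yield f"{text} output"

async def run_chat():
    async def requests():
        for text in ("hello", "world"):
            yield text
    return [resp async for resp in handler(requests())]

assert asyncio.run(run_chat()) == ["hello output", "world output"]
```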
mirumee/ariadne | api | 306 | Implement GraphQL modules | GraphQL Modules are a JavaScript tool that allows types and resolvers to be grouped together into functional blocks that depend on each other. Modules can be composed to build larger parts, ultimately leading to a module that represents the entire application.
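A rough sketch of the grouping just described (the class and method names are invented for illustration, not Ariadne's eventual API):

```python
# Hypothetical Module: bundle SDL type definitions with their resolvers and
# allow composition of modules into larger modules.
class Module:
    def __init__(self, type_defs, resolvers=None, dependencies=()):
        self.type_defs = type_defs
        self.resolvers = resolvers or {}
        self.dependencies = list(dependencies)

    def flatten(self):
        """All SDL chunks and resolvers from this module and its deps."""
        sdl, resolvers = [], {}
        for dep in self.dependencies:
            dep_sdl, dep_res = dep.flatten()
            sdl.extend(dep_sdl)
            resolvers.update(dep_res)
        sdl.append(self.type_defs)
        resolvers.update(self.resolvers)
        return sdl, resolvers

users = Module("type User { id: ID! }", {"User.id": lambda u, _: u["id"]})
app = Module("type Query { me: User }", dependencies=[users])
sdl, resolvers = app.flatten()
assert sdl == ["type User { id: ID! }", "type Query { me: User }"]
assert "User.id" in resolvers
```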
Importing a single module is easier than importing a list of types, resolvers, and any dependant types or resolvers. Modules are also a straightforward way to implement a code-first declarative interface like that offered by Strawberry. We could also provide tooling to build modules from Graphene and Strawberry objects. | closed | 2020-01-28T16:08:19Z | 2022-04-12T13:36:12Z | https://github.com/mirumee/ariadne/issues/306 | [
"roadmap",
"discussion"
] | patrys | 4 |
ijl/orjson | numpy | 213 | Feature request: Can mmap'ed files be supported? | As bytearrays and memoryviews are already supported, would it be possible to support mmap'ed files also? It could severely reduce memory usage when dealing with large files. | closed | 2021-10-06T03:58:47Z | 2021-12-05T15:50:54Z | https://github.com/ijl/orjson/issues/213 | [] | Dobatymo | 1 |
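For the orjson mmap request above, here is what the flow looks like today with the standard library (`mmap` is stdlib; the zero-copy `orjson.loads(view)` call is the hoped-for feature, so this sketch falls back to a copying `json.loads`):

```python
import json
import mmap
import os
import tempfile

# Write a small JSON file, map it, and parse from the mapping.
path = os.path.join(tempfile.mkdtemp(), "big.json")
with open(path, "wb") as f:
    f.write(b'{"rows": [1, 2, 3]}')

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    view = memoryview(mm)              # zero-copy window over the file
    # Feature request: data = orjson.loads(view) with no intermediate copy.
    data = json.loads(view.tobytes())  # stdlib json still needs a copy
    view.release()                     # release the buffer before closing
    mm.close()

assert data == {"rows": [1, 2, 3]}
```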
paperless-ngx/paperless-ngx | machine-learning | 8,510 | [BUG] Mails not being collected/processed since 22-11-2024 | ### Description
Paperless does not pull mail from my mail account since 22-11-2024. The mail account test returns a success message for connecting to the mailbox. There have been no logs since 22-11-2024. I have updated to 2.13.5 and have restarted paperless-ngx and the host which it is running on multiple times. The install was done on ProxMox via helper-scripts.com
### Steps to reproduce
Observe mail.log
Notice that there have been no entries since 22-11-2024
### Webserver logs
```bash
Last log entry:
[2024-11-22 07:00:01,186] [DEBUG] [paperless_mail] Rule T-Online.T-Online Regel: Processed 42 matching mail(s)
```
### Browser logs
_No response_
### Paperless-ngx version
2.13.5
### Host OS
Linux paperless-ngx 6.5.11-8-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.11-8 (2024-01-30T12:27Z) x86_64 GNU/Linux
### Installation method
Bare metal
### System status
```json
{
"pngx_version": "2.13.5",
"server_os": "Linux-6.5.11-8-pve-x86_64-with-glibc2.36",
"install_type": "bare-metal",
"storage": {
"total": 10464022528,
"available": 2216288256
},
"database": {
"type": "postgresql",
"url": "paperlessdb",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0028_alter_mailaccount_password_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://localhost:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-12-17T20:48:20.886405Z",
"index_error": null,
"classifier_status": "WARNING",
"classifier_last_trained": null,
"classifier_error": "Classifier file does not exist (yet). Re-training may be pending."
}
}
```
### Browser
Safari
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-12-17T21:30:13Z | 2024-12-18T01:16:56Z | https://github.com/paperless-ngx/paperless-ngx/issues/8510 | [
"not a bug"
] | tibbors | 0 |
jmcnamara/XlsxWriter | pandas | 938 | Bug: Unexpected changes to Excel outputs from 3.0.5 to 3.0.6? | ### Current behavior
**Background:**
I'm currently using XlsxWriter for Excel reporting, where I have some convenience functionality built for generating more "templated" reports. I have unit tests which compare got to expected Excel outputs using the XlsxWriter file comparison utility [compare_xlsx_files](https://github.com/jmcnamara/XlsxWriter/blob/main/xlsxwriter/test/helperfunctions.py#L223). These tests run in a CI/CD pipeline on dependency update PRs from Dependabot.
It appears that the move from `3.0.5` to `3.0.6` has caused some (but not all) of these tests to begin to fail. (I didn't see a similar issue when moving from `3.0.4` to `3.0.5`.)
**Observations:**
- The pre-`3.0.6` files appear to be anywhere from 3-7 bytes larger than the `3.0.6` files.
- Some of the differences in the XML are around ranges, mins, maxes, and widths.
I used the `_compare_xlsx_files` utility to extract some of the differing XML below.
**Got `3.0.6` XML:**
```
['xl/worksheets/sheet1.xml', '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>', '<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">', '<dimension ref="A1:H5"/>', '<sheetViews>', '<sheetView showGridLines="0" tabSelected="1" zoomScale="85" zoomScaleNormal="85" workbookViewId="0"/>', '</sheetViews>', '<sheetFormatPr defaultRowHeight="15"/>', '<cols>', '<col min="1" max="1" width="8.7109375" customWidth="1"/>', '<col min="2" max="2" width="14.7109375" customWidth="1"/>', '<col min="3" max="3" width="10.7109375" customWidth="1"/>', '<col min="4" max="4" width="16.7109375" customWidth="1"/>', '<col min="5" max="7" width="19.7109375" customWidth="1"/>', '<col min="8" max="8" width="16.7109375" customWidth="1"/>', '</cols>', '<sheetData>', '<row r="1" spans="1:8">', '<c r="A1" s="1" t="s">', '<v>0</v>', '</c>', '<c r="B1" s="1" t="s">', '<v>1</v>', '</c>', '<c r="C1" s="1" t="s">', '<v>2</v>', '</c>', '<c r="D1" s="1" t="s">', '<v>3</v>', '</c>', '<c r="E1" s="1" t="s">', '<v>4</v>', '</c>', '<c r="F1" s="1" t="s">', '<v>5</v>', '</c>', '<c r="G1" s="1" t="s">', '<v>6</v>', '</c>', '<c r="H1" s="1" t="s">', '<v>7</v>', '</c>', '</row>', '<row r="2" spans="1:8">', '<c r="A2" s="2" t="s">', '<v>8</v>', '</c>', '<c r="B2" s="3">', '<v>1</v>', '</c>', '<c r="C2" s="4">', '<v>1.2</v>', '</c>', '<c r="D2" s="4">', '<v>9.1</v>', '</c>', '<c r="E2" s="5">', '<v>29221</v>', '</c>', '<c r="F2" s="5">', '<v>25569</v>', '</c>', '<c r="G2" s="4">', '<v>3652</v>', '</c>', '<c r="H2" s="2" t="s">', '<v>8</v>', '</c>', '</row>', '<row r="3" spans="1:8">', '<c r="A3" s="2" t="s">', '<v>9</v>', '</c>', '<c r="B3" s="3">', '<v>2</v>', '</c>', '<c r="C3" s="4">', '<v>3.4</v>', '</c>', '<c r="D3" s="4">', '<v>1011</v>', '</c>', '<c r="E3" s="5">', '<v>32874</v>', '</c>', '<c r="F3" s="5">', '<v>29221</v>', '</c>', '<c r="G3" s="4">', '<v>3653</v>', '</c>', '<c r="H3" s="2" t="s">', 
'<v>9</v>', '</c>', '</row>', '<row r="4" spans="1:8">', '<c r="A4" s="2" t="s">', '<v>10</v>', '</c>', '<c r="B4" s="3">', '<v>3</v>', '</c>', '<c r="C4" s="4">', '<v>5.6</v>', '</c>', '<c r="D4" s="4">', '<v>1213</v>', '</c>', '<c r="E4" s="5">', '<v>36526</v>', '</c>', '<c r="F4" s="5">', '<v>32874</v>', '</c>', '<c r="G4" s="4">', '<v>3652</v>', '</c>', '<c r="H4" s="2" t="s">', '<v>10</v>', '</c>', '</row>', '<row r="5" spans="1:8">', '<c r="A5" s="2" t="s">', '<v>11</v>', '</c>', '<c r="B5" s="3">', '<v>4</v>', '</c>', '<c r="C5" s="4">', '<v>7.8</v>', '</c>', '<c r="D5" s="4">', '<v>1415.16</v>', '</c>', '<c r="E5" s="5">', '<v>40179</v>', '</c>', '<c r="F5" s="5">', '<v>36526</v>', '</c>', '<c r="G5" s="4">', '<v>3653</v>', '</c>', '<c r="H5" s="2" t="s">', '<v>11</v>', '</c>', '</row>', '</sheetData>', '<pageMargins left="0.7" right="0.7" top="0.75" bottom="0.75" header="0.3" footer="0.3"/>', '<tableParts count="1">', '<tablePart r:id="rId1"/>', '</tableParts>', '</worksheet>']
```
**Expected Pre-`3.0.6` XML:**
```
['xl/worksheets/sheet1.xml', '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>', '<worksheet xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships">', '<dimension ref="A1:H5"/>', '<sheetViews>', '<sheetView showGridLines="0" tabSelected="1" zoomScale="85" zoomScaleNormal="85" workbookViewId="0"/>', '</sheetViews>', '<sheetFormatPr defaultRowHeight="15"/>', '<cols>', '<col min="1" max="1" width="8.7109375" customWidth="1"/>', '<col min="2" max="2" width="14.7109375" customWidth="1"/>', '<col min="3" max="3" width="10.7109375" customWidth="1"/>', '<col min="4" max="4" width="16.7109375" customWidth="1"/>', '<col min="5" max="5" width="19.7109375" customWidth="1"/>', '<col min="6" max="6" width="19.7109375" customWidth="1"/>', '<col min="7" max="7" width="19.7109375" customWidth="1"/>', '<col min="8" max="8" width="16.7109375" customWidth="1"/>', '</cols>', '<sheetData>', '<row r="1" spans="1:8">', '<c r="A1" s="1" t="s">', '<v>0</v>', '</c>', '<c r="B1" s="1" t="s">', '<v>1</v>', '</c>', '<c r="C1" s="1" t="s">', '<v>2</v>', '</c>', '<c r="D1" s="1" t="s">', '<v>3</v>', '</c>', '<c r="E1" s="1" t="s">', '<v>4</v>', '</c>', '<c r="F1" s="1" t="s">', '<v>5</v>', '</c>', '<c r="G1" s="1" t="s">', '<v>6</v>', '</c>', '<c r="H1" s="1" t="s">', '<v>7</v>', '</c>', '</row>', '<row r="2" spans="1:8">', '<c r="A2" s="2" t="s">', '<v>8</v>', '</c>', '<c r="B2" s="3">', '<v>1</v>', '</c>', '<c r="C2" s="4">', '<v>1.2</v>', '</c>', '<c r="D2" s="4">', '<v>9.1</v>', '</c>', '<c r="E2" s="5">', '<v>29221</v>', '</c>', '<c r="F2" s="5">', '<v>25569</v>', '</c>', '<c r="G2" s="4">', '<v>3652</v>', '</c>', '<c r="H2" s="2" t="s">', '<v>8</v>', '</c>', '</row>', '<row r="3" spans="1:8">', '<c r="A3" s="2" t="s">', '<v>9</v>', '</c>', '<c r="B3" s="3">', '<v>2</v>', '</c>', '<c r="C3" s="4">', '<v>3.4</v>', '</c>', '<c r="D3" s="4">', '<v>1011</v>', '</c>', '<c r="E3" s="5">', 
'<v>32874</v>', '</c>', '<c r="F3" s="5">', '<v>29221</v>', '</c>', '<c r="G3" s="4">', '<v>3653</v>', '</c>', '<c r="H3" s="2" t="s">', '<v>9</v>', '</c>', '</row>', '<row r="4" spans="1:8">', '<c r="A4" s="2" t="s">', '<v>10</v>', '</c>', '<c r="B4" s="3">', '<v>3</v>', '</c>', '<c r="C4" s="4">', '<v>5.6</v>', '</c>', '<c r="D4" s="4">', '<v>1213</v>', '</c>', '<c r="E4" s="5">', '<v>36526</v>', '</c>', '<c r="F4" s="5">', '<v>32874</v>', '</c>', '<c r="G4" s="4">', '<v>3652</v>', '</c>', '<c r="H4" s="2" t="s">', '<v>10</v>', '</c>', '</row>', '<row r="5" spans="1:8">', '<c r="A5" s="2" t="s">', '<v>11</v>', '</c>', '<c r="B5" s="3">', '<v>4</v>', '</c>', '<c r="C5" s="4">', '<v>7.8</v>', '</c>', '<c r="D5" s="4">', '<v>1415.16</v>', '</c>', '<c r="E5" s="5">', '<v>40179</v>', '</c>', '<c r="F5" s="5">', '<v>36526</v>', '</c>', '<c r="G5" s="4">', '<v>3653</v>', '</c>', '<c r="H5" s="2" t="s">', '<v>11</v>', '</c>', '</row>', '</sheetData>', '<pageMargins left="0.7" right="0.7" top="0.75" bottom="0.75" header="0.3" footer="0.3"/>', '<tableParts count="1">', '<tablePart r:id="rId1"/>', '</tableParts>', '</worksheet>']
```
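Part of the visible diff above is three single-column `<col>` entries (min 5 to 7, width 19.7109375) collapsing into one `<col min="5" max="7">` range. If the newer writer merges adjacent equal-width columns (my reading of the diff, not a statement from the changelog), the normalization looks roughly like:

```python
# Collapse runs of adjacent columns with equal widths into (min, max, width)
# ranges, as in the 3.0.6 output.
def merge_cols(widths):
    """widths: sorted list of (col_index, width) pairs."""
    merged = []
    for idx, w in widths:
        if merged and merged[-1][1] == idx - 1 and merged[-1][2] == w:
            merged[-1] = (merged[-1][0], idx, w)   # extend the current range
        else:
            merged.append((idx, idx, w))           # start a new range
    return merged

cols = [(5, 19.7109375), (6, 19.7109375), (7, 19.7109375), (8, 16.7109375)]
assert merge_cols(cols) == [(5, 7, 19.7109375), (8, 8, 16.7109375)]
```

The two files encode the same widths, which is why a semantic comparison can pass while a byte-for-byte one does not.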
**Example Files:**
[test_add_df_to_worksheet_001_got.xlsx](https://github.com/jmcnamara/XlsxWriter/files/10377050/test_add_df_to_worksheet_001_got.xlsx)
[test_add_df_to_worksheet_001_exp.xlsx](https://github.com/jmcnamara/XlsxWriter/files/10377049/test_add_df_to_worksheet_001_exp.xlsx)
**Takeaway:**
Are these changes expected given the updates in `3.0.6` related to auto fitting?
Thanks in advance for any thoughts and for this library! I've found it to be invaluable over the years.
### Expected behavior
The change from `3.0.5` to `3.0.6` does not result in differences between the files produced by XlsxWriter when executing the same code.
### Sample code to reproduce
```markdown
N/A
```
### Environment
```markdown
- XlsxWriter version: 3.0.6
- Python version: 3.9.16
- Excel version: N/A
- OS: Debian 11.6
```
### Any other information
_No response_
### OpenOffice and LibreOffice users
- [X] I have tested the output file with Excel. | closed | 2023-01-09T22:30:14Z | 2023-01-10T13:33:22Z | https://github.com/jmcnamara/XlsxWriter/issues/938 | [
"bug"
] | raeganbarker | 2 |
giotto-ai/giotto-tda | scikit-learn | 13 | fix windows wheel readme file | `twine check` complains about the structure of the README.rst file in the Windows build. This needs to be fixed before we can upload to PyPI. | closed | 2019-10-17T10:14:45Z | 2019-10-18T07:26:02Z | https://github.com/giotto-ai/giotto-tda/issues/13 | [] | matteocao | 0 |
Esri/arcgis-python-api | jupyter | 1,339 | `as_dict` parameter name is misleading | **Describe the bug**
The result of `gis.content.advanced_search` is always a dictionary regardless of `as_dict` parameter value.
They have different contents for sure but they are still "dictionary". The parameter name could be misleading.
**To Reproduce**
```python
from arcgis.gis import GIS
gis = GIS(url="home")
search_results = gis.content.advanced_search(query='Well', max_items=1, as_dict=False)
print(type(search_results))
print(search_results)
```
Output (demonstrating the issue):
```python
<class 'dict'>
{'query': 'Well', 'total': 10000, 'start': 1, 'num': 1, 'nextStart': 2, 'results': [<Item title:"
" type:StoryMap owner:RileyiShaw>]}
```
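Judging from the output above, the envelope is a dict either way; `as_dict` appears to control only how each entry in `results` is represented (`Item` objects vs plain dicts). A toy model of that behavior (illustrative stubs, not the real ArcGIS API):

```python
# Stub of the observed contract: the return value is always a dict; as_dict
# only switches the representation of the rows inside "results".
class Item:
    def __init__(self, props):
        self.props = props

def advanced_search(query, as_dict=False):
    row = {"title": "Well Data", "type": "StoryMap"}
    return {"query": query, "total": 1,
            "results": [row if as_dict else Item(row)]}

out = advanced_search("Well", as_dict=False)
assert isinstance(out, dict)                  # envelope is a dict either way
assert isinstance(out["results"][0], Item)    # ...but rows are Item objects

out = advanced_search("Well", as_dict=True)
assert isinstance(out["results"][0], dict)    # rows become plain dicts
```

A name like `results_as_dicts` would describe that switch more directly.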
**Screenshots**
API reference documentation (screenshot not preserved).
**Expected behavior**
Using a better name for the parameter.
**Platform (please complete the following information):**
- OS: `Windows 10`
- Browser `Chrome`
- Python API Version: `1.9.1`
| closed | 2022-08-31T22:09:24Z | 2022-08-31T23:21:47Z | https://github.com/Esri/arcgis-python-api/issues/1339 | [
"documentation"
] | azinsharaf | 1 |
ScottfreeLLC/AlphaPy | pandas | 21 | Startup problems | Hello Scott,
I like this project a lot, but I am having issues getting it up and running. I tried it under both Windows and Ubuntu. On Ubuntu, at least, `pip3 install alphapy` worked fine.
But when I move to one of the example directories and try to run the commands shown in the documentation, it simply does not find the alphapy or market_flow files. Even when I put those files in the same directory and try to run them from the Python editor environment, I get all kinds of errors.
How can you run the examples directly from the Python editor environment? | closed | 2018-06-03T19:15:18Z | 2019-03-21T01:20:12Z | https://github.com/ScottfreeLLC/AlphaPy/issues/21 | [] | roylend | 9 |
freqtrade/freqtrade | python | 10,862 | Integrate Alpaca exchange to do spot trading with freqtrade. | Here is the list of prior threads on the subject, and it appears to be implementable at least as a side to do some spot paper trading with alpaca using freqtrade strategies. Has anyone achieved this, can we get an unsupported hack to do this.
Integrate Alpaca exchange to do spot trading with freqtrade.
Threads:
- https://github.com/freqtrade/freqtrade/issues/10404
- https://github.com/freqtrade/freqtrade/issues/7953
- https://github.com/freqtrade/freqtrade/issues/7952
1. CCXT Alpaca already has fetch_ohlcv implemented.
2. alisalama states he has implemented fetchTicker + fetchTickers, including bid/ask/last; it looks like he has forked the project and implemented the support.
| closed | 2024-10-31T06:15:58Z | 2024-10-31T12:08:13Z | https://github.com/freqtrade/freqtrade/issues/10862 | [
"Question"
] | Immortality-IMT | 2 |
napari/napari | numpy | 7,553 | [test-bot] pip install --pre is failing | The --pre Test workflow failed on 2025-01-24 12:18 UTC
The most recent failing test was on ubuntu-latest py3.13 pyqt6
with commit: 42a6b9eb4ed1e8c4dc688ea0bee67364bf570875
Full run: https://github.com/napari/napari/actions/runs/12949307302
(This post will be updated if another test fails, as long as this issue remains open.)
| closed | 2025-01-24T12:18:04Z | 2025-01-24T13:00:29Z | https://github.com/napari/napari/issues/7553 | [
"bug"
] | github-actions[bot] | 1 |
ansible/ansible | python | 84,814 | when statement giving error argument of type 'NoneType' is not iterable. | ### Summary
I am trying to execute a few tasks with a "when" conditional statement. The vars are defined as below:
```
ontap_license_key_format: #Options are "legacy" and "NLF"
# - legacy
# - NLF
```
When both the values are commented, the ansible tasks fails with the below error-
"msg": "The conditional check '('legacy' in ontap_license_key_format)' failed. The error was: Unexpected templating type error occurred on ({% if ('legacy' in ontap_license_key_format) %} True {% else %} False {% endif %}): argument of type 'NoneType' is not iterable. argument of type 'NoneType' is not iterable\n\nThe error appears to be in '/home/admin/ansible/FlexPod-Base-IMM/roles/ONTAP/ontap_primary_setup/tasks/main.yml': line 345, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Add ONTAP Licenses using legacy keys\n- name: Add licenses using legacy keys\n ^ here\n"
}
But there could be situations where users don't want to apply any license configuration. In such cases, both tasks should be skipped.
Please let me know what else needs to be done here so that the tasks are skipped even when the variable value is empty, as it is here.
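One common fix (a suggestion based on standard Jinja2 filters, not tested against this exact playbook) is to coerce the null variable to an empty list inside the conditional, using the two-argument form of `default`, which also applies to falsy values such as `None`:

```yaml
# Treat a commented-out (null) ontap_license_key_format as an empty list, so
# the membership test skips cleanly instead of raising on NoneType.
when: "'legacy' in (ontap_license_key_format | default([], true))"
```

With that, both tasks are skipped when every option in the list is commented out.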
### Issue Type
Bug Report
### Component Name
ansible-playbook
### Ansible Version
```console
[admin@ansible-ctrlvm FlexPod-Base-IMM]$ ansible --version
ansible [core 2.18.2]
config file = /home/admin/.ansible.cfg
configured module search path = ['/home/admin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/admin/.local/lib/python3.11/site-packages/ansible
ansible collection location = /home/admin/.ansible/collections:/usr/share/ansible/collections
executable location = /home/admin/.local/bin/ansible
python version = 3.11.9 (main, Dec 9 2024, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-2)] (/usr/bin/python3.11)
jinja version = 3.1.5
libyaml = True
```
### Configuration
```console
[admin@ansible-ctrlvm FlexPod-Base-IMM]$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/admin/.ansible.cfg
DEFAULT_JINJA2_NATIVE(/home/admin/.ansible.cfg) = True
INTERPRETER_PYTHON(/home/admin/.ansible.cfg) = /usr/bin/python3.11
GALAXY_SERVERS:
```
### OS / Environment
Using Rock Linux 9 as the Ansible Control VM
[admin@ansible-ctrlvm FlexPod-Base-IMM]$ cat /etc/os-release
NAME="Rocky Linux"
VERSION="9.5 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.5"
Executing the ansible playbook for NetApp ONTAP storage with software ONTAP 9.16.1P1
### Steps to Reproduce
Sharing the tasks here-
```
# Add ONTAP Licenses using legacy keys
- name: Add licenses using legacy keys
netapp.ontap.na_ontap_license:
state: present
license_codes: "{{legacy_license_keys}}"
hostname: "{{inventory_hostname}}"
username: "{{username}}"
password: "{{password}}"
https: true
validate_certs: false
when: "('legacy' in ontap_license_key_format)"
tags:
- ontap_license_legacy
# Add ONTAP Licenses using NetApp License File (NLF)
- name: Add licenses using NLF
netapp.ontap.na_ontap_license:
state: present
license_codes:
- "{{ lookup('file', '{{item}}' ) | string }}"
hostname: "{{inventory_hostname}}"
username: "{{username}}"
password: "{{password}}"
https: true
validate_certs: false
with_items:
- "{{ nlf_filepath }}"
when: "('NLF' in ontap_license_key_format)"
tags:
- ontap_license_nlf
```
List of vars used here:
```
ontap_license_key_format: #Options are "legacy" and "NLF"
# - legacy
# - NLF
#List the Legacy License Keys for the different features that you need
legacy_license_keys:
- License-Key-1
- License-Key-2
- License-Key-3
#Path to NetApp License File (NLF)
nlf_filepath:
- "/root/license/EvalNLF-data-license.txt"
- "/root/license/EvalNLF-encryption-license.txt"
- "/root/license/EvalNLF-hybrid-license.txt"
- "/root/license/EvalNLF-NVMe-license.txt"
```
### Expected Results
I was expecting the tasks to be skipped, even when both legacy and NLF are commented under the var value.
### Actual Results
```console
Task is failing when both the values are commented.
Error message is as below-
Add licenses using legacy keys] *****************************************************************************************
task path: /home/admin/ansible/FlexPod-Base-IMM/roles/ONTAP/ontap_primary_setup/tasks/main.yml:345
fatal: [172.22.24.10]: FAILED! => {
"msg": "The conditional check '('legacy' in ontap_license_key_format)' failed. The error was: Unexpected templating type error occurred on ({% if ('legacy' in ontap_license_key_format) %} True {% else %} False {% endif %}): argument of type 'NoneType' is not iterable. argument of type 'NoneType' is not iterable\n\nThe error appears to be in '/home/admin/ansible/FlexPod-Base-IMM/roles/ONTAP/ontap_primary_setup/tasks/main.yml': line 345, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Add ONTAP Licenses using legacy keys\n- name: Add licenses using legacy keys\n ^ here\n"
}
```
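The failure reduces to Jinja evaluating `'legacy' in ontap_license_key_format` while the variable is `None` (a key whose list items are all commented out is defined but empty). A minimal Python sketch of the underlying error and of a possible guard (the variable name is taken from the report; the guard is an assumption, equivalent in spirit to Jinja's `default([], true)`):

```python
# Membership tests require an iterable; None is what YAML yields when
# every list item under a key is commented out.
ontap_license_key_format = None

try:
    "legacy" in ontap_license_key_format
    raised = False
except TypeError as exc:
    raised = True
    message = str(exc)

print(raised, message)  # True argument of type 'NoneType' is not iterable

# Guarding with an empty default turns the failure into a clean skip:
safe = "legacy" in (ontap_license_key_format or [])
print(safe)  # False -> the task would simply be skipped
```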
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | closed | 2025-03-12T06:46:46Z | 2025-03-12T20:43:29Z | https://github.com/ansible/ansible/issues/84814 | [
"bug",
"affects_2.18"
] | kaminis85 | 5 |
aiortc/aiortc | asyncio | 552 | Add multiple users video streams to RTCPeerConnection offer. | Hi,
What I want is to capture the video stream from multiple users' browsers (webcam) and send it to my server. The server then processes those videos and sends the streams to another user (the admin). The admin is the only user who can see all connected users' streams; each user can see only his/her own stream.
I have successfully established the connection between multiple users and the server, and all of them reach `iceconnectionstate => connected` in real time. I then create an offer and send it to the admin; the admin accepts the offer, the `iceconnectionstate` again changes to `connected`, and I can see the user's video stream on the admin page. But when a new peer connects to the previously created connection, the first user's video stream gets stuck and the stream flow breaks, even though `iceconnectionstate` is still `connected`.
Is there any way in `aiortc` to handle a new peer on the server and add its stream to the previous connection, instead of sending a new offer?
I tried to look into the documentation but did not find anything that helps.
```
import json
from uuid import uuid4
from aiortc import RTCSessionDescription, RTCPeerConnection
from channels.generic.websocket import AsyncWebsocketConsumer
from aiortc.contrib.media import MediaRelay
relay = MediaRelay()
streams = {}
peers = {}
hosts = {}
pcs = set()
async def on_peer_offer(self, offer):
peer_offer = RTCSessionDescription(sdp=offer["sdp"], type=offer["type"])
pc = RTCPeerConnection()
@pc.on("connectionstatechange")
async def on_connectionstatechange():
print("connection state is", pc.connectionState)
if pc.connectionState == "failed":
peers.pop(peers[offer['id']], None)
streams.pop(streams[offer['id']], None)
await pc.close()
pcs.discard(pc)
@pc.on("track")
async def on_track(track):
# video = VideoTransformTrack(relay.subscribe(track))
streams.update({offer['id']: track})
await pc.setRemoteDescription(peer_offer)
answer = await pc.createAnswer()
await pc.setLocalDescription(answer)
peers.update({offer['id']: pc})
data = {
"type": "server-answer",
"id": offer['id'],
"answer": {
"type": pc.localDescription.type,
"sdp": pc.localDescription.sdp,
}
}
await self.channel_layer.send(
self.channel_name, {"type": "response", "message": data}
)
print('send offer to host')
await self.offer_to_host(answer=None, tracks=streams)
# when offer send to host
async def offer_to_host(self, answer=None, tracks=None):
if answer is None:
host_pc = RTCPeerConnection()
host_id = str(uuid4())
hosts.update({host_id: host_pc})
print(tracks)
for key, track in tracks.items():
host_pc.addTrack(track)
offer = await host_pc.createOffer()
await host_pc.setLocalDescription(offer)
data = {
"type": "server-offer",
"host_id": host_id,
"answer": {"type": host_pc.localDescription.type, "sdp": host_pc.localDescription.sdp}
}
await self.channel_layer.group_send(
self.room_group_name, {"type": "response", "message": data}
)
else:
sdp = RTCSessionDescription(type=answer['type'], sdp=answer['sdp'])
await hosts[answer['host_id']].setRemoteDescription(sdp)
```
Please refer to the code above and help me sort out this problem.
**The main problem I am facing is how to add tracks from different `RTCPeerConnection`s to a single offer at once and send it to the admin** | closed | 2021-08-16T12:11:24Z | 2022-03-11T17:57:12Z | https://github.com/aiortc/aiortc/issues/552 | [] | nikeshvarma | 0 |
streamlit/streamlit | python | 10,218 | Capturing events from interactive plotly line drawing | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
A way to draw lines on plots and get the coordinates of that line would be very useful. In plotly, line drawing is implemented and should fire a specific event, but Streamlit has no way of accessing that event.
### Why?
I was working on a dashboard with a plot of 2D probability densities of my data. A great way to understand this specific data is to "select" features that stand out in this probability density and find instances in the underlying data which is close to that feature.
In my case, the features are lines of varying shapes and can always be thought of as the plot of a function y=f(x).
As such, the easiest way to find instances that are close to the feature is to draw a simple line over the feature and calculate the distance to that line for all instances in my dataset.
The trick is then to get the coordinates of a line that I draw on a heatmap plot.
The closest I can get currently is to draw a lasso around a thin slice, but that gives more complexity to the "search", since my target line now has a thickness.
I'm sure that there's plenty more cases where it makes sense to get the coordinates of a line drawn on a plot or an image.
### How?
Interactive line drawing in plotly is described [here](https://plotly.com/python/shapes/#drawing-shapes-with-a-mouse-on-cartesian-plots).
For my use-case, the `drawopenpath` option is the most suitable.
According to plotly's documentation, this sends a `relayout` event, but that doesn't seem to trigger anything on the Streamlit side.
Short example of what I tried:
```
import streamlit as st
import plotly.graph_objects as go
fig = go.Figure()
fig.add_scatter(x=[0, 1, 2, 3], y=[1,2, 4, 6])
events = st.plotly_chart(
    fig,
    on_select="rerun",
    config={
        'modeBarButtonsToAdd': [
            'drawopenpath',
        ]
    }
)
events
```
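For completeness: if the drawn-path coordinates were exposed, the "distance of each instance to the drawn line" search described above needs nothing beyond the standard library. A sketch of point-to-polyline distance (all names are hypothetical):

```python
import math

def point_segment_dist(p, a, b):
    """Distance from 2D point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def dist_to_polyline(p, path):
    """Minimum distance from p to a drawn open path (list of vertices)."""
    return min(point_segment_dist(p, a, b) for a, b in zip(path, path[1:]))

path = [(0, 0), (1, 1), (2, 0)]        # the line a user would draw
print(dist_to_polyline((1, 0), path))  # ~0.7071
```

Instances could then be ranked by this distance to find the ones closest to the drawn feature.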
### Additional Context
_No response_ | open | 2025-01-21T14:14:10Z | 2025-01-21T15:36:30Z | https://github.com/streamlit/streamlit/issues/10218 | [
"type:enhancement",
"feature:st.plotly_chart",
"area:events"
] | CarlAndersson | 1 |
httpie/cli | rest-api | 735 | Http headers appear in RED color | ### Issue
Is it possible to change the colors of the headers? They appear in red when I run an HTTPie request.
```
ls -la /usr/local/bin/http
../Cellar/httpie/1.0.2/bin/http
http --version
1.0.2
http://$route_address/api/client/2
HTTP/1.1 200
Cache-control: private
Content-Type: application/json;charset=UTF-8
Date: Thu, 06 Dec 2018 18:26:34 GMT
Set-Cookie: ed92a83990a1834ab681cc2334241cd1=547aa9cfaa874ed4b0b40c806379c928; path=/; HttpOnly
Transfer-Encoding: chunked
{
"id": 2,
"name": "Apple"
}
```

### Debug info
```
HTTPie 1.0.2
Requests 2.20.1
Pygments 2.3.0
Python 3.7.1 (default, Nov 28 2018, 11:51:47)
[Clang 10.0.0 (clang-1000.11.45.5)]
/usr/local/Cellar/httpie/1.0.2/libexec/bin/python3.7
Darwin 18.2.0
<Environment {
"colors": 256,
"config": {
"__meta__": {
"about": "HTTPie configuration file",
"help": "https://httpie.org/doc#config",
"httpie": "1.0.2"
},
"default_options": "[]"
},
"config_dir": "/Users/dabou/.httpie",
"is_windows": false,
"stderr": "<_io.TextIOWrapper name='<stderr>' mode='w' encoding='UTF-8'>",
"stderr_isatty": true,
"stdin": "<_io.TextIOWrapper name='<stdin>' mode='r' encoding='UTF-8'>",
"stdin_encoding": "UTF-8",
"stdin_isatty": true,
"stdout": "<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>",
"stdout_encoding": "UTF-8",
"stdout_isatty": true
}>
``` | closed | 2018-12-06T18:28:57Z | 2021-02-18T23:42:54Z | https://github.com/httpie/cli/issues/735 | [] | cmoulliard | 4 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 498 | Instruction fine-tuning of Chinese-Alpaca-2 7B for classification: why is the fine-tuned 16-bit accuracy only 0.001 higher than 8-bit, and why does the 13B model get worse? | ### Pre-submission checklist
- [X] Make sure you are using the latest code from this repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and searched the issues without finding a similar problem or solution.
- [X] For third-party plugin issues, e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), it is recommended to look for solutions in the corresponding projects.
### Issue type
Model training and fine-tuning
### Base model
Chinese-Alpaca-2 (7B/13B)
### Operating system
Linux
### Detailed description
```
# Instruction fine-tuning of Chinese-Alpaca-2 7B for classification, with answers limited to 10 Chinese characters. After fine-tuning for 100 epochs, why is the 16-bit accuracy only 0.001 higher than 8-bit, and why does the 13B model get worse rather than better? From what angles should this be analyzed?
```
### Dependencies (required for code-related issues)
```
# transformers=4.34.0
# sentencepiece= 0.1.99
```
### Logs or screenshots
```
# 7B, 8-bit, fine-tuned 100 epochs on the classification task; test-set results: Class,precision,recall,f1-score,support
weighted avg,0.6490202416435714,0.6545275590551181,0.6430212355260709,2032.0
7B, 16-bit, fine-tuned 100 epochs on the classification task; test-set results: Class,precision,recall,f1-score,support
weighted avg,0.6485059628314677,0.655511811023622,0.6441143713844812,2032.0
``` | closed | 2024-01-09T09:53:29Z | 2024-02-02T22:04:14Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/498 | [
"stale"
] | feifei05 | 3 |
feature-engine/feature_engine | scikit-learn | 543 | Is sklearntransformerwrapper get list of transformers? | At this stage, `SklearnTransformerWrapper` is a wonderful feature, but is it possible to pass a list of transformers to it and transform the data in parallel?
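For context, scikit-learn's `FeatureUnion` already applies a list of transformers in parallel and concatenates their outputs, which may be what is wanted here. The fan-out idea itself can be sketched with the standard library alone (the transformers below are plain callables, purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def scale(rows):  # stand-in for one fitted transformer's .transform()
    return [[x * 10 for x in r] for r in rows]

def shift(rows):  # stand-in for another transformer
    return [[x + 1 for x in r] for r in rows]

def parallel_transform(transformers, rows):
    """Apply every transformer to the same data concurrently."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda t: t(rows), transformers))
    # concatenate the feature blocks column-wise, like FeatureUnion does
    return [sum(blocks, []) for blocks in zip(*results)]

out = parallel_transform([scale, shift], [[1, 2], [3, 4]])
print(out)  # [[10, 20, 2, 3], [30, 40, 4, 5]]
```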
Thank you. | open | 2022-10-14T05:00:06Z | 2022-10-14T11:52:10Z | https://github.com/feature-engine/feature_engine/issues/543 | [] | rawinan-soma | 3 |
suitenumerique/docs | django | 464 | Video integration from Youtube | ## Bug Report
**Problematic behavior**
Video integration doesn't work from YouTube

**Steps to Reproduce**
1. Video Integration with URL
**Environment**
- Impress version: prod
- Platform: Chromium 119.0.6038.0 & firefox 132.0.2 with windows 11
| open | 2024-11-29T15:06:11Z | 2024-11-29T15:06:11Z | https://github.com/suitenumerique/docs/issues/464 | [] | afouchereau | 0 |
collerek/ormar | sqlalchemy | 549 | json field set null | ```
class ItemConfig(ormar.Model):
    pairs = ormar.Json()

it = await ItemConfig(pairs=None).save()
```
In the database (PostgreSQL), the `pairs` column was updated to

not

So when I execute the SQL `select * from item_config where pairs is null`, there is no result. Why?
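What the screenshots appear to show (an assumption, based on the described behavior): a `Json` field serializes the Python value before storing it, so `None` is stored as the JSON literal `null`, a real non-NULL value, not as SQL NULL, which is why `WHERE pairs IS NULL` matches nothing. The stdlib makes the distinction visible:

```python
import json

stored = json.dumps(None)  # what a JSON column ends up holding
print(repr(stored))        # 'null' -> a real value, not SQL NULL

# SQL `IS NULL` matches only a missing value; the JSON null
# round-trips back to None but is itself a stored string:
print(json.loads(stored) is None)  # True
print(stored is None)              # False
```

A query matching the JSON null (e.g. comparing against the literal `'null'`) would likely be needed instead of `IS NULL`.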
| closed | 2022-01-27T03:32:19Z | 2022-01-27T08:00:57Z | https://github.com/collerek/ormar/issues/549 | [
"bug"
] | ponytailer | 1 |
ivy-llc/ivy | numpy | 28,754 | Fix Frontend Failing Test: jax - averages_and_variances.numpy.mean | closed | 2024-05-22T07:06:10Z | 2024-05-23T21:57:21Z | https://github.com/ivy-llc/ivy/issues/28754 | [
"Sub Task"
] | ZenithFlux | 0 | |
andy-landy/traceback_with_variables | jupyter | 33 | custom_var_printers definition | How is custom_var_printers supposed to work?
Here is simple code copied from the examples:
```
from traceback_with_variables import Format, ColorSchemes, hide, skip, is_ipython_global, print_exc

fmt_config = Format(
    before=3,
    after=1,
    max_value_str_len=10000,
    objects_details=0,
    ellipsis_rel_pos=0.7,
    max_exc_str_len=1000,
    ellipsis_='...',
    color_scheme=ColorSchemes.synthwave,
    skip_files_except=['my_project', 'site-packages'],
    brief_files_except='my_project',
    custom_var_printers=[  # first matching is used
        ('password', hide),  # by name, print const str
        (list, lambda v: f'list{v}'),  # by type, print fancy str
        (lambda name, type_, filename, is_global: is_global, skip),  # custom filter, skip printing
        (is_ipython_global, lambda v: None),  # same, handy for Jupyter
        (['secret', dict, (lambda name, *_: 'asd' in name)], lambda v: '???'),  # by different things, print const str
    ]
)

def func_divide(var1, var2):
    try:
        res = var1 / var2
    except ZeroDivisionError as e:
        print_exc(fmt=fmt_config)
        return None
    return res

one = 2
two = 0
result = func_divide(one, two)
```
This code is crashing with error:
```
Traceback (most recent call last):
File ".../tmp.py", line 51, in <module>
result = func_divide(one, two)
File ".../tmp.py", line 43, in func_divide
print_exc(fmt=fmt_config)
File ".../.venv/lib/python3.10/site-packages/traceback_with_variables/print.py", line 33, in print_exc
for line in iter_exc_lines(
File ".../.venv/lib/python3.10/site-packages/traceback_with_variables/core.py", line 225, in _iter_lines
var_str = _to_cropped_str(
File ".../.venv/lib/python3.10/site-packages/traceback_with_variables/core.py", line 275, in _to_cropped_str
print_ = get_print(name=name, obj=obj, filename=filename, is_global=is_global, var_printers=custom_var_printers)
File ".../.venv/lib/python3.10/site-packages/traceback_with_variables/core.py", line 37, in get_print
return next((p for should_p, p in var_printers if should_p(name, type_, filename, is_global)), show)
File ".../.venv/lib/python3.10/site-packages/traceback_with_variables/core.py", line 37, in <genexpr>
return next((p for should_p, p in var_printers if should_p(name, type_, filename, is_global)), show)
TypeError: 'str' object is not callable
```
Am I missing something? | open | 2025-02-27T08:11:51Z | 2025-02-27T22:59:19Z | https://github.com/andy-landy/traceback_with_variables/issues/33 | [] | JustMyD | 1 |
axnsan12/drf-yasg | rest-api | 584 | Cannot declare Array type as a type hint of a SerializerMethodField | Hi, I'm beginning to use this project to generate the documentation for our API and it's really useful, thanks!
I'm trying to use the [type hinting on a serializer method](https://drf-yasg.readthedocs.io/en/stable/custom_spec.html?#support-for-serializermethodfield) to declare the return type of my method as an array of objects:
```
class MySerializer(serializers.Serializer):
    field = serializers.SerializerMethodField()

    def get_field(self, instance) -> typing.List[dict]:
        return [
            {'key': 'value'}
        ]
```
But this generates
```
type: "object"
```
instead of
```
type: "array"
items:
type: "object"
```
I've traced down the issue to this line : https://github.com/axnsan12/drf-yasg/blob/master/src/drf_yasg/inspectors/field.py#L613 which keeps only the first args of a non-class hint.
What's strange is that there's some handling of Sequences in `inspect_collection_hint_class` but there's no way to reach it, because typing.Sequence subclasses are not classes and are discarded by the test `if inspect.isclass(hint_class)` in `SerializerMethodFieldInspector`.
Is there a way to generate my schema that I overlooked ?
Thanks
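The `inspect.isclass` gate traced above can be illustrated with the standard library alone: subscripted generics such as `typing.List[dict]` are not classes, but `typing.get_origin` / `typing.get_args` (Python 3.8+) recover the collection structure an inspector could branch on. This is a sketch of the idea, not drf-yasg's actual code:

```python
import inspect
import typing

hint = typing.List[dict]

print(inspect.isclass(hint))    # False -> falls through class-based checks
print(typing.get_origin(hint))  # <class 'list'>
print(typing.get_args(hint))    # (<class 'dict'>,)

# A dispatcher could branch on the origin instead of the hint itself:
origin = typing.get_origin(hint)
is_array = origin is not None and issubclass(origin, (list, tuple))
print(is_array)                 # True -> emit type: array / items: object
```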
| open | 2020-05-01T20:22:33Z | 2025-03-07T12:13:59Z | https://github.com/axnsan12/drf-yasg/issues/584 | [
"triage"
] | thomasWajs | 10 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 78 | Pytorch Exception in Thread: ValueError: signal number 32 out of range | ```
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/home/wyf/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/wyf/anaconda3/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/home/wyf/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 139, in _serve
    signal.pthread_sigmask(signal.SIG_BLOCK, range(1, signal.NSIG))
  File "/home/wyf/anaconda3/lib/python3.6/signal.py", line 60, in pthread_sigmask
    sigs_set = _signal.pthread_sigmask(how, mask)
ValueError: signal number 32 out of range
```
| closed | 2018-12-26T08:49:44Z | 2019-12-08T12:34:58Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/78 | [] | Finley1991 | 1 |
apachecn/ailearning | python | 326 | Question about the rulesFromConseq function in the Apriori algorithm | Regarding this function:
```
def rulesFromConseq(freqSet, H, supportData, brl, minConf=0.6):
    # args: a frequent itemset, and a list H of elements that may appear on the rule's right-hand side
    m = len(H[0])
    if (len(freqSet) > (m + 1)):  # the frequent itemset has more elements than a single candidate set
        Hmp1 = aprioriGen(H, m+1)  # merge sets with shared parts (same elements, different order)
        Hmp1 = calcConf(freqSet, Hmp1, supportData, brl, minConf)  # compute confidence
        if (len(Hmp1) > 1):  # recurse if more than one rule meets the minimum confidence
            rulesFromConseq(freqSet, Hmp1, supportData, brl, minConf)
```
I think this ignores association rules of the form {...} -> {1}, i.e. rules whose right-hand side has only one element, for 3-itemsets and larger frequent itemsets.
It should be changed to this:
```
def rulesFromConseq(freqSet, H, supportData, brl, minConf=0.6):
    # args: a frequent itemset, and a list H of elements that may appear on the rule's right-hand side
    m = len(H[0])
    if m == 1:
        calcConf(freqSet, H, supportData, brl, minConf)
    if (len(freqSet) > (m + 1)):  # the frequent itemset has more elements than a single candidate set
        Hmp1 = aprioriGen(H, m+1)  # merge sets with shared parts (same elements, different order)
        Hmp1 = calcConf(freqSet, Hmp1, supportData, brl, minConf)  # compute confidence
        if (len(Hmp1) > 1):  # recurse if more than one rule meets the minimum confidence
            rulesFromConseq(freqSet, Hmp1, supportData, brl, minConf)
```
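To make the gap concrete: for a frequent 3-itemset, candidate rule consequents of size 1 exist but, per the analysis above, only sizes of 2 and up ever reach `calcConf`. A standard-library enumeration of all candidate consequents (illustrative only):

```python
from itertools import combinations

def candidate_consequents(freq_set):
    """All non-empty proper subsets usable as a rule's right-hand side."""
    items = sorted(freq_set)
    return [frozenset(c)
            for m in range(1, len(items))
            for c in combinations(items, m)]

rhs = candidate_consequents({1, 2, 3})
print(rhs)  # size-1 and size-2 subsets of {1, 2, 3}

# The size-1 consequents {1}, {2}, {3} exist and need confidence checks too:
print(sum(1 for c in rhs if len(c) == 1))  # 3
```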
Could you check whether my reasoning is correct? | closed | 2018-04-02T08:10:23Z | 2018-04-09T14:45:28Z | https://github.com/apachecn/ailearning/issues/326 | [] | chZzZzZz | 13 |
quantmind/pulsar | asyncio | 300 | How to spawn a Actor with `spawn(start=<CoroutineFunction>)` | On pulsar 1.x, it works with:
```python
async def some_fn(*args, **kwargs):
    foo

monitor.spawn(start=some_fn)
```
And on 2.0:
It warns that the coroutine was never awaited. | closed | 2017-12-07T17:23:13Z | 2017-12-08T09:42:04Z | https://github.com/quantmind/pulsar/issues/300 | [
"question"
] | RyanKung | 3 |
vitalik/django-ninja | pydantic | 295 | Dynamic response schema depending on user data | Hi.
I'm currently trying out django ninja and find it really amazing. One thing I'm not able to do is to limit fields list depending on user data.
Example:
As normal user I'd like to access my user data via api where I can see my email, first_name, last_name.
As admin user I'd like to access my user data via api where I can see my email, first_name, last_name and list of my permissions.
both requests are going to the same api endpoint.
Currently I'm able to do it by checking request.user in view and return different JsonResponse objects depending of user.is_staff status. Is there a way to use two Schemas on same endpoint and switch them depending on request data? | closed | 2021-12-03T15:18:21Z | 2021-12-06T13:53:27Z | https://github.com/vitalik/django-ninja/issues/295 | [] | hazaard | 4 |
dropbox/PyHive | sqlalchemy | 51 | fetchmany argument ignored? | I am using PyHive and reading from a table with 527,000 rows, which takes quite a long time to read.
In trying to optimize the process, I found the following timings:
fetchmany(1000) takes 4.2s
fetchmany(2000) takes 8.4s
fetchmany(500) takes 4.2s
fetchmany(500) takes 0.02s if directly preceded by the other fetchmany(500)
It seems like the batch size is 1000 regardless of the argument to fetchmany(). Is this the prescribed behavior? Is there an "under the hood" way to change this to optimize batched reads? Is there a way to "prefetch" so that data can be pipelined?
thanks!
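For reference, DB-API 2.0 drivers batch on `cursor.arraysize`, which is also the default `size` for `fetchmany`; whether PyHive honors an explicit `fetchmany(n)` is exactly the question above, but the knob itself can be demonstrated with the stdlib `sqlite3` driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur.execute("SELECT x FROM t")
cur.arraysize = 3              # driver-level batch hint
first = cur.fetchmany()        # no arg -> uses arraysize
second = cur.fetchmany(5)      # explicit size overrides it
print(len(first), len(second)) # 3 5
```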
| closed | 2016-05-31T15:25:10Z | 2016-06-01T18:06:54Z | https://github.com/dropbox/PyHive/issues/51 | [] | mschmill | 2 |
sherlock-project/sherlock | python | 2,162 | False positive for: Fiverr | ### Additional info
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | closed | 2024-06-10T06:57:37Z | 2025-02-17T06:04:41Z | https://github.com/sherlock-project/sherlock/issues/2162 | [
"false positive"
] | BertKoor | 2 |
tortoise/tortoise-orm | asyncio | 1,145 | Cannt import name connection | ```
Traceback (most recent call last):
  File "C:\Users\_\Desktop\_\cog.py", line 2, in <module>
    from core import config, lib
  File "C:\Users\_\Desktop\_\core\lib.py", line 5, in <module>
    from tortoise.models import Model
  File "C:\Users\_\AppData\Local\Programs\Python\Python310\lib\site-packages\tortoise\models.py", line 25, in <module>
    from tortoise import connections
ImportError: cannot import name 'connections' from 'tortoise' (unknown location)
```
How can I fix it? | closed | 2022-06-03T08:15:18Z | 2024-08-13T13:09:57Z | https://github.com/tortoise/tortoise-orm/issues/1145 | [] | mr-far | 2 |
snooppr/snoop | web-scraping | 34 | POP Up results Music and Full English version | English Language and Removing the pop up sound. | closed | 2020-07-31T21:13:35Z | 2020-09-12T15:37:12Z | https://github.com/snooppr/snoop/issues/34 | [
"question"
] | leemoh | 3 |
lepture/authlib | django | 309 | oidc raises exception when auth_time claim is decimal | **Describe the bug**
According to the [spec](https://openid.net/specs/openid-connect-core-1_0.html#IDToken), 'auth_time' is a JSON NUMBER, which can be decimal. While most OIDC providers set it to an `int`, some (e.g. JetBrains Hub) provide it as a decimal.
**Error Stacks**
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/oidc_handler.py", line 671, in handle_oidc_callback
    userinfo = await self._parse_id_token(token, nonce=nonce)
  File "/usr/local/lib/python3.8/site-packages/synapse/handlers/oidc_handler.py", line 474, in _parse_id_token
    claims.validate(leeway=120)  # allows 2 min of clock skew
  File "/usr/local/lib/python3.8/site-packages/authlib/oidc/core/claims.py", line 41, in validate
    self.validate_auth_time()
  File "/usr/local/lib/python3.8/site-packages/authlib/oidc/core/claims.py", line 66, in validate_auth_time
    raise InvalidClaimError('auth_time')
authlib.jose.errors.InvalidClaimError: invalid_claim: Invalid claim "auth_time"
```
**To Reproduce**
id_token:
```
{'sub': 'redacted-aff5-461c-8f6c-4e143fad41df', 'name': 'Klaus Schwartz', 'preferred_username': 'klaus', 'profile': 'https://example.com/hub/users/redacted-aff5-461c-8f6c-4e143fad41df', 'picture': 'data:image/jpeg;base64,image-data', 'email': 'redacted@example.com', 'email_verified': True, 'iss': 'https://example.com/hub', 'aud': ['bb4872fc-12a6-4f74-8744-96263d20807b'], 'exp': 1617590286.706, 'iat': 1610302441.104, 'auth_time': 1609814286.706, 'nonce': 'redacted'}
```
**Expected behavior**
`auth_time` validation raises no errors
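A spec-consistent validator would accept any JSON number. The following is a sketch of that tolerant check, not authlib's actual code:

```python
from numbers import Real

def validate_auth_time(auth_time):
    # IDToken spec: auth_time is a JSON NUMBER, so ints *and* decimals are valid
    if auth_time is not None and not isinstance(auth_time, Real):
        raise ValueError('invalid_claim: "auth_time"')
    return auth_time

print(validate_auth_time(1609814286))      # int, as most providers send it
print(validate_auth_time(1609814286.706))  # decimal, e.g. JetBrains Hub
```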
**Environment:**
- OS: any
- Python Version: 3.8
- Authlib Version: master
**Additional context**
JetBrains Hub sets `auth_time` as seconds with millis in decimal part | closed | 2021-01-10T19:30:27Z | 2021-01-12T02:22:49Z | https://github.com/lepture/authlib/issues/309 | [
"bug"
] | tntclaus | 0 |
pnkraemer/tueplots | matplotlib | 100 | Whitespace around figures when saving | To avoid unnecessary whitespace around a figure, we should look into setting "savefig.bbox": "tight", and maybe even play around with "savefig.pad_inches", to automatically crop whitespaces around figures.
This setting could be provided as part of the content in `tueplots.figsizes`, on the same level as `constrained_layout`.
While this issue has not been resolved yet, a user can accomplish the same thing by saving figures with `plt.savefig(..., bbox_inches="tight")`.
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.savefig.html
Tagging @philipphennig here, who brought this to my attention.
Contributions are welcome :) | closed | 2022-08-06T08:33:00Z | 2022-08-10T12:36:04Z | https://github.com/pnkraemer/tueplots/issues/100 | [] | pnkraemer | 0 |
huggingface/transformers | tensorflow | 36,297 | Bug about num_update_steps_per_epoch in function _inner_training_loop | ### System Info
python 3.10
transformers 4.48.3
### Reproduction
In the `_inner_training_loop` method of `/usr/local/lib/python3.10/dist-packages/transformers/trainer.py`, the calculation logic for `num_update_steps_per_epoch` is inconsistent, which leads to the following issues:
1. When calculating `max_steps`, the logic
`num_update_steps_per_epoch = len_dataloader // args.gradient_accumulation_steps`
rounds the number of steps down. Before the training loop, the calculation
`total_updates = steps_in_epoch // args.gradient_accumulation_steps + 1`
rounds the number of steps up. When fetching data during training,
`num_batches = args.gradient_accumulation_steps if update_step != (total_updates - 1) else remainder`
trains the last batch, which does not have enough data for one full `gradient_accumulation_steps`, yet still causes `do_sync_step` to be set to True and `global_step` to be updated. This results in the total number of training steps exceeding the previously calculated `max_steps`, causing training to terminate before the entire dataset has been fully trained.
dataset length: 91
epochs:10
batchsize: 10
GPU:2
GradientAccumulation:2
[INFO|trainer.py:2369] 2025-02-20 05:31:54,186 >> ***** Running training *****
[INFO|trainer.py:2370] 2025-02-20 05:31:54,187 >> Num examples = 91
[INFO|trainer.py:2371] 2025-02-20 05:31:54,187 >> Num Epochs = 10
[INFO|trainer.py:2372] 2025-02-20 05:31:54,187 >> Instantaneous batch size per device = 10
[INFO|trainer.py:2375] 2025-02-20 05:31:54,187 >> Total train batch size (w. parallel, distributed & accumulation) = 40
[INFO|trainer.py:2376] 2025-02-20 05:31:54,187 >> Gradient Accumulation steps = 2
[INFO|trainer.py:2377] 2025-02-20 05:31:54,187 >> Total optimization steps = 20
[INFO|trainer.py:2378] 2025-02-20 05:31:54,229 >> Number of trainable parameters = 20,185,088
{'loss': 4.8291, 'grad_norm': 0.7818735241889954, 'learning_rate': 4.9692208514878444e-05, 'epoch': 0.4, 'num_input_tokens_seen': 2240}
{'loss': 4.5171, 'grad_norm': 1.987959384918213, 'learning_rate': 2.5e-05, 'epoch': 3.4, 'num_input_tokens_seen': 18880}
{'loss': 3.8988, 'grad_norm': 1.4728916883468628, 'learning_rate': 0.0, 'epoch': 6.8, 'num_input_tokens_seen': 38000}
{'train_runtime': 4008.0113, 'train_samples_per_second': 0.227, 'train_steps_per_second': 0.005, 'train_tokens_per_second': 6.786, 'train_loss': 4.223528718948364, 'epoch': 6.8, 'num_input_tokens_seen': 38000}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [1:06:47<00:00, 200.40s/it]
***** train metrics *****
epoch = 6.8
num_input_tokens_seen = 38000
total_flos = 1505672GF
train_loss = 4.2235
train_runtime = 1:06:48.01
train_samples_per_second = 0.227
train_steps_per_second = 0.005
train_tokens_per_second = 6.786
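The 6.8-epoch stop above follows directly from the two inconsistent formulas; redoing the arithmetic for this exact configuration (91 examples, per-device batch 10, 2 GPUs, gradient accumulation 2; the loop formula is simplified from the quoted code):

```python
import math

num_examples, per_device_bs, n_gpu, grad_accum, epochs = 91, 10, 2, 2, 10
len_dataloader = math.ceil(num_examples / (per_device_bs * n_gpu))  # 5 batches/epoch

# Formula used to size the run (rounds DOWN):
updates_per_epoch_planned = max(len_dataloader // grad_accum, 1)    # 2
max_steps = updates_per_epoch_planned * epochs                      # 20, matches the log

# Formula used inside the loop (the remainder batch still syncs -> rounds UP):
updates_per_epoch_actual = len_dataloader // grad_accum + (1 if len_dataloader % grad_accum else 0)

epochs_actually_run = max_steps / updates_per_epoch_actual
print(max_steps, updates_per_epoch_actual, round(epochs_actually_run, 2))  # 20 3 6.67
```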
2. When the dataset is small and gradient_accumulation_steps is large, and there is insufficient data for a full epoch, training does not trigger
do_sync_step = (step + 1) % args.gradient_accumulation_steps == 0 or (step + 1) == steps_in_epoch
As a result, do_sync_step remains False and global_step stays at 0, causing the model to fail to train properly. Example configuration:
dataset num: 91
epochs:10
batchsize: 4
GPU:2
GradientAccumulation:16
}
[INFO|trainer.py:2369] 2025-02-20 07:26:40,274 >> ***** Running training *****
[INFO|trainer.py:2370] 2025-02-20 07:26:40,533 >> Num examples = 91
[INFO|trainer.py:2371] 2025-02-20 07:26:40,939 >> Num Epochs = 10
[INFO|trainer.py:2372] 2025-02-20 07:26:41,320 >> Instantaneous batch size per device = 4
[INFO|trainer.py:2375] 2025-02-20 07:26:42,179 >> Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:2376] 2025-02-20 07:26:42,498 >> Gradient Accumulation steps = 16
[INFO|trainer.py:2377] 2025-02-20 07:26:43,023 >> Total optimization steps = 10
[INFO|trainer.py:2378] 2025-02-20 07:26:43,810 >> Number of trainable parameters = 20,185,088
0%| | 0/10 [00:00<?, ?it/s][INFO|trainer.py:2643] 2025-02-20 07:38:24,660 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 515.6494, 'train_samples_per_second': 1.765, 'train_steps_per_second': 0.019, 'train_tokens_per_second': 47.164, 'train_loss': 45420.82977294922, 'epoch': 0, 'num_input_tokens_seen': 44128}
0%| | 0/10 [08:20<?, ?it/s]
[INFO|trainer.py:3910] 2025-02-20 07:38:31,120 >> Saving model checkpoint to /workspace/models/trained/DeepSeek-R1-Distill-Qwen-7B-007/adapter
***** train metrics *****
epoch = 0
num_input_tokens_seen = 44128
total_flos = 1748481GF
train_loss = 45420.8298
train_runtime = 0:08:35.64
train_samples_per_second = 1.765
train_steps_per_second = 0.019
train_tokens_per_second = 47.164
3. When the dataset is small and gradient_accumulation_steps is large, other configurations may trigger additional issues, but further examples are not provided.
4. There is also training-recovery logic related to this calculation.
```
epochs_trained = int(self.state.global_step // num_update_steps_per_epoch)
if not args.ignore_data_skip:
    steps_trained_in_current_epoch = self.state.global_step % (num_update_steps_per_epoch)
    steps_trained_in_current_epoch *= args.gradient_accumulation_steps
else:
    steps_trained_in_current_epoch = 0
```
During training resume from checkpoint, epochs_trained depends on num_update_steps_per_epoch, and the calculated epoch may be greater than the actual epoch number in the checkpoint. Similarly, steps_trained_in_current_epoch is also inaccurate.
### Expected behavior
Please confirm the above issues.
Thank you very much! | open | 2025-02-20T09:00:46Z | 2025-03-23T08:03:33Z | https://github.com/huggingface/transformers/issues/36297 | [
"bug"
] | onenotell | 3 |
graphistry/pygraphistry | jupyter | 243 | [DOCS] Colors tutorial | **Is your feature request related to a problem? Please describe.**
Colors can use a full tutorial, due to issues like https://github.com/graphistry/pygraphistry/issues/241
**Describe the solution you'd like**
Cover:
* Explicit colors: palettes, RGB hex
* Symbolic encodings: categorical, continuous, defaults
* Points vs edges
* Linked from main docs + pygraphistry homepage
**Describe alternatives you've considered**
Current docs (code, readme) are not enough
| closed | 2021-07-12T19:02:28Z | 2021-10-11T02:02:12Z | https://github.com/graphistry/pygraphistry/issues/243 | [
"docs",
"good-first-issue"
] | lmeyerov | 0 |
strawberry-graphql/strawberry | asyncio | 3,572 | TypeError: MyCustomType fields cannot be resolved. unsupported operand type(s) for |: 'LazyType' and 'NoneType' | ## Describe the Bug
I hit an error while generating types with the versions below; the error thrown was
> TypeError: MyCustomType fields cannot be resolved. unsupported operand type(s) for |: 'LazyType' and 'NoneType'
These were the package versions; the error appeared after upgrading from:
```
Django= 4.2.11
strawberry-graphql = "^0.235.2"
strawberry-graphql-django = "^0.44.2"
python= "3.11.9"
```
To
```
Django= 4.2.11
strawberry-graphql = "^0.235.2"
strawberry-graphql-django = "^0.45.0"
python= "3.11.9"
```
The code looked like this when the error was raised:
```
class MyCustomType:
    my_custom_field: Annotated[AType, strawberry.lazy("xx.graphql.types")] | None

class AnotherMyCustomType:
    my_another_custom_field: AType | None = None
```
I changed it to this to fix it:
```
class MyCustomType:
    my_custom_field: AType | None = None

class AnotherMyCustomType:
    my_another_custom_field: AType | None = None
```
I did not change both to be lazy, which would be the other solution.
In the end, the error came from how the types were generated: in the same file I had a type that was declared both as lazy and as not lazy. I did not have the error before; it appeared when I upgraded to the versions above.
I fixed the error by changing the annotation to the non-lazy form, but I'm reporting it here since @bellini666 suggested I file a bug in case it is related to how the types are generated.
I hope this helps
Thanks!
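For anyone debugging this class of error: the `|` failure is the runtime `__or__` operator, which only works when the left operand supports PEP 604 unions; an ordinary object, which is presumably what a resolved `LazyType` instance is here, does not. A minimal reproduction with a stand-in class (the class name is hypothetical):

```python
from typing import Annotated, Optional

class LazyStandIn:  # hypothetical stand-in for a LazyType *instance*
    pass

lazy = LazyStandIn()

try:
    lazy | None  # mirrors: LazyType | NoneType
    failed = False
except TypeError as exc:
    failed = True
    print(exc)   # unsupported operand type(s) for |: 'LazyStandIn' and 'NoneType'

print(failed)    # True

# Real classes and typing forms are fine:
print(Optional[Annotated[int, "meta"]])
```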
## System Information
- Operating system: Apple M1 Pro Sonoma 14.4.1 -> But the system is on Docker python:3.11.9-bullseye (debian)
- Strawberry version (if applicable): "0.235.2"
## Additional Context
[error_traceback.txt](https://github.com/user-attachments/files/16231460/error_traceback.txt)
[discord thread](https://discord.com/channels/689806334337482765/689861980776955948/1261325861597089802) | closed | 2024-07-15T08:06:25Z | 2025-03-20T15:56:48Z | https://github.com/strawberry-graphql/strawberry/issues/3572 | [
"bug"
] | Ronjea | 3 |
PaddlePaddle/models | nlp | 4,786 | Error when fine-tuning based on ERNIE in the emotion_detection tutorial | After running the code below I get an error. I followed the 1.8 tutorial for all the preceding steps and can't tell where the problem is; any advice would be appreciated.
#--init_checkpoint ./pretrain_models/ernie
sh run_ernie.sh train
Error output:
----------- Configuration Arguments -----------
batch_size: 32
data_dir: None
dev_set: ./data/dev.tsv
do_infer: False
do_lower_case: True
do_train: True
do_val: True
epoch: 3
ernie_config_path: ./pretrain_models/ernie//ernie_config.json
infer_set: None
init_checkpoint: ./pretrain_models/ernie//params
label_map_config: None
lr: 2e-05
max_seq_len: 64
num_labels: 3
random_seed: 1
save_checkpoint_dir: ./save_models/ernie
save_steps: 500
skip_steps: 50
task_name: None
test_set: None
train_set: ./data/train.tsv
use_cuda: True
use_paddle_hub: False
validation_steps: 50
verbose: True
vocab_path: ./pretrain_models/ernie//vocab.txt
------------------------------------------------
attention_probs_dropout_prob: 0.1
hidden_act: relu
hidden_dropout_prob: 0.1
hidden_size: 768
initializer_range: 0.02
max_position_embeddings: 513
num_attention_heads: 12
num_hidden_layers: 12
type_vocab_size: 2
vocab_size: 18000
------------------------------------------------
Device count: 1
Num train examples: 9655
Max train steps: 906
Theoretical memory usage in training: 7954.669 - 8333.463 MB
W0804 18:40:22.526546 17467 device_context.cc:252] Please NOTE: device: 0, CUDA Capability: 75, Driver API Version: 10.2, Runtime API Version: 10.0
W0804 18:40:22.530467 17467 device_context.cc:260] device: 0, cuDNN Version: 7.6.
2020-08-04 18:40:24,913-WARNING: ./pretrain_models/ernie//params.pdparams not found, try to load model file saved with [ save_params, save_persistables, save_vars ]
2020-08-04 18:40:24,922-WARNING: variable file [ ./pretrain_models/ernie//params/mask_lm_trans_layer_norm_scale ./pretrain_models/ernie//params/tmp_51 ./pretrain_models/ernie//params/mask_lm_trans_layer_norm_bias ./pretrain_models/ernie//params/next_sent_3cls_fc.w_0 ./pretrain_models/ernie//params/@LR_DECAY_COUNTER@ ./pretrain_models/ernie//params/next_sent_3cls_fc.b_0 ./pretrain_models/ernie//params/mask_lm_trans_fc.w_0 ./pretrain_models/ernie//params/mask_lm_trans_fc.b_0 ./pretrain_models/ernie//params/reduce_mean_0.tmp_0 ./pretrain_models/ernie//params/mask_lm_out_fc.b_0 ] not used
/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/paddle/fluid/executor.py:1070: UserWarning: The following exception is not an EOF exception.
"The following exception is not an EOF exception.")
Traceback (most recent call last):
File "run_ernie_classifier.py", line 402, in <module>
main(args)
File "run_ernie_classifier.py", line 326, in main
outputs = train_exe.run(program=train_program, fetch_list=fetch_list, return_numpy=False)
File "/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/paddle/fluid/executor.py", line 1071, in run
six.reraise(*sys.exc_info())
File "/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/paddle/fluid/executor.py", line 1066, in run
return_merged=return_merged)
File "/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/paddle/fluid/executor.py", line 1154, in _run_impl
use_program_cache=use_program_cache)
File "/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/paddle/fluid/executor.py", line 1229, in _run_program
fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet:
--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2 paddle::operators::ReadOp::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
3 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
4 paddle::framework::Executor::RunPartialPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, long, long, bool, bool, bool)
5 paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
6 paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string> > const&, bool, bool)
------------------------------------------
Python Call Stacks (More useful to users):
------------------------------------------
File "/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2610, in append_op
attrs=kwargs.get("attrs", None))
File "/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/paddle/fluid/reader.py", line 1079, in _init_non_iterable
attrs={'drop_last': self._drop_last})
File "/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/paddle/fluid/reader.py", line 977, in __init__
self._init_non_iterable()
File "/home/ubuntu/.virtualenvs/zxf_env/lib/python3.6/site-packages/paddle/fluid/reader.py", line 608, in from_generator
iterable, return_list, drop_last)
File "../shared_modules/models/representation/ernie.py", line 48, in ernie_pyreader
use_double_buffer=True)
File "run_ernie_classifier.py", line 216, in main
pyreader_name='train_reader')
File "run_ernie_classifier.py", line 402, in <module>
main(args)
----------------------
Error Message Summary:
----------------------
InvalidArgumentError: The fed Variable 1 should have dimensions = 3, shape = [-1, 64, 1], but received fed shape [32, 23, 1]
[Hint: Expected DimensionIsCompatibleWith(shapes[i], in_dims) == true, but received DimensionIsCompatibleWith(shapes[i], in_dims):0 != true:1.] at (/paddle/paddle/fluid/operators/reader/read_op.cc:137)
[operator < read > error]
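For reference, the final error says the reader was fed a batch of shape `[32, 23, 1]` where `[-1, 64, 1]` was expected, i.e. the token-id sequences appear to have been left at their natural length (23) instead of being padded to `max_seq_len=64`. A minimal illustrative sketch of the padding a fixed-shape feed requires (the names here are invented, not the tutorial's actual reader code):

```python
# Illustrative sketch only (invented names): a fixed feed shape of
# [-1, 64, 1] requires every token-id sequence to be padded to max_seq_len.
MAX_SEQ_LEN = 64
PAD_ID = 0

def pad_batch(batch, max_seq_len=MAX_SEQ_LEN, pad_id=PAD_ID):
    """Right-pad (and truncate) every sequence to exactly max_seq_len ids."""
    return [(seq[:max_seq_len] + [pad_id] * max_seq_len)[:max_seq_len]
            for seq in batch]

batch = [list(range(23))] * 32          # a batch shaped like the failing feed
padded = pad_batch(batch)
print(len(padded), len(padded[0]))      # 32 64
```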
| closed | 2020-08-04T10:55:51Z | 2021-07-20T07:06:50Z | https://github.com/PaddlePaddle/models/issues/4786 | [] | ML-ZXF | 4 |
roboflow/supervision | tensorflow | 1,444 | Incomplete docs | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
The docs for the `from_sam` method [here](https://github.com/roboflow/supervision/blob/d08d22dec6f932d273d3d217c64343a47d5972a1/supervision/detection/core.py#L624) feel incomplete. In my opinion, the user should be provided with more details, such as the available `MODEL_TYPE` values, the checkpoint to use for the weights, etc.
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-08-13T07:12:02Z | 2024-08-26T10:16:23Z | https://github.com/roboflow/supervision/issues/1444 | [
"bug"
] | Bhavay-2001 | 0 |
docarray/docarray | fastapi | 1,547 | docs: update contributing.md | The `contributing.md` file is outdated in some places; it refers to how things worked in docarray <0.30, etc. That needs to be fixed. | closed | 2023-05-17T11:28:38Z | 2023-05-24T10:58:25Z | https://github.com/docarray/docarray/issues/1547 | [] | JohannesMessner | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,112 | Socket IO 400 Bad request. | https://github.com/miguelgrinberg/Flask-SocketIO/issues/913
It's the same issue.
The response headers I'm getting are as follows:

| closed | 2019-11-28T00:19:05Z | 2019-12-18T17:48:11Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1112 | [
"question"
] | jkam273 | 13 |
pydata/xarray | numpy | 9,267 | Subtracting datasets in xarray 2024.6.0 leads to inconsistent chunks | ### What is your issue?
When I call `groupby()` on a dataset and try to subtract another dataset from the result, I get an error that says
``` ValueError: Object has inconsistent chunks along dimension lead. This can be fixed by calling unify_chunks(). ```
Adding a call to `unify_chunks()` beforehand resolves the issue, but for some strange reason this chunking issue only occurs with more recent versions of xarray: when I run the same code with `xarray 2022.3.0`, it works without calling `unify_chunks()`. Does anyone know what may have caused the discrepancy?
Here's the relevant section of code I was running when I encountered the problem. In the snippet below, the `members` variable is a list of paths to netCDF files that contain the output from an ensemble of ocean models. I think the error should be reproducible with any group of netCDF files and similar operations:
```python
ds = xarray.open_mfdataset(members, combine='nested', concat_dim='member').sortby('init')
ensmean = ds.mean('member')
climo = ensmean.sel(init=slice('1993-01-01', '1993-12-31')).groupby('init.month').mean('init').load()
anom = ensmean.groupby('init.month') - climo
```
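For reference, the failure mode and the workaround can be reproduced without the NetCDF files. The snippet below uses small synthetic arrays (invented for illustration) that are deliberately chunked differently along the same dimension; it requires dask:

```python
import numpy as np
import pandas as pd
import xarray as xr

init = pd.date_range("1993-01-01", periods=12, freq="MS")
a = xr.DataArray(np.arange(12.0), dims="init", coords={"init": init}).chunk({"init": 4})
b = xr.DataArray(np.arange(12.0), dims="init", coords={"init": init}).chunk({"init": 5})
ds = xr.Dataset({"a": a, "b": b})  # inconsistent chunks along "init"

try:
    ds.chunks  # raises: inconsistent chunks along dimension "init"
except ValueError as err:
    print(err)

ds = ds.unify_chunks()                 # the workaround: align chunk boundaries
anom = (ds["a"] - ds["b"]).compute()   # arithmetic now succeeds
```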
And here's the output from `xarray.show_version()`:
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:23:07) [GCC 12.3.0]
python-bits: 64
OS: Linux
OS-release: 3.10.0-1160.102.1.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US
LOCALE: ('en_US', 'ISO8859-1')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.6.0
pandas: 2.2.2
numpy: 2.0.0
scipy: None
netCDF4: 1.7.1
pydap: None
h5netcdf: None
h5py: None
zarr: None
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: None
dask: 2024.7.1
distributed: 2024.7.1
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2024.6.1
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 71.0.4
pip: 24.0
conda: None
pytest: None
mypy: None
IPython: None
sphinx: None
``` | open | 2024-07-22T18:13:31Z | 2024-07-23T06:29:20Z | https://github.com/pydata/xarray/issues/9267 | [
"bug",
"topic-groupby",
"topic-dask",
"regression"
] | uwagura | 2 |
MaartenGr/BERTopic | nlp | 1,038 | Arabic text with visualize_documents | If the topics are in Arabic or a similar language, there is a problem visualizing them with the visualize_documents function: the text should be edited so that it can be displayed appropriately in Plotly graphs.

 | open | 2023-02-23T18:19:27Z | 2023-06-06T14:53:05Z | https://github.com/MaartenGr/BERTopic/issues/1038 | [] | apoalquaary | 6 |
localstack/localstack | python | 12,025 | bug: API Gateway v2 HTTP_PROXY override path is not working | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
When using `overwrite:path` as a request parameter in an API Gateway v2 integration, instead of performing the path replacement it corrupts the whole path string and can't communicate with the proxy backend server.
Example:
Setting a request parameter of `overwrite:path: "/static"`
results in a URL of:
```//statich//statict//statict//staticp//static://static///static///statich//statico//statics//statict//static.//staticd//statico//staticc//statick//statice//staticr//static.//statici//staticn//statict//statice//staticr//staticn//statica//staticl//static://static8//static0//static8//static0//static```
Every character of the proxy URL has had `/static` inserted before it!
Note: issue [#10623](https://github.com/localstack/localstack/issues/10623) exists and is closed, but the behaviour reported here is different from that bug.
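Interestingly, the corrupted string is exactly what you get by replacing the empty string with `"/" + override` in the integration URI: `str.replace("")` matches at every character boundary, including both ends. This is purely an illustration of the corruption pattern (it is not LocalStack's actual code, and the real cause may differ), but it suggests the override is being applied against an empty path match somewhere:

```python
uri = "http://host.docker.internal:8080"
override = "/static"

# str.replace with an empty needle inserts the replacement at every
# character boundary of the string, including before the first character
# and after the last one:
corrupted = uri.replace("", "/" + override)
print(corrupted)
```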
### Expected Behavior
A request to the API Gateway v2 stage should hit the proxy URL with the new path, e.g. `http://host.docker.internal:8080/static`.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack-pro:latest
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
A simple proxy route:
`awslocal apigatewayv2 get-routes --api-id a3209d50`
```json
{
"Items": [
{
"ApiKeyRequired": false,
"AuthorizationType": "NONE",
"RouteId": "2eb653cb",
"RouteKey": "ANY /api/v1/{proxy+}",
"Target": "integrations/b4631dc9"
}
]
}
```
* Set up integration using the `overwrite:path` request parameters
`awslocal apigatewayv2 get-integrations --api-id a3209d50`
```json
{
"Items": [
{
"ConnectionType": "INTERNET",
"Description": "Proxy",
"IntegrationId": "b4631dc9",
"IntegrationMethod": "ANY",
"IntegrationType": "HTTP_PROXY",
"IntegrationUri": "http://host.docker.internal:8080",
"PayloadFormatVersion": "1.0",
"RequestParameters": {
"overwrite:path": "/static"
},
"TimeoutInMillis": 30000
}
]
}
```
Basic stage setup:
`awslocal apigatewayv2 get-stages --api-id a3209d50`
```json
{
"Items": [
{
"AutoDeploy": true,
"CreatedDate": "2024-12-12T21:18:52.945862+00:00",
"DefaultRouteSettings": {
"DetailedMetricsEnabled": false
},
"DeploymentId": "cf973a6f",
"LastDeploymentStatusMessage": "Successfully deployed stage with deployment ID 'cf973a6f'",
"LastUpdatedDate": "2024-12-12T21:53:48.692896+00:00",
"RouteSettings": {},
"StageName": "v1",
"StageVariables": {}
}
]
}
```
### Environment
```markdown
- OS: OSX 15.1
- LocalStack:
LocalStack version: 4.0.4.dev41
LocalStack build date: 2024-12-12
LocalStack build git hash: 81df78251
```
### Anything else?
Logs:
```
2024-12-12 21:54:02 2024-12-12T21:54:02.792 DEBUG --- [et.reactor-1] l.p.c.s.a.n.e.router : APIGW v2 HTTP Endpoint called
2024-12-12 21:54:02 2024-12-12T21:54:02.793 DEBUG --- [et.reactor-1] l.p.c.s.a.n.e.h.h.parse : Initializing $context='{'accountId': '000000000000', 'apiId': 'a3209d50', 'domainName': 'a3209d50.execute-api.localhost.localstack.cloud:4566', 'domainPrefix': 'a3209d50', 'extendedRequestId': 'c5812226', 'httpMethod': 'GET', 'identity': {'accountId': None, 'accessKey': None, 'caller': None, 'cognitoAmr': None, 'cognitoAuthenticationProvider': None, 'cognitoAuthenticationType': None, 'cognitoIdentityId': None, 'cognitoIdentityPoolId': None, 'principalOrgId': None, 'sourceIp': '127.0.0.1', 'user': None, 'userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:133.0) Gecko/20100101 Firefox/133.0', 'userArn': None}, 'path': '/v1/api/v1/badgers', 'protocol': 'HTTP/1.1', 'requestId': 'c5812226', 'requestTime': '12/Dec/2024:21:54:02 ', 'requestTimeEpoch': 1734040442793, 'routeKey': '', 'stage': 'v1'}'
2024-12-12 21:54:02 2024-12-12T21:54:02.793 DEBUG --- [et.reactor-1] l.p.c.s.a.n.e.h.h.parse : Initializing $stageVariables='None'
2024-12-12 21:54:02 2024-12-12T21:54:02.798 DEBUG --- [et.reactor-1] l.p.c.s.a.n.e.h.h.router : Updating $context.routeKey='ANY /api/v1/{proxy}'
2024-12-12 21:54:02 2024-12-12T21:54:02.798 DEBUG --- [et.reactor-1] l.p.c.s.a.n.e.h.i.http : Sending request to //statich//statict//statict//staticp//static://static///static///statich//statico//statics//statict//static.//staticd//statico//staticc//statick//statice//staticr//static.//statici//staticn//statict//statice//staticr//staticn//statica//staticl//static://static8//static0//static8//static0//static
2024-12-12 21:54:02 2024-12-12T21:54:02.799 WARN --- [et.reactor-1] l.p.c.s.a.n.e.h.i.http : Execution failed due to configuration error: Invalid endpoint address
2024-12-12 21:54:02 2024-12-12T21:54:02.799 DEBUG --- [et.reactor-1] l.p.c.s.a.n.e.h.i.http : The URI specified for the HTTP/HTTP_PROXY integration is invalid: //statich//statict//statict//staticp//static://static///static///statich//statico//statics//statict//static.//staticd//statico//staticc//statick//statice//staticr//static.//statici//staticn//statict//statice//staticr//staticn//statica//staticl//static://static8//static0//static8//static0//static
2024-12-12 21:54:02 2024-12-12T21:54:02.799 INFO --- [et.reactor-1] l.p.c.s.a.n.e.h.h.exceptio : Error raised during invocation: Internal Server Error
``` | closed | 2024-12-12T22:10:26Z | 2024-12-13T13:46:16Z | https://github.com/localstack/localstack/issues/12025 | [
"type: bug",
"status: resolved/fixed",
"aws:apigatewayv2"
] | stephenpope | 5 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,368 | Setting driver_executable_path | Should I let undetected_chromedriver automatically download and patch the chromedriver, or can I download it myself and set the path to it? My question is about keeping the low-detection property. | open | 2023-06-27T16:11:05Z | 2023-06-27T16:11:05Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1368 | [] | rafamelo01 | 0 |
Farama-Foundation/Gymnasium | api | 647 | [Question] why `call`, `get_attr`, and `set_attr` in the vec env interface? | ### Question
I found that when developing my own vectorized environments, I never used the get/set attr methods in the way I think they were expected to be used. It's a bit confusing, as the vector env class doesn't have a default implementation, and it's unclear why they even exist.
Would it be possible to remove these from the vector env interface? Or maybe I'm missing key information. | closed | 2023-08-03T08:21:26Z | 2023-08-05T11:59:39Z | https://github.com/Farama-Foundation/Gymnasium/issues/647 | [
"question"
] | verbiiyo | 4 |
aimhubio/aim | tensorflow | 2,702 | Dashboard shows log messages multiple times | ## 🐛 Bug
As soon as you log a message and view it on the dashboard, the newest message is duplicated every second.
When you refresh the page, it is gone and starts over again.
### To reproduce
1. Log a message using log_debug/info etc.
2. View it on the dashboard.
### Expected behavior
The log messages should be displayed without the newest message repeating.
### Environment
- Aim Version: 3.17.3
- Python version 3.9.16
- pip version 23.1.1
- OS MacOS 13.3.1
- Browsers: Safari / Firefox
### Additional context
https://user-images.githubusercontent.com/53063597/235601527-1589c919-b958-44a6-b815-7cd99f7a4f98.mov
| closed | 2023-05-02T07:07:26Z | 2023-05-22T11:28:54Z | https://github.com/aimhubio/aim/issues/2702 | [
"type / bug",
"help wanted",
"area / Web-UI",
"phase / shipped"
] | Robert27 | 3 |
dnouri/nolearn | scikit-learn | 138 | More frequent feedback from NeuralNet | As mentioned [here](https://github.com/dnouri/nolearn/pull/69), there are situations where the user wants more frequent feedback from the net than just after each epoch. Especially so with the arrival of RNNs, which are hungry for tons of data but slow. Having more frequent feedback also allows more neat stuff, for instance, to stop early after, 2.5 epochs etc.
The solution proposed in the PR would solve the issue but feels a little bit like cheating, since the batch iterator will pretend the epoch is over when really it isn't.
I have an implementation lying around that has an _on_epoch_finished_ callback. Unfortunately, that complicates matters, since you have to synchronize the loop through train and eval (which in turn requires adjusting the batch size for eval).
So does anybody have another solution? I would help out with coding if necessary.
| open | 2015-08-13T18:58:40Z | 2016-03-26T03:57:45Z | https://github.com/dnouri/nolearn/issues/138 | [] | BenjaminBossan | 1 |
pytest-dev/pytest-xdist | pytest | 604 | Distribute parameterized tests to different workers | Hello!
One issue I currently face is that I have a test which takes a long time to execute (~2 minutes). This test itself is parameterized, let's say with 8 options. This means that this test will take 16 minutes to complete. In this case, it would make more sense to me if each parameterized instance of the test were considered a different test and, as such, distributed to a different worker, which could potentially bring this test down to 2 minutes.
```python
@pytest.mark.parametrize("random", range(0, 8))
def test_lasting_2_minutes(random: int):
# do something lasting 2 minutes
```
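For reference, each `parametrize` value already produces its own collected item with its own node ID (`test_lasting_2_minutes[0]` through `test_lasting_2_minutes[7]`); the mark is stored on the function and expanded at collection time. A small illustrative check of that mechanism (this is not xdist's scheduling code):

```python
import pytest

@pytest.mark.parametrize("random", range(0, 8))
def test_lasting_2_minutes(random: int):
    assert 0 <= random < 8

# The decorator records the parametrization on the function; pytest expands
# it into 8 separate items at collection time.
mark = test_lasting_2_minutes.pytestmark[0]
print(mark.name, list(mark.args[1]))
```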
From a quick inspection of the `pytest-xdist` code I understand that the parameterized tests are considered individual tests; it is just that they are chunked as a single unit and sent to the same worker, which causes the issue here. | closed | 2020-10-27T23:31:42Z | 2021-05-07T02:44:41Z | https://github.com/pytest-dev/pytest-xdist/issues/604 | [] | tomzx | 2 |
microsoft/nni | deep-learning | 5,156 | template for model compression and speedup | template for `model compression and model speedup`
**Describe the issue**:
**NNI version**:
**Demo or example**:
**Reproduce progress**:
**Type(bug | feature)**:
| closed | 2022-10-12T08:02:34Z | 2022-10-17T07:45:54Z | https://github.com/microsoft/nni/issues/5156 | [] | Lijiaoa | 1 |
Nike-Inc/koheesio | pydantic | 13 | [FEATURE] Windows CICD is not working | <!-- We follow Design thinking principles to bring the new feature request to life. Please read through [Design thinking](https://www.interaction-design.org/literature/article/5-stages-in-the-design-thinking-process) principles if you are not familiar. -->
Various errors occur when running tests against Windows runners. This issue serves to capture this effort as a whole.
<!-- This is the [Board](https://github.com/orgs/Nike-Inc/projects/4) your feature request would go through, so keep in mind that there would be more back and forth on this. If you are very clear with all phases, please describe them here for faster development. -->
## Is your feature request related to a problem? Please describe.
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
...
## Describe the solution you'd like
<!-- A clear and concise description of what you want to happen. -->
Tests work fine when running against Windows.
## Describe alternatives you've considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
N/A
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
...
| open | 2024-05-29T13:48:04Z | 2024-05-29T13:48:05Z | https://github.com/Nike-Inc/koheesio/issues/13 | [
"enhancement"
] | dannymeijer | 0 |
FactoryBoy/factory_boy | sqlalchemy | 979 | django_get_or_create with SubFactory | #### Description
The `django_get_or_create` feature works well when the factory in question is used as a base factory, but not when it's used as a `SubFactory`.
When used as a `SubFactory`, the following error occurs:
`AttributeError: type object 'CurrencyFactory' has no attribute '_original_params'`
#### To Reproduce
With the below code, the error will occur randomly when the same currency code/name is used.
To reproduce it systematically, just change the `PairFactory` to use the following line:
```python
base_currency = factory.SubFactory(CurrencyFactory, name="Bitcoin")
```
It will force the IntegrityError in the Currency Factory, so it will try the get_or_create feature.
Then call `PairFactory()` twice, and voilà.
```
E AttributeError: type object 'CurrencyFactory' has no attribute '_original_params'
../../../.cache/pypoetry/virtualenvs/belfort-backend-KmGKt2wB-py3.10/lib/python3.10/site-packages/factory/django.py:151: AttributeError
```
##### Model / Factory code
```python
# Models
class Currency(models.Model):
name = models.CharField(max_length=100, unique=True)
symbol = models.CharField(max_length=20, unique=True)
class Pair(models.Model):
base_currency = models.ForeignKey(
Currency, on_delete=models.CASCADE, related_name="pairs_as_base"
)
quoted_currency = models.ForeignKey(
Currency, on_delete=models.CASCADE, related_name="pairs_as_quoted"
)
# Factories
class CurrencyFactory(factory.django.DjangoModelFactory):
class Meta:
model = "Currency"
django_get_or_create = ("name", "symbol")
name = factory.Faker("cryptocurrency_name")
symbol = factory.Faker("cryptocurrency_code")
class PairFactory(factory.django.DjangoModelFactory):
class Meta:
model = "Pair"
base_currency = factory.SubFactory(CurrencyFactory)
quoted_currency = factory.SubFactory(CurrencyFactory)
```
#### Notes
I believe this is linked to the `_generate()` method, which is not called for `SubFactory`, while `_original_params` is set only in that method.
| open | 2022-10-11T09:07:53Z | 2022-10-24T18:17:05Z | https://github.com/FactoryBoy/factory_boy/issues/979 | [
"Bug",
"Django"
] | VRohou | 6 |
HIT-SCIR/ltp | nlp | 488 | AttributeError: 'Version' object has no attribute 'major' when using the word segmentation feature | closed | 2021-02-04T09:27:07Z | 2021-02-20T04:37:00Z | https://github.com/HIT-SCIR/ltp/issues/488 | [] | Chen8566 | 1 |
akfamily/akshare | data-science | 5,703 | stock_zh_a_spot_em cannot fetch all Shanghai, Shenzhen, and Beijing A-shares; it only returns 200 stocks | stock_zh_a_spot_em cannot fetch all Shanghai, Shenzhen, and Beijing A-shares; it only returns 200 stocks | closed | 2025-02-19T15:15:26Z | 2025-02-20T08:44:56Z | https://github.com/akfamily/akshare/issues/5703 | [
"bug"
] | zgpnuaa | 0 |
inducer/pudb | pytest | 201 | NULL result without error in PyObject_Call | The program runs smoothly with the plain python command, but raises the error mentioned above when debugging in pudb. I don't know how to deal with it, and the solutions from Google searches are not clear.
| open | 2016-09-08T15:28:45Z | 2016-09-10T08:12:38Z | https://github.com/inducer/pudb/issues/201 | [] | marearth | 4 |
google-research/bert | tensorflow | 1,299 | Does LSTM perform better than BERT? | | closed | 2022-02-21T04:53:19Z | 2022-05-22T23:55:11Z | https://github.com/google-research/bert/issues/1299 | [] | SamMohel | 0 |
seleniumbase/SeleniumBase | pytest | 2,865 | Major updates have arrived in `4.28.0` (mostly for UC Mode) | For anyone that hasn't been following https://github.com/seleniumbase/SeleniumBase/issues/2842, CF pushed an update that prevented UC Mode from easily bypassing CAPTCHA Turnstiles on Linux servers. Additionally, `uc_click()` was rendered ineffective for clicking Turnstile CAPTCHA checkboxes when clicking the checkbox was required. I've been working on solutions to these situations.
As I mentioned earlier in https://github.com/seleniumbase/SeleniumBase/issues/2842#issuecomment-2176310108, if CF detects **either** Selenium in the browser **or** JavaScript involvement in clicking the CAPTCHA, then they don't let the click through. (The JS-detection part is new.) I read online that CF employees borrowed ideas from https://github.com/kaliiiiiiiiii/brotector (a Selenium detector) in order to improve their CAPTCHA. Naturally, I was skeptical at first, but I have confirmed that the two algorithms do appear to get similar results. (Brotector was released 6 weeks ago, while the Cloudflare update happened 2 weeks ago.)
The solution to bypassing the improved CAPTCHAs requires using `pyautogui` to stay undetected. There was also the matter of how to make `pyautogui` work well on headless Linux servers. (Thanks to some ideas by @EnmeiRyuuDev in https://github.com/seleniumbase/SeleniumBase/issues/2842#issuecomment-2168829685, that problem was overcome by setting `pyautogui._pyautogui_x11._display` to `Xlib.display.Display(os.environ['DISPLAY'])` on Linux in order to sync up `pyautogui` with the `X11` virtual display.)
The improved SeleniumBase UC Mode will have these new methods:
```python
driver.uc_gui_press_key(key) # Use PyAutoGUI to press the keyboard key
driver.uc_gui_press_keys(keys) # Use PyAutoGUI to press a list of keys
driver.uc_gui_write(text) # Similar to uc_gui_press_keys(), but faster
driver.uc_gui_handle_cf(frame="iframe") # PyAutoGUI click CF Turnstile
```
It'll probably be easier to understand how those work via examples. Here's one for `uc_gui_handle_cf` based on the example in https://github.com/seleniumbase/SeleniumBase/issues/2842#issuecomment-2159004018:
```python
import sys
from seleniumbase import SB
agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/126.0.0.0"
if "linux" in sys.platform:
agent = None # Use the default UserAgent
with SB(uc=True, test=True, rtf=True, agent=agent) as sb:
url = "https://www.virtualmanager.com/en/login"
sb.uc_open_with_reconnect(url, 4)
sb.uc_gui_handle_cf() # Ready if needed!
sb.assert_element('input[name*="email"]')
sb.assert_element('input[name*="login"]')
sb.set_messenger_theme(location="bottom_center")
sb.post_message("SeleniumBase wasn't detected!")
```
Above, I deliberately gave it an incomplete UserAgent so that CAPTCHA-clicking is required to advance. On macOS and Windows, the default UserAgent that SeleniumBase gives you is already enough to bypass the CAPTCHA screen entirely. The `uc_gui_handle_cf()` method is designed such that if there's no CAPTCHA that needs to be clicked on the page you're on, then nothing happens. Therefore, you can add the line whenever you think you'll encounter a CAPTCHA or not. In case there's more than one iframe on a website, you can specify the CSS Selector of the iframe as an arg when calling `uc_gui_handle_cf()`. There will be new examples in the `SeleniumBase/examples/` folder for all the new UC Mode methods. To sum up, you may need to use the newer `uc_gui_*` methods in order to get past some CAPTCHAs on Linux where `uc_click()` worked previously.
On the topic of Brotector, (which is the open source bot-detector library that CF borrowed ideas from), there is a huge opportunity: Now that effective bot-detection software is available to the general public (all the code is open source!), anyone can now build their own CAPTCHA services (or just add CAPTCHAs to sites without the "service" part). I've already jumped on this with the Brotector CAPTCHA: https://seleniumbase.io/apps/brotector. I've also created a few test sites that utilize it:
* https://seleniumbase.io/hobbit/login
* https://seleniumbase.io/antibot/login
> I did make some improvements to the original Brotector algorithm in order to be suitable for CAPTCHAs: I needed a definite Allow/Block answer, rather than a number between 0 and 1 determining the likelihood of a bot, etc. I've been using these new test sites for testing the improved UC Mode.
That covers the major updates from `4.28.0` (with the exception of Brotector CAPTCHA test sites, which were already available to the public at the URLs listed above).
There will also be some other improvements:
* More `sb` methods added directly into the `driver`.
* An improvement to the Recorder to better handle autogenerated IDs in selectors.
* Python dependency updates.
* Some method simplifications.
* Some timing updates.
* Some updates to default settings.
Now, when using UC Mode on Linux, the default setting is NOT using headless mode. If for some reason you decide to use UC Mode and Headless Mode together, note that although Chrome will launch, you'll definitely be detected by anti-bots, and on top of that, `pyautogui` methods won't work. Use `xvfb=True` / `--xvfb` in order to be sure that the improved X11 virtual display on Linux activates. You'll need that for the `uc_gui_*` methods to work properly.
Much of that will get covered in the 3rd UC Mode video tutorial on YouTube (expected sometime in the near future).
In case anyone has forgotten, SeleniumBase is still a Test Automation Framework at heart, (which includes an extremely popular feature for stealth called "UC Mode"). UC Mode has gathered a lot of the attention, but SeleniumBase is more than just that. | open | 2024-06-23T16:52:22Z | 2024-10-29T15:40:37Z | https://github.com/seleniumbase/SeleniumBase/issues/2865 | [
"enhancement",
"documentation",
"UC Mode / CDP Mode"
] | mdmintz | 75 |
pydata/xarray | numpy | 9,544 | Allow `.groupby().map()` to return scalars? | ### Is your feature request related to a problem?
I'm trying to get a count of unique values along a dimension. It's not so easy, unless I'm missing something.
One approach is:
```python
da = xr.tutorial.load_dataset('air_temperature').to_dataarray()
xr.apply_ufunc(lambda x: len(np.unique(x)), da, input_core_dims=[["lat","lon"]], vectorize=True) # NB: requires vectorize to work!
```
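The underlying computation is simple in plain NumPy; the friction is only in wiring it through xarray. For illustration, with small synthetic data standing in for the tutorial dataset:

```python
import numpy as np

# Count the unique values over the (lat, lon) slice of each time step.
rng = np.random.default_rng(0)
arr = rng.integers(0, 5, size=(4, 3, 3))  # dims: (time, lat, lon)
counts = np.array([np.unique(arr[t]).size for t in range(arr.shape[0])])
print(counts.shape)  # (4,)
```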
but `apply_ufunc` is generally too complex for normal users to use, I think.
Another approach could be
```python
da.groupby('time').map(lambda x: len(np.unique(x)))
```
But this raises:
```
AttributeError: 'int' object has no attribute 'dims'
```
Instead, surrounding the expression with `DataArray` makes it work:
```python
da.groupby('time').map(lambda x: xr.DataArray(len(np.unique(x))))
<xarray.DataArray (time: 2920)> Size: 23kB
array([546, 547, 545, ..., 555, 558, 566])
Coordinates:
* time (time) datetime64[ns] 23kB 2013-01-01 ... 2014-12-31T18:00:00
```
### Describe the solution you'd like
Should we allow returning scalars from `.groupby().map()`?
I don't think there can be any ambiguity on what the result should be...
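For comparison, the reduction itself is trivial to state outside of xarray, which is why the extra `xr.DataArray(...)` wrapping feels like pure ceremony. Here is a dependency-free sketch of the same per-slice count (plain nested lists standing in for the arrays above; the function name is illustrative):

```python
def count_unique_along(rows):
    # Reduce each slice over the "core dims" to its number of
    # distinct values, like the apply_ufunc/map calls above.
    return [len(set(row)) for row in rows]

data = [[1, 1, 2], [3, 3, 3], [1, 2, 3]]
print(count_unique_along(data))  # [2, 1, 3]
```

Allowing `map` to wrap plain scalars automatically would make the second snippet above work as written.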
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2024-09-25T00:46:21Z | 2024-09-25T17:34:47Z | https://github.com/pydata/xarray/issues/9544 | [
"enhancement",
"topic-groupby"
] | max-sixty | 8 |
SALib/SALib | numpy | 188 | Morris sampling `brute` vs. `local` do not match, sometimes. | We discovered in #186 that the test comparing Morris LocalOptimisation and Brute (methods to identify maximally-distant trajectories) will fail under random conditions.
Under this issue I'll keep track of attempts to figure out what's going on. Things I have noticed so far:
* This problem is not specific to using group sampling. It also happens without group sampling.
* Using `N=8` and `k_choices=4` (as in the test case), the frequency of errors with random input sampling is roughly 7%.
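For orientation, the search that `brute_force_most_distant` performs can be sketched in a few lines of dependency-free Python. This is an illustration only: the distance matrix here is a toy stand-in, and SALib's real scoring over scaled trajectory distances differs in detail. It does, however, surface one failure hypothesis worth checking: distinct index sets can tie on the score, in which case the winner depends purely on iteration order.

```python
from itertools import combinations

def most_distant_combo(dist, n_traj, k_choices):
    # Maximise the summed pairwise distance over all k-subsets of
    # trajectories, mirroring in spirit what the brute-force strategy does.
    best, best_score = None, float("-inf")
    for combo in combinations(range(n_traj), k_choices):
        score = sum(dist[i][j] for i, j in combinations(combo, 2))
        if score > best_score:
            best, best_score = combo, score
    return best

# Toy example: four "trajectories" at positions 0, 1, 9, 10 on a line.
pts = [0, 1, 9, 10]
dist = [[abs(a - b) for b in pts] for a in pts]
print(most_distant_combo(dist, 4, 2))  # (0, 3)
print(most_distant_combo(dist, 4, 3))  # (0, 1, 3) -- ties with (0, 2, 3)
```

Note the tie at `k_choices=3`: `(0, 1, 3)` and `(0, 2, 3)` both score 20, and the brute force returns whichever it saw first. If the real distances admit ties or near-ties, the local and brute strategies could legitimately disagree without either being wrong.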
Here is one test case with input samples that result in failure and success:
```python
import numpy as np
from SALib.sample.morris.local import LocalOptimisation
from SALib.sample.morris.brute import BruteForce
N = 8
k_choices = 4
problem = {
'num_vars': 7,
'names': ['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7'],
'bounds': [[0,1]]*7,
}
num_params = problem['num_vars']
input_sample = np.loadtxt('samples-success.csv', delimiter=',')
strategy = LocalOptimisation()
actual = strategy.find_local_maximum(input_sample, N, num_params,
k_choices)
brute = BruteForce()
desired = brute.brute_force_most_distant(input_sample,
N,
num_params,
k_choices)
print('Actual', actual)
print('Desired', desired)
```
Here is `samples-success.csv` (sampled from the settings above):
```
0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01
6.666666666666666297e-01,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01
6.666666666666666297e-01,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01
6.666666666666666297e-01,0.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01
6.666666666666666297e-01,0.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01
6.666666666666666297e-01,0.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01
6.666666666666666297e-01,0.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00
6.666666666666666297e-01,6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00
0.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,6.666666666666666297e-01
0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00
3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01
1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01
1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01
1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00
1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,1.000000000000000000e+00
1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,1.000000000000000000e+00
1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00
1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00
0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00
6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,1.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00
6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00
6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00
6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00
0.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00
0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00
0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00
0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,3.333333333333333148e-01,3.333333333333333148e-01
3.333333333333333148e-01,1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01
3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01
3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01
3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,1.000000000000000000e+00
3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,1.000000000000000000e+00
3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00
3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00
1.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01
6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01
6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01
6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,0.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00
0.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00
0.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00
0.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00
6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00
6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01
6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01
6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,1.000000000000000000e+00,3.333333333333333148e-01
```
And here is `samples-fail.csv`:
```
3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01
3.333333333333333148e-01,1.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01
3.333333333333333148e-01,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01
3.333333333333333148e-01,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01
3.333333333333333148e-01,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01
3.333333333333333148e-01,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01
1.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01
1.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00
1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00
1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00
1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00
1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00
1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00
3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00
3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01
3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01
6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01
6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01
6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01
0.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,3.333333333333333148e-01,6.666666666666666297e-01
0.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01
0.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01
0.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01
0.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00
3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,1.000000000000000000e+00,6.666666666666666297e-01
3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,6.666666666666666297e-01
1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,6.666666666666666297e-01
1.000000000000000000e+00,0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00
1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00
1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00
1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00
1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,3.333333333333333148e-01,1.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00
0.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,1.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01
0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01
0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01
0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,3.333333333333333148e-01,3.333333333333333148e-01
0.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00
0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00
0.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00
6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00
6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,0.000000000000000000e+00
6.666666666666666297e-01,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01,3.333333333333333148e-01,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,3.333333333333333148e-01,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,1.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01
6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,0.000000000000000000e+00,1.000000000000000000e+00
6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00
0.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00
0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00
0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,1.000000000000000000e+00
0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,3.333333333333333148e-01
0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01,3.333333333333333148e-01
0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01,3.333333333333333148e-01
1.000000000000000000e+00,1.000000000000000000e+00,3.333333333333333148e-01,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00
1.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,0.000000000000000000e+00
1.000000000000000000e+00,1.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,6.666666666666666297e-01
3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,3.333333333333333148e-01,6.666666666666666297e-01,6.666666666666666297e-01
3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00,0.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01
3.333333333333333148e-01,1.000000000000000000e+00,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01
3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,6.666666666666666297e-01,6.666666666666666297e-01
3.333333333333333148e-01,3.333333333333333148e-01,1.000000000000000000e+00,6.666666666666666297e-01,1.000000000000000000e+00,0.000000000000000000e+00,6.666666666666666297e-01
```
| closed | 2018-01-21T20:12:28Z | 2018-01-24T16:55:52Z | https://github.com/SALib/SALib/issues/188 | [] | jdherman | 3 |
minimaxir/textgenrnn | tensorflow | 108 | textgen.generate() aborts with Error #15 | Hello,
Thanks for designing this tool!
I'm running Python 3.6.8, tensorflow 1.12.0. When I start working through the examples, the program aborts with the following error:
`>>> textgen.generate()
OMP: Error #15: Initializing libiomp5.dylib, but found libiomp5.dylib already initialized.
OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
Abort trap: 6`
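As a stopgap, the workaround named in the message itself, setting `KMP_DUPLICATE_LIB_OK` before the conflicting libraries load, usually lets the process continue. Per Intel's own warning it is unsafe and may silently produce incorrect results, so treat it as a diagnosis aid rather than a fix:

```python
import os

# Must run before importing tensorflow/keras, which is when the
# second OpenMP runtime would get initialised.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
```

The cleaner fix is usually to rebuild the environment so that only one OpenMP runtime is linked, e.g. by installing numpy/tensorflow from a single package channel.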
Thanks in advance. | closed | 2019-02-21T17:27:21Z | 2019-02-21T17:43:01Z | https://github.com/minimaxir/textgenrnn/issues/108 | [] | vrl2 | 1 |
comfyanonymous/ComfyUI | pytorch | 6,470 | NSFW hide or blur option where shown | ### Feature Idea
An option to hide or blur NSFW previews wherever they are shown. I like to have some, but when I'm at the coffee shop looking for SFW content, nothing filters them out.
### Existing Solutions
delete NSFW loras etc..
### Other
_No response_ | open | 2025-01-14T20:24:48Z | 2025-01-14T20:24:48Z | https://github.com/comfyanonymous/ComfyUI/issues/6470 | [
"Feature"
] | NinjaKristo | 0 |
google-deepmind/sonnet | tensorflow | 104 | Layers in Keras? | This may be a really naive question, but what is the difference between this and Keras layers? My guess is that Keras doesn’t afford as much variable sharing flexibility. But to the best of my understanding, their abstract layers class can be subclassed and would (I believe, though I’m not sure) reuse variables upon subsequent calls. They do not afford decorating custom layer methods with reuse variables, however (as far as I’m aware). Is this the primary difference or are there others that I’m overlooking? I’m trying to decide between the two, and I’m leaning towards Sonnet just because it’s more to the point, but it’d be great if the difference between the two could be explained to me. Thank you! | closed | 2018-11-17T15:22:49Z | 2018-11-24T02:47:02Z | https://github.com/google-deepmind/sonnet/issues/104 | [] | slerman12 | 0 |
SYSTRAN/faster-whisper | deep-learning | 556 | What are the minimum requirements to run faster whisper locally? | I want to run it locally but I don't know if my machine can do it. I would like to know the requirements for each version of whisper | closed | 2023-11-10T13:45:52Z | 2024-07-10T07:12:10Z | https://github.com/SYSTRAN/faster-whisper/issues/556 | [] | heloisypr | 4 |
mlfoundations/open_clip | computer-vision | 839 | Question: How/where to evaluate GeoDE and Dollar Street? | Hello,
thank you for the great work of benchmarking all those CLIP models here: https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_classification_results.csv
I am interested in the evaluation on GeoDE and Dollar Street datasets and wanted to compare with your setup there.
Afaik, you use CLIP-Benchmark (https://github.com/LAION-AI/CLIP_benchmark) for evaluating the models but I cannot find those two datasets mentioned anywhere in the repository.
Can you point me to where to find the setup you used for evaluation here? Thank you very much! | closed | 2024-03-08T15:12:42Z | 2024-03-09T14:31:10Z | https://github.com/mlfoundations/open_clip/issues/839 | [] | gregor-ge | 1 |
pyeve/eve | flask | 1,147 | Documentation builds on Read the Docs are currently broken | It appears that since we departed from `requirements.txt` files, RTD cannot build the documentation anymore. This would not be too much of a problem if we didn't link to RTD for the docs of the stable version. Either drop RTD altogether, or see how we can support it again (maybe by re-adding the requirements file - something I would avoid if possible). | closed | 2018-05-14T14:48:33Z | 2018-05-14T15:38:05Z | https://github.com/pyeve/eve/issues/1147 | [
"documentation"
] | nicolaiarocci | 3 |
yunjey/pytorch-tutorial | deep-learning | 60 | Error with Image captioning | When I Run the evaluation command
python sample.py --image='png/example.png'
I got this error
Traceback (most recent call last):
File "sample.py", line 97, in <module>
main(args)
File "sample.py", line 61, in main
sampled_ids = decoder.sample(feature)
File "/users/PAS1273/osu8235/pytorch/pytorch-tutorial/tutorials/03-advanced/image_captioning/model.py", line 62, in sample
hiddens, states = self.lstm(inputs, states) # (batch_size, 1, hidden_size),
File "/users/PAS1273/osu8235/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/users/PAS1273/osu8235/.local/lib/python2.7/site-packages/torch/nn/modules/rnn.py", line 162, in forward
output, hidden = func(input, self.all_weights, hx)
File "/users/PAS1273/osu8235/.local/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 351, in forward
return func(input, *fargs, **fkwargs)
File "/users/PAS1273/osu8235/.local/lib/python2.7/site-packages/torch/autograd/function.py", line 284, in _do_forward
flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
File "/users/PAS1273/osu8235/.local/lib/python2.7/site-packages/torch/autograd/function.py", line 306, in forward
result = self.forward_extended(*nested_tensors)
File "/users/PAS1273/osu8235/.local/lib/python2.7/site-packages/torch/nn/_functions/rnn.py", line 293, in forward_extended
cudnn.rnn.forward(self, input, hx, weight, output, hy)
File "/users/PAS1273/osu8235/.local/lib/python2.7/site-packages/torch/backends/cudnn/rnn.py", line 208, in forward
'input must have 3 dimensions, got {}'.format(input.dim()))
RuntimeError: input must have 3 dimensions, got 2
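The guard that raised is checking that the LSTM input is 3-D, i.e. `(batch, seq_len, input_size)` for a batch-first LSTM; the feature passed into `sample()` is presumably 2-D `(batch, hidden)`, so the likely fix is to add a length-1 time axis (in PyTorch, something like `features.unsqueeze(1)`) before the `self.lstm(inputs, states)` call. A dependency-free sketch of the failing check (the function name is illustrative):

```python
def check_lstm_input(shape):
    # Mirrors the guard in torch/backends/cudnn/rnn.py from the traceback:
    # a cuDNN LSTM input must be 3-D, e.g. (batch, seq_len, input_size).
    if len(shape) != 3:
        raise RuntimeError(
            "input must have 3 dimensions, got {}".format(len(shape)))
    return shape

try:
    check_lstm_input((1, 256))        # 2-D (batch, hidden) fails, as above
except RuntimeError as e:
    print(e)                          # input must have 3 dimensions, got 2

print(check_lstm_input((1, 1, 256)))  # fine after adding a length-1 seq axis
```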
| closed | 2017-09-11T19:07:25Z | 2018-08-20T16:35:38Z | https://github.com/yunjey/pytorch-tutorial/issues/60 | [] | mhsamavatian | 5 |
mwaskom/seaborn | data-science | 3,296 | Scale issue when plotting on top of sns.pairplot diagonal | I am plotting some red theoretical curves (ellipses and density functions) on top of the blue scatterplots and histograms produced by `sns.pairplot`.
As you can see in the image, the ellipses work well, but not the diagonal plots because it seems that they are shifted and scaled internally and automatically by `pairplot` to match the shared y axis. Therefore, my plots start at the zero of the scatter plots instead of the virtual zero of the diagonal plots.
Is it possible to change this behavior or at least to access the shift and scaling constants used by pairplot to do the shift and scaling manually?
Details: I am using `g.axes.diagonal()` to access the diagonal axes and `ax.plot(x, y)` to plot the densities. I also tried `g = sns.PairGrid(df, diag_sharey=False)`, with no difference in the scale.

| closed | 2023-03-14T15:47:25Z | 2023-03-14T22:01:03Z | https://github.com/mwaskom/seaborn/issues/3296 | [] | caph1993 | 1 |
keras-team/keras | pytorch | 20,177 | Obscure validation failure due to `_use_cached_eval_dataset` | I'll preface by saying that I encountered this issue with tf_keras == 2.15, but the source code regarding evaluation is hardly different from v2.15, I feel that it's still applicable here.
The issue is that, no matter what, [`fit` forces `evaluate` to use the stored dataset object for the validation step](https://github.com/keras-team/keras/blob/d4a51168bfedf69a9aae7ddff289277972dfd85d/keras/src/backend/tensorflow/trainer.py#L353) instead of whatever object you supply to `fit`. This is super obscure, but it's probably done for performance reasons, so whatever.
**Why is this an issue?**
If you change something about your dataset mid-training (say, you initially forgot to turn on `.ignore_errors()`) and then pass the new DS instance to `fit`, it completely ignores the new object. And in this particular case, it would fail if any errors arise on the DS preprocessing steps.

Yes, you can cure it with `model._eval_data_handler = None`, which forces `evaluate` to cache the new object, but to figure this out you have to spend time diving into the source code.
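To make this concrete, here is a toy pure-Python sketch (not actual Keras code) of the caching behaviour and the reset workaround:

```python
class TinyTrainer:
    """Toy illustration of the caching pitfall described above (NOT Keras code)."""

    def __init__(self):
        self._eval_data_handler = None

    def evaluate(self, data, _use_cached_eval_dataset=False):
        if _use_cached_eval_dataset and self._eval_data_handler is not None:
            data = self._eval_data_handler  # the freshly passed object is ignored
        else:
            self._eval_data_handler = data  # first validation object gets cached
        return sum(data)

    def fit(self, validation_data):
        # like Keras, fit() always tells evaluate() to reuse the cached handler
        return self.evaluate(validation_data, _use_cached_eval_dataset=True)


t = TinyTrainer()
print(t.fit([1, 2, 3]))      # 6  - first dataset is cached
print(t.fit([10, 20]))       # 6  - the new dataset is silently ignored
t._eval_data_handler = None  # the workaround described above
print(t.fit([10, 20]))       # 30 - new dataset finally picked up
```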
So what I propose is:
1) a mention of this behavior in `fit`'s documentation
2) an actual public API for either clearing cached validation objects or disabling the caching behavior entirely
P.S. I'd provide a colab link, but it turns out that making a tf.Dataset that randomly fails when **I** want it to is actually way harder than it seems | closed | 2024-08-28T12:57:12Z | 2024-09-27T02:02:18Z | https://github.com/keras-team/keras/issues/20177 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | DLumi | 7 |
alpacahq/alpaca-trade-api-python | rest-api | 175 | Websocket streaming does not work with paper trading creds | This script is failing since yesterday with the following error:
```
#!/usr/bin/env python
import alpaca_trade_api as tradeapi
from alpaca_trade_api.stream2 import StreamConn
from pprint import pprint
base_url = 'https://paper-api.alpaca.markets'
api_key_id = 'PKTKMECJGYAJ87YN9V07'
api_secret = 'myfakeapisecret'
conn = StreamConn(base_url=base_url, key_id=api_key_id, secret_key=api_secret)
api = tradeapi.REST(
base_url=base_url,
key_id=api_key_id,
secret_key=api_secret
)
account = api.get_account()
pprint(account)
@conn.on(r'.*')
async def on_data(conn, channel, data):
print(channel)
pprint(data)
# A.SPY will work, because it only goes to Polygon
# conn.run(['A.SPY'])
# account_updates fail, being sent to websocket stream
conn.run(['account_updates', 'trade_updates', 'A.SPY'])
```
The error that I see is:
```
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last):
File "/Users/smishra/PycharmProjects/AlgoT/test.py", line 29, in <module>
conn.run(['account_updates', 'trade_updates', 'A.SPY'])
File "/Users/smishra/anaconda3/lib/python3.7/site-packages/alpaca_trade_api/stream2.py", line 159, in run
loop.run_until_complete(self.subscribe(initial_channels))
File "/Users/smishra/anaconda3/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/Users/smishra/anaconda3/lib/python3.7/site-packages/alpaca_trade_api/stream2.py", line 128, in subscribe
await self._ensure_polygon()
File "/Users/smishra/anaconda3/lib/python3.7/site-packages/alpaca_trade_api/stream2.py", line 85, in _ensure_polygon
await self.polygon.connect()
File "/Users/smishra/anaconda3/lib/python3.7/site-packages/alpaca_trade_api/polygon/streamconn.py", line 44, in connect
if await self.authenticate():
File "/Users/smishra/anaconda3/lib/python3.7/site-packages/alpaca_trade_api/polygon/streamconn.py", line 70, in authenticate
raise ValueError('Invalid Polygon credentials, '
ValueError: Invalid Polygon credentials, Failed to authenticate: {'ev': 'status', 'status': 'auth_failed', 'message': 'authentication failed'}
```
Has there been a change since yesterday or am I doing something wrong? | closed | 2020-03-31T15:11:58Z | 2020-03-31T23:05:10Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/175 | [] | qlikstar | 4 |
seleniumbase/SeleniumBase | web-scraping | 2,351 | [Request] Multiple or condition wait tuple | Since SeleniumBase is an extra layer on top of Selenium, I would like it to include a condition that waits on several locators and returns the first element found.
I would like to see this feature implemented properly, without having to specify which selector type each locator is (it should be automatic, as elsewhere in SeleniumBase). I think it would be extremely useful, especially if it returned the element in question.
Basic concept:
```
import time

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException


class WaitForOrClickableElement:
    def __init__(self, driver, locators, timeout=20, poll_frequency=0.5):
        self.driver = driver
        self.locators = locators
        self.wait_time = timeout
        self.poll_frequency = poll_frequency
        self.element, self.locator = self._wait_for_clickable_element()

    def _wait_for_clickable_element(self):
        end_time = time.time() + self.wait_time
        try:
            # split the total timeout evenly across the candidate locators
            sub_wait_time = self.wait_time / len(self.locators)
        except ZeroDivisionError:
            sub_wait_time = self.wait_time
        while True:
            for locator in self.locators:
                try:
                    wait = WebDriverWait(self.driver, timeout=sub_wait_time,
                                         poll_frequency=self.poll_frequency)
                    element = wait.until(EC.element_to_be_clickable(locator))
                    # return the element and its selector string
                    return element, locator[1]
                except TimeoutException:
                    if time.time() > end_time:
                        raise Exception("Neither element was clickable "
                                        "after waiting for the specified time")
```
```
element = WaitForOrClickableElement(driver, ((By.XPATH, '//*[@id="auth_email"]'), (By.XPATH, '//*[@id="other"]')))
if '//*[@id="other"]' in element.locator:
....
```
Unfortunately it is not possible to do this with native selenium using a single function. | closed | 2023-12-08T20:06:52Z | 2023-12-16T20:39:56Z | https://github.com/seleniumbase/SeleniumBase/issues/2351 | [
"duplicate",
"question"
] | boludoz | 2 |
paperless-ngx/paperless-ngx | django | 7,399 | [BUG] UI flaw - create new document type without having permissions | ### Description
I have a user who does not have permission to create or edit document types. There are places in the GUI that show a button to create a new entry - here the missing permissions are not taken into account. Since adding the new entry does not work in the end, this is not a security flaw, but offering the option without working functionality confuses inexperienced users.
### Steps to reproduce
1. Create a user who can view document types, but cannot create them.
2. in the menu, click on "document types"
3. there is no button to create a new type -> this works as as expected
4. double click on an existing document to open the details
5. On the field "document type" click on "+"
6. A mask opens where you can create a document type and which has a "save" button
7. Enter a new document type
8. When you click "save" nothing happens, and there is no error message
### Webserver logs
```bash
.
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.2
### Host OS
Windows via WSL
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.11.2",
"server_os": "Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 262143995904,
"available": 134809735168
},
"database": {
"type": "sqlite",
"url": "/usr/src/paperless/data/db.sqlite3",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0011_remove_mailrule_assign_tag_squashed_0024_alter_mailrule_name_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-08-06T00:56:04.176712+02:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2024-08-05T22:05:06.496545Z",
"classifier_error": null
}
}
```
### Browser
Firefox
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-08-05T23:07:18Z | 2024-09-05T03:04:02Z | https://github.com/paperless-ngx/paperless-ngx/issues/7399 | [
"bug",
"frontend"
] | arnschi | 1 |
apify/crawlee-python | automation | 546 | Error handler is not called for session errors | See https://github.com/apify/crawlee/pull/2683 | closed | 2024-09-27T09:04:16Z | 2024-10-01T08:54:07Z | https://github.com/apify/crawlee-python/issues/546 | [
"bug",
"t-tooling"
] | janbuchar | 0 |
hbldh/bleak | asyncio | 793 | Slow connection with nest_asyncio | * bleak version: 0.14.2
* Python version: 3.8.6
* Operating System: Windows 10 Home
* Computer specs: Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz, 12.0 GB RAM
### Description
Trying to establish a connection with `nest_asyncio` applied slows the connection down considerably, by roughly 7x (it takes ~21 seconds). If I comment out the `nest_asyncio.apply()`, the connection is no longer slow.

I've tested this behaviour on several computers. However, on one of them it didn't happen (it was fast with or without `nest_asyncio`), possibly because it's an extremely fast machine.
### What I Did
```
import asyncio
import nest_asyncio
from bleak import BleakClient
import time
def connect_to_device(mac_address) -> None:
loop = asyncio.new_event_loop()
set_connection(mac_address, loop)
def set_connection(mac_address, loop):
loop.run_until_complete(establish_connection(mac_address, loop))
async def establish_connection(mac_address: str, loop):
print('> Establishing a new BTH connection with {device}'.format(device=mac_address))
client = BleakClient(mac_address, loop=loop)
init_time = time.time()
await client.connect()
print('> Connection established. Elapsed time: {elapsed_time}'.format(elapsed_time=time.time()-init_time))
await client.disconnect()
if __name__ == '__main__':
nest_asyncio.apply()
connect_to_device('F3:87:21:F0:A7:7C')
```
**Outputs:**
With `nest_asyncio.apply()`:
```
> Establishing a new BTH connection with F3:87:21:F0:A7:7C
> Connection established. Elapsed time: 21.593122720718384
```
Without `nest_asyncio.apply()`:
```
> Establishing a new BTH connection with F3:87:21:F0:A7:7C
> Connection established. Elapsed time: 3.2514888286590576
``` | closed | 2022-03-24T13:27:42Z | 2023-03-20T18:53:17Z | https://github.com/hbldh/bleak/issues/793 | [] | diogotecelao | 3 |
AirtestProject/Airtest | automation | 418 | Do you have an improvement plan for the report form? | hi
Within my team, after running our tests, there was feedback that the report format should be improved.
Do you have an improvement plan for the report form? | open | 2019-05-31T06:24:44Z | 2019-06-17T06:08:36Z | https://github.com/AirtestProject/Airtest/issues/418 | [] | JJunM | 5 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,289 | [Bug]: Winodws 11 error - ModuleNotFoundError - No module named 'setuptools.command.test' - build stops | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Followed the instructions for a build from a git clone in Powershell
got this error
```
stderr: error: subprocess-exited-with-error
Getting requirements to build wheel did not run successfully.
exit code: 1
[17 lines of output]
Traceback (most recent call last):
File "D:\LLM\stable-diffusion-webui\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "D:\LLM\stable-diffusion-webui\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "D:\LLM\stable-diffusion-webui\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\smuch\AppData\Local\Temp\pip-build-env-i3mg2iw1\overlay\Lib\site-packages\setuptools\build_meta.py", line 327, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
File "C:\Users\smuch\AppData\Local\Temp\pip-build-env-i3mg2iw1\overlay\Lib\site-packages\setuptools\build_meta.py", line 297, in _get_build_requires
self.run_setup()
File "C:\Users\smuch\AppData\Local\Temp\pip-build-env-i3mg2iw1\overlay\Lib\site-packages\setuptools\build_meta.py", line 497, in run_setup
super().run_setup(setup_script=setup_script)
File "C:\Users\smuch\AppData\Local\Temp\pip-build-env-i3mg2iw1\overlay\Lib\site-packages\setuptools\build_meta.py", line 313, in run_setup
exec(code, locals())
File "<string>", line 2, in <module>
ModuleNotFoundError: No module named 'setuptools.command.test'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
Getting requirements to build wheel did not run successfully.
exit code: 1
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Press any key to continue . . .
```
I also tried it with the zip file installer setup, with a similar result:
```
stderr: error: subprocess-exited-with-error
python setup.py egg_info did not run successfully.
exit code: 1
[6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\smuch\AppData\Local\Temp\pip-install-srpzlrov\ffmpy_db25df8b775d445b9a70509bf2a87e3d\setup.py", line 2, in <module>
from setuptools.command.test import test as TestCommand # noqa
ModuleNotFoundError: No module named 'setuptools.command.test'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Encountered error while generating package metadata.
See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
### Steps to reproduce the problem
Installed python 3.10.6
Installed git
ran the instructions as listed - both for the auto installer and the git clone method
on Drive D and not drive c
Also tried it on drive C in C:\bin\sd\sd.webui
all failures
### What should have happened?
The installer should have built
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
This is in the installation - haven't even gotten far enough
### Console logs
```Shell
this is the run.bat version
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2
Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl (2473.9 MB)
Collecting torchvision==0.16.2
Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-win_amd64.whl (5.6 MB)
Collecting filelock (from torch==2.1.2)
Using cached filelock-3.15.4-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions (from torch==2.1.2)
Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch==2.1.2)
Using cached sympy-1.13.1-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch==2.1.2)
Using cached networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
Collecting jinja2 (from torch==2.1.2)
Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch==2.1.2)
Using cached fsspec-2024.6.1-py3-none-any.whl.metadata (11 kB)
Collecting numpy (from torchvision==0.16.2)
Using cached numpy-2.0.1-cp310-cp310-win_amd64.whl.metadata (60 kB)
Collecting requests (from torchvision==0.16.2)
Using cached requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision==0.16.2)
Using cached pillow-10.4.0-cp310-cp310-win_amd64.whl.metadata (9.3 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch==2.1.2)
Using cached https://download.pytorch.org/whl/MarkupSafe-2.1.5-cp310-cp310-win_amd64.whl (17 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision==0.16.2)
Using cached charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl.metadata (34 kB)
Collecting idna<4,>=2.5 (from requests->torchvision==0.16.2)
Using cached idna-3.7-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision==0.16.2)
Using cached urllib3-2.2.2-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision==0.16.2)
Using cached certifi-2024.7.4-py3-none-any.whl.metadata (2.2 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy->torch==2.1.2)
Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached pillow-10.4.0-cp310-cp310-win_amd64.whl (2.6 MB)
Using cached filelock-3.15.4-py3-none-any.whl (16 kB)
Using cached fsspec-2024.6.1-py3-none-any.whl (177 kB)
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Using cached networkx-3.3-py3-none-any.whl (1.7 MB)
Using cached numpy-2.0.1-cp310-cp310-win_amd64.whl (16.6 MB)
Using cached requests-2.32.3-py3-none-any.whl (64 kB)
Using cached sympy-1.13.1-py3-none-any.whl (6.2 MB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached certifi-2024.7.4-py3-none-any.whl (162 kB)
Using cached charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB)
Using cached idna-3.7-py3-none-any.whl (66 kB)
Using cached urllib3-2.2.2-py3-none-any.whl (121 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.5 certifi-2024.7.4 charset-normalizer-3.3.2 filelock-3.15.4 fsspec-2024.6.1 idna-3.7 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 numpy-2.0.1 pillow-10.4.0 requests-2.32.3 sympy-1.13.1 torch-2.1.2+cu121 torchvision-0.16.2+cu121 typing-extensions-4.12.2 urllib3-2.2.2
Installing clip
Installing open_clip
Cloning assets into C:\bin\sd\sd.webui\webui\repositories\stable-diffusion-webui-assets...
Cloning into 'C:\bin\sd\sd.webui\webui\repositories\stable-diffusion-webui-assets'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (18/18), done.
Receiving objects: 100% (20/20)sed 20 (delta 0), pack-reused 0
Receiving objects: 100% (20/20), 132.70 KiB | 1.29 MiB/s, done.
Cloning Stable Diffusion into C:\bin\sd\sd.webui\webui\repositories\stable-diffusion-stability-ai...
Cloning into 'C:\bin\sd\sd.webui\webui\repositories\stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (571/571), done.
remote: Compressing objects: 100% (304/304), done.
remote: Total 580 (delta 278), reused 448 (delta 249), pack-reused 9Receiving objects: 92% (534/580), 55.48 MiB | 36.98 MiB/s
Receiving objects: 100% (580/580), 73.44 MiB | 38.99 MiB/s, done.
Resolving deltas: 100% (278/278), done.
Cloning Stable Diffusion XL into C:\bin\sd\sd.webui\webui\repositories\generative-models...
Cloning into 'C:\bin\sd\sd.webui\webui\repositories\generative-models'...
remote: Enumerating objects: 983, done.
remote: Counting objects: 100% (42/42), done.
remote: Compressing objects: 100% (35/35), done.
remote: Total 983 (delta 8), reused 31 (delta 7), pack-reused 941
Receiving objects: 100% (983/983), 52.06 MiB | 40.32 MiB/s, done.
Resolving deltas: 100% (499/499), done.
Cloning K-diffusion into C:\bin\sd\sd.webui\webui\repositories\k-diffusion...
Cloning into 'C:\bin\sd\sd.webui\webui\repositories\k-diffusion'...
remote: Enumerating objects: 1345, done.
remote: Counting objects: 100% (1345/1345), done.
remote: Compressing objects: 100% (443/443), done.
remote: Total 1345 (delta 947), reused 1249 (delta 895), pack-reused 0
Receiving objects: 100% (1345/1345), 232.84 KiB | 2.08 MiB/s, done.
Resolving deltas: 100% (947/947), done.
Cloning BLIP into C:\bin\sd\sd.webui\webui\repositories\BLIP...
Cloning into 'C:\bin\sd\sd.webui\webui\repositories\BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (165/165), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112
Receiving objects: 100% (277/277), 7.03 MiB | 20.23 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements
Traceback (most recent call last):
File "C:\bin\sd\sd.webui\webui\launch.py", line 48, in <module>
main()
File "C:\bin\sd\sd.webui\webui\launch.py", line 39, in main
prepare_environment()
File "C:\bin\sd\sd.webui\webui\modules\launch_utils.py", line 423, in prepare_environment
run_pip(f"install -r \"{requirements_file}\"", "requirements")
File "C:\bin\sd\sd.webui\webui\modules\launch_utils.py", line 144, in run_pip
return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
File "C:\bin\sd\sd.webui\webui\modules\launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install requirements.
Command: "C:\bin\sd\sd.webui\system\python\python.exe" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 1
stdout: Collecting setuptools==69.5.1 (from -r requirements_versions.txt (line 1))
Using cached setuptools-69.5.1-py3-none-any.whl.metadata (6.2 kB)
Collecting GitPython==3.1.32 (from -r requirements_versions.txt (line 2))
Using cached GitPython-3.1.32-py3-none-any.whl.metadata (10.0 kB)
Collecting Pillow==9.5.0 (from -r requirements_versions.txt (line 3))
Using cached Pillow-9.5.0-cp310-cp310-win_amd64.whl.metadata (9.7 kB)
Collecting accelerate==0.21.0 (from -r requirements_versions.txt (line 4))
Using cached accelerate-0.21.0-py3-none-any.whl.metadata (17 kB)
Collecting blendmodes==2022 (from -r requirements_versions.txt (line 5))
Using cached blendmodes-2022-py3-none-any.whl.metadata (12 kB)
Collecting clean-fid==0.1.35 (from -r requirements_versions.txt (line 6))
Using cached clean_fid-0.1.35-py3-none-any.whl.metadata (36 kB)
Collecting diskcache==5.6.3 (from -r requirements_versions.txt (line 7))
Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Collecting einops==0.4.1 (from -r requirements_versions.txt (line 8))
Using cached einops-0.4.1-py3-none-any.whl.metadata (10 kB)
Collecting facexlib==0.3.0 (from -r requirements_versions.txt (line 9))
Using cached facexlib-0.3.0-py3-none-any.whl.metadata (4.6 kB)
Collecting fastapi==0.94.0 (from -r requirements_versions.txt (line 10))
Using cached fastapi-0.94.0-py3-none-any.whl.metadata (25 kB)
Collecting gradio==3.41.2 (from -r requirements_versions.txt (line 11))
Using cached gradio-3.41.2-py3-none-any.whl.metadata (17 kB)
Collecting httpcore==0.15 (from -r requirements_versions.txt (line 12))
Using cached httpcore-0.15.0-py3-none-any.whl.metadata (15 kB)
Collecting inflection==0.5.1 (from -r requirements_versions.txt (line 13))
Using cached inflection-0.5.1-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting jsonmerge==1.8.0 (from -r requirements_versions.txt (line 14))
Using cached jsonmerge-1.8.0.tar.gz (26 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting kornia==0.6.7 (from -r requirements_versions.txt (line 15))
Using cached kornia-0.6.7-py2.py3-none-any.whl.metadata (12 kB)
Collecting lark==1.1.2 (from -r requirements_versions.txt (line 16))
Using cached lark-1.1.2-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting numpy==1.26.2 (from -r requirements_versions.txt (line 17))
Using cached numpy-1.26.2-cp310-cp310-win_amd64.whl.metadata (61 kB)
Collecting omegaconf==2.2.3 (from -r requirements_versions.txt (line 18))
Using cached omegaconf-2.2.3-py3-none-any.whl.metadata (3.9 kB)
Collecting open-clip-torch==2.20.0 (from -r requirements_versions.txt (line 19))
Using cached open_clip_torch-2.20.0-py3-none-any.whl.metadata (46 kB)
Collecting piexif==1.1.3 (from -r requirements_versions.txt (line 20))
Using cached piexif-1.1.3-py2.py3-none-any.whl.metadata (3.7 kB)
Requirement already satisfied: protobuf==3.20.0 in c:\bin\sd\sd.webui\system\python\lib\site-packages (from -r requirements_versions.txt (line 21)) (3.20.0)
Collecting psutil==5.9.5 (from -r requirements_versions.txt (line 22))
Using cached psutil-5.9.5-cp36-abi3-win_amd64.whl.metadata (21 kB)
Collecting pytorch_lightning==1.9.4 (from -r requirements_versions.txt (line 23))
Using cached pytorch_lightning-1.9.4-py3-none-any.whl.metadata (22 kB)
Collecting resize-right==0.0.2 (from -r requirements_versions.txt (line 24))
Using cached resize_right-0.0.2-py3-none-any.whl.metadata (551 bytes)
Collecting safetensors==0.4.2 (from -r requirements_versions.txt (line 25))
Using cached safetensors-0.4.2-cp310-none-win_amd64.whl.metadata (3.9 kB)
Collecting scikit-image==0.21.0 (from -r requirements_versions.txt (line 26))
Using cached scikit_image-0.21.0-cp310-cp310-win_amd64.whl.metadata (14 kB)
Collecting spandrel==0.3.4 (from -r requirements_versions.txt (line 27))
Using cached spandrel-0.3.4-py3-none-any.whl.metadata (14 kB)
Collecting spandrel-extra-arches==0.1.1 (from -r requirements_versions.txt (line 28))
Using cached spandrel_extra_arches-0.1.1-py3-none-any.whl.metadata (3.0 kB)
Collecting tomesd==0.1.3 (from -r requirements_versions.txt (line 29))
Using cached tomesd-0.1.3-py3-none-any.whl.metadata (9.1 kB)
Requirement already satisfied: torch in c:\bin\sd\sd.webui\system\python\lib\site-packages (from -r requirements_versions.txt (line 30)) (2.1.2+cu121)
Collecting torchdiffeq==0.2.3 (from -r requirements_versions.txt (line 31))
Using cached torchdiffeq-0.2.3-py3-none-any.whl.metadata (488 bytes)
Collecting torchsde==0.2.6 (from -r requirements_versions.txt (line 32))
Using cached torchsde-0.2.6-py3-none-any.whl.metadata (5.3 kB)
Collecting transformers==4.30.2 (from -r requirements_versions.txt (line 33))
Using cached transformers-4.30.2-py3-none-any.whl.metadata (113 kB)
Collecting httpx==0.24.1 (from -r requirements_versions.txt (line 34))
Using cached httpx-0.24.1-py3-none-any.whl.metadata (7.4 kB)
Collecting pillow-avif-plugin==1.4.3 (from -r requirements_versions.txt (line 35))
Using cached pillow_avif_plugin-1.4.3-cp310-cp310-win_amd64.whl.metadata (1.7 kB)
Collecting gitdb<5,>=4.0.1 (from GitPython==3.1.32->-r requirements_versions.txt (line 2))
Using cached gitdb-4.0.11-py3-none-any.whl.metadata (1.2 kB)
Requirement already satisfied: packaging>=20.0 in c:\bin\sd\sd.webui\system\python\lib\site-packages (from accelerate==0.21.0->-r requirements_versions.txt (line 4)) (24.1)
Requirement already satisfied: pyyaml in c:\bin\sd\sd.webui\system\python\lib\site-packages (from accelerate==0.21.0->-r requirements_versions.txt (line 4)) (6.0.1)
Collecting aenum<4,>=3.1.7 (from blendmodes==2022->-r requirements_versions.txt (line 5))
Using cached aenum-3.1.15-py3-none-any.whl.metadata (3.7 kB)
Collecting deprecation<3,>=2.1.0 (from blendmodes==2022->-r requirements_versions.txt (line 5))
Using cached deprecation-2.1.0-py2.py3-none-any.whl.metadata (4.6 kB)
Requirement already satisfied: torchvision in c:\bin\sd\sd.webui\system\python\lib\site-packages (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (0.16.2+cu121)
Collecting scipy>=1.0.1 (from clean-fid==0.1.35->-r requirements_versions.txt (line 6))
Using cached scipy-1.14.0-cp310-cp310-win_amd64.whl.metadata (60 kB)
Requirement already satisfied: tqdm>=4.28.1 in c:\bin\sd\sd.webui\system\python\lib\site-packages (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (4.66.4)
Requirement already satisfied: requests in c:\bin\sd\sd.webui\system\python\lib\site-packages (from clean-fid==0.1.35->-r requirements_versions.txt (line 6)) (2.32.3)
Collecting filterpy (from facexlib==0.3.0->-r requirements_versions.txt (line 9))
Using cached filterpy-1.4.5-py3-none-any.whl
Collecting numba (from facexlib==0.3.0->-r requirements_versions.txt (line 9))
Using cached numba-0.60.0-cp310-cp310-win_amd64.whl.metadata (2.8 kB)
Collecting opencv-python (from facexlib==0.3.0->-r requirements_versions.txt (line 9))
Using cached opencv_python-4.10.0.84-cp37-abi3-win_amd64.whl.metadata (20 kB)
Collecting pydantic!=1.7,!=1.7.1,!=1.7.2,!=1.7.3,!=1.8,!=1.8.1,<2.0.0,>=1.6.2 (from fastapi==0.94.0->-r requirements_versions.txt (line 10))
Using cached pydantic-1.10.17-cp310-cp310-win_amd64.whl.metadata (153 kB)
Collecting starlette<0.27.0,>=0.26.0 (from fastapi==0.94.0->-r requirements_versions.txt (line 10))
Using cached starlette-0.26.1-py3-none-any.whl.metadata (5.8 kB)
Collecting aiofiles<24.0,>=22.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
Using cached aiofiles-23.2.1-py3-none-any.whl.metadata (9.7 kB)
Collecting altair<6.0,>=4.2.0 (from gradio==3.41.2->-r requirements_versions.txt (line 11))
Using cached altair-5.3.0-py3-none-any.whl.metadata (9.2 kB)
Collecting ffmpy (from gradio==3.41.2->-r requirements_versions.txt (line 11))
Using cached ffmpy-0.3.2.tar.gz (5.5 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
stderr: error: subprocess-exited-with-error
python setup.py egg_info did not run successfully.
exit code: 1
[6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\smuch\AppData\Local\Temp\pip-install-ykhnx1ee\ffmpy_d3c3baef266048a9b4928174b12ae2e9\setup.py", line 2, in <module>
from setuptools.command.test import test as TestCommand # noqa
ModuleNotFoundError: No module named 'setuptools.command.test'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Encountered error while generating package metadata.
See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Press any key to continue . . .
```
### Additional information
_No response_ | open | 2024-07-29T03:55:16Z | 2024-08-09T15:04:46Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16289 | [
"bug-report"
] | smuchow1962 | 32 |
skypilot-org/skypilot | data-science | 4,444 | [SERVE][AUTOSCALERS] Replica scaling sampling period and stability. | In autoscalers.py within serve:
https://github.com/skypilot-org/skypilot/blob/3f625886bf1b13ee463a9f8e0f6741f620f7f66f/sky/serve/autoscalers.py#L258-L269
When a single qps check is below or above the threshold, the `downscale_counter` or `upscale_counter` is set to 0.
This means a single jitter in qps could disrupt scaling.
I propose we sample over a period and let scaling occur based on the percentage of samples that cross the threshold, instead of resetting the counter to 0 on a single outlier.
This could be set in the scaling policy.
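A minimal sketch of what such a windowed policy could look like (hypothetical class and parameter names, not SkyPilot code):

```python
from collections import deque


class WindowedUpscaleDecider:
    """Upscale when at least `fraction` of the last `window` samples exceed
    the threshold, instead of resetting a counter on any single low sample."""

    def __init__(self, window=5, fraction=0.6):
        self.samples = deque(maxlen=window)
        self.fraction = fraction

    def observe(self, above_threshold):
        self.samples.append(bool(above_threshold))
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        return sum(self.samples) / len(self.samples) >= self.fraction


decider = WindowedUpscaleDecider(window=5, fraction=0.6)
decisions = [decider.observe(s) for s in [True, True, False, True, True]]
print(decisions[-1])  # True: one jittery sample no longer blocks scaling (4/5 >= 0.6)
```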
---
Also, since scaling utilizes `math.ceil`, it errs on the side of scaling up, effectively treating the configured qps value as a hard ceiling to stay below rather than a target.
https://github.com/skypilot-org/skypilot/blob/3f625886bf1b13ee463a9f8e0f6741f620f7f66f/sky/serve/autoscalers.py#L192
---
_Version & Commit info:_
* `sky -v`: 0.7.0
* `sky -c`: 3f62588
| open | 2024-12-05T17:30:42Z | 2024-12-21T17:17:18Z | https://github.com/skypilot-org/skypilot/issues/4444 | [] | JGSweets | 5 |
Kav-K/GPTDiscord | asyncio | 421 | Force json response for autodraw drawing agent | Within `/gpt converse` there is an agent that creates a prompt and determines if drawing is needed when a conversation message is sent with gpt-vision. Currently, when I set `response_format` in the request to the model, the upstream API seems to complain that that parameter isn't allowed.
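For context, the retry-based enforcement we currently rely on amounts to something like this (pure-Python sketch with hypothetical helper names, not the bot's actual code):

```python
import json


def parse_json_with_retry(generate, max_retries=3):
    """`generate` stands in for a model call: any zero-arg callable
    returning a string. Retry until the reply parses as JSON."""
    last_error = None
    for _ in range(max_retries):
        raw = generate()
        try:
            return json.loads(raw)
        except json.JSONDecodeError as exc:
            last_error = exc
    raise ValueError(f"no valid JSON after {max_retries} tries: {last_error}")


replies = iter(['not json', '{"draw": true, "prompt": "a cat"}'])
result = parse_json_with_retry(lambda: next(replies))
print(result["draw"])  # True
```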
It may be not supported by gpt-4-vision? Otherwise, we need to find a fix so we don't depend on retries to enforce JSON. | closed | 2023-11-17T08:11:43Z | 2023-11-30T04:38:09Z | https://github.com/Kav-K/GPTDiscord/issues/421 | [
"bug",
"help wanted",
"good first issue",
"high-prio",
"help-wanted-important"
] | Kav-K | 1 |
microsoft/nni | data-science | 4,972 | Can I directly integrate the model.py when using tuning code instead of calling script through command | **Describe the issue**:
Now the way NNI calls the train script is:
`experiment.config.trial_command = 'python train.py'`
This command-based invocation is not convenient for integrating into our current project, because we do not want to start the train script by launching a file. We want to start it directly from code, such as `import train`, and then have NNI call the `train()` function directly. Does NNI provide this form?
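One workaround sketch, since NNI's `trial_command` must be a shell command (the `train` module and `train()` function names are assumptions taken from the question): launch the function through `python -c` instead of running a script file.

```python
import sys

# Invoke train.train() through the interpreter instead of running a file.
# "train" and "train()" are assumed names mirroring the question's example.
trial_command = f'{sys.executable} -c "import train; train.train()"'
print(trial_command)
```

NNI would still spawn a subprocess; the difference is only that the entry point is a function call rather than a script file.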
| open | 2022-06-29T02:37:31Z | 2022-07-06T05:54:41Z | https://github.com/microsoft/nni/issues/4972 | [] | xiaoerqi | 3 |
indico/indico | flask | 5,966 | [A11Y/UX] Replace session bar with a global pull-down menu | **Describe the bug**
This is a blanket fix for a number of accessibility issues, but mainly:
- Inability to access popups after operating the buttons in the session bar
- Difficult to make the session bar responsive
The intended fix is described in the forum post (link below). Here is a pen that demonstrates the proposed fix:
- https://codepen.io/hayavuk/pen/qBLEzQj
**Additional context**
- https://talk.getindico.io/t/reflow-support-for-high-zoom-ratio/3212/4
- https://www.w3.org/WAI/WCAG21/Understanding/info-and-relationships.html
- https://www.w3.org/WAI/WCAG21/Understanding/keyboard.html
- https://www.w3.org/WAI/WCAG21/Understanding/labels-or-instructions.html
| open | 2023-10-02T09:03:22Z | 2023-10-02T09:04:09Z | https://github.com/indico/indico/issues/5966 | [
"bug"
] | foxbunny | 1 |
django-import-export/django-import-export | django | 1,042 | ForeignKey is 0 error | # widgets.py
def clean(self, value, row=None, *args, **kwargs):
val = super(ForeignKeyWidget, self).clean(value)
if val: # when ForeignKey is 0
return self.get_queryset(value, row, *args, **kwargs).get(**{self.field: val})
else:
return None
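A minimal, self-contained sketch of a possible fix (an assumption, not the maintainers' actual patch): compare against `None` instead of relying on truthiness, so a legitimate key of 0 is still looked up.

```python
def clean_fk(val):
    # A falsy-but-valid key such as 0 must not be treated as "no value".
    # Comparing against None (and the empty string) keeps 0 lookups working.
    if val is None or val == "":
        return None
    return {"pk": val}  # stands in for the real queryset .get() lookup

print(clean_fk(0))     # {'pk': 0}
print(clean_fk(None))  # None
```

With this check, `0` resolves to a lookup while `None` and empty cells still map to `None`.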
There is a foreign key value of 0 in my imported data. When importing, I get a prompt that the field cannot be found. It took me an afternoon to find this problem. Please correct the code! | closed | 2019-11-27T10:10:56Z | 2020-05-25T11:35:53Z | https://github.com/django-import-export/django-import-export/issues/1042 | [
"stale"
] | ruanhailiang | 1 |
seleniumbase/SeleniumBase | pytest | 3,507 | Add `get_parent()` to various APIs (eg: The CDP API) | ### Add `get_parent()` to various APIs (eg: The CDP API)
----
For the CDP API, calling `element.parent` does not add the SB CDP methods to the parent element received. So instead, I'll create a `CDP_Element.get_parent()` method to retrieve the parent element and add the CDP API methods into there. The SB CDP API will also get it via `sb.cdp.get_parent(element)`.
Also, I'll add `get_parent()` to the `driver` API and to the `SB` API. (Note that `get_parent()` **won't** be added to the `WebElement` API, which is separate, and exists within Selenium, rather than within SeleniumBase.) | closed | 2025-02-12T00:59:59Z | 2025-02-12T02:12:58Z | https://github.com/seleniumbase/SeleniumBase/issues/3507 | [
"enhancement",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
RobertCraigie/prisma-client-py | pydantic | 95 | Add support for setting httpx client options | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
While running the `postgresql` integration tests on a slow connection I ran into a lot of `httpx.ReadTimeout` errors.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should either increase the timeout option or allow users to set it themselves, maybe something like:
```py
client = Client(
http={
'timeout': 10,
},
)
```
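A sketch of how such options could be merged before being handed to `httpx` (the `DEFAULT_HTTP` name and its default value are assumptions, not Prisma Client Python's actual internals):

```python
DEFAULT_HTTP = {"timeout": 30}

def build_http_config(user_http=None):
    # User-supplied httpx keyword arguments override the library defaults,
    # so passing {"timeout": 10} raises the timeout without touching
    # any other setting.
    config = dict(DEFAULT_HTTP)
    config.update(user_http or {})
    return config

print(build_http_config({"timeout": 10}))  # {'timeout': 10}
```

The resulting dict could then be unpacked into the `httpx.AsyncClient(**config)` constructor.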
| closed | 2021-11-05T15:41:16Z | 2021-12-29T11:04:31Z | https://github.com/RobertCraigie/prisma-client-py/issues/95 | [
"kind/feature"
] | RobertCraigie | 0 |
paperless-ngx/paperless-ngx | django | 7,330 | [BUG] PaperlessNGX container unhealthy - not starting | ### Description
I had to reinstall Paperless-ngx. I removed all containers and folders and executed `docker prune`.
Now when I start docker compose, everything starts except paperless-ngx. I have the latest version and let it run as root.
There are no further messages after "Adjusting permissions of paperless files. This may take a while.".
I deleted all files, which means all Paperless-related files should be recreated automatically.
### Steps to reproduce
1. Start docker compose
### Webserver logs
```bash
Paperless-ngx docker container starting...
Installing languages...
Get:1 http://deb.debian.org/debian bookworm InRelease [151 kB]
Get:2 http://deb.debian.org/debian bookworm-updates InRelease [55.4 kB]
Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB]
Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8788 kB]
Get:5 http://deb.debian.org/debian bookworm-updates/main amd64 Packages [13.8 kB]
Get:6 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [169 kB]
Fetched 9225 kB in 2s (5307 kB/s)
Reading package lists...
Installing package tesseract-ocr-pol...
Package tesseract-ocr-ita already installed!
Package tesseract-ocr-deu already installed!
Package tesseract-ocr-eng already installed!
Installing package tesseract-ocr-nld...
Creating directory scratch directory /tmp/paperless
mkdir: created directory '/tmp/paperless'
Adjusting permissions of paperless files. This may take a while.
```
### Browser logs
_No response_
### Paperless-ngx version
latest
### Host OS
Proxmox LCX Debian
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-07-27T17:27:29Z | 2024-07-27T18:54:24Z | https://github.com/paperless-ngx/paperless-ngx/issues/7330 | [
"not a bug"
] | fendle | 0 |
sktime/pytorch-forecasting | pandas | 1,454 | Is there a reason for the large difference in training and validation loss with pt-forecasting models? | I have experimented with several models from pytorch-forecasting (TFT, DeepAR, LSTM) and I have noticed, that in all the cases, the training loss is very different to validation loss and it does not seem to be an issue concerning model performance (overfitting/underfitting) or weird data separation.
Is it possible, that the loss is calculated on transformed features for one prediction and not for the other? | open | 2023-11-28T23:12:17Z | 2023-11-28T23:12:17Z | https://github.com/sktime/pytorch-forecasting/issues/1454 | [] | chododom | 0 |
skypilot-org/skypilot | data-science | 4,918 | A missing `google.protobuf.duration_pb2` dependency error in serve k8s smoke tests `test_skyserve_failures` | When running `pytest tests/smoke_tests/test_sky_serve.py::test_skyserve_failures --kubernetes --serve`, in the second part where we execute
```python
sky serve update {name} --cloud {generic_cloud} -y tests/skyserve/failures/probing.yaml
```
I encountered the following error in the log:
```
D 03-08 09:52:03 config_utils.py:149] User config: -> {'jobs': {'controller': {'resources': {'cloud': 'gcp'}}}, 'serve': {'controller': {'resources': {'cloud': 'kubernetes', 'cpus': 2, 'disk_size': 100}}}}
D 03-08 09:52:03 skypilot_config.py:155] Using config path: /tmp/skypilot_configbvyh2qla
D 03-08 09:52:03 skypilot_config.py:160] Config loaded:
D 03-08 09:52:03 skypilot_config.py:160] {'jobs': {'controller': {'resources': {'cloud': 'gcp'}}},
D 03-08 09:52:03 skypilot_config.py:160] 'serve': {'controller': {'resources': {'cloud': 'kubernetes',
D 03-08 09:52:03 skypilot_config.py:160] 'cpus': 2,
D 03-08 09:52:03 skypilot_config.py:160] 'disk_size': 100}}}}
D 03-08 09:52:03 skypilot_config.py:172] Config syntax check passed.
D 03-08 09:52:03 backend_utils.py:1637] Querying Kubernetes cluster 'sky-serve-controller-e2dc6f0f' status:
D 03-08 09:52:03 backend_utils.py:1637] {'sky-serve-controller-e2dc6f0f-e2dc6f0f-head': <ClusterStatus.UP: 'UP'>}
I 03-08 09:52:12 controller_utils.py:806] ⚙︎ Translating workdir to SkyPilot Storage...
I 03-08 09:52:12 controller_utils.py:863] Workdir: '/home/andyl/skypilot/tests/skyserve/failures' -> storage: 'skypilot-filemounts-andyl-e2dc6f0f-3mgmmr4j'.
D 03-08 09:52:12 skypilot_config.py:155] Using config path: /home/andyl/.sky/config.yaml
D 03-08 09:52:12 skypilot_config.py:160] Config loaded:
D 03-08 09:52:12 skypilot_config.py:160] {'jobs': {'controller': {'resources': {'cloud': 'gcp'}}},
D 03-08 09:52:12 skypilot_config.py:160] 'serve': {'controller': {'resources': {'cloud': 'kubernetes',
D 03-08 09:52:12 skypilot_config.py:160] 'cpus': 2,
D 03-08 09:52:12 skypilot_config.py:160] 'disk_size': 100}}}}
D 03-08 09:52:12 skypilot_config.py:172] Config syntax check passed.
ImportError: cannot import name 'duration_pb2' from 'google.protobuf' (unknown location)
```
Afterward, the service status shows:
```
Services
NAME VERSION UPTIME STATUS REPLICAS ENDPOINT
sky-service-6dd3 - - NO_REPLICA 0/0 http://localhost:30046/skypilot/default/sky-serve-controller-e2dc6f0f-e2dc6f0f/30001
Service Replicas
SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION
sky-service-6dd3 1 1 - - - FAILED_INITIAL_DELAY -
sky-service-6dd3 2 1 - - - FAILED_INITIAL_DELAY -
```
Which is expected. But just to confirm, is this `ImportError` the expected behavior for this test? | open | 2025-03-08T10:01:30Z | 2025-03-10T04:24:30Z | https://github.com/skypilot-org/skypilot/issues/4918 | [] | andylizf | 1 |
okken/pytest-check | pytest | 102 | Set colorama version to >=0.4.5 |
Latest refactor adds a dependency to `colorama >= 0.4.6`, meanwhile awscl sets a `colorama <= 0.4.5`
https://github.com/okken/pytest-check/blob/b96e3654c57c1e1dcd14690848a06426bf9ad1af/pyproject.toml#L19
This is currently breaking `pip` when installing `pytest-check` alongside `awscli` as it cannot resolve a valid `colorama` package to install.
Is it posible to reduce the version? | closed | 2022-11-29T16:08:19Z | 2022-12-01T15:51:16Z | https://github.com/okken/pytest-check/issues/102 | [] | fuster92 | 5 |
nschloe/tikzplotlib | matplotlib | 257 | Hatching is not mapped to pattern | Hello,
I have a matplotlib code that plots a bar plot with bars that are colored and given some pattern, like this one:
```python
import pandas as pd
from matplotlib2tikz import get_tikz_code
speed = [0.1, 17.5, 40, 48, 52, 69, 88]
lifespan = [2, 8, 70, 1.5, 25, 12, 28]
index = ['snail', 'pig', 'elephant', 'rabbit', 'giraffe', 'coyote', 'horse']
df = pd.DataFrame({'speed': speed,\
'lifespan': lifespan}, index=index)
ax = df.plot.bar(rot=0)
bars = ax.patches
patterns =('o', '', '//', '\\\\', 'O', '-', '+', 'x','/','\\')
hatches = [p for p in patterns for i in range(len(df))]
for bar, hatch in zip(bars, hatches):
bar.set_hatch(hatch)
ax.legend()
print(get_tikz_code("example.tex"))
```
Unfortunately, in the tikz output the pattern is lost. I wonder whether it is possible to do what I'd like at all with tikz, but I see online examples that seem to do exactly that.
Is it a feature not implemented yet, or a bug? | closed | 2018-11-14T14:33:48Z | 2019-12-19T10:35:01Z | https://github.com/nschloe/tikzplotlib/issues/257 | [] | lucaventurini | 1 |
sgl-project/sglang | pytorch | 3,806 | [Feature] mla speed | ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
https://github.com/flashinfer-ai/flashinfer/pull/887
### Related resources
_No response_ | closed | 2025-02-24T05:40:23Z | 2025-02-24T15:56:11Z | https://github.com/sgl-project/sglang/issues/3806 | [] | MichoChan | 1 |
pyg-team/pytorch_geometric | deep-learning | 9,727 | FutureWarning using torch.load with torch>2.4, torch.serialization.add_safe_globals does not work for torch_geometric.data.Data | ### 🐛 Describe the bug
Hello,
I wanted to report on a warning related to the latest pytorch versions, which may become an issue moving forward.
Since I've moved to pytorch version >2.4, doing `torch.save` and `torch.load` of a `torch_geometric.data.Data` object results in the following warning:
>FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
This can be reproduced for instance by running
```python
import torch
from torch_geometric.data import Data
data = Data(x=torch.randn(10))
torch.save(data, 'data.pt')
data = torch.load('data.pt')
```
However, if I do as suggested, that is using `torch.serialization.add_safe_globals` to whitelist `Data` and adding the `weights_only` option in the `torch.load` call, i.e.
```python
import torch
from torch_geometric.data import Data
torch.serialization.add_safe_globals([Data])
data = Data(x=torch.randn(10))
torch.save(data, 'data.pt')
data = torch.load('data.pt', weights_only=True)
```
I get the following error
```
UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options
(1) Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL torch_geometric.data.data.DataEdgeAttr was not an allowed global by default. Please use `torch.serialization.add_safe_globals([DataEdgeAttr])` to allowlist this global if you trust this class/function.
```
I may be mistaken, but I think it is not intended that I also add `DataEdgeAttr` to the serialization whitelist.
This is clearly not a greatly concerning bug right now, as `torch.load` still works, but a fix may become necessary in the future.
### Versions
```
Collecting environment information...
PyTorch version: 2.4.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Professionnel (10.0.26100 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.4 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 13:47:18) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.26100-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A1000 Laptop GPU
Nvidia driver version: 556.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: 12th Gen Intel(R) Core(TM) i7-12700H
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2300
MaxClockSpeed: 2300
L2CacheSize: 11776
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] onnx==1.14.1
[pip3] onnxruntime-gpu==1.16.0
[pip3] optree==0.11.0
[pip3] pytorch-ignite==0.4.12
[pip3] torch==2.4.1+cu118
[pip3] torch_cluster==1.6.3+pt24cu118
[pip3] torch-geometric==2.6.1
[pip3] torch_scatter==2.1.2+pt24cu118
[pip3] torch_sparse==0.6.18+pt24cu118
[pip3] torch_spline_conv==1.2.2+pt24cu118
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.4.1+cu118
[pip3] torchmetrics==1.2.0
[pip3] torchvision==0.19.1+cu118
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46357
[conda] mkl-service 2.4.0 py311h2bbff1b_1
[conda] mkl_fft 1.3.8 py311h2bbff1b_0
[conda] mkl_random 1.2.4 py311h59b6b97_0
[conda] numpy 1.26.0 py311hdab7c0b_0
[conda] numpy-base 1.26.0 py311hd01c5d8_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] pytorch-ignite 0.4.12 pypi_0 pypi
[conda] torch 2.4.1+cu118 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt24cu118 pypi_0 pypi
[conda] torch-geometric 2.6.1 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt24cu118 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt24cu118 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt24cu118 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torchaudio 2.4.1+cu118 pypi_0 pypi
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchvision 0.19.1+cu118 pypi_0 pypi
``` | open | 2024-10-23T15:15:09Z | 2025-02-02T20:51:41Z | https://github.com/pyg-team/pytorch_geometric/issues/9727 | [
"bug"
] | lposti | 4 |
PokeAPI/pokeapi | api | 876 | Some Pokemon get requests do not respond correct to name, but do with ID # | <!--
Thanks for contributing to the PokéAPI project. To make sure we're effective, please check the following:
- Make sure your issue hasn't already been submitted on the issues tab. (It has search functionality!)
- If your issue is one of outdated API data, please note that we get our data from [veekun](https://github.com/veekun/pokedex/). If they are not up to date either, please look for or create an issue there. Otherwise, feel free to create an issue here.
- Provide a clear description of the issue.
- Provide a clear description of the steps to reproduce.
- Provide a clear description of the expected behavior.
Thank you!
-->
I think there is a naming/species issue:
example data
```
"name": "basculin-red-striped",
"order": 666,
"past_types": [],
"species": {
"name": "basculin",
"url": "https://pokeapi.co/api/v2/pokemon-species/550/"
},
```
Should the `name` property not be 'basculin' and `species.name` be 'basculin-red-striped'?
I am not sure if this is a database storage issue or possible code bug, but I wanted to bring it to attention.
Steps to Reproduce:
1. go to link https://pokeapi.co/api/v2/pokemon/550 & https://pokeapi.co/api/v2/pokemon/basculin
2. The second should return the same results as the first; however, it returns a 404 response.
| open | 2023-05-07T22:44:09Z | 2023-05-09T15:12:57Z | https://github.com/PokeAPI/pokeapi/issues/876 | [] | d0rf47 | 2 |
saleor/saleor | graphql | 16,989 | Bug: There is no CustomerDeleted event in graphql schema and AsyncWebhookEventType.ACCOUNT_DELETED in app-sdk. | ### What are you trying to achieve?
I am trying to subscribe to the ACCOUNT_DELETED or CUSTOMER_DELETED event. For ACCOUNT_DELETED there is no such type in AsyncWebhookEventType (app-sdk), and for CUSTOMER_DELETED there is no CustomerDeleted event in the GraphQL API, so I was unable to generate the schema. As a result I am stuck trying to get either of these events. Maybe I am missing something, but CUSTOMER_CREATED and CUSTOMER_UPDATED work fine. Is there a proper way to subscribe to a customer delete event?
### Steps to reproduce the problem
I'm trying to compile this fragment (Error 0: Unknown type "CustomerDeleted"):
```graphql
fragment CustomerDeletedPayload on CustomerDeleted {
  user {
    id
  }
}

subscription CustomerDeleted {
  event {
    ...CustomerDeletedPayload
  }
}
```
or create a webhook (error on `event: "ACCOUNT_DELETED"`):

```ts
const accountDeletedWebhook = new SaleorAsyncWebhook<AccountDeletedPayloadFragment>({
  name: "Account deleted",
  event: "ACCOUNT_DELETED",
  webhookPath: "api/webhooks/account-deleted",
  apl: saleorApp.apl,
  query: AccountDeletedDocument,
});
```
### What did you expect to happen?
In case of ACCOUNT_DELETED I expected that AsyncWebhookEventType.ACCOUNT_DELETED exists.
In case of CUSTOMER_DELETED I expected CustomerDeleted event type.
### Logs
CustomerDeleted:
✖ GraphQL Document Validation failed with 1 errors;
Error 0: Unknown type "CustomerDeleted". Did you mean "CustomerDelete", "CustomerCreated", "CustomerUpdated", "CustomerBulkDelete", or "CustomerCreate"?
AccountDeleted:
Type '"ACCOUNT_DELETED"' is not assignable to type 'AsyncWebhookEventType | undefined'.
### Environment
Saleor version: 3.20.46 on saleor cloud.
saleor/app-sdk: 0.50.3
| open | 2024-11-13T10:32:23Z | 2024-11-29T13:12:20Z | https://github.com/saleor/saleor/issues/16989 | [
"bug",
"accepted"
] | i-n-n-k-e-e-p-e-r | 1 |
huggingface/datasets | pytorch | 6,833 | Super slow iteration with trivial custom transform | ### Describe the bug
Dataset is 10X slower when applying trivial transforms:
```
import time
import numpy as np
from datasets import Dataset, Features, Array2D
a = np.zeros((800, 800))
a = np.stack([a] * 1000)
features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")})
ds1 = Dataset.from_dict({"a": a}, features=features).with_format('numpy')
def transform(batch):
return batch
ds2 = ds1.with_transform(transform)
%time sum(1 for _ in ds1)
%time sum(1 for _ in ds2)
```
```
CPU times: user 472 ms, sys: 319 ms, total: 791 ms
Wall time: 794 ms
CPU times: user 9.32 s, sys: 443 ms, total: 9.76 s
Wall time: 9.78 s
```
In my real code I'm using set_transform to apply some post-processing on-the-fly for the 2d array, but it significantly slows down the dataset even if the transform itself is trivial.
Related issue: https://github.com/huggingface/datasets/issues/5841
### Steps to reproduce the bug
Use code in the description to reproduce.
### Expected behavior
Trivial custom transform in the example should not slowdown the dataset iteration.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2 | open | 2024-04-23T20:40:59Z | 2024-10-08T15:41:18Z | https://github.com/huggingface/datasets/issues/6833 | [] | xslittlegrass | 7 |
gtalarico/django-vue-template | rest-api | 48 | Error: yarn build | ```
$ yarn build
| Building for production...
ERROR Failed to compile with 3 errors 11:32:56
error in ./src/App.vue?vue&type=style&index=0&lang=css&
Module build failed (from ./node_modules/mini-css-extract-plugin/dist/loader.js)
:
TypeError: this[NS] is not a function
at D:\Documents\Coding\web\django-vue-template\node_modules\mini-css-extract
-plugin\dist\loader.js:148:15
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
iler.js:296:11
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
iler.js:553:14
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:13:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
iler.js:550:30
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:4:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1294:35
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:4:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1285:32
at eval (eval at create (D:\Documents\Coding\Web\django-vue-template\node_mo
dules\tapable\lib\HookCodeFactory.js:24:12), <anonymous>:9:1)
at D:\Documents\Coding\Web\django-vue-template\node_modules\uglifyjs-webpack
-plugin\dist\index.js:282:11
at _class.runTasks (D:\Documents\Coding\Web\django-vue-template\node_modules
\uglifyjs-webpack-plugin\dist\uglify\index.js:63:9)
at UglifyJsPlugin.optimizeFn (D:\Documents\Coding\Web\django-vue-template\no
de_modules\uglifyjs-webpack-plugin\dist\index.js:195:16)
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:5:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1280:36
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:4:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1276:32
@ ./src/App.vue?vue&type=style&index=0&lang=css& 1:0-403 1:419-422 1:424-824 1:
424-824
@ ./src/App.vue
@ ./src/main.js
@ multi ./src/main.js
error in ./src/components/VueDemo.vue?vue&type=style&index=0&id=1fef0148&scope
d=true&lang=css&
Module build failed (from ./node_modules/mini-css-extract-plugin/dist/loader.js)
:
TypeError: this[NS] is not a function
at D:\Documents\Coding\web\django-vue-template\node_modules\mini-css-extract
-plugin\dist\loader.js:148:15
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
iler.js:296:11
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
iler.js:553:14
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:13:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
iler.js:550:30
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:4:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1294:35
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:4:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1285:32
at eval (eval at create (D:\Documents\Coding\Web\django-vue-template\node_mo
dules\tapable\lib\HookCodeFactory.js:24:12), <anonymous>:9:1)
at D:\Documents\Coding\Web\django-vue-template\node_modules\uglifyjs-webpack
-plugin\dist\index.js:282:11
at _class.runTasks (D:\Documents\Coding\Web\django-vue-template\node_modules
\uglifyjs-webpack-plugin\dist\uglify\index.js:63:9)
at UglifyJsPlugin.optimizeFn (D:\Documents\Coding\Web\django-vue-template\no
de_modules\uglifyjs-webpack-plugin\dist\index.js:195:16)
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:5:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1280:36
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:4:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1276:32
@ ./src/components/VueDemo.vue?vue&type=style&index=0&id=1fef0148&scoped=true&l
ang=css& 1:0-449 1:465-468 1:470-916 1:470-916
@ ./src/components/VueDemo.vue
@ ./src/router.js
@ ./src/main.js
@ multi ./src/main.js
error in ./src/components/Messages.vue?vue&type=style&index=0&id=34ae3f99&scop
ed=true&lang=css&
Module build failed (from ./node_modules/mini-css-extract-plugin/dist/loader.js)
:
TypeError: this[NS] is not a function
at D:\Documents\Coding\web\django-vue-template\node_modules\mini-css-extract
-plugin\dist\loader.js:148:15
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
iler.js:296:11
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
iler.js:553:14
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:13:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
iler.js:550:30
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:4:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1294:35
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:4:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1285:32
at eval (eval at create (D:\Documents\Coding\Web\django-vue-template\node_mo
dules\tapable\lib\HookCodeFactory.js:24:12), <anonymous>:9:1)
at D:\Documents\Coding\Web\django-vue-template\node_modules\uglifyjs-webpack
-plugin\dist\index.js:282:11
at _class.runTasks (D:\Documents\Coding\Web\django-vue-template\node_modules
\uglifyjs-webpack-plugin\dist\uglify\index.js:63:9)
at UglifyJsPlugin.optimizeFn (D:\Documents\Coding\Web\django-vue-template\no
de_modules\uglifyjs-webpack-plugin\dist\index.js:195:16)
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:5:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1280:36
at AsyncSeriesHook.eval [as callAsync] (eval at create (D:\Documents\Coding\
Web\django-vue-template\node_modules\tapable\lib\HookCodeFactory.js:24:12), <ano
nymous>:4:1)
at AsyncSeriesHook.lazyCompileHook [as _callAsync] (D:\Documents\Coding\Web\
django-vue-template\node_modules\tapable\lib\Hook.js:35:21)
at D:\Documents\Coding\Web\django-vue-template\node_modules\webpack\lib\Comp
ilation.js:1276:32
@ ./src/components/Messages.vue?vue&type=style&index=0&id=34ae3f99&scoped=true&
lang=css& 1:0-450 1:466-469 1:471-918 1:471-918
@ ./src/components/Messages.vue
@ ./src/router.js
@ ./src/main.js
@ multi ./src/main.js
ERROR Build failed with errors.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
``` | closed | 2020-02-07T04:36:20Z | 2020-04-26T07:22:53Z | https://github.com/gtalarico/django-vue-template/issues/48 | [] | HotPotatoC | 0 |
modin-project/modin | pandas | 7,302 | Pin numpy<2 and release 0.30.1, 0.29.1, 0.28.3, 0.27.1 versions | closed | 2024-06-10T10:30:55Z | 2024-06-11T17:37:04Z | https://github.com/modin-project/modin/issues/7302 | [
"dependencies 🔗",
"P0"
] | anmyachev | 3 | |
HIT-SCIR/ltp | nlp | 568 | how to connect electra output with linear output? | https://github.com/HIT-SCIR/ltp/blob/4151b88fed3899b7eaf0e80a7a01fa5842f4df77/ltp/transformer_linear.py#L171
Here `output_hidden` is disabled, so the output dimension is 1. How can this be connected to the next linear layer, which takes `hidden_size` as its input dimension? | closed | 2022-05-30T13:36:05Z | 2022-05-30T14:14:11Z | https://github.com/HIT-SCIR/ltp/issues/568 | [] | npuichigo | 1 |