| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
plotly/dash-core-components | dash | 778 | Undesired behaviour (interaction?) with two `dcc.Store` | The issue originates from https://community.plot.ly/t/components-triggered-by-table-not-updating/36288/4
With
```python
import dash
import dash_table
import pandas as pd
import dash_html_components as html
import dash_core_components as dcc
from dash.dependencies import Input, Output, State
from dash.exceptions import PreventUpdate
import plotly.graph_objs as go

app = dash.Dash(__name__)
server = app.server

app.layout = html.Div(children=[
    dash_table.DataTable(
        id='table-data',
        data=[{'x': 'test', 'graph': 0}],
        columns=[
            {'id': 'x', 'name': 'x', 'editable': False},
            {'id': 'graph', 'name': 'graph', 'presentation': 'dropdown', 'editable': True}],
        dropdown={
            'graph': {
                'options': [
                    {'label': 'Do not show', 'value': 0x0},
                    {'label': 'Plot 1', 'value': 1},
                    {'label': 'Plot 2', 'value': 2}],
            },
        },
        row_deletable=False,
        editable=True,
    ),
    dcc.Store(id='g1buffer', storage_type='memory'),
    dcc.Store(id='g2buffer', storage_type='memory'),
    dcc.Graph(id='plot-graph1'),
    dcc.Graph(id='plot-graph2'),
])


@app.callback(
    Output('plot-graph1', 'figure'),
    [Input('g1buffer', 'data')],
)
def update_graph1(data):
    if data is None:
        raise PreventUpdate
    return data


@app.callback(
    Output('plot-graph2', 'figure'),
    [Input('g2buffer', 'data')],
)
def update_graph2(data):
    if data is None:
        raise PreventUpdate
    return data


@app.callback(
    [
        Output('g1buffer', 'data'),
        Output('g2buffer', 'data'),
    ],
    [Input('table-data', 'data')],
)
def update_on_table(table_data):
    data = go.Scatter(
        x=[1, 2, 3, 4],
        y=[2, 5, 1, 3],
    )
    g1 = {}
    g2 = {}
    if table_data[0]['graph'] == 1:
        g1 = {'data': [data]}
    if table_data[0]['graph'] == 2:
        g2 = {'data': [data]}
    return g1, g2


if __name__ == '__main__':
    app.run_server(debug=True)
```
the dropdown does not update the figures as expected (sometimes plot 1 does not show up as it should). Modifying the layout to have a single `dcc.Store` seems to solve the problem (see below). Could there be undesired interactions between the two `dcc.Store`?
```python
import dash
import dash_table
import pandas as pd
import dash_html_components as html
import dash_core_components as dcc
from dash.dependencies import Input, Output, State
from dash.exceptions import PreventUpdate
import plotly.graph_objs as go

app = dash.Dash(__name__)
server = app.server

app.layout = html.Div(children=[
    dash_table.DataTable(
        id='table-data',
        data=[{'x': 'test', 'graph': 0}],
        columns=[
            {'id': 'x', 'name': 'x', 'editable': False},
            {'id': 'graph', 'name': 'graph', 'presentation': 'dropdown', 'editable': True}],
        dropdown={
            'graph': {
                'options': [
                    {'label': 'Do not show', 'value': 0x0},
                    {'label': 'Plot 1', 'value': 1},
                    {'label': 'Plot 2', 'value': 2}],
            },
        },
        editable=True,
    ),
    dcc.Store(id='gbuffer'),
    dcc.Graph(id='plot-graph1'),
    dcc.Graph(id='plot-graph2'),
])


@app.callback(
    Output('plot-graph1', 'figure'),
    [Input('gbuffer', 'data')],
)
def update_graph1(data):
    print('update_graph1', data)
    if data is None:
        raise PreventUpdate
    return data[0]


@app.callback(
    Output('plot-graph2', 'figure'),
    [Input('gbuffer', 'data')],
)
def update_graph2(data):
    print('update_graph2', data)
    if data is None:
        raise PreventUpdate
    return data[1]


@app.callback(
    Output('gbuffer', 'data'),
    [Input('table-data', 'data')],
)
def update_on_table(table_data):
    data = go.Scatter(
        x=[1, 2, 3, 4],
        y=[2, 5, 1, 3],
    )
    g1 = {}
    g2 = {}
    if table_data[0]['graph'] == 1:
        g1 = {'data': [go.Scatter(x=[1, 2], y=[1, 2])]}
    if table_data[0]['graph'] == 2:
        g2 = {'data': [go.Scatter(x=[1, 3], y=[2, 3])]}
    return [g1, g2]


if __name__ == '__main__':
    app.run_server(debug=True)
``` | closed | 2020-03-17T19:21:11Z | 2020-05-05T00:10:57Z | https://github.com/plotly/dash-core-components/issues/778 | [] | emmanuelle | 1 |
davidteather/TikTok-Api | api | 572 | [BUG] - by_hashtag and get_hashtag_object both fail when using Selenium | **Describe the bug**
When using selenium, at least by_hashtag and get_hashtag_object fail.
If Selenium is not used, these two methods work as expected; the failure only happens when `use_selenium=True`.
Changing out proxies, use_test_endpoints, custom_verifyFp doesn't seem to impact the response.
According to the error trace, TikTok responds with a useless string:
{statusCode: 0,body: {userData: {},statusCode: -1,shareUser: {}}}
Running this on a VM and testing locally on windows 10, which is why I am using Selenium, not the other option.
**The buggy code**
```python
import requests
from TikTokApi import TikTokApi
import logging

custom_verifyFp = 'value here'
api = TikTokApi.get_instance(use_selenium=True, use_test_endpoints=True)
r = api.by_hashtag('MyNeutrogenaMoment', count=30, custom_verifyFp=custom_verifyFp)
print(r)
```
**Error Trace (if any)**
```
ERROR:root:TikTok response: {statusCode: 0,body: {userData: {},statusCode: -1,shareUser: {}}}
ERROR:root:Converting response to JSON failed
ERROR:root:Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
Traceback (most recent call last):
File "C:\Anaconda\lib\site-packages\TikTokApi\tiktok.py", line 264, in get_data
json = r.json()
File "C:\Anaconda\lib\site-packages\requests\models.py", line 889, in json
return complexjson.loads(
File "C:\Anaconda\lib\json\__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "C:\Anaconda\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Anaconda\lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
```
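For what it's worth, the decode failure is reproducible with the stdlib alone: the body in the trace uses unquoted keys, so it is not valid JSON. A minimal sketch (the helper name is just illustrative):

```python
import json

# Raw body from the error trace above; note the keys are not double-quoted.
raw = "{statusCode: 0,body: {userData: {},statusCode: -1,shareUser: {}}}"

def parse_tiktok_body(text):
    """Illustrative helper: return parsed JSON, or None for non-JSON bodies."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None

print(parse_tiktok_body(raw))             # None, the same decode failure as the trace
print(parse_tiktok_body('{"ok": true}'))  # {'ok': True}
```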
**Desktop (please complete the following information):**
- OS: windows 10
- TikTokApi Version 3.9.5
| closed | 2021-04-22T21:38:06Z | 2021-08-07T00:30:33Z | https://github.com/davidteather/TikTok-Api/issues/572 | [
"bug"
] | bmader12 | 1 |
matplotlib/matplotlib | data-visualization | 29,219 | [Bug]: Missing axes limits auto-scaling support for LineCollection | ### Bug summary
Matplotlib is missing auto-scale support for LineCollection.
Related issues:
- https://github.com/matplotlib/matplotlib/issues/23317/
- https://github.com/matplotlib/matplotlib/pull/28403
### Code for reproduction
```Python
from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection
lc = LineCollection([[(x, x ** 2) for x in range(5)]])
ax = plt.gca()
ax.add_collection(lc)
# ax.autoscale() # need to manually call this
plt.show()
```
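For reference, the limits autoscaling would need can be computed from the same segment list without touching matplotlib:

```python
# Data limits for the LineCollection above: min/max over every vertex of every segment.
segments = [[(x, x ** 2) for x in range(5)]]
xs = [x for seg in segments for x, _ in seg]
ys = [y for seg in segments for _, y in seg]
print(min(xs), max(xs), min(ys), max(ys))  # 0 4 0 16
```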
### Actual outcome

### Expected outcome

### Additional information
_No response_
### Operating system
macOS 14.6.1
### Matplotlib Version
3.9.3
### Matplotlib Backend
macosx
### Python version
3.12.7
### Jupyter version
7.2.2
### Installation
pip | open | 2024-12-02T19:38:32Z | 2024-12-07T12:35:01Z | https://github.com/matplotlib/matplotlib/issues/29219 | [
"status: confirmed bug"
] | carlosgmartin | 13 |
facebookresearch/fairseq | pytorch | 4,947 | what's the actual learning_rate in data2vec2.0 ? | Hi,
I noticed that in the data2vec2.0 code, the losses for different samples, patches and channels are accumulated with "sum" op instead of "mean":
```python
# d2v loss is first computed in func d2v_loss:
loss = F.mse_loss(x, y, reduction="none")  # data2vec2.py:708
scale = 1 / math.sqrt(x.size(-1))  # data2vec2.py:715
reg_loss = loss * scale  # data2vec2.py:717

# then returned in result['losses']
result["losses"][n] = reg_loss * self.cfg.d2v_loss

# finally reduced to a scalar with "sum" in criterions/model_criterion.py:66-75
for lk, p in losses.items():
    scaled_losses[lk] = coef * p.float().sum()
```
which is different from the MAE repository, where the losses are averaged:
```python
loss = (pred - target) ** 2
loss = loss.mean(dim=-1) # [N, L], mean loss per patch
loss = (loss * mask).sum() / mask.sum() # mean loss on removed patches
```
Therefore, for a similar base learning rate (4e-4 for ViT-Large in data2vec2 vs 1.5e-4 in MAE),
the actual learning rate for data2vec2 is about (8*16*14*14*0.75*32) times larger than it would be with a "mean" reduction,
which means that if the losses were averaged in data2vec2.0, the equivalent lr would be about 240, which would be incredibly large.
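The sum-vs-mean factor itself can be sanity-checked without fairseq: for an elementwise loss, the gradient of `loss.sum()` is exactly N times the gradient of `loss.mean()`, so a "sum" reduction at lr behaves like a "mean" reduction at lr * N. A pure-Python sketch:

```python
def mse_grads(pred, target):
    """Gradients of sum- and mean-reduced MSE w.r.t. each prediction."""
    n = len(pred)
    grad_sum = [2 * (p - t) for p, t in zip(pred, target)]  # d/dp of sum_i (p_i - t_i)^2
    grad_mean = [g / n for g in grad_sum]                   # d/dp of mean_i (p_i - t_i)^2
    return grad_sum, grad_mean

pred, target = [1.0, 2.0, 4.0, 8.0], [0.0, 0.0, 0.0, 0.0]
g_sum, g_mean = mse_grads(pred, target)
print([gs / gm for gs, gm in zip(g_sum, g_mean)])  # [4.0, 4.0, 4.0, 4.0] == N
```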
So I suspect that my reasoning is wrong, but I can't find why.
Can you help me?
| closed | 2023-01-18T04:27:04Z | 2024-04-27T09:56:39Z | https://github.com/facebookresearch/fairseq/issues/4947 | [
"question",
"needs triage"
] | flishwang | 0 |
deezer/spleeter | tensorflow | 199 | Please Add Freez model as well | ## Description
## Additional information
I am getting an error while freezing the model; please upload a frozen model as well (see the open issue).
| closed | 2019-12-26T09:10:38Z | 2019-12-30T14:58:00Z | https://github.com/deezer/spleeter/issues/199 | [
"enhancement",
"feature"
] | waqasakram117 | 1 |
biolab/orange3 | pandas | 6,576 | Distributions outputs wrong data | **What's wrong**
The widget's output does not match the selection.
**How can we reproduce the problem?**
Load Zoo and pass it to Distributions. Show "type" and check "Sort categories by frequency".
When selecting the n-th column, the widget outputs data referring to the n-th value of the variable in the original, unsorted order.
It is pretty amazing that nobody noticed this so far.
**When fixing this** consider that the user can click "Sort categories by frequency" while something is selected. This shouldn't affect which values are selected (e.g. if one selects mammals and insects, they must still be selected).
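The mapping the fix needs can be sketched in a few lines: selections arrive as positions in the displayed (frequency-sorted) order and must be translated back to the variable's original value order (the counts below are illustrative, not the exact Zoo frequencies):

```python
values = ["amphibian", "bird", "fish", "insect", "invertebrate", "mammal", "reptile"]
counts = [4, 20, 13, 8, 10, 41, 5]  # illustrative frequencies

# Display order with "Sort categories by frequency" enabled (descending).
display_order = sorted(range(len(values)), key=lambda i: -counts[i])

selected_columns = [0, 4]  # user clicks the 1st and 5th displayed bars
selected_values = [values[display_order[c]] for c in selected_columns]
print(selected_values)  # ['mammal', 'insect'], not values[0] and values[4]
```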
**Don't forget** selection using keyboard, including, e.g. Shift-Right.
**What's your environment?**
- Operating system: macOS
- Orange version: latest master
- How you installed Orange: pip
| closed | 2023-09-14T14:45:06Z | 2023-09-20T20:21:07Z | https://github.com/biolab/orange3/issues/6576 | [
"bug",
"meal"
] | janezd | 4 |
pytorch/pytorch | python | 149,196 | (Will PR) Multiprocessing with CUDA_VISIBLE_DEVICES seems to give the wrong device | ### EDIT: PR to fix this
PR is here: https://github.com/pytorch/pytorch/pull/149248
### 🐛 Describe the bug
Hi thanks for the helpful library! When two processes have different CUDA_VISIBLE_DEVICES and pass around tensor between them, it seems the `.device` attribute is incorrect.
Example code:
```python
import os


def _run_second_process(queue):
    print(f'[second] {os.environ.get("CUDA_VISIBLE_DEVICES")=}')
    value_from_queue = queue.get()
    print(f'[second] queue.get {value_from_queue=} {value_from_queue.device=}')


def _run_main_process():
    import torch

    print(f'[first] {os.environ.get("CUDA_VISIBLE_DEVICES")=}')
    queue = torch.multiprocessing.Queue()

    os.environ['CUDA_VISIBLE_DEVICES'] = '1,2'
    p = torch.multiprocessing.Process(
        target=_run_second_process,
        kwargs=dict(queue=queue),
    )
    p.start()
    del os.environ['CUDA_VISIBLE_DEVICES']

    value_to_queue = torch.tensor([1.0, 2.0], device='cuda:1')
    print(f'[first] queue.put {value_to_queue=} {value_to_queue.device=}')
    queue.put(value_to_queue)

    p.join()


if __name__ == '__main__':
    _run_main_process()
```
Output:
```
[first] os.environ.get("CUDA_VISIBLE_DEVICES")=None
[second] os.environ.get("CUDA_VISIBLE_DEVICES")='1,2'
[first] queue.put value_to_queue=tensor([1., 2.], device='cuda:1') value_to_queue.device=device(type='cuda', index=1)
[second] queue.get value_from_queue=tensor([1., 2.], device='cuda:1') value_from_queue.device=device(type='cuda', index=1)
```
It seems `cuda:0` in the second process should mean `cuda:1` in the first process, thus the second process wrongly recognizes the tensor as `cuda:1`.
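The translation a fix would need is purely index bookkeeping: a device index serialized under the sender's `CUDA_VISIBLE_DEVICES` has to be mapped through the physical GPU id into the receiver's namespace. A pure-Python sketch (function and variable names are mine):

```python
def remap_device_index(sender_visible, receiver_visible, sender_index):
    """Map a cuda:<i> index from the sender's namespace to the receiver's."""
    physical = sender_visible[sender_index]  # e.g. sender's cuda:1 -> physical GPU 1
    return receiver_visible.index(physical)  # where that GPU sits for the receiver

# Sender sees all GPUs; receiver was started with CUDA_VISIBLE_DEVICES=1,2
print(remap_device_index([0, 1, 2, 3], [1, 2], 1))  # 0 -> receiver should see cuda:0
```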
This seems to be related to issues like https://github.com/volcengine/verl/pull/490#issuecomment-2720212225.
If I manage to find some spare time, I am happy to PR for this.
### Versions
<details>
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.39
Python version: 3.10.16 (main, Dec 4 2024, 08:53:38) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.127.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.3+cu124torch2.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.14.1
[pip3] torch==2.5.1
[pip3] torch_memory_saver==0.0.2
[pip3] torchao==0.9.0
[pip3] torchaudio==2.5.1
[pip3] torchdata==0.11.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
</details>
cc @VitalyFedyunin @albanD @ptrblck @msaroufim @eqy | open | 2025-03-14T14:36:24Z | 2025-03-19T11:41:11Z | https://github.com/pytorch/pytorch/issues/149196 | [
"module: multiprocessing",
"module: cuda",
"triaged"
] | fzyzcjy | 10 |
LAION-AI/Open-Assistant | machine-learning | 2,953 | oasst-sft-1-pythia-12b model is giving weird answers | I am running Open Assistant, but the oasst-sft-1-pythia-12b model is giving weird answers.
Hardware: Nvidia T4, 8 CPUs, 60 GB RAM
<img width="763" alt="image" src="https://user-images.githubusercontent.com/33727088/235101716-df320924-1e21-4d05-818e-1f661c439b6e.png">
| closed | 2023-04-28T09:04:03Z | 2023-04-29T10:15:11Z | https://github.com/LAION-AI/Open-Assistant/issues/2953 | [] | jithinkpraveen | 0 |
kymatio/kymatio | numpy | 480 | ModuleNotFoundErrors | - [x] 1D
- [x] 2D
- [x] 3D
Hi everyone,
In the current kymatio-v2 branch, there are no `__init__.py` files in the `kymatio.frontend`, `kymatio.scattering2d.backend` and `kymatio.scattering2d.core` packages. On my side this leads to `ModuleNotFoundError`s when trying to run, for instance, `examples/2d/cifar.py` after installing kymatio with `python setup.py install` (or similarly `pip install .` in the kymatio head folder). It seems to be linked to the `find_packages` function of setuptools: "find_packages() walks the target directory, filtering by inclusion patterns, and finds Python packages (any directory). Packages are only recognized if they include an __init__.py file." (https://setuptools.readthedocs.io/en/latest/setuptools.html#using-find-packages)
Adding empty `__init__.py` files in those packages solved the problem for me. | closed | 2020-01-16T19:00:50Z | 2020-01-27T02:48:07Z | https://github.com/kymatio/kymatio/issues/480 | [] | anakin-datawalker | 2 |
python-visualization/folium | data-visualization | 1,402 | Get lat/lng programmatically from a mouse click event | Is it possible to get the lat/lng **programmatically** from a mouse click event on the map? The lat/lng is needed for subsequent computation. Thanks. | closed | 2020-10-26T22:33:03Z | 2020-10-27T08:35:43Z | https://github.com/python-visualization/folium/issues/1402 | [] | giswqs | 1 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,782 | Nodriver: Running in docker | I have trouble trying to get nodriver/undetected-chromedriver running in docker.
Nomatter what I do, I always end up with the following error:
```
File "/usr/local/lib/python3.12/socket.py", line 837, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
```
For example, some versions of Dockerfiles I tried:
- [Version 1](https://pastebin.com/hyZNQ0Ti) (`FROM python:latest`)
- [Version 2](https://pastebin.com/iggUc6Bm) (`FROM ultrafunk/undetected-chromedriver`)
Thankful for any hints!
Also, if someone figured out how to run nodriver with proxies that need authorization, I'd be happy to hear about it! Cheers. | open | 2024-03-10T09:16:23Z | 2025-01-25T21:16:29Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1782 | [] | ven0ms99 | 12 |
ccxt/ccxt | api | 25,174 | Binance Futures - Edit Order on Binance Futures doesn't work with priceMatch parameter | ### Operating System
Windows/Linux
### Programming Languages
JavaScript
### CCXT Version
4.4.53
### Description
### Issue Description
When using CCXT's editOrder with Binance's priceMatch parameter set to 'Queue' or other enum value, the price parameter must be undefined. However, CCXT currently throws an error if price is not provided, creating a conflict with Binance's API requirements.
https://developers.binance.com/docs/derivatives/usds-margined-futures/trade/rest-api/Modify-Order

### Current Behavior
```javascript
// This throws a CCXT error due to the missing price
await exchange.editOrder(
    orderId,
    symbol,
    type,
    side,
    amount,
    undefined,              // CCXT requires price
    { priceMatch: 'Queue' } // Binance requires price to be undefined
);
```
### Error
```
ArgumentsRequired: binance editOrder() requires a price argument for portfolio margin and linear orders
at BinanceCcxtPositions.editContractOrder
```
```javascript
async editContractOrder(id, symbol, type, side, amount, price = undefined, params = {}) {
    await this.loadMarkets();
    const market = this.market(symbol);
    let isPortfolioMargin = undefined;
    [isPortfolioMargin, params] = this.handleOptionAndParams2(params, 'editContractOrder', 'papi', 'portfolioMargin', false);
    if (market['linear'] || isPortfolioMargin) {
        if (price === undefined) {
            throw new errors.ArgumentsRequired(this.id + ' editOrder() requires a price argument for portfolio margin and linear orders');
        }
    }
```
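A relaxed check along these lines would resolve the conflict. This is a sketch in Python rather than ccxt's actual code, with the condition shape assumed; only the "price must be absent when priceMatch is set" rule comes from the Binance docs quoted above:

```python
def validate_edit_order(price, params):
    """Sketch: require price only when no priceMatch is supplied, and never both."""
    price_match = params.get('priceMatch')
    if price is None and price_match is None:
        raise ValueError("editOrder() requires a price argument unless priceMatch is set")
    if price is not None and price_match is not None:
        raise ValueError("price must be omitted when priceMatch is used")

validate_edit_order(None, {'priceMatch': 'Queue'})  # accepted: priceMatch replaces price
validate_edit_order(100.0, {})                      # accepted: explicit price, no priceMatch
print("ok")
```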
### Code
_No response_ | open | 2025-02-03T21:03:55Z | 2025-02-05T16:09:40Z | https://github.com/ccxt/ccxt/issues/25174 | [
"bug"
] | lostless13 | 3 |
scikit-hep/awkward | numpy | 2,513 | Error formatting is broken (an error in the error handling) | ### Version of Awkward Array
HEAD
### Description and code to reproduce
I have a real error and should be getting a properly formatted error message, but there's an error in the error-handling.
To reproduce it:
```python
import awkward as ak
f = ak.Array([[1, 2, 3], [], [4, 5]]).layout.form
ak.from_buffers(f, 0, {"": b"\x00\x00\x00\x00\x00\x00\x00\x00"}, buffer_key="{form_key}")
```
The error message is
```
Traceback (most recent call last):
File "/home/jpivarski/irishep/awkward/src/awkward/operations/ak_from_buffers.py", line 89, in from_buffers
return _impl(
File "/home/jpivarski/irishep/awkward/src/awkward/operations/ak_from_buffers.py", line 146, in _impl
out = reconstitute(form, length, container, getkey, backend, byteorder, simplify)
File "/home/jpivarski/irishep/awkward/src/awkward/operations/ak_from_buffers.py", line 349, in reconstitute
raw_array = container[getkey(form, "offsets")]
KeyError: 'None'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jpivarski/irishep/awkward/src/awkward/operations/ak_from_buffers.py", line 89, in from_buffers
return _impl(
File "/home/jpivarski/irishep/awkward/src/awkward/_errors.py", line 56, in __exit__
self.handle_exception(exception_type, exception_value)
File "/home/jpivarski/irishep/awkward/src/awkward/_errors.py", line 71, in handle_exception
raise self.decorate_exception(cls, exception)
KeyError: "'None'\n\nThis error occurred while calling\n\n ak.from_buffers(\n form = ListOffsetForm-instance\n length = 0\n container = {'': b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'}\n buffer_key = '{form_key}'\n backend = 'cpu'\n byteorder = '<'\n highlevel = True\n behavior = None\n )"
```
The `KeyError: 'None'` is the actual error, and it was supposed to be decorated like this:
```
KeyError: 'None'
This error occurred while calling
ak.from_buffers(
form = ListOffsetForm-instance
length = 0
container = {'': b'\x00\x00\x00\x00\x00\x00\x00\x00'}
buffer_key = '{form_key}'
backend = 'cpu'
byteorder = '<'
highlevel = True
behavior = None
```
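One stdlib detail consistent with the mangled output: `KeyError.__str__` returns the `repr` of its message, so a multi-line decorated message comes out on a single line with literal `\n` sequences. Reproducible in isolation:

```python
plain = ValueError("line one\nline two")
keyed = KeyError("line one\nline two")
print(str(plain))  # two real lines
print(str(keyed))  # one line, with a literal backslash-n instead of a line break
```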
The line number didn't shift between my copy of Awkward and `main`:
https://github.com/scikit-hep/awkward/blob/be876a0ae3d78de29a33c635cdcc75315c4b1740/src/awkward/_errors.py#L71
I'm in the second branch because I'm not using Python 3.11 (with its exception decorators).
```python
>>> sys.version_info
sys.version_info(major=3, minor=9, micro=15, releaselevel='final', serial=0)
``` | closed | 2023-06-07T22:35:51Z | 2023-06-14T18:44:03Z | https://github.com/scikit-hep/awkward/issues/2513 | [
"bug"
] | jpivarski | 4 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 1,132 | Incompatibility between Flask-SQLAlchemy >= 3.0.0 and PySerde | It seems there is an incompatibility between Flask-SQLAlchemy >= 3.0.0 and PySerde ([https://github.com/yukinarit/pyserde](https://github.com/yukinarit/pyserde)) when applying ORM to a dataclass.
Example:
```python
from dataclasses import dataclass

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from serde import serde

db = SQLAlchemy()
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///project.db"
db.init_app(app)


@serde
@dataclass
class User(db.Model):
    Id: int = db.Column("id", db.Integer, primary_key=True)
    Name: str = db.Column("name", db.String)


@app.route('/')
def hello_world():
    return 'Hello World!'


if __name__ == '__main__':
    app.run()
```
Exits with an error:
```
serde.compat.SerdeError: Failed to resolve type hints for User:
NameError: name 'SQLAlchemy' is not defined
If you are using forward references make sure you are calling deserialize & serialize after all classes are globally visible.
```
This is caused by the `db.Model` typing of `__fsa__`
https://github.com/pallets-eco/flask-sqlalchemy/blob/d0568f54deb6310a4059201cc3c8d5ee95ad1ad9/src/flask_sqlalchemy/model.py#L36-L40
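The failure mode is reproducible with the stdlib alone: `typing.get_type_hints()` walks the MRO and evaluates string annotations in each defining class's module, so a base-class annotation naming a type that is not importable there raises `NameError`. The class names below are illustrative stand-ins, not flask-sqlalchemy's:

```python
import typing

class Base:  # stands in for flask_sqlalchemy's Model
    pass

# Simulate Model's `__fsa__: SQLAlchemy` annotation, unresolvable in this module.
Base.__annotations__ = {"__fsa__": "SQLAlchemy"}

class User(Base):
    Id: int = 0

try:
    typing.get_type_hints(User)
except NameError as exc:
    print(exc)  # NameError mentioning 'SQLAlchemy'
```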
The same code works fine with Flask-SQLAlchemy 2.5.1
Environment:
- Python version: 3.9.15
- Flask-SQLAlchemy version: 3.0.1
- SQLAlchemy version: 1.4.42
| closed | 2022-10-26T14:25:36Z | 2023-02-01T01:18:10Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/1132 | [] | barsa-net | 5 |
zappa/Zappa | django | 648 | [Migrated] Delayed asynchronous task execution using SQS as a task source. | Originally from: https://github.com/Miserlou/Zappa/issues/1647 by [oliviersels](https://github.com/oliviersels)
## Context
Implement delayed asynchronous task execution using SQS as a task source.
Now that we have support for SQS as an event source we should extend this to have SQS as an asynchronous task source. Because SQS allows delaying messages up to 900 seconds this also allows delaying task invocation for up to this time.
## Expected Behavior
Support the following scenario:
```python
@task(service='sqs', delay_seconds=600)
def make_pie():
    """ This task is invoked asynchronously 10 minutes after it is initially run. """
```
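A rough sketch of the payload such a decorator could enqueue. The field names here are assumptions; only the 900-second delay cap comes from SQS itself (its `DelaySeconds` maximum):

```python
def build_sqs_task_message(task_path, args, delay_seconds):
    """Hypothetical payload builder; SQS caps DelaySeconds at 900, so clamp."""
    return {
        "task_path": task_path,
        "args": args,
        "DelaySeconds": min(delay_seconds, 900),
    }

print(build_sqs_task_message("make_pie", [], 600)["DelaySeconds"])   # 600
print(build_sqs_task_message("make_pie", [], 1200)["DelaySeconds"])  # 900 (clamped)
```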
## Possible Fix
See pull request
| closed | 2021-02-20T12:32:23Z | 2024-04-13T17:36:24Z | https://github.com/zappa/Zappa/issues/648 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
jupyterlab/jupyter-ai | jupyter | 352 | Better error handling in Chat UI | ## Description
Some users are encountering an error when opening the Chat UI, which is difficult to reproduce because the UI does not include any information regarding the error.
## Reproduce
See #346.
| open | 2023-08-18T15:38:56Z | 2023-08-30T18:30:14Z | https://github.com/jupyterlab/jupyter-ai/issues/352 | [
"enhancement"
] | dlqqq | 0 |
seleniumbase/SeleniumBase | pytest | 2,136 | `--driver-version="keep"` is only being applied to drivers in the `seleniumbase/drivers` folder | ## `--driver-version="keep"` is only being applied to drivers in the `seleniumbase/drivers` folder
It should also be applied to drivers that exist on the System PATH.
The current bug example:
- **Setup:** Chrome 117 was installed, with no driver in the `seleniumbase/drivers` folder, but chromedriver 115 on the System PATH.
- **What happened:** chromedriver 115 was downloaded into the `seleniumbase/drivers` folder and used.
- **What was expected:** SeleniumBase should have just used the existing chromedriver 115 that was already on the System PATH.
--------
An explanation of how `--driver-version="keep"` is supposed to work:
If there's already a driver in the `seleniumbase/drivers` folder (or there's one on your System PATH), then SeleniumBase should use that driver for tests, even if the browser version does not match the driver version. Eg. If Chrome 117 is installed, but you have chromedriver 115, then SeleniumBase should keep using that existing chromedriver 115, rather than downloading chromedriver 117 to match your browser (which is the default behavior).
(NOTE that for some [Syntax Formats](https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/syntax_formats.md), the driver version is passed via method arg: `driver_version="VERSION"`) | closed | 2023-09-24T13:35:30Z | 2023-09-26T01:35:51Z | https://github.com/seleniumbase/SeleniumBase/issues/2136 | [
"bug"
] | mdmintz | 1 |
mljar/mljar-supervised | scikit-learn | 400 | How to know the order of classes for multiclass problem when using predict_proba? | Assume I have a multiclass classification problem where my target `y` in the training data is a 1D-vector with strings for the labels. In the example below, the labels can be `['Fair', 'Good', 'Ideal', 'Premium', 'Very Good']`.
After fitting a multiclass model given this `y`, I want to use the `predict_proba` function. This function gives me a NumPy array with shape (n_rows, 5) because there are 5 classes. The problem is that I don't know which level of the second dimension corresponds to which class.
**Question:**
How do I find out which level of the second dimension corresponds to which class?
Maybe it would be better to return a data frame with columns representing class labels here? Or to let the user specify the order of class labels somehow? Or to force the user to provide the target in a (n_rows, 5) format after one-hot encoding?
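Independent of AutoML internals, the ambiguity disappears once the columns are keyed by label. A minimal sketch with an assumed label order:

```python
labels = ['Fair', 'Good', 'Ideal', 'Premium', 'Very Good']  # assumed column order
pred_row = [0.1, 0.2, 0.4, 0.2, 0.1]                        # one row of predict_proba

by_label = dict(zip(labels, pred_row))
print(by_label['Ideal'])  # 0.4
```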
**Example:**
```python
import pandas as pd
from supervised import AutoML

# Import data
url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/diamonds.csv"
df = pd.read_csv(url)
display(df.head())

# Split in train and target
x = df.drop(columns=["cut"])
y = df.cut.to_numpy()

# Fit model
model = AutoML(
    mode="Perform",
    eval_metric="logloss",
    explain_level=0,
    total_time_limit=60,
    results_path=None,
    ml_task="multiclass_classification",
)
model.fit(x, y)

# Predict probabilities for training data
pred = model.predict_proba(x)
print(pred.shape)
print(pred)
```
| closed | 2021-05-22T13:56:24Z | 2021-06-08T10:55:57Z | https://github.com/mljar/mljar-supervised/issues/400 | [
"docs"
] | juliuskittler | 2 |
pydantic/FastUI | pydantic | 241 | Plan for adding remark-math for math formula rendering in markdown? | As titled | open | 2024-03-11T06:01:43Z | 2024-03-14T10:25:19Z | https://github.com/pydantic/FastUI/issues/241 | [] | zhoubin-me | 3 |
python-gino/gino | asyncio | 439 | Load models from joined query automatically | ### Description
Hello. Thanks for Gino, it looks awesome! I gathered that Gino cannot yet load rows into models if joins are used in a query. Is that so? If yes, do you plan to add such a feature, and is it even feasible, at least for simple cases?
| closed | 2019-02-13T20:16:31Z | 2019-03-03T09:10:31Z | https://github.com/python-gino/gino/issues/439 | [
"question"
] | WouldYouKindly | 4 |
davidteather/TikTok-Api | api | 652 | 'TikTokApi' object has no attribute 'region' | When I tried `api.by_hashtag('test')` or a few other functions, I got the error: `'TikTokApi' object has no attribute 'region'`.
However, `api.get_user('test')` works for me.
A few other functions also run into the 'region' error, for example:
```
api.by_trending()
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-16-f4af7f86df90> in <module>
----> 1 api.by_trending()

~\anaconda3\lib\site-packages\TikTokApi\tiktok.py in by_trending(self, count, **kwargs)
    424         }
    425         api_url = "{}api/recommend/item_list/?{}&{}".format(
--> 426             BASE_URL, self.__add_url_params__(), urlencode(query)
    427         )
    428         res = self.get_data(url=api_url, **kwargs)

~\anaconda3\lib\site-packages\TikTokApi\tiktok.py in __add_url_params__(self)
   1653             "device_platform": "web_mobile",
   1654             # "device_id": random.randint(),
-> 1655             "region": self.region or "US",
   1656             "priority_region": "",
   1657             "os": "ios",

AttributeError: 'TikTokApi' object has no attribute 'region'
``` | closed | 2021-08-06T06:05:56Z | 2022-05-04T09:39:47Z | https://github.com/davidteather/TikTok-Api/issues/652 | [] | michael01810 | 8 |
huggingface/datasets | pytorch | 7,112 | cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0 | ### Describe the bug
!pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
to solve above error
!pip install pyarrow==14.0.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.21.0 requires pyarrow>=15.0.0, but you have pyarrow 14.0.1 which is incompatible.
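A possible interim workaround is pinning both sides to versions whose constraints overlap (the pins below are assumptions; verify each package's declared `pyarrow` bounds before using them):

```text
# constraints.txt (hypothetical pins)
pyarrow==14.0.1    # satisfies cudf-cu12 (<15.0.0a0) and ibis-framework (<16)
datasets==2.19.1   # an older release that does not yet require pyarrow>=15
```

then install with `pip install -c constraints.txt datasets`.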
### Steps to reproduce the bug
!pip install datasets>=2.19.1
### Expected behavior
run without dependency error
### Environment info
Diffusers version: 0.31.0.dev0
Platform: Linux-6.1.85+-x86_64-with-glibc2.35
Running on Google Colab?: Yes
Python version: 3.10.12
PyTorch version (GPU?): 2.3.1+cu121 (True)
Flax version (CPU?/GPU?/TPU?): 0.8.4 (gpu)
Jax version: 0.4.26
JaxLib version: 0.4.26
Huggingface_hub version: 0.23.5
Transformers version: 4.42.4
Accelerate version: 0.32.1
PEFT version: 0.7.0
Bitsandbytes version: not installed
Safetensors version: 0.4.4
xFormers version: not installed
Accelerator: Tesla T4, 15360 MiB
Using GPU in script?:
Using distributed or parallel set-up in script?: | open | 2024-08-20T08:13:55Z | 2024-09-20T15:30:03Z | https://github.com/huggingface/datasets/issues/7112 | [] | SoumyaMB10 | 2 |
ultralytics/ultralytics | pytorch | 18,710 | Which hyperparameters are suitable for me? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
Hello. I've already tried fine-tuning and also trained YOLOv11 from scratch, but I have the following problem. Your pretrained model works well on the `car` class in some scenes that I need, but poorly in others that I also need. When I fine-tune your pretrained model, the quality drops in scenes where your model already did well, while scenes where it recognized nothing become fine. Fine-tuning somehow degrades what was already good and improves what was bad. I want to adapt YOLOv11 to work at night.
Can you tell me what hyperparameters I need to set so that everything works the way I need? YOLOv4 just does what it needs to do for some reason, but I want a newer version of YOLO. Maybe I need to freeze something or turn on augmentation?
Here is my training startup configuration:
```
task: detect
mode: train
model: yolov11m.yaml
data: ./yolov11_custom.yaml
epochs: 500
time: null
patience: 100
batch: 32
imgsz: 640
save: true
save_period: -1
val_period: 1
cache: false
device: 0
workers: 8
project: /YOLOv11_m_night_640
name: yolov11_custom_night
exist_ok: false
pretrained: true
optimizer: auto
verbose: true
seed: 0
deterministic: true
single_cls: false
rect: false
cos_lr: false
close_mosaic: 10
resume: false
amp: true
fraction: 1.0
profile: false
freeze: null
multi_scale: false
overlap_mask: true
mask_ratio: 4
dropout: 0.0
val: true
split: val
save_json: false
save_hybrid: false
conf: null
iou: 0.7
max_det: 300
half: false
dnn: false
plots: true
source: null
vid_stride: 1
stream_buffer: false
visualize: false
augment: false
agnostic_nms: false
classes: null
retina_masks: false
embed: null
show: false
save_frames: false
save_txt: false
save_conf: false
save_crop: false
show_labels: true
show_conf: true
show_boxes: true
line_width: null
format: torchscript
keras: false
optimize: false
int8: false
dynamic: false
simplify: false
opset: null
workspace: 4
nms: false
lr0: 0.01
lrf: 0.01
momentum: 0.937
weight_decay: 0.0005
warmup_epochs: 3.0
warmup_momentum: 0.8
warmup_bias_lr: 0.1
box: 7.5
cls: 0.5
dfl: 1.5
pose: 12.0
kobj: 1.0
label_smoothing: 0.0
nbs: 64
hsv_h: 0.015
hsv_s: 0.7
hsv_v: 0.4
degrees: 0.0
translate: 0.1
scale: 0.5
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
bgr: 0.0
mosaic: 1.0
mixup: 0.0
copy_paste: 0.0
auto_augment: randaugment
erasing: 0.4
crop_fraction: 1.0
cfg: null
tracker: botsort.yaml
save_dir: /YOLOv11_m_night_640
```
my `yolov11_custom.yaml`:
```
path: ./data
train: ./data/train.txt
val: /data/val.txt
# Classes
names:
0: trailer
1: train
2: trafficlight
3: sign
4: bus
5: truck
6: person
7: bicycle
8: motorcycle
9: car
10: streetlight
```
@glenn-jocher @Y-T-G and others. Please help me.
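I was also considering adjusting these keys in the same training config (the values below are my own guesses, not tuned recommendations; the keys should be verified against the Ultralytics version in use):

```yaml
freeze: 10     # keep the first 10 (backbone) layers at their pretrained weights
lr0: 0.001     # lower initial learning rate so fine-tuning disturbs pretrained weights less
hsv_v: 0.6     # stronger brightness jitter to better cover night scenes
mosaic: 0.5    # weaker mosaic, in case it hurts scenes the pretrained model already handles
```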
### Additional
_No response_ | open | 2025-01-16T11:51:54Z | 2025-01-24T06:07:24Z | https://github.com/ultralytics/ultralytics/issues/18710 | [
"question",
"detect"
] | Egorundel | 35 |
jupyterhub/repo2docker | jupyter | 843 | Failures to install readtext package | Hi,
I cannot install the readtext package on Binder. Here's part of the error message:
Configuration failed because poppler-cpp was not found. Try installing:
* deb: libpoppler-cpp-dev (Debian, Ubuntu, etc)
* On Ubuntu 16.04 or 18.04 use this PPA:
sudo add-apt-repository -y ppa:cran/poppler
sudo apt-get update
sudo sudo apt-get install -y libpoppler-cpp-dev
* rpm: poppler-cpp-devel (Fedora, CentOS, RHEL)
* csw: poppler_dev (Solaris)
* brew: poppler (Mac OSX)
If poppler-cpp is already installed, check that 'pkg-config' is in your
PATH and PKG_CONFIG_PATH contains a poppler-cpp.pc file. If pkg-config
is unavailable you can set INCLUDE_DIR and LIB_DIR manually via:
R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...'
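For reference, repo2docker supports an `apt.txt` file in the repository root listing Debian packages to install before the build; something along these lines should provide the missing poppler headers:

```text
# apt.txt (one Debian package per line)
libpoppler-cpp-dev
pkg-config
```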
| closed | 2020-02-05T10:01:03Z | 2020-02-26T22:36:50Z | https://github.com/jupyterhub/repo2docker/issues/843 | [] | zwguo95 | 1 |
erdewit/ib_insync | asyncio | 397 | Crypto and Fractional Size | First off, thanks for the awesome library. I originally gave the native api a shot and it was a nightmare trying to navigate.
I am looking to trade crypto via the API, but am getting the following error around fractional size rules. It sounds like the IB API doesn't allow fractional trading; however, the error below suggests that upgrading to 163. will solve this issue. The confusing part is that the IB API version is currently v9.72+.
Do you have any workarounds for this error, or do you know what the upgrade to 163. refers to?

```
from ib_insync import *
util.startLoop()
ib = IB()
ib.connect('127.0.0.1', 7496, clientId=1, readonly=True)
contract = Contract(secType='CRYPTO',
conId=479624278,
symbol='BTC',
currency='USD',
localSymbol='BTC.USD',
tradingClass='BTC',
exchange='PAXOS')
bars = ib.reqHistoricalData(
contract, endDateTime='', durationStr='30 D',
barSizeSetting='1 day', whatToShow='MIDPOINT', useRTH=False)
``` | closed | 2021-10-05T23:51:14Z | 2022-08-13T10:00:32Z | https://github.com/erdewit/ib_insync/issues/397 | [] | fletch-man | 1 |
Farama-Foundation/PettingZoo | api | 691 | [Proposal] Fix pyright code checking | ### Proposal
Right now, `continue-on-error` is set to `true` in Linux tests for pyright checking. All of the errors are stemming from `utils/env.py`, and not all of them are solvable because of some stuff with gym. It would be great if we can set `continue-on-error` to `false` and have things pass tests. | closed | 2022-05-02T22:46:58Z | 2022-10-13T10:45:09Z | https://github.com/Farama-Foundation/PettingZoo/issues/691 | [
"bug",
"enhancement",
"help wanted",
"dependencies"
] | jjshoots | 2 |
numba/numba | numpy | 9,519 | [Feature Request] `key_equal`, `copy_key`, `zero_key` in dict is slower than direct assignment if key type is primitive | Hi, I noticed an unoptimized situation that `key_equal`, `copy_key`, `zero_key` in dict are slower than direct assignment if key type is primitive. The root cause is if key_type doesn't contain meminfo, then `key_equal` will rollback to using `memcmp`, which is pretty slow compared to directly `this_key == an_integer`. Other two functions will rollback to using `memcpy`, which is also slow.
https://github.com/numba/numba/blob/a0605597430bb12c434dd116bc5eb84fb30513e0/numba/cext/dictobject.c#L448
https://github.com/numba/numba/blob/a0605597430bb12c434dd116bc5eb84fb30513e0/numba/cext/dictobject.c#L434
I think we can do more in this intrinsic function, including generating the corresponding specialized functions (i.e., `key_equal`, `copy_key`, `zero_key`) for primitive types. I have already tested this in an internal use case; the optimization boosts performance by at least 5%~10% when using numba's typed.Dict heavily (i.e., lots of dict lookup and insert operations).
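To illustrate the gap (a plain-Python analogy of the two code paths, not numba's actual machinery): the generic path compares raw key bytes, while a specialized path can compare the integers directly.

```python
import struct

def key_equal_generic(a: int, b: int) -> bool:
    # analogous to the memcmp fallback: compare raw 8-byte representations
    return struct.pack("<q", a) == struct.pack("<q", b)

def key_equal_specialized(a: int, b: int) -> bool:
    # analogous to emitting a direct integer comparison for primitive keys
    return a == b

# Both agree on the result; the specialized form skips the pack-and-compare overhead.
assert key_equal_generic(42, 42) == key_equal_specialized(42, 42)
assert key_equal_generic(42, 43) == key_equal_specialized(42, 43)
```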
https://github.com/numba/numba/blob/a0605597430bb12c434dd116bc5eb84fb30513e0/numba/typed/dictobject.py#L264 | open | 2024-04-01T18:30:44Z | 2024-05-02T01:56:42Z | https://github.com/numba/numba/issues/9519 | [
"enhancement"
] | dlee992 | 3 |
microsoft/hummingbird | scikit-learn | 20 | Simplify convert_sklearn API | In its current implementation, converting a sklearn model looks like this:
```python
convert_sklearn(model, initial_types=[('input', FloatTensorType([4, 3]))])
```
but we actually don't need the input type specification (this is more of an ONNX converter thing). So we can have something like:
```python
convert_sklearn(model)
```
which is nice and short. The problem is that XGBoostRegressor models do not surface information about the number of input features (while XGBoostClassifier does). So if we go with the above API, we will need a workaround for XGBoostRegressor. One possibility is to have the following specifically for XGBoostRegressor models:
```python
extra_config["n_features"] = 200
pytorch_model = convert_sklearn(model, extra_config=extra_config)
```
Another possibility is to pass some input data as for other converters:
```python
pytorch_model = convert_sklearn(model, some_input_data)
```
One last possibility is to have a different API for each converter (Sklearn, LightGBM and XGBoost, as ONNXMLTools does right now). Then for Sklearn we will have:
```python
pytorch_model = convert_sklearn(model)
```
For LightGBM we will have:
```python
pytorch_model = convert_lightgbm(model)
```
And for XGBoost we will have to either pass an extra param or the input data. For example:
```python
pytorch_model = convert_xgboost(model, some_input_data)
```
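A sketch of how the XGBoost entry point could resolve the feature count from any of these sources (a hypothetical helper, not Hummingbird's actual implementation):

```python
def resolve_n_features(model, test_input=None, extra_config=None):
    """Determine the input width for models that may not expose it."""
    extra_config = extra_config or {}
    # 1. Prefer an explicit user-provided value.
    if "n_features" in extra_config:
        return extra_config["n_features"]
    # 2. Fall back to the model's own metadata when available
    #    (e.g. XGBoostClassifier surfaces it, XGBoostRegressor may not).
    n = getattr(model, "n_features_in_", None)
    if n is not None:
        return n
    # 3. Finally, infer it from a sample input's shape.
    if test_input is not None:
        return len(test_input[0])
    raise ValueError(
        "Cannot infer the number of features: pass extra_config['n_features'] "
        "or some sample input data."
    )
```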
| closed | 2020-04-06T23:15:25Z | 2020-04-07T22:32:48Z | https://github.com/microsoft/hummingbird/issues/20 | [] | interesaaat | 2 |
dask/dask | numpy | 11,679 | dask shuffle pyarrow.lib.ArrowTypeError: struct fields don't match or are in the wrong order | Hello, I ran into a problem when shuffling data among 160 dask partitions.
I get the error when each partition contains 200 samples, but it goes away when each contains 400 samples or more. I would really appreciate it if someone could help me.
```bash
pyarrow.lib.ArrowTypeError: struct fields don't match or are in the wrong orders Input fields: struct<image_url: struct<url: string>, text: string, type: string> output fields: struct<text: string, type: string, image_url: struct<url: string>>
```
**Environment**:
- Dask version: '2024.12.1'
- Python version: '3.10'
| open | 2025-01-17T22:27:22Z | 2025-03-24T02:06:10Z | https://github.com/dask/dask/issues/11679 | [
"dataframe",
"needs attention",
"bug",
"dask-expr"
] | MikeChenfu | 0 |
aio-libs/aiohttp | asyncio | 10,027 | AssertionError | assert not url.absolute raisedon a WSS URL | ### Describe the bug
Discord's [`Get Gateway`](https://discord.com/developers/docs/events/gateway#get-gateway) endpoint returns a `url` field containing `"wss://gateway.discord.gg"`. This WSS URL is used to establish a connection with their gateway. However, connecting to it raises an exception:
```
File "...\aiohttp\client.py", line 467, in _build_url
assert not url.absolute
^^^^^^^^^^^^^^^^^
AssertionError
```
I also tried wrapping the URL with `yarl.URL(wss_url)`, but the same issue occurs.
Side notes:
1. I am testing in both Python 3.12 and 3.13, but would be more favorable to me if there is a fix already for 3.13
2. I may provide other information should you ask relating to it
### To Reproduce
```py
from aiohttp import ClientSession
from asyncio import run
from typing import *
class DiscordWebSocket:
session = lambda: ClientSession(base_url = "https://discord.com/api/v10/")
connection = None # would likely be replaced by DiscordWebSocket.connect()
@classmethod
async def connect(cls) -> NoReturn:
wss : Dict = await cls.get("gateway") # {"url": "wss://gateway.discord.gg"}
async with cls.session() as session:
response = await session.ws_connect(f"{wss['url']}/") # AssertionError
return response # debug stuff lol
@classmethod
async def get(cls, endpoint : str) -> Dict:
async with cls.session() as session:
            response = await session.get(endpoint)
return await response.json()
async def main() -> NoReturn:
print(f"{await DiscordWebSocket.connect() = }")
run(main())
```
1. Retrieve the WSS URL from [` Get Gateway `](https://discord.com/developers/docs/events/gateway#get-gateway) endpoint and pass it to ` async ClientSession.ws_connect() `
### Expected behavior
It *would* ( ? should ? ) print ` await DiscordWebSocket.connect() = <aiohttp.ClientWebSocketResponse ...> ` in the console
### Logs/tracebacks
```python-traceback
> python .\main.py
Traceback (most recent call last):
File "C:\Users\demo\OneDrive\Documents\python\test\main.py", line 32, in <module>
run(main())
File "C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\demo\OneDrive\Documents\python\test\main.py", line 29, in main
print(f"{await WebSocket.connect() = }")
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\demo\OneDrive\Documents\python\test\main.py", line 16, in connect
response = await session.ws_connect(str(URL(f"{wss['url']}/")))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\site-packages\aiohttp\client.py", line 1002, in _ws_connect
resp = await self.request(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\site-packages\aiohttp\client.py", line 535, in _request
url = self._build_url(str_or_url)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\site-packages\aiohttp\client.py", line 467, in _build_url
assert not url.absolute
^^^^^^^^^^^^^^^^
AssertionError
```
### Python Version
```console
$ python --version
Python 3.12.4
Python 3.13.0
```
### aiohttp Version
```console
$ python -m pip show aiohttp
Name: aiohttp
Version: 3.11.7
Summary: Async http client/server framework (asyncio)
Home-page: https://github.com/aio-libs/aiohttp
Author:
Author-email:
License: Apache-2.0
Location: C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\site-packages
Requires: aiohappyeyeballs, aiosignal, attrs, frozenlist, multidict, propcache, yarl
Required-by: discord.py
```
### multidict Version
```console
$ python -m pip show multidict
Version: 6.0.5
Summary: multidict implementation
Home-page: https://github.com/aio-libs/multidict
Author: Andrew Svetlov
Author-email: andrew.svetlov@gmail.com
License: Apache 2
Location: C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\site-packages
Requires:
Required-by: aiohttp, yarl
```
### propcache Version
```console
$ python -m pip show propcache
Name: propcache
Version: 0.2.0
Summary: Accelerated property cache
Home-page: https://github.com/aio-libs/propcache
Author: Andrew Svetlov
Author-email: andrew.svetlov@gmail.com
License: Apache-2.0
Location: C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\site-packages
Requires:
Required-by: aiohttp, yarl
```
### yarl Version
```console
$ python -m pip show yarl
Name: yarl
Version: 1.18.0
Summary: Yet another URL library
Home-page: https://github.com/aio-libs/yarl
Author: Andrew Svetlov
Author-email: andrew.svetlov@gmail.com
License: Apache-2.0
Location: C:\Users\demo\AppData\Local\Programs\Python\Python312\Lib\site-packages
Requires: idna, multidict, propcache
Required-by: aiohttp
```
### OS
Windows 10
### Related component
Client
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct | closed | 2024-11-23T10:49:59Z | 2024-12-02T14:32:25Z | https://github.com/aio-libs/aiohttp/issues/10027 | [
"invalid",
"client"
] | demoutreiii | 4 |
DistrictDataLabs/yellowbrick | scikit-learn | 979 | Visualize the results without fitting the model | Let's say I have to visualize a confusion matrix.
I can use yellowbrick with LogisticRegression and visualize it like this:
https://www.scikit-yb.org/en/latest/api/classifier/confusion_matrix.html
```
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split as tts
from sklearn.linear_model import LogisticRegression
from yellowbrick.classifier import ConfusionMatrix
iris = load_iris()
X = iris.data
y = iris.target
classes = iris.target_names
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2)
model = LogisticRegression(multi_class="auto", solver="liblinear")
iris_cm = ConfusionMatrix(
model, classes=classes,
label_encoder={0: 'setosa', 1: 'versicolor', 2: 'virginica'}
)
iris_cm.fit(X_train, y_train)
iris_cm.score(X_test, y_test)
iris_cm.show()
```
But most of the time I use scikit-learn and I already have the confusion matrix.
For example:
```
cm = np.array([[56750, 114],
[ 95, 3]])
```
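Given such a precomputed matrix, it can at least be rendered with labels in plain Python (just a text fallback sketch, not yellowbrick's API):

```python
def format_confusion_matrix(cm, labels):
    """Render a precomputed confusion matrix as an aligned text table."""
    header = [""] + [f"pred {l}" for l in labels]
    rows = [header] + [
        [f"true {l}"] + [str(v) for v in row] for l, row in zip(labels, cm)
    ]
    widths = [max(len(r[i]) for r in rows) for i in range(len(header))]
    return "\n".join(
        "  ".join(cell.rjust(w) for cell, w in zip(row, widths)) for row in rows
    )

print(format_confusion_matrix([[56750, 114], [95, 3]], ["neg", "pos"]))
```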
Can we now simply use this result in yellowbrick, give it label names, and visualize it? | closed | 2019-10-12T18:26:54Z | 2019-10-12T18:48:32Z | https://github.com/DistrictDataLabs/yellowbrick/issues/979 | [
"type: question"
] | bhishanpdl | 1 |
mljar/mljar-supervised | scikit-learn | 618 | AutoML import fails due to dependency ImportError: cannot import name 'Concatenate' from 'typing_extensions' | I installed with `pip install mljar-supervised` and manually fixed a dependency conflict between numba and the numpy version, but when I try `from supervised.automl import AutoML`, it fails with an ImportError deep in the dependency chain.
The complete traceback:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-13-1488fde12bbc> in <module>
----> 1 from supervised.automl import AutoML # mljar
~/.local/lib/python3.9/site-packages/supervised/__init__.py in <module>
1 __version__ = "0.11.5"
2
----> 3 from supervised.automl import AutoML
~/.local/lib/python3.9/site-packages/supervised/automl.py in <module>
1 import logging
----> 2 from supervised.base_automl import BaseAutoML
3 from supervised.utils.config import LOG_LEVEL
4
5 # libraries for type hints
~/.local/lib/python3.9/site-packages/supervised/base_automl.py in <module>
27 from supervised.callbacks.learner_time_constraint import LearnerTimeConstraint
28 from supervised.callbacks.total_time_constraint import TotalTimeConstraint
---> 29 from supervised.ensemble import Ensemble
30 from supervised.exceptions import AutoMLException
31 from supervised.exceptions import NotTrainedException
~/.local/lib/python3.9/site-packages/supervised/ensemble.py in <module>
12 from supervised.algorithms.registry import BINARY_CLASSIFICATION
13 from supervised.algorithms.registry import MULTICLASS_CLASSIFICATION
---> 14 from supervised.model_framework import ModelFramework
15 from supervised.utils.metric import Metric
16 from supervised.utils.config import LOG_LEVEL
~/.local/lib/python3.9/site-packages/supervised/model_framework.py in <module>
34 from supervised.utils.learning_curves import LearningCurves
35
---> 36 import optuna
37 import joblib
38
~/.local/lib/python3.9/site-packages/optuna/__init__.py in <module>
3 from optuna import integration
4 from optuna import logging
----> 5 from optuna import multi_objective
6 from optuna import pruners
7 from optuna import samplers
~/.local/lib/python3.9/site-packages/optuna/multi_objective/__init__.py in <module>
1 from optuna._imports import _LazyImport
----> 2 from optuna.multi_objective import samplers
3 from optuna.multi_objective import study
4 from optuna.multi_objective import trial
5 from optuna.multi_objective.study import create_study
~/.local/lib/python3.9/site-packages/optuna/multi_objective/samplers/__init__.py in <module>
----> 1 from optuna.multi_objective.samplers._adapter import _MultiObjectiveSamplerAdapter
2 from optuna.multi_objective.samplers._base import BaseMultiObjectiveSampler
3 from optuna.multi_objective.samplers._motpe import MOTPEMultiObjectiveSampler
4 from optuna.multi_objective.samplers._nsga2 import NSGAIIMultiObjectiveSampler
5 from optuna.multi_objective.samplers._random import RandomMultiObjectiveSampler
~/.local/lib/python3.9/site-packages/optuna/multi_objective/samplers/_adapter.py in <module>
4 from optuna import multi_objective
5 from optuna.distributions import BaseDistribution
----> 6 from optuna.samplers import BaseSampler
7 from optuna.study import Study
8 from optuna.trial import FrozenTrial
~/.local/lib/python3.9/site-packages/optuna/samplers/__init__.py in <module>
----> 1 from optuna.samplers import nsgaii
2 from optuna.samplers._base import BaseSampler
3 from optuna.samplers._brute_force import BruteForceSampler
4 from optuna.samplers._cmaes import CmaEsSampler
5 from optuna.samplers._grid import GridSampler
~/.local/lib/python3.9/site-packages/optuna/samplers/nsgaii/__init__.py in <module>
----> 1 from optuna.samplers.nsgaii._crossovers._base import BaseCrossover
2 from optuna.samplers.nsgaii._crossovers._blxalpha import BLXAlphaCrossover
3 from optuna.samplers.nsgaii._crossovers._sbx import SBXCrossover
4 from optuna.samplers.nsgaii._crossovers._spx import SPXCrossover
5 from optuna.samplers.nsgaii._crossovers._undx import UNDXCrossover
~/.local/lib/python3.9/site-packages/optuna/samplers/nsgaii/_crossovers/_base.py in <module>
3 import numpy as np
4
----> 5 from optuna.study import Study
6
7
~/.local/lib/python3.9/site-packages/optuna/study/__init__.py in <module>
2 from optuna.study._study_direction import StudyDirection
3 from optuna.study._study_summary import StudySummary
----> 4 from optuna.study.study import copy_study
5 from optuna.study.study import create_study
6 from optuna.study.study import delete_study
~/.local/lib/python3.9/site-packages/optuna/study/study.py in <module>
23 from optuna import pruners
24 from optuna import samplers
---> 25 from optuna import storages
26 from optuna import trial as trial_module
27 from optuna._convert_positional_args import convert_positional_args
~/.local/lib/python3.9/site-packages/optuna/storages/__init__.py in <module>
3 from optuna._callbacks import RetryFailedTrialCallback
4 from optuna.storages._base import BaseStorage
----> 5 from optuna.storages._cached_storage import _CachedStorage
6 from optuna.storages._heartbeat import fail_stale_trials
7 from optuna.storages._in_memory import InMemoryStorage
~/.local/lib/python3.9/site-packages/optuna/storages/_cached_storage.py in <module>
16 from optuna.storages import BaseStorage
17 from optuna.storages._heartbeat import BaseHeartbeat
---> 18 from optuna.storages._rdb.storage import RDBStorage
19 from optuna.study._frozen import FrozenStudy
20 from optuna.study._study_direction import StudyDirection
~/.local/lib/python3.9/site-packages/optuna/storages/_rdb/storage.py in <module>
25 from optuna.storages._base import DEFAULT_STUDY_NAME_PREFIX
26 from optuna.storages._heartbeat import BaseHeartbeat
---> 27 from optuna.storages._rdb.models import TrialValueModel
28 from optuna.study._frozen import FrozenStudy
29 from optuna.study._study_direction import StudyDirection
~/.local/lib/python3.9/site-packages/optuna/storages/_rdb/models.py in <module>
6 from typing import Tuple
7
----> 8 from sqlalchemy import asc
9 from sqlalchemy import case
10 from sqlalchemy import CheckConstraint
~/.local/lib/python3.9/site-packages/sqlalchemy/__init__.py in <module>
10 from typing import Any
11
---> 12 from . import util as _util
13 from .engine import AdaptedConnection as AdaptedConnection
14 from .engine import BaseRow as BaseRow
~/.local/lib/python3.9/site-packages/sqlalchemy/util/__init__.py in <module>
13
14 from . import preloaded as preloaded
---> 15 from ._collections import coerce_generator_arg as coerce_generator_arg
16 from ._collections import coerce_to_immutabledict as coerce_to_immutabledict
17 from ._collections import column_dict as column_dict
~/.local/lib/python3.9/site-packages/sqlalchemy/util/_collections.py in <module>
37
38 from ._has_cy import HAS_CYEXTENSION
---> 39 from .typing import Literal
40 from .typing import Protocol
41
~/.local/lib/python3.9/site-packages/sqlalchemy/util/typing.py in <module>
35 if True: # zimports removes the tailing comments
36 from typing_extensions import Annotated as Annotated # 3.8
---> 37 from typing_extensions import Concatenate as Concatenate # 3.10
38 from typing_extensions import (
39 dataclass_transform as dataclass_transform, # 3.11,
ImportError: cannot import name 'Concatenate' from 'typing_extensions' (/home/myusername/.local/lib/python3.9/site-packages/typing_extensions.py)
```
| open | 2023-05-15T11:00:22Z | 2023-05-15T14:30:43Z | https://github.com/mljar/mljar-supervised/issues/618 | [] | xekl | 2 |
JaidedAI/EasyOCR | machine-learning | 731 | Fine-tuning EasyOCR using Persian handwritten data | Can EasyOCR be fine-tuned using Persian handwritten data? | open | 2022-05-19T08:41:07Z | 2022-05-19T08:41:07Z | https://github.com/JaidedAI/EasyOCR/issues/731 | [] | Nadiam75 | 0 |
marcomusy/vedo | numpy | 1,133 | Axisymmetric mesh with extrude | Hello Marco,
I wanted to make an axisymmetric mesh and tried to use the extrude function for it, as recommended by you. I am now wondering how to 'sweep' the shape that I want to use as an outline (in my case it's a spline). If I set a single angle, the outline is rotated, but the connection between the two outlines is straight, which obviously doesn't lead to round meshes.
I searched for an example but couldn't find one, sorry if I missed it.

```py
from vedo import *
import vedo
plotter = Plotter(axes = 2)
points= [[0,0,15],[1,0,15],[2,0,14.5],[2.7,0,10],[1,0,1],[0.3,0,0],[0,0,0],
[-0.3,0,0],[-0.5,0,0.2],[-2.7,0,10],[-2,0,14.5],[-1,0,15],[-0,0,15]]
spline = Spline(points)
mslices= [s.triangulate() for s in spline.join_segments()]
slice = merge(mslices).color('red')
extruded = slice.extrude(zshift=0.0,rotation=90,dr=0,cap=True,res=1).color('grey')
extruded2 = slice.extrude(zshift=0.0,rotation=170,dr=0,cap=True,res=1).color('green')
plotter.show(extruded, extruded2, slice)
``` | closed | 2024-06-04T08:42:39Z | 2024-06-10T07:59:34Z | https://github.com/marcomusy/vedo/issues/1133 | [] | IsabellaPa | 4 |
Lightning-AI/LitServe | rest-api | 282 | More complex model management (multiple models, model reloading etc...) | ## 🚀 Feature
Supporting model reloads (when a new version is available) and multiple models.
### Motivation
Other servers support this, so offering it would make LitServe more attractive.
### Pitch
Right now it's obvious how to serve one model, but what if there are multiple ones, with the request (its binary payload or HTTP arguments) telling which model should be used?
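A sketch of the kind of registry/dispatch layer this implies (plain Python, not an existing LitServe API):

```python
class ModelRegistry:
    """Map (name, version) pairs to loaded models; latest version wins by default."""

    def __init__(self):
        self._models = {}

    def register(self, name, version, model):
        self._models[(name, version)] = model

    def get(self, name, version=None):
        if version is None:
            # Pick the newest registered version when the request doesn't specify one.
            version = max(v for (n, v) in self._models if n == name)
        return self._models[(name, version)]

# On each request, the server could route by fields the client sends, e.g.:
# model = registry.get(request["model"], request.get("version"))
```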
### Alternatives
Run N instances for the N models present at a given time, but if a new model appears, that won't work.
### Additional context
We have an internal C++ server that supports this; torch.serve supports it too, via what I believe they call an orchestrator.
| closed | 2024-09-19T21:32:56Z | 2024-10-07T11:04:50Z | https://github.com/Lightning-AI/LitServe/issues/282 | [
"enhancement",
"help wanted"
] | bsergean | 2 |
Johnserf-Seed/TikTokDownload | api | 31 | Pretty young lady submissions | 小橙子 Douyin profile: https://v.douyin.com/evLNohM/
黑色闪光 Douyin profile: https://v.douyin.com/evLhSNB/ | open | 2021-07-27T08:42:18Z | 2021-07-28T07:04:41Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/31 | [
"需求建议(enhancement)"
] | dongbulang | 0 |
browser-use/browser-use | python | 732 | Assessment of Microsoft OmniParser 2.0 | ### Problem Description
Microsoft just released its OmniParser 2.0 model. Let's do assessment whether/if/how much it can be leveraged to advance BrowseUse.
This in turn fixes https://github.com/browser-use/browser-use/issues/206 so that would be awesome!
### Proposed Solution
Microsoft OmniParser 2.0.
Compare the performance of our extraction layer compared to OmniParser (for example, for captcha solving, etc). A hybrid approach would be awesome (if beneficial).
### Additional Context
There is a bounty of $100. | open | 2025-02-15T13:21:30Z | 2025-03-03T02:40:03Z | https://github.com/browser-use/browser-use/issues/732 | [
"enhancement",
"💎 Bounty"
] | vishaldwdi | 9 |
reloadware/reloadium | pandas | 19 | Pickle fails in Reloadium (at least from within PyCharm plugin) | **Describe the bug**
Pickling fails when Reloadium is used to run the following code. Non-reloadium runs fine.
**To Reproduce**
```
from builtins import *
import pickle
import jsonpickle
class A:
def __init__(self, *args, **kwargs):
self.b = None
def test_serializer(obj, pickler):
pickled_doc = pickler.dumps(obj)
new_doc = pickler.loads(pickled_doc)
if type(obj) != type(new_doc):
print('ERROR: Serialization changed object type.')
print(f' type: {type(new_doc)} does not match original type: {type(obj)}')
print(' ', pickler)
else:
print('GOOD: Serialization preserved object type.')
print(f' type: {type(new_doc)} matches original type: {type(obj)}')
print(' ', pickler)
# As of 2022-06-05 Reloadium plugin ver. 0.8.2 (shows Reloadium 0.8.8 when running) fails, but non-Reloadium works.
# Running PyCharm 2021.3.1 Community Edition.
if __name__ == '__main__':
# Try JSON first.
json_orig = A()
test_serializer(json_orig, jsonpickle)
# Second, try plain pickle.
py_orig = A()
test_serializer(py_orig, pickle)
```
**Expected behavior**
The unpickled object's type should match the original pickled type, but under Reloadium it changes. Tested with both plain 'pickle' and 'jsonpickle'. A normal run works, but running through Reloadium fails.
**Desktop (please complete the following information):**
- OS: Windows
- OS version: 10
- Reloadium package version: 0.8.8
- PyCharm plugin version: 0.8.2
- Editor: PyCharm
- Run mode: Run & Debug
| closed | 2022-06-05T21:48:07Z | 2022-06-16T10:19:25Z | https://github.com/reloadware/reloadium/issues/19 | [] | erjo-mojo | 1 |
netbox-community/netbox | django | 18,780 | Connect to external databases | ### NetBox version
4.1.11
### Feature type
Data model extension
### Proposed functionality
I have a very specific use case: I'm developing a plugin, and I need to query an external DB.
I thought about being able to define the connection in `configuration.py` and then merging it with NetBox's default database settings.
Here's POC:
https://github.com/fmluizao/netbox/commit/ce478be45646d73dca69a522d72e4933187c2ad3
Would you accept a PR for this?
### Use case
You can create a model in a plugin which can query other databases, like
```python
class MyPluginModel(models.model):
# ...
MyPluginModel.objects.using('otherdbconnection')
```
Maybe we can even define a custom router to avoid `using`:
https://docs.djangoproject.com/en/5.2/topics/db/multi-db/#using-routers
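A minimal router along those lines might look like this (the connection alias and app label are assumptions matching the example above):

```python
class ExternalDBRouter:
    """Send reads/writes for the plugin's models to the external connection."""

    route_app_labels = {"myplugin"}

    def db_for_read(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return "otherdbconnection"
        return None  # fall through to the default database

    def db_for_write(self, model, **hints):
        if model._meta.app_label in self.route_app_labels:
            return "otherdbconnection"
        return None
```

With `DATABASE_ROUTERS = ["path.to.ExternalDBRouter"]` in settings, plugin queries would no longer need an explicit `.using()`.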
### Database changes
_No response_
### External dependencies
_No response_ | closed | 2025-02-28T18:06:12Z | 2025-03-17T17:34:28Z | https://github.com/netbox-community/netbox/issues/18780 | [
"status: accepted",
"type: feature",
"complexity: low"
] | fmluizao | 1 |
bmoscon/cryptofeed | asyncio | 21 | L3 messages feed and storage | If I was looking to store L3 book data (let's assume with Arctic), wouldn't it be more efficient to create and store a stream of standardized delta messages as opposed to the entire book?
I only ask because the book callback takes `feed`, `pair` and `book` as the inputs. Using that callback for book updates would not provide any information about the updates themselves. I guess a user can just define a custom callback for this, but I figured it would make more sense to just have the BookCallback do this if it was meant to be called for book updates as mentioned in the docs. Many exchanges provide multiple updates per second which would result in the entire book being passed around as opposed to just the changed items.
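To make the idea concrete, a standardized delta stream could be applied to a local book roughly like this (an illustrative sketch, not cryptofeed's actual message format):

```python
# Minimal sketch: apply (side, price, size) deltas to an in-memory book.
# A size of 0 is treated as a level deletion, matching common exchange
# conventions; storing these deltas is far smaller than full snapshots.

def apply_delta(book, side, price, size):
    levels = book[side]
    if size == 0:
        levels.pop(price, None)  # level removed
    else:
        levels[price] = size     # level inserted or updated

book = {"bid": {}, "ask": {}}
apply_delta(book, "bid", 100.0, 2.0)   # new bid level
apply_delta(book, "ask", 101.0, 1.5)   # new ask level
apply_delta(book, "bid", 100.0, 0)     # bid level pulled
```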
Also, if that change does make sense, then as pointed out in my other recently opened issue regarding dropped messages (#20), we would have to generate the missing messages by diffing our current order book with a fresh snapshot.
| closed | 2018-05-09T21:23:52Z | 2018-07-04T21:49:55Z | https://github.com/bmoscon/cryptofeed/issues/21 | [] | rjbks | 10 |
vitalik/django-ninja | django | 1,093 | pydantic2 incompatibility with django-ninja 1.* | Hi there.
I am trying to install django-ninja 1.* (latest) on Linux using pip, and I am constantly having issues. The issues concern an incompatibility between django-ninja and pydantic 2, which raises deprecation warnings.
```
File "/home/olddog/Documents/Python_Scipts/DOM_Webpage/WEBPAGE_Bilengual/Test3/.venv2/lib/python3.10/site-packages/pydantic/_internal/_config.py", line 274, in prepare_config
    warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning)
pydantic.warnings.PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.7/migration/
```
I tried to manually install pydantic binaries after going through the sequential installation (see the attached shell script
[Venv_creation.sh.txt](https://github.com/vitalik/django-ninja/files/14342404/Venv_creation.sh.txt)
) of my requirements.txt file.
**My requirements.txt goes attached to this message.**
[requirements2_mod.txt](https://github.com/vitalik/django-ninja/files/14342368/requirements2_mod.txt)
Anyone experienced this already?
Please be so kind to advice on how to solve this problem.
Thank you
Marco | open | 2024-02-20T09:40:49Z | 2024-03-07T15:59:07Z | https://github.com/vitalik/django-ninja/issues/1093 | [] | MM-cyi | 3 |
kennethreitz/responder | graphql | 209 | Cannot set the same route on two different methods | Hello.
I'm using class-based views, and when I try to set the same route on two different methods, say get and put, each in a different class, I get an assertion error due to route already inserted.
> assert route not in self.routes
> AssertionError
>
If I move both method views under the same class, I do not get the error, but this is not the desired behavior, since the documentation then does not get autogenerated correctly. It seems to only work when the docstring appears directly under the class name definition.
I think the assertion should also take into account the specific method(s) that registered the route.
If a new method is trying to register the same route, it should be ok.
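To sketch what that might look like (illustrative only, not responder's actual internals), the registry could key on the route plus the methods each class handles:

```python
# Hypothetical sketch: allow re-registering a route as long as the
# HTTP methods do not overlap with an earlier registration.

registered = {}  # route -> set of HTTP methods already taken

def add_route(route, methods):
    taken = registered.setdefault(route, set())
    clash = taken & set(methods)
    assert not clash, f"{route} already registered for {sorted(clash)}"
    taken.update(methods)

add_route("/cats", {"GET"})   # GetCatsResource
add_route("/cats", {"PUT"})   # PutCatsResource: same route, new method, ok
```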
The code to reproduce the issue is:
```
import responder
api = responder.API(title='Cats Web Service', version='1.0', openapi='3.0.0',
docs_route='/docs')
@api.route('/cats')
class GetCatsResource:
"""
A Cats endpoint.
---
get:
summary: Obtain cats info.
description: Get info about all cats.
responses:
200:
description: A json with the info for all the cats.
"""
async def on_get(self, req, resp):
resp.text = (f'Obtained HTTP {req.method} request for all cats')
@api.route('/cats')
class PutCatsResource:
"""
A Cats endpoint.
---
put:
summary: Upload cats info.
description: Update/Create info for all cats.
responses:
200:
description: Information was successfully created/updated
500:
description: Server error
"""
async def on_put(self, req, resp):
resp.text = (f'Uploaded HTTP {req.method} request for a bunch of cats')
if __name__ == '__main__':
api.run()
```
Thanks.
-Bob V
| closed | 2018-11-06T21:31:58Z | 2018-11-07T09:02:59Z | https://github.com/kennethreitz/responder/issues/209 | [] | emacsuser123 | 6 |
tflearn/tflearn | data-science | 990 | Cannot feed value of shape (96, 227, 227) for Tensor 'InputData/X:0', which has shape '(?, 227, 227, 1)' | I am trying to use different data in your example:
```
from __future__ import division, print_function, absolute_import
import scipy
import tflearn
from tflearn.data_utils import shuffle, to_categorical
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
dataset_file = 'train'
from tflearn.data_utils import image_preloader
X, Y = image_preloader(dataset_file, image_shape=(227, 227,1), mode='folder', categorical_labels=True,grayscale=True)
# Real-time data preprocessing
img_prep = ImagePreprocessing()
img_prep.add_featurewise_zero_center()
img_prep.add_featurewise_stdnorm()
# Real-time data augmentation
img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle=25.)
# Convolutional network building
network = input_data(shape=[None, 227, 227, 1],
data_preprocessing=img_prep,
data_augmentation=img_aug)
network = conv_2d(network, 32, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 64, 3, activation='relu')
network = conv_2d(network, 64, 3, activation='relu')
network = max_pool_2d(network, 2)
network = fully_connected(network, 512, activation='relu')
network = dropout(network, 0.5)
network = fully_connected(network, 12, activation='softmax')
network = regression(network, optimizer='adam',
loss='categorical_crossentropy',
learning_rate=0.001)
# Train using classifier
model = tflearn.DNN(network, tensorboard_verbose=0)
model.fit(X, Y, n_epoch=50, shuffle=True, validation_set=.1,
show_metric=True, batch_size=96, run_id='cifar10_cnn')
```
I am getting this error:
`Cannot feed value of shape (96, 227, 227) for Tensor 'InputData/X:0', which has shape '(?, 227, 227, 1)'`
The data I have is from https://www.kaggle.com/c/plant-seedlings-classification/data
| closed | 2018-01-04T05:05:21Z | 2018-01-09T04:53:44Z | https://github.com/tflearn/tflearn/issues/990 | [] | Lan131 | 2 |
pallets/quart | asyncio | 224 | Unable to suppress Quart serving logs after Python 3.10 upgrade | I'm moving a Quart webapp from python 3.7 to python 3.10 and I'm suddenly unable to suppress the server logs.
I would expect `getLogger('quart.serving').setLevel(ERROR)` to suppress most logging messages, but after switching to 3.10 I get everything.
Environment:
- Python version: 3.10.9
- Quart version: 0.18.3
| closed | 2023-03-10T16:23:22Z | 2023-10-01T00:20:35Z | https://github.com/pallets/quart/issues/224 | [] | johndonor3 | 11 |
microsoft/nni | data-science | 5,586 | How to import L1FilterPruner ? | **Environment**: VS Code
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc): remote
- Client OS: Windows
- Server OS (for remote mode only): Ubuntu
- Python version: 3.9
- PyTorch/TensorFlow version: PyTorch 1.12
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: No
Hi, I am trying to use L1FilterPruner but I can't import it. Has it been removed? I have tried to import it the way other users did in other issues, but I am not able to.
`from nni.compression.pytorch import L1FilterPruner` | open | 2023-05-29T08:24:23Z | 2023-06-12T10:21:05Z | https://github.com/microsoft/nni/issues/5586 | [] | gkrisp98 | 10 |
lexiforest/curl_cffi | web-scraping | 466 | AsyncSession requests error out in a specific situation | **Describe the bug**
When the first request in the code is removed and only the second request is kept, the data is returned normally.
However, with the first request included, it raises curl_cffi.requests.exceptions.ConnectionError: Failed to perform, curl: (55) Recv failure: Connection was reset.
**To Reproduce**
```
import asyncio
from curl_cffi.requests import AsyncSession
from httpx import AsyncClient
async def curl_cffi_main():
async with AsyncSession() as s:
response_frist = await s.get(
"https://chemrxiv.org/engage/chemrxiv/article-details/673bac22f9980725cfa41e0b"
)
print(response_frist.status_code)
response_second = await s.get(
'https://chemrxiv.org/engage/api-gateway/chemrxiv/assets/orp/resource/item/67204e5883f22e42147e4d99/original/mn-o2-decorated-n-doped-mesoporous-carbon-electrodes-boost-enhanced-removal-of-cu2-and-pb2-ions-from-wastewater-via-a-hybrid-capacitive-deionization-platform.pdf',
)
print(response_second.content)
async def httpx_main():
async with AsyncClient() as s:
response_frist = await s.get(
"https://chemrxiv.org/engage/chemrxiv/article-details/673bac22f9980725cfa41e0b"
)
print(response_frist.status_code)
response_second = await s.get(
'https://chemrxiv.org/engage/api-gateway/chemrxiv/assets/orp/resource/item/67204e5883f22e42147e4d99/original/mn-o2-decorated-n-doped-mesoporous-carbon-electrodes-boost-enhanced-removal-of-cu2-and-pb2-ions-from-wastewater-via-a-hybrid-capacitive-deionization-platform.pdf',
)
print(response_second.content)
if __name__ == '__main__':
asyncio.run(curl_cffi_main())
# asyncio.run(httpx_main())
```
**Expected behavior**
The content of the second request should be returned normally.
**Versions**
- OS: [Windows 11]
- curl_cffi version [0.7.4]
**Additional context**
- I am using async
- I tried httpx and requests; both can fetch it normally
| closed | 2024-12-19T09:04:32Z | 2024-12-19T09:21:12Z | https://github.com/lexiforest/curl_cffi/issues/466 | [
"bug"
] | PythonZhao | 2 |
modin-project/modin | pandas | 6,629 | PERF: HDK triggers LazyProxyCategoricalDtype materialization on merge | Before the merge, HDK checks dtypes and it triggers LazyProxyCategoricalDtype materialization. | closed | 2023-10-04T15:10:43Z | 2023-10-06T10:01:31Z | https://github.com/modin-project/modin/issues/6629 | [
"Performance 🚀",
"HDK"
] | AndreyPavlenko | 0 |
plotly/plotly.py | plotly | 4,829 | add "Zen of Plotly" similar to Narwhals | `import narwhals.this` prints a message about the project's philosophy - it would be a nice addition to Plotly / Plotly Express if `import plotly.this` (or similar) did the same. | open | 2024-10-24T14:33:49Z | 2024-10-24T14:34:04Z | https://github.com/plotly/plotly.py/issues/4829 | [
"feature",
"P3"
] | gvwilson | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 666 | No gui? | i run python demo_toolbox.py and what is returned is:
(voice-clone) S:\path\path\path\path\Real-Time-Voice-Cloning-master>python demo_toolbox.py
S:\path\path\path\path\Real-Time-Voice-Cloning-master\encoder\audio.py:13: UserWarning: Unable to import 'webrtcvad'. This package enables noise removal and is recommended.
warn("Unable to import 'webrtcvad'. This package enables noise removal and is recommended.")
Arguments:
datasets_root: None
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
cpu: False
seed: None
no_mp3_support: False
Error: Model files not found. Follow these instructions to get and install the models:
https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models
which confuses me because no GUI launches and it does not give an error either. | closed | 2021-02-17T00:06:59Z | 2021-02-17T20:22:19Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/666 | [] | ghost | 3 |
lux-org/lux | jupyter | 362 | [BUG] Matplotlib code missing computed data for BarChart, LineChart and ScatterChart | **Describe the bug**
Without `self.code += f"df = pd.DataFrame({str(self.data.to_dict())})\n"`, exported BarChart, LineChart and ScatterChart that contain computed data throw an error.
**To Reproduce**
```
df = pd.read_csv('https://github.com/lux-org/lux-datasets/blob/master/data/hpi.csv?raw=true')
df
```
```
vis = df.recommendation["Occurrence"][0]
vis
print (vis.to_code("matplotlib"))
```
**Expected behavior**
Should render single BarChart
**Screenshots**
<img width="827" alt="Screen Shot 2021-04-15 at 11 22 25 AM" src="https://user-images.githubusercontent.com/11529801/114919503-2b04cc00-9ddd-11eb-90b1-1db3e59caa68.png">
Expected:
<img width="856" alt="Screen Shot 2021-04-15 at 11 22 51 AM" src="https://user-images.githubusercontent.com/11529801/114919507-2d672600-9ddd-11eb-8206-10801c9eb055.png">
| open | 2021-04-15T18:25:33Z | 2021-04-15T20:47:32Z | https://github.com/lux-org/lux/issues/362 | [
"bug"
] | caitlynachen | 0 |
plotly/dash-core-components | dash | 547 | the options description for dcc.dropdown is not clear about the props rules | this will be an improvement for https://github.com/plotly/dash/issues/708 | closed | 2019-05-08T18:26:51Z | 2019-05-09T01:30:51Z | https://github.com/plotly/dash-core-components/issues/547 | [] | byronz | 0 |
deepfakes/faceswap | deep-learning | 710 | dlib does not compile | **To Reproduce**
Steps to reproduce the behavior:
1. Run command "python setup.py -G ...." in INSTALL.md
2. Error messages are shown; there is no "--yes" parameter
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
- OS: Windows 10 64bit
| closed | 2019-04-18T16:51:45Z | 2019-04-18T17:05:19Z | https://github.com/deepfakes/faceswap/issues/710 | [] | bluems | 1 |
davidsandberg/facenet | computer-vision | 920 | How to use Step By Step with Webcam | Hi, I want to use a webcam for this project but I don't know how to run it. When I use facenet.py, nothing happens. | open | 2018-11-14T13:47:41Z | 2018-12-05T06:28:17Z | https://github.com/davidsandberg/facenet/issues/920 | [] | mehradds | 3 |
waditu/tushare | pandas | 1,420 | Public fund list API fund_basic: missing data | Fund basic list data is missing during sync. For example, 070005 嘉实债券 (Harvest Bond) is missing. This data was still present in the June sync. What is the reason? | open | 2020-09-01T02:45:20Z | 2020-09-01T02:45:20Z | https://github.com/waditu/tushare/issues/1420 | [] | simon-zzm | 0 |
open-mmlab/mmdetection | pytorch | 11,997 | How to use the Mask2Former model for semantic segmentation? | mmdet has examples of mask2former for instance segmentation and panoptic segmentation, but how to do semantic segmentation? How can I modify it? | open | 2024-10-14T09:09:52Z | 2024-10-14T09:10:08Z | https://github.com/open-mmlab/mmdetection/issues/11997 | [] | Invincible-predator | 0 |
mkhorasani/Streamlit-Authenticator | streamlit | 206 | Login issue Username/password is incorrect | I am trying out a demo example, and no matter whether I set auto_hash to True or False, I cannot authenticate with a username or password from the config. I am lost as to what the issue is. Any suggestions would be great.
ST_Version : 1.38.0
```
import yaml
import streamlit as st
from yaml.loader import SafeLoader
import streamlit_authenticator as stauth
from streamlit_authenticator.utilities import (CredentialsError,
ForgotError,
Hasher,
LoginError,
RegisterError,
ResetError,
UpdateError)
# Loading config file
with open('./data/config.yaml', 'r', encoding='utf-8') as file:
config = yaml.load(file, Loader=SafeLoader)
print(config)
# Hashing all plain text passwords once
# Hasher.hash_passwords(config['credentials'])
# Creating the authenticator object
authenticator = stauth.Authenticate(
config['credentials'],
config['cookie']['name'],
config['cookie']['key'],
config['cookie']['expiry_days'],
config['pre-authorized'],
auto_hash=True,
)
# Creating a login widget
try:
authenticator.login()
except LoginError as e:
st.error(e)
if st.session_state["authentication_status"]:
authenticator.logout()
st.write(f'Welcome *{st.session_state["name"]}*')
st.title('Some content')
elif st.session_state["authentication_status"] is False:
st.error('Username/password is incorrect')
elif st.session_state["authentication_status"] is None:
st.warning('Please enter your username and password')
# Saving config file
with open('../config.yaml', 'w', encoding='utf-8') as file:
yaml.dump(config, file, default_flow_style=False)
```
And here is the config file
```
credentials:
usernames:
jsmith:
email: jsmith@gmail.com
failed_login_attempts: 0 # Will be managed automatically
logged_in: False # Will be managed automatically
name: John Smith
password: abc # Will be hashed automatically
rbriggs:
email: rbriggs@gmail.com
failed_login_attempts: 0 # Will be managed automatically
logged_in: False # Will be managed automatically
name: Rebecca Briggs
password: def # Will be hashed automatically
cookie:
expiry_days: 30
key: "e324670610d643aa0f4f04717f4ed8713297343c45bec4024f9c01e1f8fa9a97"
name: test_cookie
pre-authorized:
emails:
- melsby@gmail.com
``` | closed | 2024-09-20T05:26:19Z | 2024-10-04T19:48:58Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/206 | [
"help wanted"
] | AvisP | 5 |
ray-project/ray | data-science | 51,279 | [core][gpu-objects] Method decorator for performance improvement | ### Description
Specifying shape ahead of time, so then we don’t need to wait for sender to finish the task before triggering receive.
### Use case
_No response_ | open | 2025-03-11T23:00:41Z | 2025-03-11T23:01:28Z | https://github.com/ray-project/ray/issues/51279 | [
"enhancement",
"P2",
"core",
"gpu-objects"
] | kevin85421 | 0 |
adbar/trafilatura | web-scraping | 58 | Extracting Text from HTML: Unordered List Description\Header | I have been using trafilatura to extract text from HTML pages. I have noticed that sometimes the text at the start of an unordered list is not extracted: the list items are extracted, but not the text following the unordered list tag.
```
<ul>Description of the list:
<li>List item 1</li>
<li>List item 2</li>
<li>List item 3</li>
</ul>
```
In the previous code example, the extracted text would be:
- List item 1
- List item 2
- List item 3
"Description of the list" would not be extracted into the text file. This is probably due to incorrect HTML coding practices but I'm wondering if Trafilatura can capture that text. | closed | 2021-03-01T18:42:23Z | 2021-03-05T16:59:19Z | https://github.com/adbar/trafilatura/issues/58 | [
"bug"
] | zmeharen | 1 |
Textualize/rich | python | 3,013 | rich.pretty.install does not work for IPython | Version:
```
Python 3.8.13
IPython 8.12.2
```
It seems that `get_ipython` is not in globals when executed in `pretty.py`, causing the rich text formatter not to be installed. It is actually in `globals()['__builtins__']`. I suggest just using `try` to replace this check. The problem happens here: https://github.com/Textualize/rich/blob/8c7449f987a5c423a162aacdf969d647e6085918/rich/pretty.py#L214
The fix is:
```python
try:
ip = get_ipython() # type: ignore[name-defined]
from IPython.core.formatters import BaseFormatter
class RichFormatter(BaseFormatter): # type: ignore[misc]
pprint: bool = True
def __call__(self, value: Any) -> Any:
if self.pprint:
return _ipy_display_hook(
value,
console=get_console(),
overflow=overflow,
indent_guides=indent_guides,
max_length=max_length,
max_string=max_string,
max_depth=max_depth,
expand_all=expand_all,
)
else:
return repr(value)
# replace plain text formatter with rich formatter
rich_formatter = RichFormatter()
ip.display_formatter.formatters["text/plain"] = rich_formatter
except NameError:
sys.displayhook = display_hook
``` | closed | 2023-07-01T01:54:20Z | 2023-07-29T16:05:50Z | https://github.com/Textualize/rich/issues/3013 | [] | zhengyu-yang | 2 |
tensorpack/tensorpack | tensorflow | 907 | [Mask RCNN] How to deal with masks with holes? | @ppwwyyxx The masks in Mask RCNN are represented by polygons. If an object has holes, then it will contain multiple polygons. However, when the polygons are converted to a mask, the holes become foreground masks (see [this line](https://github.com/tensorpack/tensorpack/blob/7b8728f96b76774a5d345390cfb5607c8935d9e3/examples/FasterRCNN/data.py#L366)).
If I load the masks in binary mask format, I cannot use the coordinate-based augmentations.
If I want to correctly handle the holes with the polygon representation, what should I do?
Thanks. | closed | 2018-09-23T14:28:18Z | 2023-06-18T22:00:32Z | https://github.com/tensorpack/tensorpack/issues/907 | [
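For reference, one generic way to handle holes (a sketch of the even-odd idea, not what tensorpack currently does) is to rasterize each polygon to its own binary mask and combine them with XOR, so pixels covered by an inner hole ring flip back to background:

```python
import numpy as np

def combine_polygon_masks(polygon_masks):
    """XOR-combine per-polygon binary masks (even-odd fill rule)."""
    out = np.zeros_like(polygon_masks[0], dtype=bool)
    for m in polygon_masks:
        out ^= m.astype(bool)
    return out

# Toy example: a filled 5x5 square whose second polygon is a 1-pixel hole.
outer = np.ones((5, 5), dtype=bool)
hole = np.zeros((5, 5), dtype=bool)
hole[2, 2] = True

mask = combine_polygon_masks([outer, hole])  # the hole stays background
```

With a plain union (logical OR), the hole pixel would end up as foreground, which is exactly the behavior described above.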
"usage"
] | wangg12 | 8 |
pywinauto/pywinauto | automation | 741 | Add support for AtspiDocument interface | Creating this issue to define requirements for AtsiDocument support as below:
- Add or extend an existing GTK sample app with controls supporting AtspiDocument interface.
- Add low-level interface class AtspiDocument in atspi_objects.py
- Add support of AtspiDocument interface in atspi_element_element_info.py | closed | 2019-05-25T09:53:53Z | 2019-09-20T06:32:21Z | https://github.com/pywinauto/pywinauto/issues/741 | [
"atspi"
] | airelil | 0 |
Ehco1996/django-sspanel | django | 786 | support multi ehco config for proxy node | closed | 2023-02-02T23:55:56Z | 2023-04-30T02:16:25Z | https://github.com/Ehco1996/django-sspanel/issues/786 | [
"help wanted",
"Stale"
] | Ehco1996 | 0 | |
polakowo/vectorbt | data-visualization | 748 | VectorBT - Telegram - Issue: ImportError: cannot import Unauthorized, ChatMigrated | Hello
I am trying to install vbt in a Python 3.12 environment on a Windows 11 machine.
I am getting errors, even after having installed python-telegram-bot v21.5:
`from telegram.error import Unauthorized, ChatMigrated` fails with `ImportError: cannot import name 'Unauthorized' from 'telegram.error'`.
I just saw that the maximum supported python-telegram-bot version is 20.
Please, could you assist in eliminating this error?
Thanks, Greetings, Peter | open | 2024-09-19T10:25:25Z | 2025-02-11T12:14:17Z | https://github.com/polakowo/vectorbt/issues/748 | [] | pte1601 | 7 |
tensorlayer/TensorLayer | tensorflow | 392 | How to use BiDynamicRNNLayer for text classification? Do not support return_last at the moment? | ```python
self.x = tf.placeholder("float", [None, None, alphabet_size], name="inputs")
self.y = tf.placeholder(tf.int64, [None, ], name="labels")
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")
# n_hidden = 64  # hidden layer num of features
self.network = tl.layers.InputLayer(self.x, name='input_layer')
self.network = tl.layers.BiDynamicRNNLayer(self.network,
                                           cell_fn=tf.contrib.rnn.BasicLSTMCell,
                                           n_hidden=n_hidden,
                                           dropout=dropout_keep_prob,
                                           sequence_length=tl.layers.retrieve_seq_length_op(self.x),
                                           return_seq_2d=True,
                                           return_last=True,
                                           n_layer=3,
                                           name='dynamic_rnn')
self.network = tl.layers.DenseLayer(self.network, n_units=2,
                                    act=tf.identity, name="output")
self.network.outputs_op = tf.argmax(tf.nn.softmax(self.network.outputs), 1)
self.loss = tl.cost.cross_entropy(self.network.outputs, self.y, 'xentropy')
```
This raises: `Exception: Do not support return_last at the moment`
why? | closed | 2018-03-11T01:40:59Z | 2019-05-13T15:24:36Z | https://github.com/tensorlayer/TensorLayer/issues/392 | [] | chaiyixuan | 2 |
modelscope/modelscope | nlp | 783 | Could the dataset section of ModelScope add categories for LLM pre-training, instruction fine-tuning, and reward models? | **Describe the feature**
Features description
**Motivation**
A clear and concise description of the motivation of the feature. Ex1. It is inconvenient when [....]. Ex2. There is a recent paper [....], which is very helpful for [....].
**Related resources**
If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.
**Additional context**
Add any other context or screenshots about the feature request here. If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
| closed | 2024-02-26T06:26:33Z | 2024-05-22T01:49:07Z | https://github.com/modelscope/modelscope/issues/783 | [
"Stale"
] | lainxx | 3 |
dynaconf/dynaconf | flask | 274 | [RFC] Add OS X to CI | Looks like we can have OSX builds
https://devblogs.microsoft.com/devops/azure-pipelines-now-supports-additional-hosted-macos-versions/
We need to add it to our Azure Pipeline | closed | 2019-12-16T19:46:25Z | 2020-03-02T01:55:40Z | https://github.com/dynaconf/dynaconf/issues/274 | [
"Not a Bug",
"RFC"
] | rochacbruno | 0 |
Kanaries/pygwalker | matplotlib | 619 | Possibility to save spec when spec param is not json file | Currently, the walker instance does not have updated spec unless you click on the save button that inject the newest spec back to python backend (which does not work when spec param is not json_file).
I am providing pygwalker in streamlit as a standalone application for a group of people as an online tool. Instead of having a json file on server for each user, it is more practical to keep their spec in their session/localstorage. However, as the save does not work for spec in memory mode, the user has to export first their spec and then copy back so that I could regenerate a renderer with the new spec.
Is it technically feasible?
Thank you !
| closed | 2024-09-13T21:10:44Z | 2024-09-14T16:23:08Z | https://github.com/Kanaries/pygwalker/issues/619 | [] | ymurong | 2 |
pydantic/logfire | pydantic | 363 | Temporal.io integration | ### Description
[Temporal](https://temporal.io/) has a [python sdk](https://github.com/temporalio/sdk-python) with at least some level of [opentelemetry support](https://github.com/temporalio/sdk-python?tab=readme-ov-file#opentelemetry-support).
It would be great to be able to instrument it in logfire.
More info here: https://docs.temporal.io/develop/python/observability#tracing
and opentelemetry sample here: https://github.com/temporalio/samples-python/tree/main/open_telemetry | open | 2024-08-05T21:55:55Z | 2024-12-31T11:31:58Z | https://github.com/pydantic/logfire/issues/363 | [
"Feature Request"
] | slingshotvfx | 2 |
keras-team/keras | machine-learning | 20,350 | argmax returns incorrect result for input containing -0.0 (Keras using TensorFlow backend) | Description:
When using keras.backend.argmax with an input array containing -0.0, the result is incorrect. Specifically, the function returns 1 (the index of -0.0) as the position of the maximum value, while the actual maximum value is 1.401298464324817e-45 at index 2.
This issue is reproducible in TensorFlow and JAX as well, as they share similar backend logic for the argmax function. However, PyTorch correctly returns the expected index 2 for the maximum value.
Expected Behavior:
keras.backend.argmax should return 2, as the value at index 2 (1.401298464324817e-45) is greater than both -1.0 and -0.0.
```
import numpy as np
import torch
import tensorflow as tf
import jax.numpy as jnp
from tensorflow import keras
def test_argmax():
# Input data
input_data = np.array([-1.0, -0.0, 1.401298464324817e-45], dtype=np.float32)
# PyTorch argmax
pytorch_result = torch.argmax(torch.tensor(input_data, dtype=torch.float32)).item()
print(f"PyTorch argmax result: {pytorch_result}")
# TensorFlow argmax
tensorflow_result = tf.math.argmax(input_data).numpy()
print(f"TensorFlow argmax result: {tensorflow_result}")
# Keras argmax (Keras internally uses TensorFlow, so should be the same)
keras_result = keras.backend.argmax(input_data).numpy()
print(f"Keras argmax result: {keras_result}")
# JAX argmax
jax_result = jnp.argmax(input_data)
print(f"JAX argmax result: {jax_result}")
if __name__ == "__main__":
test_argmax()
```
```
PyTorch argmax result: 2
TensorFlow argmax result: 1
Keras argmax result: 1
JAX argmax result: 1
``` | closed | 2024-10-14T10:15:25Z | 2025-01-25T06:13:46Z | https://github.com/keras-team/keras/issues/20350 | [
"stat:awaiting keras-eng",
"type:Bug"
] | LilyDong0127 | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,661 | encrypt uploaded file | hi, I need to encrypt the uploaded files and of course decrypt them on download.
I guess this needs to be done by defining a new filemanager but I don't know how to configure the app to use the new filemanager and not the default one.
Can you give me advice? | closed | 2021-06-23T10:30:13Z | 2021-06-24T07:22:35Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1661 | [] | enricosecco | 2 |
google-research/bert | nlp | 1,185 | mBERT Pre-training Procedure | I want to pre-train multilingual BERT using the existing mBERT weights.
I have searched but could not find any mention of how mBERT was pre-trained. For example:
Was data for all the languages fed at once during pre-training?
OR
Was it pre-trained on all languages one at a time, like:
Pre-train on English
Use the English weights and pre-train on French
Then use the en-fr weights and train on German, then use en-fr-de, and so on.
I think the model was pre-trained using the first approach, but if we opt for the second approach given limited compute power, would it help?
FactoryBoy/factory_boy | sqlalchemy | 902 | Use aware_time for DjangoModelFactory | Hi maintainers, thank you for this project :)
#### The problem
- Version info:
- Django: 3.2.10
- Faker: 11.1.0
I got the warning message below when using `DjangoModelFactory` and `factory.Faker`.
```
~~/lib/python3.9/site-packages/django/db/models/fields/__init__.py:1416: RuntimeWarning: DateTimeField AccessEvent.accessed_at received a naive datetime (2022-01-03 02:58:15) while time zone support is active.
warnings.warn("DateTimeField %s received a naive datetime (%s)"
```
my faker code is below:
```python
from factory import Faker
from factory.django import DjangoModelFactory
class MyModelFactory(DjangoModelFactory):
class Meta:
model = models.AccessEvent
accessed_at = Faker('date_time_between')
...
if __name__ == '__main__':
# Generate dummy data
obj = MyModelFactory.build()
obj.save()
```
#### Proposed solution
I want to know how to use `django.utils.timezone.make_aware` while generating dummy data with faker.
| closed | 2022-01-06T10:01:35Z | 2022-01-12T08:49:17Z | https://github.com/FactoryBoy/factory_boy/issues/902 | [
"Q&A",
"Fixed"
] | skokado | 2 |
huggingface/peft | pytorch | 1,579 | merge_and_unload error for an adapter with a prefix | ### System Info
peft version: 0.9.0
transforemrs version: 4.37.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
I have an adapter model whose weights have a prefix (base_model.model); here's my merge code:
```
from peft import AutoPeftModelForCausalLM, AutoPeftModel
import sys
path_to_adapter = sys.argv[1]
new_model_directory = sys.argv[2]
model = AutoPeftModelForCausalLM.from_pretrained(
path_to_adapter, # path to the output directory
device_map="cpu",
trust_remote_code=True
).eval()
merged_model = model.merge_and_unload()
# max_shard_size and safe serialization are not necessary.
# They respectively work for sharding checkpoint and save the model to safetensors
merged_model.save_pretrained(new_model_directory, safe_serialization=False)
```
After running it, I found that the saved model's weights are the same as the base model's. I assume this may be caused by my adapter's weights having a prefix and not being merged correctly.
### Expected behavior
How can I correctly merge and save such an adapter?
sktime/sktime | scikit-learn | 7,671 | [BUG] The name for the timepoints index level is not included after prediction. | **Describe the bug**
The name for the timepoints index level is not included after prediction.
The other index level names are preserved; only the timepoints name is missing.
**To Reproduce**
```python
from sktime.utils._testing.hierarchical import _make_hierarchical
from sktime.forecasting.arima import ARIMA
y = _make_hierarchical()
forecaster = ARIMA()
y_pred = forecaster.fit(y, fh=[1, 2]).predict()
y_pred
expected_index_names = ["h0","h1","time"]
assert y.index.names == expected_index_names
assert y_pred.index.names== expected_index_names
```
**Expected behavior**
The index names of predictions should represent the index names of the data learned on.
**Versions**
0.34.0
| open | 2025-01-20T12:33:35Z | 2025-02-11T08:58:47Z | https://github.com/sktime/sktime/issues/7671 | [
"bug",
"module:forecasting"
] | kdekker-kdr4 | 7 |
public-apis/public-apis | api | 4,140 | Add more | Add more API examples | open | 2025-02-11T22:09:36Z | 2025-02-11T22:09:36Z | https://github.com/public-apis/public-apis/issues/4140 | [] | HumaizaNaz | 0 |
marshmallow-code/flask-marshmallow | sqlalchemy | 59 | Project Status | Is this project still actively maintained? | closed | 2017-04-14T19:33:39Z | 2017-04-15T19:44:24Z | https://github.com/marshmallow-code/flask-marshmallow/issues/59 | [] | cesarmarroquin | 1 |
pallets-eco/flask-wtf | flask | 226 | how do i keep the original filestorage object inside the form when validation errors? | How do I keep the original `FileStorage` object in the form when validation fails?
scenario:
input 1 ok
input 2 failed
fileinput 1 ok
user POSTs
then validation fails on input 2, so the user is redirected back to the same page with an error in the message box.
However, the file input is gone even though it was valid. It disappeared/got cleared out. How do I not let this happen?
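Browsers never allow a server to re-populate `<input type="file">`, so the usual workaround is to stash the already-valid upload server-side and carry a token for it in a hidden field when the form is re-rendered. The framework-agnostic core of that pattern is sketched below (all names are illustrative, not a Flask-WTF API):

```python
import os
import tempfile
import uuid

# Hypothetical staging area for uploads that survived validation.
STASH_DIR = tempfile.mkdtemp(prefix="pending_uploads_")

def stash_upload(stream) -> str:
    """Persist a just-uploaded file stream; return a token for a hidden field."""
    token = uuid.uuid4().hex
    with open(os.path.join(STASH_DIR, token), "wb") as f:
        f.write(stream.read())
    return token

def recall_upload(token: str) -> bytes:
    """On the re-submitted (now fully valid) form, fetch the stashed file back."""
    with open(os.path.join(STASH_DIR, token), "rb") as f:
        return f.read()
```

In a Flask-WTF form this pairs with a `HiddenField`: on every POST, stash `form.upload.data` (a `FileStorage`) if present and write the token into the hidden field before re-rendering; once validation finally succeeds, recall the file via the token.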
Thanks!
| closed | 2016-02-19T17:49:28Z | 2021-05-28T01:03:57Z | https://github.com/pallets-eco/flask-wtf/issues/226 | [] | rlam3 | 3 |
Netflix/metaflow | data-science | 1,630 | complicated flow support | ```python
import os

from metaflow import FlowSpec, Parameter, step


class HelloFlow(FlowSpec):
    alpha = Parameter("alpha", default=0.5)

    @step
    def start(self):
        file_name = "abc.txt"
        if not os.path.exists(file_name):
            print("open and write file", file_name)
        else:
            print(file_name, "already exists")
        self.next(self.sep1, self.sep2)

    @step
    def sep1(self):
        self.next(self.join, self.sep3)

    @step
    def sep3(self):
        self.next(self.join)

    @step
    def sep2(self):
        self.next(self.join)

    @step
    def join(self, inputs):
        print("join", inputs)
        self.next(self.end)

    @step
    def end(self):
        pass


if __name__ == "__main__":
    HelloFlow()
```
```
start ------> sep2 -----> join -----> end
|-> sep1 ---------^
|-> sep3 ---^
```
The code above reports the following error:
```
Metaflow 2.10.5+netflix-ext(1.0.7) executing HelloFlow for user:garrick
Validating your flow...
Validity checker found an issue on line 31:
Step join seems like a join step (it takes an extra input argument) but an incorrect number of steps
(sep1, sep2, sep3) lead to it. This join was expecting 2 incoming paths, starting from split step(s) join, sep3.
```
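The constraint behind this error is that Metaflow requires split/join symmetry: every join must merge exactly the branches of one split, and an inner split must be joined before the outer one. One legal restructuring of the topology above, as a plain-Python DAG sketch (step names like `sep1_pass` and `join_inner` are illustrative, not from the Metaflow API):

```python
from collections import Counter

# sep1's direct edge to `join` becomes a passthrough step, so the inner
# split (sep1) is closed by join_inner before the outer split (start)
# is closed by join.
graph = {
    "start": ["sep1", "sep2"],      # outer split
    "sep1": ["sep1_pass", "sep3"],  # inner split
    "sep1_pass": ["join_inner"],
    "sep3": ["join_inner"],
    "join_inner": ["join"],         # closes the inner split first
    "sep2": ["join"],
    "join": ["end"],                # closes the outer split
    "end": [],
}

indegree = Counter(dst for dsts in graph.values() for dst in dsts)
print(indegree["join"], indegree["join_inner"])  # 2 2
```

With this shape, each join receives exactly the two branches of one split, which is what the validity checker demands.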
Does metaflow support a complicated flow like that? | closed | 2023-11-10T09:51:04Z | 2024-01-03T19:57:34Z | https://github.com/Netflix/metaflow/issues/1630 | [] | GarrickLin | 2 |
ivy-llc/ivy | numpy | 28,576 | Fix Frontend Failing Test: torch - creation.paddle.tril | To-do List: https://github.com/unifyai/ivy/issues/27498 | closed | 2024-03-13T00:19:26Z | 2024-03-21T19:50:03Z | https://github.com/ivy-llc/ivy/issues/28576 | [
"Sub Task"
] | ZJay07 | 0 |
albumentations-team/albumentations | deep-learning | 2,304 | Typing 0 after decimal resets the cursor while trying transformations on explore webpage | ## Describe the bug
In the text boxes where one can edit arguments to various transformations on the explore page are bugged on typing a 0 right after the decimal. Instead of the expected "0.0" the cursor is reset to the end of the text and the typed zero and the decimal disappears.
### To Reproduce
Steps to reproduce the behavior:
1. Open up any transformation on the explore page.
2. Try to edit the some arguments for example **sigma_limit: [0.3,0.8]**
3. I would like to change the values to **[0.05,0.09]**
4. backspace 3 and types 0
5. cursor is teleported to the end while the 0 and decimal vanish into thin air
### Expected behavior
**[0.0,0.8]**
### Actual behavior
**[0,0.8]**
| closed | 2025-01-25T11:37:13Z | 2025-02-28T02:17:09Z | https://github.com/albumentations-team/albumentations/issues/2304 | [
"bug"
] | anbilly19 | 1 |
NullArray/AutoSploit | automation | 848 | Divided by zero exception118 | Error: Attempted to divide by zero.118 | closed | 2019-04-19T16:01:31Z | 2019-04-19T16:37:26Z | https://github.com/NullArray/AutoSploit/issues/848 | [] | AutosploitReporter | 0 |
aminalaee/sqladmin | fastapi | 157 | Exception: Could not find field converter for column id (<class 'sqlmodel.sql.sqltypes.GUID'>) | ### Discussed in https://github.com/aminalaee/sqladmin/discussions/155
<div type='discussions-op-text'>
<sup>Originally posted by **Anton-Karpenko** May 26, 2022</sup>
Hey, I am using sqlmodel to create models. I use the UUID type for the id columns.
```
class RandomModel(SQLModel, table=True):
id: uuid.UUID = Field(primary_key=True, index=True, nullable=False, default_factory=uuid.uuid4)
```
I added sqladmin to my project and I would like to create an instance within the admin panel. I cannot open `create` page because of an error.
`Exception: Could not find field converter for column id (<class 'sqlmodel.sql.sqltypes.GUID'>)`
Can I apply a custom converter to it?</div> | closed | 2022-05-26T21:27:02Z | 2022-05-27T07:57:56Z | https://github.com/aminalaee/sqladmin/issues/157 | [
"bug"
] | aminalaee | 0 |
cupy/cupy | numpy | 8,606 | Support ROCm 6.3 | ## [Tasks](https://github.com/cupy/cupy/wiki/Actions-Needed-for-Dependency-Update)
- [x] Read [ROCm Release Notes](https://docs.amd.com/).
- [x] Update AMD driver in Jenkins test infrastructure (ask @kmaehashi).
- [ ] Fix code and CI to support the new version.
  - **FlexCI**: Update `.pfnci/schema.yaml` and `.pfnci/matrix.yaml`. (https://github.com/cupy/cupy/pull/8623)
  - **Wheel Package Detection**: Add the package to the [duplicate detection](https://github.com/cupy/cupy/blob/master/cupy/_environment.py). (**TBD**)
- [ ] Backport the above PR.
- [ ] Fix `cupy-release-tools` to support the new version. (https://github.com/cupy/cupy-release-tools/pull/396)
- [ ] Backport the above PR.
- [ ] Fix documentation.
  - Add new wheel package to the [Installation Guide](https://docs.cupy.dev/en/latest/install.html) and `README.md`.
  - Update requirements in the installation guide.
- [ ] Backport the above PR.
- [ ] Add new wheel package to the [website](https://cupy.dev).
- [ ] Implement or create an issue to support new features, if applicable.
| open | 2024-09-17T13:51:20Z | 2024-12-18T04:30:25Z | https://github.com/cupy/cupy/issues/8606 | [
"cat:enhancement",
"prio:high"
] | kmaehashi | 1 |
graphql-python/gql | graphql | 319 | File upload 'unable to parse the query' | **Describe the bug**
Getting the following exception while uploading the media to the saleor.
Exception -> ('Exception while uploading the file -> ', "{'message': 'Unable to parse query.', 'extensions': {'exception': {'code': 'str', 'stacktrace': []}}}")
I'm trying to use gql to upload a file over GraphQL from an external Django application to the e-commerce platform Saleor (itself based on Django),
using the code below:
```python
async def upload_media_to_saleor():
    """
    This method is written to upload files to Saleor
    Returns: response from Saleor else exceptional message
    """
    params = {}
    try:
        query = """fragment FileFragment on File { url contentType __typename}fragment UploadErrorFragment on UploadError { code field __typename}mutation FileUpload($file: Upload!) { fileUpload(file: $file) { uploadedFile { ...FileFragment __typename } errors { ...UploadErrorFragment __typename } __typename }}"""
        with open('/home/user_name/Downloads/sample_image.jpeg', 'rb') as f:
            params = {"file": f}
            transport = AIOHTTPTransport(url=GRAPHQL_BASE_URL, headers=HEADERS)
            async with Client(
                transport=transport, fetch_schema_from_transport=False,
            ) as session:
                query = gql(query)
                response = await session.execute(
                    query,
                    variable_values=params,
                    upload_files=True
                )
                return response
    except Exception as e:
        message = "Exception while uploading the file -> ", str(e)
        print(message)
        return message
```
**To Reproduce**
Steps to reproduce the behavior:
Call this function in any one of the working functions.
**Expected behavior**
The file should get uploaded to the saleor backend.
**System info (please complete the following information):**
- OS: UBUNTU 20.04.3
- Python version: 3.08.10
- gql version: 3.1.0
- graphql-core version:
| closed | 2022-04-11T10:00:23Z | 2022-04-11T18:47:46Z | https://github.com/graphql-python/gql/issues/319 | [
"type: question or discussion"
] | g-londhe | 10 |
benbusby/whoogle-search | flask | 809 | [FEATURE] Can you add railway.app direct deployment ? | Just tested it, and it works perfectly, even with a custom domain. It has no downtime, making it a great alternative to Heroku and Replit.
| closed | 2022-07-07T02:12:43Z | 2022-08-28T12:21:35Z | https://github.com/benbusby/whoogle-search/issues/809 | [
"enhancement"
] | psbaruah | 1 |
ydataai/ydata-profiling | jupyter | 943 | unable to install pandas-profiling: neither 'setup.py' nor 'pyproject.toml' found | I am trying to install pandas-profiling on my new MacBook Pro M1 (I have used pandas-profiling on other PCs and it worked amazingly). However, I have tried installing with pip, from git, and from source, and every attempt returned the same output below:
Defaulting to user installation because normal site-packages is not writeable
Collecting pandas_profiling
Using cached pandas_profiling-3.1.0-py2.py3-none-any.whl (261 kB)
Collecting seaborn>=0.10.1
Using cached seaborn-0.11.2-py3-none-any.whl (292 kB)
Collecting tangled-up-in-unicode==0.1.0
Using cached tangled_up_in_unicode-0.1.0-py3-none-any.whl (3.1 MB)
Collecting PyYAML>=5.0.0
Using cached PyYAML-6.0.tar.gz (124 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3 in ./Library/Python/3.8/lib/python/site-packages (from pandas_profiling) (1.4.1)
Requirement already satisfied: matplotlib>=3.2.0 in ./Library/Python/3.8/lib/python/site-packages (from pandas_profiling) (3.5.1)
Collecting markupsafe~=2.0.1
Downloading MarkupSafe-2.0.1-cp38-cp38-macosx_10_9_universal2.whl (18 kB)
Collecting visions[type_image_path]==0.7.4
Using cached visions-0.7.4-py3-none-any.whl (102 kB)
Collecting pydantic>=1.8.1
Using cached pydantic-1.9.0-cp38-cp38-macosx_11_0_arm64.whl (2.4 MB)
Collecting requests>=2.24.0
Using cached requests-2.27.1-py2.py3-none-any.whl (63 kB)
Collecting missingno>=0.4.2
Using cached missingno-0.5.1-py3-none-any.whl (8.7 kB)
Requirement already satisfied: jinja2>=2.11.1 in ./Library/Python/3.8/lib/python/site-packages (from pandas_profiling) (3.0.3)
Requirement already satisfied: numpy>=1.16.0 in ./Library/Python/3.8/lib/python/site-packages (from pandas_profiling) (1.22.3)
Collecting phik>=0.11.1
Using cached phik-0.12.1.tar.gz (600 kB)
ERROR: phik>=0.11.1 from https://files.pythonhosted.org/packages/02/9c/812ffada4a026ad20ad30318897b46ce3cc46e2eec61a3d9d1cf6699f79a/phik-0.12.1.tar.gz#sha256=63cf160c8950ec46da7a33165deef57f27d29f24b83cf4dd028aa0cb97b73af6 (from pandas_profiling) does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.
Has anyone seen similar errors?
| closed | 2022-03-20T07:19:02Z | 2022-03-22T07:39:59Z | https://github.com/ydataai/ydata-profiling/issues/943 | [] | ellieyuyw | 5 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 7 | Incorrect repo in readme | The existing clone and cd are invalid for the `examples/flask_sqlalchemy/README.md` file. The README should read:
``` bash
# Get the example project code
git clone https://github.com/graphql-python/graphene-sqlalchemy.git
cd graphene-sqlalchemy/examples/flask_sqlalchemy
```
| closed | 2016-09-29T18:02:09Z | 2023-02-26T00:53:19Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/7 | [] | erik-farmer | 2 |
tableau/server-client-python | rest-api | 726 | Problem using server.schedules.add_to_schedule | Hello,
I am trying to implement the refresh_schedule.py sample. Everything works well until I get to server.schedules.add_to_schedule.
When I print them, I see the schedule_id and the item_id, which belongs to a datasource. But I get an error saying it can't find the resource. Oddly, it says it can't find the workbook, even though I am trying to publish a datasource that went through the `get_datasource_by_name` def.
Any help would be appreciated. See attachment.
[refresh_schedule_issue.docx](https://github.com/tableau/server-client-python/files/5505323/refresh_schedule_issue.docx)
| closed | 2020-11-07T19:20:37Z | 2022-06-17T20:18:45Z | https://github.com/tableau/server-client-python/issues/726 | [] | wesmott | 2 |
deezer/spleeter | tensorflow | 223 | [Discussion] Custom audio import gives NaN values in prediction. | Hi,
First, thank you for the amazing work on this tool, I've been using it a lot recently and it gives amazing results !
I'd like to share an issue I have when importing audio files from a python request.
Here is the code (from my Flask app) that is not working.
```python
# imports
import io
import soundfile as sf
from flask import request, redirect  # `app` and `allowed_file` are defined elsewhere in my app
from spleeter.separator import Separator
from werkzeug.utils import secure_filename

# Spleeter config
separator = Separator('spleeter:2stems')

ALLOWED_EXTENSIONS = {'mp3', 'wav'}
sample_rate = 44100

@app.route("/upload", methods=['POST'])
def upload():
    if request.method == "POST":
        # handling exceptions
        if 'file' not in request.files:
            print('No file attached in request')
            return redirect(request.url)
        f = request.files['file']
        if f.filename == '':
            print('No file selected')
            return redirect(request.url)
        # if the file is valid
        if f and allowed_file(f.filename):
            file = request.files['file']
            # creating a RAW audio file
            waveform, samplerate = sf.read(io.BytesIO(file.read()), format="RAW", samplerate=sample_rate, channels=2, subtype="FLOAT", dtype="float32")
            prediction = separator.separate(waveform)
            print(prediction.get("vocals"))
    return redirect("/")
```
So basically what I do here is:
- get a file in a POST form
- read it as a RAW audio file
- use Spleeter to get the vocals
The result gives only NaN values. I was wondering whether I'm importing audio files the right way or whether I should do it differently. When I read the API wiki I understood that RAW audio is a correct input, but maybe I'm wrong!
I do not use Spleeter's default audio import because I want to read the file from a request, not from disk (I don't have any path to specify).
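For what it's worth, the NaNs are consistent with the `format="RAW"` read: soundfile then treats the container's header and compressed payload bytes as literal float32 samples, and many byte patterns decode to NaN. A pure-NumPy sketch of that failure mode (no audio library involved; the bytes here are made up):

```python
import numpy as np

# Fake "file" bytes: a RIFF-style magic followed by junk. Read as raw
# float32 samples, bit patterns like 0x7fc0ffff decode to NaN.
container_bytes = b"RIFF" + b"\xff\xff\xc0\x7f" * 4
samples = np.frombuffer(container_bytes, dtype=np.float32)
print(np.isnan(samples).any())  # True
```

Dropping the `format="RAW"`/`subtype`/`channels` arguments and letting `sf.read` infer the container from its header avoids this, for any format the underlying libsndfile build supports.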
Thanks in advance :)
Julien | closed | 2020-01-05T19:24:59Z | 2020-01-08T08:58:59Z | https://github.com/deezer/spleeter/issues/223 | [
"question"
] | julienbeisel | 1 |
docarray/docarray | fastapi | 930 | v2: add proper slice compatible getitem for document array | closed | 2022-12-12T13:09:44Z | 2023-01-05T09:36:57Z | https://github.com/docarray/docarray/issues/930 | [
"DocArray v2"
] | samsja | 0 | |
ipython/ipython | jupyter | 14,006 | Installed qt5 event loop hook. |
When I use `plt.show()` and run the script,
``` bash
❯ & D:/Anaconda3/python.exe -m IPython --no-autoindent d:/Documents/C-Project/GeoLocOptim/scripts/estimate.py
```
the following message is shown:
```
Installed qt5 event loop hook.
Shell is already running a gui event loop for qt5. Call with no arguments to disable the current loop.
```
What does this mean, and how do I disable it?
| open | 2023-04-06T06:50:22Z | 2023-05-30T17:58:04Z | https://github.com/ipython/ipython/issues/14006 | [] | forallsunday | 21 |
AirtestProject/Airtest | automation | 1,024 | iOS: tapping a system popup raises an error, as shown below | # iOS 15.3 system popup

# Script

# Error
`----------------------------------------------------------------------
Traceback (most recent call last):
File "airtest/cli/runner.py", line 73, in runTest
File "site-packages/six.py", line 703, in reraise
File "airtest/cli/runner.py", line 70, in runTest
File "/Users/lijiawei/Desktop/ppup.air/ppup.py", line 10, in <module>
touch(pos)
File "airtest/utils/logwraper.py", line 90, in wrapper
File "airtest/core/api.py", line 357, in touch
File "/Applications/AirtestIDE.app/Contents/MacOS/airtest/core/ios/ios.py", line 34, in wrapper
return func(self, *args, **kwargs)
File "/Applications/AirtestIDE.app/Contents/MacOS/airtest/core/ios/ios.py", line 328, in touch
self.driver.click(x, y, duration)
File "site-packages/wda/__init__.py", line 912, in click
File "site-packages/wda/__init__.py", line 931, in tap_hold
File "site-packages/wda/utils.py", line 47, in _inner
File "site-packages/wda/__init__.py", line 454, in _fetch
File "site-packages/wda/__init__.py", line 124, in httpdo
File "site-packages/wda/__init__.py", line 180, in _unsafe_httpdo
wda.exceptions.WDAUnknownError: WDARequestError(status=110, value={'error': 'unknown error', 'message': '*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil'})
----------------------------------------------------------------------
Ran 1 test in 4.519s` | open | 2022-02-12T14:37:13Z | 2022-03-10T13:54:21Z | https://github.com/AirtestProject/Airtest/issues/1024 | [] | Pactortester | 2 |
FactoryBoy/factory_boy | sqlalchemy | 796 | ImageField inside Maybe declaration no longer working since 3.1.0 | #### Description
In a factory that I defined for companies, I'm randomly generating a logo using a `Maybe` declaration. This used to work fine up to and including 3.0.1, but as of 3.1.0 it has different behaviour.
#### To Reproduce
##### Model / Factory code
Leaving out the other fields as they cannot be relevant to the problem.
```python
from factory import Faker, Maybe
from factory.django import DjangoModelFactory, ImageField
from ..models import Company
class CompanyFactory(DjangoModelFactory):
logo_add = Faker("pybool")
logo = Maybe(
"logo_add",
yes_declaration=ImageField(width=500, height=200, color=Faker("color")),
no_declaration=None,
)
class Meta:
model = Company
exclude = ("logo_add",)
```
##### The issue
Up to and including 3.0.1 the behaviour - which is the desired behaviour as far as I'm concerned - was that I could generate companies that either had a logo or did not (about 50/50 since I'm just using "pybool" for the decider field). If they had a logo, the logo would be 500x200 with a random color.
Now that I use 3.1.0, the randomness of about half the companies having logos still works, but _all_ generated logos are now 100x100 and blue, which are simply defaults (although the [documentation](https://factoryboy.readthedocs.io/en/latest/orms.html?highlight=imagefield#factory.django.ImageField) says that "green" is actually the default), which is definitely something to fix :)
Perhaps I was misusing/misunderstanding this feature all along, but then I'd still like to know how to get the desired behaviour described.
| closed | 2020-10-13T13:53:14Z | 2020-12-23T17:21:32Z | https://github.com/FactoryBoy/factory_boy/issues/796 | [] | grondman | 2 |
huggingface/datasets | numpy | 6,894 | Better document defaults of to_json | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | closed | 2024-05-13T13:30:54Z | 2024-05-16T14:31:27Z | https://github.com/huggingface/datasets/issues/6894 | [
"documentation"
] | albertvillanova | 0 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 753 | Can I use a crawler and ScrapeGraphAI together? | **Is your feature request related to a problem? Please describe.**
I want to use a crawler like Scrapy or Crawlee together with ScrapeGraphAI.
The crawler would be responsible for crawling the whole website and filtering some of the content, for example getting the pages under the same path as the root URL, or only the pages whose URL or page content matches a specified regex pattern.
ScrapeGraphAI would then handle the content processing and analysis.
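The division of labor described here can be sketched framework-agnostically: a tiny crawler keeps only pages under the root path whose URL matches a pattern, and hands the surviving HTML to the analyzer. Everything below is illustrative; the `fetch` callable stands in for an HTTP client, and the matched pages would be fed to a ScrapeGraphAI graph:

```python
import re
from urllib.parse import urljoin, urlparse

def crawl(fetch, root_url, url_pattern, max_pages=50):
    """Tiny BFS crawler: visit pages under root_url's path and collect those
    whose URL matches url_pattern. `fetch(url) -> html` is injected so the
    crawler stays testable and transport-agnostic."""
    root = urlparse(root_url)
    seen, queue, matched = set(), [root_url], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = fetch(url)
        if re.search(url_pattern, url):
            matched.append((url, html))  # these go to the analysis stage
        for href in re.findall(r'href="([^"]+)"', html):
            nxt = urljoin(url, href)
            parts = urlparse(nxt)
            if parts.netloc == root.netloc and parts.path.startswith(root.path):
                queue.append(nxt)
    return matched
```

Each `(url, html)` pair returned here would then be passed to the content-analysis step instead of having the graph do its own fetching.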
**Describe the solution you'd like**
Solution 1: have the graph API support passing just the HTML content of a webpage.
**Describe alternatives you've considered**
Solution 2: implement content and webpage filtering in the `DepthSearchGraph`. Hooks are also a possible solution.
**Additional context**
I want to crawl all product items on a shopping website, or all mp3 files on a music website under a specific category like "Country Music".
| closed | 2024-10-15T07:22:15Z | 2024-10-15T09:10:10Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/753 | [] | davideuler | 1 |
modelscope/data-juicer | streamlit | 113 | [MM] add face_area_filter OP | closed | 2023-12-04T11:43:25Z | 2023-12-06T06:21:57Z | https://github.com/modelscope/data-juicer/issues/113 | [
"enhancement",
"dj:multimodal"
] | drcege | 0 | |
unit8co/darts | data-science | 2,190 | Add `number_of_batch_per_epoch` parameters for torch forecasting models | ## feature request
**Is your feature request related to a current problem? Please describe.**
I once read an issue on the GluonTS repository about why they use both `batch_size` and `number_of_batch_per_epoch` (effectively fixing the number of samples per epoch). They argue that, with panel time series, datasets can sometimes be extremely large; fixing both parameters is then a means of avoiding long training phases by limiting the number of samples seen per epoch.
**Describe proposed solution**
A new parameter `number_of_batch_per_epoch` could be added as a way to fix the number of samples per epoch. Alternatively, we could have a `number_of_sample_per_epoch` in which case the number_of_batch_per_epoch would be automatically computed.
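The automatic derivation mentioned above is just a ceiling division; a quick sketch (the function name mirrors the proposed parameter, it is not an existing darts API):

```python
import math

def number_of_batch_per_epoch(number_of_sample_per_epoch: int, batch_size: int) -> int:
    # A trailing partial batch still counts as one batch.
    return math.ceil(number_of_sample_per_epoch / batch_size)

print(number_of_batch_per_epoch(1000, 32))  # 32
```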
This would pose an issue if the requested number of samples is larger than the dataset: should we throw an error, or just a warning while still training on the entire dataset?
| closed | 2024-01-26T16:18:03Z | 2024-02-01T14:31:14Z | https://github.com/unit8co/darts/issues/2190 | [
"question"
] | MarcBresson | 3 |
saulpw/visidata | pandas | 2,670 | [fuzzymatch] fuzzymatch shows matched items in lowercase | **Small description**
Fuzzymatch shows matches in lowercase.
**Steps to reproduce**
`vd sample_data/benchmark.csv`
`Space` `go-col-name` `date`
As soon as the first letter (`d`) of the search pattern is typed, the match for `Date` is shown in lowercase: `date`.
**Expected result**
I expect the match to preserve its case: `Date`.
**Configuration**
vd v3.2dev
**Additional context**
This was introduced by me in #2658. | open | 2025-01-08T04:49:48Z | 2025-01-08T04:49:48Z | https://github.com/saulpw/visidata/issues/2670 | [
"bug"
] | midichef | 0 |
harry0703/MoneyPrinterTurbo | automation | 67 | Error when generating a video | The error output is as follows:
tm.start(task_id=task_id, params=cfg)
File "/home/MoneyPrinterTurbo-main/app/services/task.py", line 133, in start
video.combine_videos(combined_video_path=combined_video_path,
File "/home/MoneyPrinterTurbo-main/app/services/video.py", line 84, in combine_videos
clip = clip.resize((video_width, video_height))
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/fx/resize.py", line 165, in resize
newclip = clip.fl_image(fl)
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/VideoClip.py", line 576, in fl_image
return self.fl(lambda gf, t: image_func(gf(t)), apply_to)
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/Clip.py", line 141, in fl
newclip = self.set_make_frame(lambda t: fun(self.get_frame, t))
File "<decorator-gen-68>", line 2, in set_make_frame
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/decorators.py", line 15, in outplace
f(newclip, *a, **k)
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/VideoClip.py", line 740, in set_make_frame
self.size = self.get_frame(0).shape[:2][::-1]
File "<decorator-gen-11>", line 2, in get_frame
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/decorators.py", line 89, in wrapper
return f(*new_a, **new_kw)
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/Clip.py", line 98, in get_frame
return self.make_frame(t)
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/Clip.py", line 141, in <lambda>
newclip = self.set_make_frame(lambda t: fun(self.get_frame, t))
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/VideoClip.py", line 576, in <lambda>
return self.fl(lambda gf, t: image_func(gf(t)), apply_to)
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/fx/resize.py", line 163, in fl
return resizer(pic.astype("uint8"), newsize)
File "/usr/local/envs/MoneyPrinterTurbo/lib/python3.10/site-packages/moviepy/video/fx/resize.py", line 37, in resizer
resized_pil = pilim.resize(newsize[::-1], Image.ANTIALIAS)
AttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'
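For context, `Image.ANTIALIAS` was an alias for the LANCZOS filter and was removed in Pillow 10, while moviepy 1.x still references it. An alternative to pinning an old Pillow version is a small compatibility shim (a sketch; it must run before moviepy's resize code does):

```python
# Pillow 10 removed Image.ANTIALIAS (it was an alias for LANCZOS), which
# breaks older callers such as moviepy 1.x. Restoring the alias keeps
# them working on modern Pillow; on Pillow < 10 this is a no-op.
import PIL.Image

if not hasattr(PIL.Image, "ANTIALIAS"):
    PIL.Image.ANTIALIAS = PIL.Image.Resampling.LANCZOS
```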
Solution:
After uninstalling the Pillow library and reinstalling version 9.0.0, the error no longer occurs. | closed | 2024-03-26T14:15:27Z | 2024-03-31T15:28:38Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/67 | [
"bug"
] | xinjiangyin | 11 |
Urinx/WeixinBot | api | 270 | WeChat Web has been shut down; can it still be used? | WeChat Web has been shut down; can it still be used? | open | 2019-07-15T03:43:55Z | 2019-10-12T02:52:59Z | https://github.com/Urinx/WeixinBot/issues/270 | [] | zuijiu997 | 2