| repo_name (string, 9-75 chars) | topic (30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
httpie/cli | rest-api | 1,194 | docs not loading | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
try opening https://httpie.io/docs in any browser (Safari and Firefox tested)
see the following:
<img width="737" alt="image" src="https://user-images.githubusercontent.com/223486/139373980-6bd9e179-ee93-4e9e-bb53-087118dbec13.png">
"Application error: a client-side exception has occurred (see the browser console for more information)."
## Minimal reproduction code and steps
1. in browser, open httpie.io/docs
2. see error
3. there is no step 3 ;)
## Current result
error in browser
TypeError: t.current.querySelectorAll is not a function. (In 't.current.querySelectorAll(k.join(", "))', 't.current.querySelectorAll' is undefined)
(anonymous function) — hooks.tsx:33
Ii — react-dom.production.min.js:262:359
(anonymous function) — scheduler.production.min.js:18:343
vi — react-dom.production.min.js:243
vi
(anonymous function) — react-dom.production.min.js:123:115
(anonymous function) — scheduler.production.min.js:18:343
Ql — react-dom.production.min.js:123
Hl — react-dom.production.min.js:122:428
fi — react-dom.production.min.js:237:203
Gi — react-dom.production.min.js:285
rs — react-dom.production.min.js:289:153
(anonymous function) — index.js:483
ge — index.js:665
l — runtime.js:63
(anonymous function) — runtime.js:293
j — index.js:28
i — index.js:46
(anonymous function) — index.js:51
Promise
(anonymous function) — index.js:43
(anonymous function) — router.js:730
l — runtime.js:63
(anonymous function) — runtime.js:293
t — asyncToGenerator.js:3
u — asyncToGenerator.js:25
promiseReactionJob
…
## Expected result
before, httpie docs loaded great!
---
## Debug output
n/a; this is a website issue, not one with the httpie executable
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
```bash
$ http --debug <COMPLETE ARGUMENT LIST THAT TRIGGERS THE ERROR>
<COMPLETE OUTPUT>
```
## Additional information, screenshots, or code examples
…
| closed | 2021-10-29T04:10:39Z | 2021-10-29T07:22:32Z | https://github.com/httpie/cli/issues/1194 | [
"bug",
"new"
] | aaronhmiller | 1 |
deezer/spleeter | deep-learning | 394 | [Bug] Various Errors using Spleeter Separate with custom model. | <!-- PLEASE READ THIS CAREFULLY :
- Any issue that does not respect the following template or lacks information will be considered invalid and automatically closed
- First check the FAQ in the wiki to see whether your problem is already known
-->
## Description
I was originally having another [issue](https://github.com/deezer/spleeter/issues/390), but with help from @romi1502 that was resolved nicely.
However, as soon as that issue was fixed, I immediately ran into several other errors.
<!-- Give us a clear and concise description of the bug you are reporting. -->
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
I managed to run the separate command with my custom model, and it thought about it for a long time while PowerShell was eating up my CPU, so it was doing something.
I used the following spleeter input command: `spleeter separate -i 'F:\BluRay 5.1\Done\S01E01 - Ambush\fullmix.wav' -o output -p F:\SpleetTest\Configs\filmModel.json`
## Output
After about 30-40 seconds I got the error:
```
Traceback (most recent call last):
File "c:\users\joe93\appdata\local\programs\python\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\joe93\AppData\Local\Programs\Python\Python37\Scripts\spleeter.exe\__main__.py", line 9, in <module>
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\__main__.py", line 46, in main
entrypoint(arguments, params)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\commands\separate.py", line 45, in entrypoint
synchronous=False
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\separator.py", line 228, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\separator.py", line 195, in separate
return self._separate_librosa(waveform, audio_descriptor)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\separator.py", line 181, in _separate_librosa
outputs = sess.run(outputs, feed_dict=self._get_input_provider().get_feed_dict(features, stft, audio_id))
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1156, in _run
(np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (25836, 2, 6) for Tensor 'mix_stft:0', which has shape '(?, 2049, 2)'
```
I have very little experience with TensorFlow, so I don't know where to begin. From a few searches, this looks like a shape mismatch between the model's expected input tensor and the STFT computed from the sound files.
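For context, the 2049 in the expected shape `(?, 2049, 2)` is the frequency-bin count of a real-valued STFT; a quick back-of-the-envelope check (illustrative only, `stft_bins` is not a Spleeter function):

```python
def stft_bins(frame_length: int) -> int:
    """Frequency bins produced by a real-valued STFT for a given frame length."""
    return frame_length // 2 + 1

# Spleeter's bundled configs typically use frame_length 4096, which yields the
# model's expected 2049 bins; a custom config with different analysis
# parameters produces a different feed shape, like the one in the error above.
assert stft_bins(4096) == 2049
```

If the custom model's JSON config uses a different `frame_length` than the one it was trained with, a mismatch like this is exactly what you would see.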
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 (fully updated) |
| Installation type | pip |
| RAM available | 16GB |
| Hardware spec | RTX2080 / Ryzen R2600 |
## Additional context
<!-- Add any other context about the problem here, references, cites, etc.. -->
| open | 2020-05-23T22:01:47Z | 2024-01-04T00:30:54Z | https://github.com/deezer/spleeter/issues/394 | [
"bug",
"invalid"
] | JavaShipped | 16 |
hankcs/HanLP | nlp | 1,093 | The pinyin for "着" is incorrect | <!--
The checklist and version number are required; issues without them will not receive a reply. To get a reply as soon as possible, please fill in the template carefully. Thank you for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the following documentation and found no answer:
 - [Home page documentation](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that this open-source community is a voluntary community of enthusiasts that bears no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I have entered an x in the brackets to confirm all of the above.
## Version
<!-- For release builds, state the jar file name without the extension; for the GitHub repository, state whether it is the master or portable branch -->
The current latest version is: 1.7.1
The version I am using is: 1.7.1
<!-- The above is required; feel free to elaborate below -->
## My question
<!-- Please describe the question in detail; the more detail, the more likely it gets resolved -->
When using the pinyin conversion feature, I found that the character "着" is always output as "zhuó", even though in most cases it should be "zhe".
For example, in sentences such as 盼望着盼望着,春天来了; 功名浑是错,更莫思量着; 悄没声的河沿上,满铺着寂寞和黑暗; and 我从山中来,带着兰花草, every "着" is output with the pinyin "zhuó", although it should clearly be "zhe".
(The pinyin results for most other characters are correct.)
### Code to reproduce
Configured automatically via Gradle, then ran the following code:
```java
List<Pinyin> pinyinList = HanLP.convertToPinyinList(text);
for (Pinyin pinyin : pinyinList)
{
    System.out.printf("%s,", pinyin.getPinyinWithToneMark());
}
```
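Disambiguating a polyphone such as 着 requires looking at context. A toy sketch of the idea in Python (purely illustrative; this is not how HanLP implements it, and the verb list is made up):

```python
def pinyin_for_zhe(prev_char: str) -> str:
    """Toy rule: after a verb, 着 is the aspect particle 'zhe'; otherwise fall
    back to the dictionary reading 'zhuo2'."""
    verbs = {"盼", "望", "铺", "带", "思", "量"}  # tiny, made-up verb set
    return "zhe" if prev_char in verbs else "zhuo2"

assert pinyin_for_zhe("带") == "zhe"    # 带着兰花草 -> dai4 zhe
assert pinyin_for_zhe("执") == "zhuo2"  # 执着 -> zhi2 zhuo2
```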
| closed | 2019-02-18T07:18:25Z | 2019-02-21T14:45:12Z | https://github.com/hankcs/HanLP/issues/1093 | [
"improvement"
] | LukeChow | 2 |
Yorko/mlcourse.ai | data-science | 410 | topic 5 part 1 summation sign | [comment in ODS](https://opendatascience.slack.com/archives/C39147V60/p1541584422610100) | closed | 2018-11-07T10:57:25Z | 2018-11-10T16:18:10Z | https://github.com/Yorko/mlcourse.ai/issues/410 | [
"minor_fix"
] | Yorko | 1 |
DistrictDataLabs/yellowbrick | matplotlib | 1,099 | CatBoost model not supported | ```python
model_Cat = CatBoostClassifier()
visualizer = ClassificationReport(model_Cat, classes=[0, 1], support=True)
```
It raises the following error:
"YellowbrickTypeError: Cannot detect the model name for non estimator: '<class 'catboost.core.CatBoostClassifier'>'" | closed | 2020-09-18T18:08:20Z | 2022-12-04T20:17:52Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1099 | [
"type: question",
"type: contrib"
] | p-suresh-kumar | 12 |
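Regarding the Yellowbrick/CatBoost error above: later Yellowbrick releases ship a `contrib.wrapper` module for third-party estimators, and the underlying idea is a thin duck-typing wrapper. A toy sketch of that idea (not Yellowbrick's actual code):

```python
class ThirdPartyModel:
    """Stand-in for a non-sklearn estimator such as CatBoostClassifier."""
    def fit(self, X, y):
        return self

    def predict(self, X):
        return [0] * len(X)

class ClassifierWrapper:
    """Advertises the sklearn classifier contract so type/name detection
    passes, while delegating every other attribute to the wrapped estimator."""
    _estimator_type = "classifier"

    def __init__(self, estimator):
        self.estimator = estimator

    def __getattr__(self, name):
        return getattr(self.estimator, name)

wrapped = ClassifierWrapper(ThirdPartyModel())
assert wrapped._estimator_type == "classifier"
assert wrapped.predict([1, 2]) == [0, 0]
```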
wkentaro/labelme | computer-vision | 1,170 | labelme2voc.py Skipping shape | ### Provide environment information
python 3.7
labelme 5.0.1
lxml 4.9.1
PyQt5 5.15.7
PyQt5-Qt5 5.15.2
PyQt5-sip 12.11.0
### What OS are you using?
win11
### Describe the Bug
Skipping shape
```
{
"version": "5.0.1",
"flags": {},
"shapes": [
{
"label": "LS",
"points": [
[
587.6430205949656,
356.2929061784897
],
[
600.0,
338.9016018306636
],
[
611.8993135011442,
329.0617848970252
],
[
621.7391304347826,
323.56979405034326
],
[
629.2906178489702,
324.0274599542334
],
[
623.5697940503433,
332.49427917620136
],
[
613.7299771167048,
338.44393592677346
],
[
604.8054919908467,
349.4279176201373
],
[
596.5675057208238,
358.5812356979405
],
[
590.6178489702517,
362.70022883295195
],
[
586.7276887871853,
360.18306636155603
]
],
"group_id": null,
"shape_type": "polygon",
"flags": {}
},
{
"label": "LS",
"points": [
[
632.5842696629213,
373.83627608346706
],
[
634.9919743178169,
366.13162118780093
],
[
635.3130016051364,
358.1059390048154
],
[
641.8940609951845,
347.19101123595505
],
[
652.3274478330658,
336.7576243980738
],
[
655.2166934189405,
344.30176565008026
],
[
654.414125200642,
352.16693418940605
],
[
652.8089887640449,
360.5136436597111
],
[
647.8330658105939,
370.94703049759227
],
[
644.943820224719,
383.62760834670945
],
[
640.4494382022472,
392.6163723916533
],
[
642.6966292134831,
398.07383627608345
],
[
645.1043338683787,
399.8394863563403
],
[
653.290529695024,
401.9261637239165
],
[
659.7110754414125,
402.24719101123594
],
[
672.8731942215088,
394.8635634028892
],
[
668.860353130016,
391.9743178170144
],
[
664.0449438202247,
386.99839486356336
],
[
658.7479935794543,
381.2199036918138
],
[
661.3162118780095,
372.3916532905297
],
[
669.1813804173354,
368.0577849117175
],
[
678.4911717495987,
363.72391653290526
],
[
682.5040128410915,
357.1428571428571
],
[
682.3434991974317,
344.4622792937399
],
[
688.9245585874799,
336.5971107544141
],
[
701.605136436597,
330.0160513643659
],
[
710.4333868378811,
329.0529695024077
],
[
714.1252006420546,
334.83146067415726
],
[
715.0882825040128,
339.8073836276083
],
[
702.7287319422151,
346.5489566613162
],
[
698.0738362760834,
356.01926163723914
],
[
686.0353130016051,
364.3659711075441
],
[
677.6886035313001,
378.6516853932584
],
[
677.207062600321,
384.430176565008
],
[
678.9727126805778,
387.80096308186194
],
[
680.8988764044943,
391.17174959871585
],
[
684.430176565008,
388.7640449438202
],
[
691.1717495987158,
388.28250401284106
],
[
695.6661316211878,
389.56661316211876
],
[
694.2215088282503,
403.53130016051364
],
[
677.5280898876404,
410.9149277688603
],
[
665.008025682183,
415.2487961476725
],
[
651.8459069020867,
418.61958266452643
],
[
643.0176565008026,
418.29855537720704
],
[
633.868378812199,
411.23595505617976
],
[
627.6083467094703,
398.2343499197431
],
[
627.2873194221509,
388.6035313001605
]
],
"group_id": null,
"shape_type": "polygon",
"flags": {}
},
{
"label": "LS",
"points": [
[
698.876404494382,
442.6966292134831
],
[
706.4205457463884,
433.22632423756016
],
[
708.8282504012841,
428.7319422150883
],
[
716.5329052969502,
420.2247191011236
],
[
721.0272873194222,
417.9775280898876
],
[
723.9165329052969,
423.7560192616372
],
[
722.4719101123595,
429.3739967897271
],
[
716.69341894061,
439.8073836276083
],
[
710.4333868378811,
445.5858747993579
],
[
700.0,
451.0433386837881
],
[
698.5553772070625,
447.51203852327444
]
],
"group_id": null,
"shape_type": "polygon",
"flags": {}
}
],
"imagePath": "01.jpg",
"imageData": ".................There was an error creating your Issue: body is too long (maximum is 65536 characters).............................",
"imageHeight": 853,
"imageWidth": 1280
}
```
### Expected Behavior
_No response_
### To Reproduce
`python labelme2voc.py 0907_1280 0907_1280_VOC --labels labels.txt`
```
Creating dataset: 0907_1280_VOC
class_names: ('_background_', 'HeiBai', 'HEIHEI', 'LS', 'MOHU', 'TUao', 'Xian')
Saved class_names: 0907_1280_VOC\class_names.txt
Generating dataset from: 0907_1280\01.json
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Generating dataset from: 0907_1280\02.json
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Generating dataset from: 0907_1280\03.json
Skipping shape: label=HeiBai, shape_type=polygon
Generating dataset from: 0907_1280\04.json
Skipping shape: label=MOHU, shape_type=polygon
Skipping shape: label=HEIHEI, shape_type=polygon
Generating dataset from: 0907_1280\05.json
Skipping shape: label=HEIHEI, shape_type=polygon
Generating dataset from: 0907_1280\06.json
Skipping shape: label=TUao, shape_type=polygon
Generating dataset from: 0907_1280\1.json
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Generating dataset from: 0907_1280\2.json
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Generating dataset from: 0907_1280\3.json
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Generating dataset from: 0907_1280\4.json
Skipping shape: label=HeiBai, shape_type=polygon
Generating dataset from: 0907_1280\5.json
Skipping shape: label=HeiBai, shape_type=polygon
Generating dataset from: 0907_1280\6.json
Skipping shape: label=MOHU, shape_type=polygon
Skipping shape: label=Xian, shape_type=polygon
Generating dataset from: 0907_1280\7.json
Skipping shape: label=HEIHEI, shape_type=polygon
Generating dataset from: 0907_1280\8.json
Skipping shape: label=TUao, shape_type=polygon
Generating dataset from: 0907_1280\Image_20220902104545329.json
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Generating dataset from: 0907_1280\Image_20220902104602904.json
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Skipping shape: label=LS, shape_type=polygon
Generating dataset from: 0907_1280\Image_20220902104728310.json
Skipping shape: label=HeiBai, shape_type=polygon
Generating dataset from: 0907_1280\Image_20220902104744766.json
Skipping shape: label=HeiBai, shape_type=polygon
Generating dataset from: 0907_1280\Image_20220902104820313.json
Skipping shape: label=MOHU, shape_type=polygon
Skipping shape: label=Xian, shape_type=polygon
Generating dataset from: 0907_1280\Image_20220902104824417.json
Skipping shape: label=MOHU, shape_type=polygon
Skipping shape: label=Xian, shape_type=polygon
Generating dataset from: 0907_1280\Image_20220902104845817.json
Skipping shape: label=HEIHEI, shape_type=polygon
Generating dataset from: 0907_1280\Image_20220902105639548.json
Skipping shape: label=TUao, shape_type=polygon
```
-----------------
` python labelme2voc.py data_annotated data_dataset_voc --labels labels.txt`
```
Creating dataset: data_dataset_voc
class_names: ('_background_', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'potted plant', 'sheep', 'sofa', 'train', 'tv/monitor')
Saved class_names: data_dataset_voc\class_names.txt
Generating dataset from: data_annotated\2011_000003.json
[WARNING] label_file:load:102 - This JSON file (data_annotated\2011_000003.json) may be incompatible with current labelme. version in file: 4.0.0, current version: 5.0.1
Generating dataset from: data_annotated\2011_000006.json
[WARNING] label_file:load:102 - This JSON file (data_annotated\2011_000006.json) may be incompatible with current labelme. version in file: 4.0.0, current version: 5.0.1
Generating dataset from: data_annotated\2011_000025.json
[WARNING] label_file:load:102 - This JSON file (data_annotated\2011_000025.json) may be incompatible with current labelme. version in file: 4.0.0, current version: 5.0.1
```
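A note on the warnings: "Skipping shape" messages like these are typically emitted when a shape's label fails an exact lookup against the classes parsed from `labels.txt`, so invisible differences such as trailing whitespace or a BOM in that file are a common culprit. A rough sketch of such a check (not labelme's exact code):

```python
def filter_shapes(shapes, class_name_to_id):
    """Keep shapes whose label is an exact key; collect the rest as skipped."""
    kept, skipped = [], []
    for shape in shapes:
        label = shape["label"]
        (kept if label in class_name_to_id else skipped).append(label)
    return kept, skipped

ids = {"_background_": 0, "LS": 1}
kept, skipped = filter_shapes([{"label": "LS"}, {"label": "LS "}], ids)
assert kept == ["LS"]
assert skipped == ["LS "]   # trailing whitespace defeats the exact match
```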
The demonstration data is normal | closed | 2022-09-07T05:26:01Z | 2022-12-04T00:23:23Z | https://github.com/wkentaro/labelme/issues/1170 | [
"issue::bug",
"status: wip-by-author"
] | monkeycc | 2 |
litestar-org/litestar | asyncio | 3,764 | Enhancement: local state for websocket listeners | ### Summary
There seems to be no way to have a state that is unique to a particular websocket connection. Or maybe it's possible, but it's not documented?
### Basic Example
Consider the following example:
```python
class Listener(WebsocketListener):
path = "/ws"
def on_accept(self, socket: WebSocket, state: State):
state.user_id = str(uuid.uuid4())
def on_receive(self, data: str, state: State):
logger.info("Received: %s", data)
return state.user_id
```
Here, I expect a different `user_id` for each connection, but it turns out to not be the case:
```python
def test_listener(app: Litestar):
client_1 = TestClient(app)
client_2 = TestClient(app)
with (
client_1.websocket_connect("/ws") as ws_1,
client_2.websocket_connect("/ws") as ws_2,
):
ws_1.send_text("Hello")
ws_2.send_text("Hello")
assert ws_1.receive_text() != ws_2.receive_text() # FAILS
```
I figured that there is a way to define a custom `Websocket` class which seems to be bound to a particular connection, but if that's the only way, should it be that hard for such a common use case?
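The difference between app-level and connection-level state can be sketched with plain dicts (a toy illustration, not Litestar's API): state hung off each connection's own scope is naturally isolated:

```python
import uuid

def on_accept(scope: dict) -> None:
    # Store per-connection data on the connection's own scope dict.
    scope.setdefault("state", {})["user_id"] = str(uuid.uuid4())

ws_1, ws_2 = {}, {}          # stand-ins for two WebSocket connection scopes
on_accept(ws_1)
on_accept(ws_2)
assert ws_1["state"]["user_id"] != ws_2["state"]["user_id"]
```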
UPDATE:
`WebSocketScope` contains its own state dict, and it's unique to each connection. So my suggestion is to resolve the `state` dependency corresponding to the scope in that case. | closed | 2024-09-28T15:00:48Z | 2025-03-20T15:54:56Z | https://github.com/litestar-org/litestar/issues/3764 | [
"Enhancement"
] | olzhasar | 4 |
gradio-app/gradio | data-visualization | 10,075 | `gr.load` doesn't work for `gr.ChatInterface` Spaces | ### Describe the bug
When loading a `gr.ChatInterface` Space with `gr.load`, an error occurs in the called Space.
For example, if you load this Space
```py
import gradio as gr
def fn(message, history):
return message
gr.ChatInterface(fn=fn).launch()
```
with `gr.load` like this
```py
import gradio as gr
gr.load("hysts-debug/sample-chat", src="spaces").launch()
```
the Space can be loaded, but the Space raises the following error when you run it:
```
To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/routes.py", line 992, in predict
output = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2019, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1566, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 865, in wrapper
response = f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio_client/client.py", line 1133, in _inner
predictions = _predict(*data)
File "/usr/local/lib/python3.10/site-packages/gradio_client/client.py", line 1245, in _predict
raise AppError(
gradio_client.exceptions.AppError: The upstream Gradio app has raised an exception but has not enabled verbose error reporting. To enable, set show_error=True in launch().
```
and you get the following error in the log of the called Space:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2019, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1564, in call_function
prediction = await fn(*processed_input)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 832, in async_wrapper
response = await f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/chat_interface.py", line 616, in _display_input
history.append([message, None]) # type: ignore
AttributeError: 'NoneType' object has no attribute 'append'
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2019, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1564, in call_function
prediction = await fn(*processed_input)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 832, in async_wrapper
response = await f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/chat_interface.py", line 664, in _submit_fn
message_serialized, history = self._process_msg_and_trim_history(
File "/usr/local/lib/python3.10/site-packages/gradio/chat_interface.py", line 641, in _process_msg_and_trim_history
history = history_with_input[:-1]
TypeError: 'NoneType' object is not subscriptable
```
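The `NoneType` errors suggest `history` arrives as `None` when the app is driven through `gr.load`. A defensive guard along these lines (an illustrative sketch, not the actual Gradio fix) would avoid the crash:

```python
def display_input(message, history):
    """Append the new user message, tolerating a missing history."""
    history = history if history is not None else []   # guard against None
    history.append([message, None])
    return history

assert display_input("hi", None) == [["hi", None]]
assert display_input("again", [["hi", "hi"]]) == [["hi", "hi"], ["again", None]]
```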
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
https://huggingface.co/spaces/hysts-debug/load-chatinterface
https://huggingface.co/spaces/hysts-debug/sample-chat
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
gradio==5.7.1
```
### Severity
I can work around it | closed | 2024-11-29T07:50:53Z | 2024-12-13T07:53:16Z | https://github.com/gradio-app/gradio/issues/10075 | [
"bug"
] | hysts | 1 |
BeanieODM/beanie | asyncio | 1,125 | [BUG] AttributeError in merge_models after document update | **Describe the bug**
An `AttributeError` is raised in the `merge_models` function when a `.update()` call sets a new field that is not declared on the Pydantic model but is permitted via the `extra='allow'` option.
See the code example and the resulting exception below.
**To Reproduce**
```python
import asyncio
from beanie import Document, init_beanie
from pydantic import ConfigDict
async def main():
class DocumentTestModelWithModelConfigExtraAllow(Document):
model_config = ConfigDict(extra='allow')
await init_beanie(
connection_string='mongodb://localhost:27017/beanie',
document_models=[DocumentTestModelWithModelConfigExtraAllow],
)
doc = DocumentTestModelWithModelConfigExtraAllow()
await doc.insert()
await doc.update({"$set": {"my_extra_field": 12345}})
assert doc.my_extra_field == 12345
if __name__ == '__main__':
asyncio.run(main())
```
**Exception**
```
Traceback (most recent call last):
File "/home/myuser/Documents/.gits/myproject/test.py", line 402, in <module>
asyncio.run(main())
File "/home/myuser/.pyenv/versions/3.11.10/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/home/myuser/.pyenv/versions/3.11.10/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/myuser/.pyenv/versions/3.11.10/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/myuser/Documents/.gits/myproject/test.py", line 397, in main
await doc.update({"$set": {"my_extra_field": 12345}})
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/beanie/odm/actions.py", line 239, in wrapper
result = await f(
^^^^^^^^
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/beanie/odm/utils/state.py", line 85, in wrapper
result = await f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/beanie/odm/documents.py", line 740, in update
merge_models(self, result)
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/beanie/odm/utils/parsing.py", line 27, in merge_models
left_value = getattr(left, k)
^^^^^^^^^^^^^^^^
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/pydantic/main.py", line 856, in __getattr__
raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
AttributeError: 'DocumentTestModelWithModelConfigExtraAllow' object has no attribute 'my_extra_field'
```
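The failing line is a strict `getattr(left, k)` (parsing.py line 27 in the traceback) on a field the original instance never had. A tolerant merge can be sketched like this (illustrative only, not Beanie's code):

```python
class Doc:
    """Minimal stand-in for a document object."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

def merge_models(left, right):
    """Copy right's fields onto left; a default on getattr tolerates
    extra fields that left was constructed without."""
    for key, value in vars(right).items():
        _ = getattr(left, key, None)   # the traceback's failing line, made safe
        setattr(left, key, value)

left = Doc(id=1)
right = Doc(id=1, my_extra_field=12345)
merge_models(left, right)
assert left.my_extra_field == 12345
```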
| open | 2025-02-17T22:41:23Z | 2025-02-19T21:37:04Z | https://github.com/BeanieODM/beanie/issues/1125 | [
"bug",
"good first issue"
] | HK-Mattew | 0 |
matplotlib/matplotlib | data-visualization | 29,298 | [Doc]: The link at "see also" is incorrect. (Axes.violin) | ### Documentation Link
https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.violin.html#matplotlib.axes.Axes.violin
### Problem
The link at "ses also" is incorrect.
It is currently violin. It should be violinplot.
### Suggested improvement
_No response_ | closed | 2024-12-13T02:31:42Z | 2024-12-13T23:51:06Z | https://github.com/matplotlib/matplotlib/issues/29298 | [
"Documentation"
] | cycentum | 4 |
flairNLP/flair | nlp | 3,318 | [Question]: How to fine-tune a pre-trained flair model with a dataset containing new entities (NER task)? | ### Question
Hello!
I am working on a NER model in French but I am having an issue and I cannot find the solution anywhere :S
I want to fine-tune the pre-trained "flair/ner-french" model that, as documented on Hugging Face (https://huggingface.co/flair/ner-french), recognizes the labels ORG, LOC, PER, MISC.
However, the dataset that I want to use for fine-tuning contains those labels plus some others: CODE, DATETIME, DEM, and QUANTITY.
The problem is that I do not know how to make the pre-trained model recognize these new labels.
I am working in Google Colab using Python. For now I just tried loading the model:
```python
tagger = SequenceTagger.load("flair/ner-french")
```
Then I tried adding new tags to the tagger:
```python
tagger.label_dictionary.add_item('B-DATETIME')
tagger.label_dictionary.add_item('I-DATETIME')
...
```
Then I tried training it:
```python
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)
trainer.train(path,
              learning_rate=0.1,
              mini_batch_size=32,
              max_epochs=15,
              write_weights=True)
```
And then I get this error:
```
transitions_to_stop = transitions[
    np.repeat(self.stop_tag, features.shape[0]),
    [target[length - 1] for target, length in zip(targets, lengths)],
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
On the other hand, I found that someone asked a similar question (https://github.com/flairNLP/flair/issues/1540) and someone provided some code to solve the issue:
```python
tagger = SequenceTagger.load('ner')
state = tagger._get_state_dict()

tag_type = 'ner'
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
state['tag_dictionary'] = tag_dictionary

START_TAG: str = "<START>"
STOP_TAG: str = "<STOP>"
state['state_dict']['transitions'] = torch.nn.Parameter(torch.randn(len(tag_dictionary), len(tag_dictionary)))
state['state_dict']['transitions'].detach()[tag_dictionary.get_idx_for_item(START_TAG), :] = -10000
state['state_dict']['transitions'].detach()[:, tag_dictionary.get_idx_for_item(STOP_TAG)] = -10000

num_directions = 2 if tagger.bidirectional else 1
linear_layer = torch.nn.Linear(tagger.hidden_size * num_directions, len(tag_dictionary))
state['state_dict']['linear.weight'] = linear_layer.weight
state['state_dict']['linear.bias'] = linear_layer.bias

model = SequenceTagger._init_model_with_state_dict(state)

trainer: ModelTrainer = ModelTrainer(model, corpus)
trainer.train('finetuned_model',
              learning_rate=0.001,
              mini_batch_size=64,
              max_epochs=10)
```
The issue is that I already tried this code and it gets to training on the new dataset without errors but the accuracy is 0.
The model is not learning anything at all.
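Re-randomising the whole transition matrix and linear layer (as the snippet above does) discards everything the model learned, which may explain the zero accuracy. A more surgical idea is to keep the learned output rows for labels the model already knows and initialise only the new ones; a conceptual toy in plain Python (not Flair's API):

```python
import random

def expand_output_rows(old_rows, old_labels, new_labels):
    """Grow a projection layer's per-label weight rows when the tag set grows:
    reuse learned rows for known labels, randomly initialise only the new ones."""
    rows = {}
    width = len(old_rows[0])
    for label in new_labels:
        if label in old_labels:
            rows[label] = old_rows[old_labels.index(label)]
        else:
            rows[label] = [random.gauss(0.0, 0.02) for _ in range(width)]
    return rows

old_labels = ["O", "B-PER", "B-LOC"]
old_rows = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
rows = expand_output_rows(old_rows, old_labels, old_labels + ["B-DATETIME"])
assert rows["B-PER"] == [0.3, 0.4]       # learned weights preserved
assert len(rows["B-DATETIME"]) == 2      # only the new label is re-initialised
```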
If someone could please give me a hint on what to do to add these new labels for fine-tuning the model, it would be much appreciated :) Thanks! | closed | 2023-09-18T07:18:17Z | 2023-09-18T12:04:01Z | https://github.com/flairNLP/flair/issues/3318 | [
"question"
] | mariasierro | 2 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 40 | API Test | https://www.tiktok.com/t/ZTdE3d4m6/?k=1 | closed | 2022-06-22T04:09:13Z | 2022-06-23T17:14:20Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/40 | [] | Evil0ctal | 0 |
dsdanielpark/Bard-API | nlp | 191 | Add automatic cookie rotation | Is there any system to auto-generate the Bard cookies from the Google session, so the whole setup can run completely autonomously? | closed | 2023-09-27T09:44:14Z | 2024-03-05T08:21:29Z | https://github.com/dsdanielpark/Bard-API/issues/191 | [
"inprocessing"
] | Mrgaton | 11 |
graphistry/pygraphistry | pandas | 634 | [FEA] Dynamic/Programmatic copyright year at footer of Nexus | In Nexus, the copyright year in the footer is hard-coded; change it to use the current system year.
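The change amounts to deriving the year at render time. Nexus presumably renders this elsewhere, but the idea, expressed in Python (the footer string is hypothetical):

```python
from datetime import date

footer = f"Copyright {date.today().year} Graphistry"  # hypothetical footer text
assert str(date.today().year) in footer
```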
| closed | 2025-01-08T00:47:19Z | 2025-01-08T00:49:03Z | https://github.com/graphistry/pygraphistry/issues/634 | [
"enhancement"
] | vaimdev | 1 |
jupyter/nbviewer | jupyter | 342 | Issue with Self-Signed SSL Certificate | `The error was: server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none`
The IPython Notebook instance is created by StarCluster: http://star.mit.edu/cluster/docs/latest/plugins/ipython.html
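A safer remedy than disabling verification is to add the self-signed certificate to the trust store the client uses; in Python's `ssl` terms (a sketch, and the certificate path is hypothetical):

```python
import ssl

context = ssl.create_default_context()   # verifies against the system CA bundle
assert context.verify_mode == ssl.CERT_REQUIRED

# Trust the instance's self-signed certificate explicitly (path is hypothetical):
# context.load_verify_locations(cafile="/path/to/self-signed.pem")
```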
| open | 2014-09-09T01:31:48Z | 2015-09-01T00:56:38Z | https://github.com/jupyter/nbviewer/issues/342 | [
"tag:HTTP"
] | cancan101 | 5 |
krish-adi/barfi | streamlit | 46 | Regarding release of latest version (including support for parallel and async) | Hi Krish
Hope you're well
When are you releasing the version latest version (with support for async and parallel computation) ?
Best,
Abrar | open | 2025-01-20T12:13:18Z | 2025-02-14T10:33:07Z | https://github.com/krish-adi/barfi/issues/46 | [] | abrarzahoor004 | 5 |
nikitastupin/clairvoyance | graphql | 16 | Errors with Damn Vulnerable GraphQL Application | With the master branch:
root@kali:~/Downloads/clairvoyance# python3 -m clairvoyance -w ./google10000.txt http://127.0.0.1:5000/graphql
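(The warnings in the log below mean clairvoyance's message patterns do not match graphene's error phrasing; the field suggestions are still recoverable with a pattern along these lines, sketched here for illustration rather than taken from the project's actual regexes.)

```python
import re

msg = ('Cannot query field "system" on type "Query". '
       'Did you mean "pastes", "paste", "systemUpdate" or "systemHealth"?')

match = re.search(r'Did you mean (.+)\?', msg)
suggestions = re.findall(r'"([^"]+)"', match.group(1))
assert suggestions == ["pastes", "paste", "systemUpdate", "systemHealth"]
```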
```
[WARNING][2021-03-11 22:47:33 oracle.py:57] Unknown error message: 'Cannot query field "system" on type "Query". Did you mean "pastes", "paste", "systemUpdate" or "systemHealth"?'
[WARNING][2021-03-11 22:47:33 oracle.py:57] Unknown error message: 'Cannot query field "systems" on type "Query". Did you mean "pastes", "systemUpdate" or "systemHealth"?'
[WARNING][2021-03-11 22:47:33 oracle.py:57] Unknown error message: 'Field "node" of type "Node" must have a sub selection.'
[WARNING][2021-03-11 22:47:33 oracle.py:57] Unknown error message: 'Field "node" argument "id" of type "ID!" is required but not provided.'
[WARNING][2021-03-11 22:47:36 oracle.py:57] Unknown error message: 'Field "paste" of type "PasteObject" must have a sub selection.'
[WARNING][2021-03-11 22:47:38 oracle.py:57] Unknown error message: 'Cannot query field "systematic" on type "Query". Did you mean "systemUpdate", "systemHealth" or "systemDiagnostics"?'
[WARNING][2021-03-11 22:47:38 oracle.py:57] Unknown error message: 'Cannot query field "pose" on type "Query". Did you mean "node", "paste" or "pastes"?'
[WARNING][2021-03-11 22:47:38 oracle.py:293] Unknown error message: 'Field "node" of type "Node" must have a sub selection.'
[WARNING][2021-03-11 22:47:38 oracle.py:293] Unknown error message: 'Field "node" argument "id" of type "ID!" is required but not provided.'
[WARNING][2021-03-11 22:47:40 oracle.py:188] Unknown error message: Field "node" of type "Node" must have a sub selection.
[WARNING][2021-03-11 22:47:41 oracle.py:188] Unknown error message: Field "node" of type "Node" must have a sub selection.
[WARNING][2021-03-11 22:47:41 oracle.py:188] Unknown error message: Field "node" argument "id" of type "ID!" is required but not provided.
[WARNING][2021-03-11 22:47:41 oracle.py:188] Unknown error message: Field "node" of type "Node" must have a sub selection.
[WARNING][2021-03-11 22:47:41 oracle.py:188] Unknown error message: Field "node" argument "id" of type "ID!" is required but not provided.
[WARNING][2021-03-11 22:47:41 oracle.py:293] Unknown error message: 'Field "node" of type "Node" must have a sub selection.'
[WARNING][2021-03-11 22:47:41 oracle.py:293] Unknown error message: 'Field "node" of type "Node" must have a sub selection.'
[WARNING][2021-03-11 22:47:41 oracle.py:293] Unknown error message: 'Argument "id" has invalid value {}.
Expected type "ID", found {}.'
[WARNING][2021-03-11 22:47:41 oracle.py:293] Unknown error message: 'Field "node" of type "Node" must have a sub selection.'
[WARNING][2021-03-11 22:47:41 oracle.py:293] Unknown error message: 'Unknown argument "i" on field "node" of type "Query". Did you mean "id"?'
[WARNING][2021-03-11 22:47:41 oracle.py:293] Unknown error message: 'Field "node" argument "id" of type "ID!" is required but not provided.'
Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/root/Downloads/clairvoyance/clairvoyance/__main__.py", line 89, in <module>
schema = oracle.clairvoyance(
File "/root/Downloads/clairvoyance/clairvoyance/oracle.py", line 436, in clairvoyance
arg_typeref = probe_arg_typeref(
File "/root/Downloads/clairvoyance/clairvoyance/oracle.py", line 341, in probe_arg_typeref
typeref = probe_typeref(documents, "InputValue", config)
File "/root/Downloads/clairvoyance/clairvoyance/oracle.py", line 315, in probe_typeref
raise Exception(f"Unable to get TypeRef for {documents}")
Exception: Unable to get TypeRef for ['query { node(id: 7) }', 'query { node(id: {}) }', 'query { node(i: 7) }']
```
Switching to latest Pull request:
root@kali:~/Downloads/clairvoyance# git branch
* main
root@kali:~/Downloads/clairvoyance# git branch -a
* main
remotes/origin/HEAD -> origin/main
remotes/origin/enhancement-support-input-objects
remotes/origin/fix-issue-9
remotes/origin/fix_non_null_2x
remotes/origin/improvement-retry-on-non-200
remotes/origin/issue-1
remotes/origin/main
remotes/origin/rewrite-system-tests
root@kali:~/Downloads/clairvoyance# git checkout -b enhancement-support-input-objects remotes/origin/enhancement-support-input-objects
Branch 'enhancement-support-input-objects' set up to track remote branch 'enhancement-support-input-objects' from 'origin'.
Switched to a new branch 'enhancement-support-input-objects'
root@kali:~/Downloads/clairvoyance# git branch
* enhancement-support-input-objects
main
```
root@kali:~/Downloads/clairvoyance# python3 -m clairvoyance -w ./google10000.txt http://127.0.0.1:5000/graphql
[WARNING][2021-03-11 22:52:34 oracle.py:57] Unknown error message: 'Cannot query field "system" on type "Query". Did you mean "pastes", "paste", "systemUpdate" or "systemHealth"?'
[WARNING][2021-03-11 22:52:34 oracle.py:57] Unknown error message: 'Cannot query field "systems" on type "Query". Did you mean "pastes", "systemUpdate" or "systemHealth"?'
[WARNING][2021-03-11 22:52:34 oracle.py:57] Unknown error message: 'Field "node" of type "Node" must have a sub selection.'
[WARNING][2021-03-11 22:52:34 oracle.py:57] Unknown error message: 'Field "node" argument "id" of type "ID!" is required but not provided.'
[WARNING][2021-03-11 22:52:38 oracle.py:57] Unknown error message: 'Field "paste" of type "PasteObject" must have a sub selection.'
[WARNING][2021-03-11 22:52:39 oracle.py:57] Unknown error message: 'Cannot query field "systematic" on type "Query". Did you mean "systemUpdate", "systemHealth" or "systemDiagnostics"?'
[WARNING][2021-03-11 22:52:39 oracle.py:57] Unknown error message: 'Cannot query field "pose" on type "Query". Did you mean "node", "paste" or "pastes"?'
[WARNING][2021-03-11 22:52:39 oracle.py:228] Unknown error (Field, typeref): Field "pastes" of type "[PasteObject]" must have a sub selection.
[WARNING][2021-03-11 22:52:41 oracle.py:228] Unknown error (InputValue, name): Field "pastes" of type "[PasteObject]" must have a sub selection.
[WARNING][2021-03-11 22:52:41 oracle.py:228] Unknown error (InputValue, name): Argument "public" has invalid value 7.
Expected type "Boolean", found 7.
[WARNING][2021-03-11 22:52:43 oracle.py:228] Unknown error (InputValue, name): Field "pastes" of type "[PasteObject]" must have a sub selection.
[WARNING][2021-03-11 22:52:43 oracle.py:228] Unknown error (InputValue, name): Field "pastes" of type "[PasteObject]" must have a sub selection.
[WARNING][2021-03-11 22:52:43 oracle.py:228] Unknown error (InputValue, typeref): Field "pastes" of type "[PasteObject]" must have a sub selection.
[WARNING][2021-03-11 22:52:43 oracle.py:228] Unknown error (InputValue, typeref): Field "pastes" of type "[PasteObject]" must have a sub selection.
[WARNING][2021-03-11 22:52:43 oracle.py:228] Unknown error (InputValue, typeref): Argument "public" has invalid value {}.
Expected type "Boolean", found {}.
[WARNING][2021-03-11 22:52:43 oracle.py:228] Unknown error (InputValue, typeref): Field "pastes" of type "[PasteObject]" must have a sub selection.
[WARNING][2021-03-11 22:52:43 oracle.py:228] Unknown error (InputValue, typeref): Argument "public" has invalid value 7.
Expected type "Boolean", found 7.
Traceback (most recent call last):
File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/root/Downloads/clairvoyance/clairvoyance/__main__.py", line 91, in <module>
schema = oracle.clairvoyance(
File "/root/Downloads/clairvoyance/clairvoyance/oracle.py", line 409, in clairvoyance
arg_typeref = probe_arg_typeref(
File "/root/Downloads/clairvoyance/clairvoyance/oracle.py", line 316, in probe_arg_typeref
typeref = probe_typeref(documents, "InputValue", config)
File "/root/Downloads/clairvoyance/clairvoyance/oracle.py", line 290, in probe_typeref
raise Exception(f"Unable to get TypeRef for {documents}")
Exception: Unable to get TypeRef for ['query { pastes(publi: 7) }', 'query { pastes(public: {}) }', 'query { pastes(public: 7) }']
```
| closed | 2021-03-11T22:55:48Z | 2021-09-03T12:19:35Z | https://github.com/nikitastupin/clairvoyance/issues/16 | [
"bug"
] | halfluke | 11 |
jupyter/nbgrader | jupyter | 1,885 | very hard time configuring course_id path and exchange directory | Configuration feature:
We spend a lot of time configuring things that don't always work whenever we set up a new course.
Could you provide a ready-to-use Docker image with all the configurations, for example the exchange directory and so on?
Thanks. | open | 2024-05-10T20:07:53Z | 2024-06-08T10:47:05Z | https://github.com/jupyter/nbgrader/issues/1885 | [] | moctarjallo | 5 |
miguelgrinberg/Flask-Migrate | flask | 218 | Picking up changes to field(s) when running flask migrate command on MySQL | I was trying to modify a relationship table where I forgot to set the primary keys. I discovered that after saving my model, the flask migrate command doesn't pick up changes to the fields.
**Example**
I first had this and executed the migrate/upgrade:
```
user_has_task = db.Table('user_has_task', db.Model.metadata,
    db.Column('user_id', db.Integer, db.ForeignKey('user.id')),
    db.Column('task_id', db.Integer, db.ForeignKey('task.id')),
)
```
Forgot to add the primary keys, so I added them:
```
user_has_task = db.Table('user_has_task', db.Model.metadata,
    db.Column('user_id', db.Integer, db.ForeignKey('user.id'), primary_key=True),
    db.Column('task_id', db.Integer, db.ForeignKey('task.id'), primary_key=True),
)
```
The flask migrate command didn't pick up any changes when those keys were added.
Is there any way that it can detect changes to fields within the columns when running the flask migrate command? I've searched and didn't find a specific answer on whether this is by design or something that should work.
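For reference, Alembic's autogenerate is known not to diff primary-key changes, so such a migration usually has to be written by hand (for example with `op.create_primary_key` in the generated revision). A small illustrative sketch of the MySQL statement a manual migration would run; the helper below is made up for illustration only:

```python
# Illustrative only: builds the ALTER statement a hand-written
# migration would issue on MySQL for the user_has_task table above.
def add_pk_sql(table: str, columns: list[str]) -> str:
    cols = ", ".join(columns)
    return f"ALTER TABLE {table} ADD PRIMARY KEY ({cols});"

print(add_pk_sql("user_has_task", ["user_id", "task_id"]))
```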
MySQL version: 5.7.12 | closed | 2018-08-02T12:50:16Z | 2018-08-03T10:39:54Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/218 | [
"question"
] | melv1n | 2 |
oegedijk/explainerdashboard | plotly | 47 | Layout questions | I built (well, it's highly unfinished) a dashboard motivated by a use case in Predictive Maintenance (https://pm-dashboard-2020.herokuapp.com/). However, I wasn't able to align the (grey background of the) header of my cards (containing the description) with the header of the cards of the built-in plots. Did you set any global configuration there!?
Something similar happened when I included self-built plotly plots - creating them in the main file resulted in a different layout compared with the creation in a separate notebook ...
Plus, I have two wishes regarding the feature input component: 😁
1) Fix the spacing between the variable name and the input field or dropdown menu
2) Allow more than two columns (and a single column as well, I guess) so that the component isn't unnecessarily large if I, for instance, remove the card containing the map
"enhancement"
] | hkoppen | 9 |
explosion/spaCy | nlp | 13489 | phrasematcher attr='LOWER' fails initialization when used in a pipeline |
## How to reproduce the behaviour
Using the example provided in https://spacy.io/usage/processing-pipelines#custom-components-attributes for RESCountriesComponents.
Update the matcher to the following:
self.matcher = PhraseMatcher(nlp.vocab, attr="LOWER") # set the LOWER attrib instead of ORTH
The failure is the following:
ValueError: [E109] Component 'rest_countries' could not be run. Did you forget to call `initialize()`?
File /databricks/python/lib/python3.10/site-packages/spacy/language.py:1049, in Language.__call__(self, text, disable, component_cfg)
1048 try:
-> 1049 doc = proc(doc, **component_cfg.get(name, {})) # type: ignore[call-arg]
1050 except KeyError as e:
1051 # This typically happens if a component is not initialized
File /databricks/python/lib/python3.10/site-packages/spacy/language.py:1052, in Language.__call__(self, text, disable, component_cfg)
1049 doc = proc(doc, **component_cfg.get(name, {})) # type: ignore[call-arg]
1050 except KeyError as e:
1051 # This typically happens if a component is not initialized
-> 1052 raise ValueError(Errors.E109.format(name=name)) from e
1053 except Exception as e:
1054 error_handler(name, proc, [doc], e)
## Your Environment
**spaCy version:** 3.7.2
- **Platform:** Linux-5.15.0-1058-aws-x86_64-with-glibc2.35
- **Python version:** 3.10.12
| closed | 2024-05-14T15:29:43Z | 2024-06-14T00:02:41Z | https://github.com/explosion/spaCy/issues/13489 | [] | larrymccutchan | 3 |
custom-components/pyscript | jupyter | 638 | Impossible to add requirement | I'm trying to add a library to custom_components/pyscript/requirements.txt, but I get an error:
Unable to install package playwright==1.47.0: ERROR: Could not find a version that satisfies the requirement playwright==1.47.0 (from versions: none) ERROR: No matching distribution found for playwright==1.47.0 | open | 2024-09-27T07:45:46Z | 2025-01-21T03:13:35Z | https://github.com/custom-components/pyscript/issues/638 | [] | webtoucher | 2 |
huggingface/transformers | nlp | 36,920 | python_interpreter.py seems not support asyncio.run() | ### System Info
python_interpreter.py does not seem to support asyncio.run().
When I use this code,
<img width="876" alt="Image" src="https://github.com/user-attachments/assets/88356177-bf7c-4cb8-a656-9427e217f589" />
it raises this error:
<img width="1114" alt="Image" src="https://github.com/user-attachments/assets/f54c9b7c-2e82-427c-8fe8-390b36733dc1" />
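As a hedged, generic workaround sketch (not specific to transformers' interpreter): keep `asyncio.run()` out of the tool body by driving the coroutine from a synchronous wrapper. All names below are made up for illustration:

```python
# Expose an async function through a synchronous wrapper so the tool
# itself never calls asyncio.run() inside the restricted interpreter.
import asyncio
import concurrent.futures

async def fetch_answer(query: str) -> str:
    await asyncio.sleep(0)  # stand-in for real async work
    return f"result for {query}"

def fetch_answer_sync(query: str) -> str:
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running in this thread: safe to use asyncio.run().
        return asyncio.run(fetch_answer(query))
    # Already inside a loop: drive the coroutine on a worker thread.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, fetch_answer(query)).result()

print(fetch_answer_sync("ping"))  # -> result for ping
```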
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
some error
### Expected behavior
The code should run in python_interpreter.py when my tool function is defined with asyncio.run(). | open | 2025-03-24T09:51:22Z | 2025-03-24T13:50:42Z | https://github.com/huggingface/transformers/issues/36920 | [
"bug"
] | gdw439 | 1 |
pydata/pandas-datareader | pandas | 22 | 0.15.2 causing problems with pandas.io.data.Options | I finally traced a problem I was having with options downloads to changes made between version 0.15.1 and version 0.15.2. Probably easiest is just to link the question I posed on Stack Overflow, because it shows the behavior: http://stackoverflow.com/questions/29182526/trouble-with-http-request-from-google-compute-engine
Weirdly, in 0.15.2, I was consistently able to get the options data for large cap companies ('aapl', 'ge' were my typical test cases) but not for small cap companies such as 'spwr' or 'ddd'. Not sure what was changed, but it looks to me like it might have to do with the list of expiration dates or with the handling of empty tables given an expiration date. Right now, in any case, if you hit the link shown in my stack trace (http://finance.yahoo.com/q/op?s=SPWR&date=1430438400), there's an empty table for puts and only 1 call. That would be something that's more common for smaller companies, too. The other possibility is that the initial Options object isn't getting good links in the newer version.
That's about all I know about it, but reverting to 0.15.1 seems to have solved the problems I was having.
| closed | 2015-03-22T22:52:49Z | 2015-04-10T01:36:10Z | https://github.com/pydata/pandas-datareader/issues/22 | [] | aisthesis | 9 |
microsoft/qlib | machine-learning | 1,427 | Subprocess leak during RL train using shmem or subproc env type | ## 🐛 Bug Description
I notice that after each iteration a new subprocess is spawned and the old subprocess is not shut down; this only happens for the subproc or shmem env types. This causes a subprocess leak and a memory leak.
I have a sample trainer based on the example to reproduce this issue, but I'm not sure where the subprocess is spawned from and where to shut it down.
## To Reproduce
Steps to reproduce the behavior:
1. Install qlib
2. Run the following script and use `ps -ef` to check the processes:
```
from collections import namedtuple
from typing import Any
from qlib.rl.simulator import Simulator
from typing import Tuple
import numpy as np
from gym import spaces
from qlib.rl.interpreter import StateInterpreter

State = namedtuple("State", ["value", "last_action"])

class SimpleSimulator(Simulator[float, State, float]):
    def __init__(self, initial: float, nsteps: int, **kwargs: Any) -> None:
        super().__init__(initial)
        self.value = initial
        self.last_action = 0.0
        self.remain_steps = nsteps

    def step(self, action: float) -> None:
        assert 0.0 <= action <= self.value
        self.last_action = action
        self.remain_steps -= 1

    def get_state(self) -> State:
        return State(self.value, self.last_action)

    def done(self) -> bool:
        return self.remain_steps == 0

class SimpleStateInterpreter(StateInterpreter[Tuple[float, float], np.ndarray]):
    def interpret(self, state: State) -> np.ndarray:
        # Convert state.value to a 1D Numpy array
        # last_action is not used by agents.
        return np.array([state.value], dtype=np.float32)

    @property
    def observation_space(self) -> spaces.Box:
        return spaces.Box(0, np.inf, shape=(1,), dtype=np.float32)

from qlib.rl.interpreter import ActionInterpreter

class SimpleActionInterpreter(ActionInterpreter[State, int, float]):
    def __init__(self, n_value: int) -> None:
        self.n_value = n_value

    @property
    def action_space(self) -> spaces.Discrete:
        return spaces.Discrete(self.n_value + 1)

    def interpret(self, simulator_state: State, action: int) -> float:
        assert 0 <= action <= self.n_value
        # simulator_state.value is used as the denominator
        return simulator_state.value * (action / self.n_value)

from qlib.rl.reward import Reward

class SimpleReward(Reward[State]):
    def reward(self, simulator_state: State) -> float:
        # Use last_action to calculate reward. This is why it should be in the state.
        rew = simulator_state.last_action / simulator_state.value
        return rew

from typing import List
import torch
from torch import nn
from qlib.rl.order_execution import PPO

class SimpleFullyConnect(nn.Module):
    def __init__(self, dims: List[int]) -> None:
        super().__init__()
        self.dims = [1] + dims
        self.output_dim = dims[-1]
        layers = []
        for in_dim, out_dim in zip(self.dims[:-1], self.dims[1:]):
            layers.append(nn.Linear(in_dim, out_dim))
            layers.append(nn.ReLU())
        self.fc = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)

from torch.utils.data import Dataset

class SimpleDataset(Dataset):
    def __init__(self, positions: List[float]) -> None:
        self.positions = positions

    def __len__(self) -> int:
        return len(self.positions)

    def __getitem__(self, index: int) -> float:
        return self.positions[index]

if __name__ == "__main__":
    reward = SimpleReward()
    state_interpreter = SimpleStateInterpreter()
    action_interpreter = SimpleActionInterpreter(n_value=10)
    policy = PPO(
        network=SimpleFullyConnect(dims=[16, 8]),
        obs_space=state_interpreter.observation_space,
        action_space=action_interpreter.action_space,
        lr=0.01,
    )
    dataset = SimpleDataset(positions=[10.0, 50.0, 100.0])

    from pathlib import Path
    from typing import cast
    from qlib.rl.trainer import Checkpoint, train

    NSTEPS = 10
    trainer_kwargs = {
        "max_iters": 100000,
        "finite_env_type": "shmem",
        "concurrency": 2,
        "callbacks": [Checkpoint(
            dirpath=Path("./test_checkpoints"),
            every_n_iters=1,
            save_latest="copy",
        )],
    }
    vessel_kwargs = {
        "update_kwargs": {"batch_size": 1000, "repeat": 1},
        "episode_per_iter": 1,
    }
    print("Training started")
    train(
        simulator_fn=lambda position: SimpleSimulator(position, NSTEPS),
        state_interpreter=state_interpreter,
        action_interpreter=action_interpreter,
        policy=policy,
        reward=reward,
        initial_states=cast(List[float], SimpleDataset([10.0, 50.0, 100.0])),
        trainer_kwargs=trainer_kwargs,
        vessel_kwargs=vessel_kwargs,
    )
    print("Training finished")
```
## Expected Behavior
The number of processes should be close to "concurrency".
## Screenshot

## Environment
- Qlib version: 0.9.0.99
- Python version: 3.9.15
- OS (`Windows`, `Linux`, `MacOS`): Linux - Ubuntu 22.04
- Commit number (optional, please provide it if you are using the dev version): d8764660dcd870c9288e27e5ea507b0118fed012
Linux
x86_64
Linux-5.15.0-58-generic-x86_64-with-glibc2.35
#64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023
Python version: 3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]
Qlib version: 0.9.0.99
numpy==1.23.5
pandas==1.5.2
scipy==1.10.0
requests==2.28.2
sacred==0.8.2
python-socketio==5.7.2
redis==4.4.2
python-redis-lock==4.0.0
schedule==1.1.0
cvxpy==1.3.0
hyperopt==0.1.2
fire==0.5.0
statsmodels==0.13.5
xlrd==2.0.1
plotly==5.12.0
matplotlib==3.6.3
tables==3.8.0
pyyaml==6.0
mlflow==1.30.0
tqdm==4.64.1
loguru==0.6.0
lightgbm==3.3.4
tornado==6.2
joblib==1.2.0
fire==0.5.0
ruamel.yaml==0.17.21
| closed | 2023-01-28T02:43:16Z | 2025-01-23T05:09:38Z | https://github.com/microsoft/qlib/issues/1427 | [
"bug"
] | chenditc | 3 |
autokey/autokey | automation | 393 | Idea: "Find shortcut by hotkey" - Button | Hi everyone,
in my programming IDE there is a very cool button to find the corresponding shortcut by searching for a key. Might it be possible to have this in AutoKey as well?

| open | 2020-03-23T21:17:10Z | 2024-06-14T10:11:38Z | https://github.com/autokey/autokey/issues/393 | [
"enhancement",
"help-wanted",
"user interface"
] | kolibril13 | 0 |
ageitgey/face_recognition | machine-learning | 898 | Question: Liveness and Running on iOS | Hi there,
I love the approach for this tech. open-source is the way to go.
1) Is there any way to have liveness detection with the facial recognition?
2) As well, is there any approach to create a unique identifier out of a specific face?
3) I'd like to be able to determine if a scan is a "photo" or "spoof" or an actual face. Any thoughts?
4) Has anyone successfully built this library into an iOS application before? How large would the build be if I did such a thing?
I'd like to keep everything open-source, so third-party api's are out of the question for my project. | closed | 2019-08-01T18:47:28Z | 2021-08-26T14:15:16Z | https://github.com/ageitgey/face_recognition/issues/898 | [] | SilentCicero | 2 |
ivy-llc/ivy | numpy | 27,940 | fixed the complex dtype issue at paddle.less_equel | closed | 2024-01-17T12:23:33Z | 2024-01-22T14:23:01Z | https://github.com/ivy-llc/ivy/issues/27940 | [
"Sub Task"
] | samthakur587 | 0 | |
bauerji/flask-pydantic | pydantic | 95 | How to accept both form and json data | Whether the route function can handle two kinds of requests (`application/x-www-form-urlencoded` and `application/json`) at the same time? Hoping I didn't miss something in docs.
| open | 2024-07-26T07:56:30Z | 2024-07-26T07:56:30Z | https://github.com/bauerji/flask-pydantic/issues/95 | [] | moui0 | 0 |
Johnserf-Seed/TikTokDownload | api | 672 | [BUG] Installed f2, but running the command reports that f2 does not exist | 

| closed | 2024-03-04T01:31:42Z | 2024-03-04T01:53:57Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/672 | [] | yang720 | 1 |
PaddlePaddle/PaddleHub | nlp | 1811 | Human keypoint detection: the demo shows 18 points (0-17), but I get 21 points (0-20). Where do the extra points come from? | Reference demo: https://www.paddlepaddle.org.cn/hubdetail?name=openpose_body_estimation&en_category=KeyPointDetection
The result I get: print(hub.Module(name='openpose_body_estimation').predict(image,visualization=True)['candidate'])
[[4.64000000e+02 2.49000000e+02 9.45256233e-01 0.00000000e+00]
[4.60000000e+02 3.50000000e+02 8.65502417e-01 1.00000000e+00]
[3.67000000e+02 3.57000000e+02 8.62659991e-01 2.00000000e+00]
[3.36000000e+02 4.82000000e+02 8.27958167e-01 3.00000000e+00]
[3.52000000e+02 6.05000000e+02 8.47550452e-01 4.00000000e+00]
[3.56000000e+02 6.66000000e+02 2.05879375e-01 5.00000000e+00]
[5.53000000e+02 3.49000000e+02 8.01905394e-01 6.00000000e+00]
[5.59000000e+02 4.96000000e+02 7.97061145e-01 7.00000000e+00]
[5.17000000e+02 6.01000000e+02 8.00015271e-01 8.00000000e+00]
[5.14000000e+02 6.66000000e+02 1.19454056e-01 9.00000000e+00]
[4.18000000e+02 5.46000000e+02 6.97712421e-01 1.00000000e+01]
[2.97000000e+02 5.53000000e+02 4.91368234e-01 1.10000000e+01]
[1.74000000e+02 5.82000000e+02 6.24658287e-01 1.20000000e+01]
[5.29000000e+02 5.43000000e+02 6.85308814e-01 1.30000000e+01]
[6.23000000e+02 5.41000000e+02 5.20035088e-01 1.40000000e+01]
[7.12000000e+02 5.85000000e+02 6.81155205e-01 1.50000000e+01]
[1.58000000e+02 5.88000000e+02 1.22629292e-01 1.60000000e+01]
[4.21000000e+02 2.21000000e+02 9.85191226e-01 1.70000000e+01]
[4.98000000e+02 2.13000000e+02 9.62929130e-01 1.80000000e+01]
[3.60000000e+02 2.45000000e+02 9.11279976e-01 1.90000000e+01]
[5.45000000e+02 2.17000000e+02 9.38231885e-01 2.00000000e+01]]
21 points in total. Which body parts do the extra points mark?
| open | 2022-03-16T09:56:10Z | 2022-03-19T11:52:45Z | https://github.com/PaddlePaddle/PaddleHub/issues/1811 | [] | allenxln | 2 |
ipython/ipython | jupyter | 14,820 | Cannot display a single dot character as Markdown | ```python
from IPython import display
display.display(display.Markdown('.'))
```
```
---------------------------------------------------------------------------
IsADirectoryError Traceback (most recent call last)
<ipython-input-6-0a1e025990e8> in <cell line: 0>()
1 from IPython import display
----> 2 display.display(display.Markdown('.'))
/usr/local/lib/python3.11/dist-packages/IPython/core/display.py in __init__(self, data, url, filename, metadata)
635 self.metadata = {}
636
--> 637 self.reload()
638 self._check_data()
639
/usr/local/lib/python3.11/dist-packages/IPython/core/display.py in reload(self)
660 """Reload the raw data from file or URL."""
661 if self.filename is not None:
--> 662 with open(self.filename, self._read_flags) as f:
663 self.data = f.read()
664 elif self.url is not None:
IsADirectoryError: [Errno 21] Is a directory: '.'
```
This is because [this code](https://github.com/ipython/ipython/blob/1a7363fb2b20691d68c0f8ebdb4b5760d24c3840/IPython/core/display.py#L326-L329) explicitly checks whether the `data` argument to `display.Markdown` is an existing path, and if so, reads that path instead of displaying the text itself.
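For illustration, a minimal pure-Python mirror of that check (an approximation of the `_safe_exists` logic, not IPython's exact code):

```python
# Approximation of the behavior described above: any string that
# names an existing filesystem path is loaded from disk rather than
# rendered as Markdown text.
import os

def treated_as_filename(data: str) -> bool:
    return isinstance(data, str) and os.path.exists(data)

assert treated_as_filename(".")                 # '.' is the current directory
assert not treated_as_filename("just *text*.")  # no such path, so it would be rendered
```

A possible workaround may be to bypass the `Markdown` class entirely, e.g. `display({'text/markdown': '.'}, raw=True)`, though the path check itself still looks worth gating behind an explicit `filename=`.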
This also leads to weird side effects, allowing files to be displayed that one may not intend to display:
<img width="1027" alt="Image" src="https://github.com/user-attachments/assets/7eda18f5-d071-4192-88b1-fa3fb3da3802" />
Is it a good idea to decide whether to read a file or to display the text depending on whether the text resolves to an existing path? Perhaps this should be done with a flag instead, i.e. if the user explicitly passes `filename=`? | open | 2025-03-05T12:56:32Z | 2025-03-05T17:10:33Z | https://github.com/ipython/ipython/issues/14820 | [] | dniku | 0 |
healthchecks/healthchecks | django | 89 | Can't run management commands | Hi,
I tried running the management command `pygmentize` like so:
``` bash
$ cd healthchecks/
$ ./manage.py pygmentize
```
... but this results in the following error:
```
ImportError: No module named management.commands.pygmentize
```
The reason seems to be that neither of the two directories `hc/front/management` and `hc/front/management/commands` has the `__init__.py` token file in it that allows them to be picked up as python modules.
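A sketch of the fix, creating the missing package markers (paths as in the issue):

```python
# Create the package marker files so Python treats the management
# directories as importable packages.
from pathlib import Path

for pkg in ("hc/front/management", "hc/front/management/commands"):
    d = Path(pkg)
    d.mkdir(parents=True, exist_ok=True)
    (d / "__init__.py").touch()

assert Path("hc/front/management/commands/__init__.py").exists()
```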
| closed | 2016-09-30T14:51:21Z | 2016-10-01T14:55:46Z | https://github.com/healthchecks/healthchecks/issues/89 | [] | cdax | 0 |
liangliangyy/DjangoBlog | django | 233 | Encoding error when creating a superuser | Hello, when creating a superuser I get the following:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/home/venv/DjangoBlog/lib/python3.5/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/home/venv/DjangoBlog/lib/python3.5/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/venv/DjangoBlog/lib/python3.5/site-packages/django/core/management/base.py", line 316, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/venv/DjangoBlog/lib/python3.5/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 59, in execute
return super().execute(*args, **options)
File "/home/venv/DjangoBlog/lib/python3.5/site-packages/django/core/management/base.py", line 353, in execute
output = self.handle(*args, **options)
File "/home/venv/DjangoBlog/lib/python3.5/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 112, in handle
username = self.get_input_data(self.username_field, input_msg, default_username)
File "/home/venv/DjangoBlog/lib/python3.5/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 193, in get_input_data
raw_value = input(message)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
The djangoblog database's encoding is already set to utf8mb4, the system is Ubuntu, and the data tables have been migrated. I wonder whether this is caused by Python 3.5.2. Thanks! | closed | 2019-03-21T23:59:19Z | 2019-03-27T02:09:51Z | https://github.com/liangliangyy/DjangoBlog/issues/233 | [] | YipCyun | 4 |
sunscrapers/djoser | rest-api | 676 | "non_field_errors": "Unable to log in with provided credentials." | I'm a new learner in REST. I created an example project using token authentication with Djoser. When I test login with the superuser admin, the token appears OK, but when I log in with the test1 user I created in the admin panel, I get this error:
```python
"non_field_errors": [
    "Unable to log in with provided credentials."
]
```
I tried to add permissions for the test1 user, but nothing seems to happen!

| closed | 2022-06-14T08:25:39Z | 2022-06-16T01:30:47Z | https://github.com/sunscrapers/djoser/issues/676 | [] | kev26 | 0 |
2noise/ChatTTS | python | 559 | About 3-second instant voice cloning: prompt reference audio format bug | I'd like to know what the requirements are for the prompt audio format; an input audio file with a 32 kHz sample rate raises an error:
RuntimeError: Cannot load audio from file: `ffprobe` not found. Please install `ffmpeg` in your system to use non-WAV audio file formats and make sure `ffprobe` is in your PATH.
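As a hedged aside, the error above appears to be about a missing decoder dependency rather than the sample rate itself; a quick check:

```python
# Check whether ffprobe (bundled with ffmpeg) is on PATH; without it,
# non-WAV prompt files cannot be decoded regardless of sample rate.
import shutil

def ffprobe_available() -> bool:
    return shutil.which("ffprobe") is not None

print("ffprobe on PATH:", ffprobe_available())
```

Converting the prompt to WAV first (e.g. `ffmpeg -i in.m4a -ar 24000 -ac 1 out.wav`) may also sidestep the decoder requirement; the 24 kHz target here is an assumption based on the rates reported to work.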
It says the input file is not an audio file.
But inputs with 16 kHz and 24 kHz sample rates generate normally. Besides requiring a sample rate of at least 16 kHz, are there other restrictions on the audio? | closed | 2024-07-10T06:13:50Z | 2024-07-10T06:14:44Z | https://github.com/2noise/ChatTTS/issues/559 | [
"invalid"
] | ZHUHF123 | 1 |
xlwings/xlwings | automation | 1,734 | How to use xlwings to unprotect a workbook? | #### OS (e.g. Windows 10 or macOS Sierra)
Windows 10
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
xlwings 0.24
Excel 2016
#### Describe your issue (incl. Traceback!)
```python
# Your traceback here
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
# Your code here
wb.api.Unprotect(Password=password)  # it doesn't work well (note: `pass` is a reserved keyword in Python and can't be used as a variable name)
``` | closed | 2021-10-16T11:55:48Z | 2021-10-17T15:51:44Z | https://github.com/xlwings/xlwings/issues/1734 | [] | sunday2333 | 1 |
plotly/plotly.py | plotly | 4,164 | implement angleref for scattergl | It seems to me that angleref works fine using px.scatter when using svg, and it does not work when using webgl.
[angleref](https://plotly.com/python/reference/scatter/#scatter-marker-angleref)
Code: fig.update_traces(marker_angleref=<VALUE>, selector=dict(type='scatter'))
However, the scattergl reference page does not document angleref, even though its angle attribute refers to it:
[angle](https://plotly.com/python/reference/scattergl/#scattergl-marker-angle)
Code: fig.update_traces(marker_angle=<VALUE>, selector=dict(type='scattergl'))
Type: angle
Default: 0
Sets the marker angle in respect to `angleref`
Is this a bug? Or perhaps missing because webgl does not support angleref? | open | 2023-04-18T11:55:51Z | 2024-08-12T20:52:56Z | https://github.com/plotly/plotly.py/issues/4164 | [
"feature",
"P3"
] | jrkkfst | 1 |
JaidedAI/EasyOCR | deep-learning | 522 | When I try to read this picture tables influence result, how to remove that influence? |


| closed | 2021-08-25T08:27:56Z | 2021-09-04T09:46:28Z | https://github.com/JaidedAI/EasyOCR/issues/522 | [] | CapitaineNemo | 2 |
coqui-ai/TTS | deep-learning | 3,325 | AttributeError: 'XttsConfig' object has no attribute 'use_d_vector_file' | ### Describe the bug
With the tts_models/multilingual/multi-dataset/xtts_v2 model, after the server starts, calling it raises an error:
AttributeError: 'XttsConfig' object has no attribute 'use_d_vector_file'
### To Reproduce
[2023-11-28 11:48:32,803] ERROR in app: Exception on /api/tts [POST]
Traceback (most recent call last):
File "/Users/robinpeng/PycharmProjects/TTS-0.21.1/venv/lib/python3.9/site-packages/flask/app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
File "/Users/robinpeng/PycharmProjects/TTS-0.21.1/venv/lib/python3.9/site-packages/flask/app.py", line 869, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/robinpeng/PycharmProjects/TTS-0.21.1/venv/lib/python3.9/site-packages/flask/app.py", line 867, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/robinpeng/PycharmProjects/TTS-0.21.1/venv/lib/python3.9/site-packages/flask/app.py", line 852, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/Users/robinpeng/PycharmProjects/TTS-0.21.1/TTS/server/server.py", line 203, in tts
wavs = synthesizer.tts(text, speaker_name=speaker_idx, language_name=language_idx, style_wav=style_wav)
File "/Users/robinpeng/PycharmProjects/TTS-0.21.1/TTS/utils/synthesizer.py", line 304, in tts
if self.tts_config.use_d_vector_file:
File "/Users/robinpeng/PycharmProjects/TTS-0.21.1/venv/lib/python3.9/site-packages/coqpit/coqpit.py", line 626, in __getattribute__
value = super().__getattribute__(arg)
AttributeError: 'XttsConfig' object has no attribute 'use_d_vector_file'
::1 - - [28/Nov/2023 11:48:32] "POST /api/tts HTTP/1.1" 500 -
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.1",
"TTS": "0.20.6",
"numpy": "1.26.2"
},
"System": {
"OS": "Darwin",
"architecture": [
"64bit",
""
],
"processor": "i386",
"python": "3.9.13",
"version": "Darwin Kernel Version 22.3.0: Mon Jan 30 20:39:46 PST 2023; root:xnu-8792.81.3~2/RELEASE_ARM64_T6020"
}
}
```
### Additional context
_No response_ | closed | 2023-11-28T04:19:21Z | 2023-11-29T06:48:09Z | https://github.com/coqui-ai/TTS/issues/3325 | [
"bug"
] | robin977 | 2 |
geopandas/geopandas | pandas | 2,795 | BUG: read_file hangs with a large geopackage via vsicurl and spatial filter | - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
#### Code Sample, a copy-pastable example
Python kernel hangs without much log feedback:
```python
import os
os.environ['USE_PYGEOS'] = '0'
import geopandas as gpd
import fiona
url = 'https://data.pgc.umn.edu/elev/dem/setsm/ArcticDEM/indexes/ArcticDEM_Strip_Index_s2s041_gpkg.gpkg'
# Expose GDAL/OGR Logs
import logging
logging.basicConfig(level=logging.DEBUG)
with fiona.Env(CPL_DEBUG=True):
gf = gpd.read_file(url, bbox=(558734, -1808523, 604294, -1764351), layer='ArcticDEM_Strip_Index_s2s041')
gf.head()
```
#### Problem description
Geopandas hangs when reading a large remote geopackage file (vsicurl) with a spatial filter
#### Expected Output
ogrinfo handles this fairly quickly:
```bash
time ogrinfo -so -ro -spat 558734 -1808523 604294 -1764351 '/vsicurl/https://data.pgc.umn.edu/elev/dem/setsm/ArcticDEM/indexes/ArcticDEM_Strip_Index_s2s041_gpkg.gpkg' ArcticDEM_Strip_Index_s2s041
# Feature Count: 357
# real 0m2.522s
```
Bypassing read_file() and constructing a dataframe also works but is considerably slower
```python
with fiona.Env(CPL_DEBUG=True):
with fiona.open(url, layer='ArcticDEM_Strip_Index_s2s041') as src:
subset = list(src.items(bbox=(558734, -1808523, 604294, -1764351)))
features = [x[1].__geo_interface__ for x in subset]
gf = gpd.GeoDataFrame.from_features(features, crs=src.crs)
#CPU times: user 289 ms, sys: 46.8 ms, total: 335 ms
#Wall time: 7.98 s
```
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.11.0 | packaged by conda-forge | (main, Jan 14 2023, 12:27:40) [GCC 11.3.0]
executable : /home/jovyan/.local/envs/sliderule/bin/python
machine : Linux-5.4.228-131.415.amzn2.x86_64-x86_64-with-glibc2.35
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.11.1
GEOS lib : None
GDAL : 3.6.2
GDAL data dir: /home/jovyan/.local/envs/sliderule/share/gdal
PROJ : 9.1.0
PROJ data dir: /home/jovyan/.local/envs/sliderule/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.12.2
numpy : 1.24.2
pandas : 1.5.3
pyproj : 3.4.1
shapely : 2.0.1
fiona : 1.9.1
geoalchemy2: None
geopy : None
matplotlib : 3.6.3
mapclassify: 2.5.0
pygeos : 0.14
pyogrio : None
psycopg2 : None
pyarrow : 10.0.1
rtree : 1.0.1
</details>
| closed | 2023-02-17T20:26:07Z | 2023-05-10T18:38:09Z | https://github.com/geopandas/geopandas/issues/2795 | [
"bug"
] | scottyhq | 9 |
fastapi-admin/fastapi-admin | fastapi | 28 | Visible data validation in frontend | Hi,
This is more of a conversation and you decide if this is the right place for it or not.
The rest-admin frontend supports ways to visually highlight to a user which fields in a form contain correct data and which do not, which is particularly useful at /login.
The default HTTPException handler in FastAPI does not return the pure JSON that rest-admin expects in validation use cases, but there is an easy way to achieve the same thing without raising an exception.
For example in `async def login()` one can do:
```python
# imports added so the snippet is self-contained
from fastapi import HTTPException, status
from fastapi.responses import JSONResponse
from starlette.status import HTTP_403_FORBIDDEN

usr_pwd_validation_error = {
"name": "HttpException",
"message": [{"field": "password", "message": "Incorrect password."}]
}
user = await get_object_or_404(user_model, username=login_in.username)
if not user.is_active:
raise HTTPException(status_code=HTTP_403_FORBIDDEN, detail="User is not Active!")
if not pwd_context.verify(login_in.password, user.password):
return JSONResponse(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, content=usr_pwd_validation_error)
```
Discussion point:
This won't raise an exception per se, but it returns data telling rest-admin that an HttpException has occurred.
In your opinion, is this the preferred way to feed validation information back to the frontend, or are there better ones?
| closed | 2021-01-08T11:06:31Z | 2021-05-01T12:52:43Z | https://github.com/fastapi-admin/fastapi-admin/issues/28 | [] | swevm | 0 |
andfanilo/streamlit-echarts | streamlit | 58 | Get data from the Parallel: Parallel Aqi | Hi @andfanilo, how are you?
First of all, thank you for your content, you always give me something to think about and improve on.
Can you (or someone else) give an example of how to get data from the "Parallel: Parallel Aqi" when I interact with the graph?
Thank you! | open | 2024-07-30T20:40:15Z | 2024-07-30T20:40:15Z | https://github.com/andfanilo/streamlit-echarts/issues/58 | [] | rodrigokl | 0 |
ClimbsRocks/auto_ml | scikit-learn | 343 | speed up dataframevectorizer - can we parallelize it? | Right now it's a consistent bottleneck. | open | 2017-10-24T21:13:51Z | 2017-10-27T17:55:37Z | https://github.com/ClimbsRocks/auto_ml/issues/343 | [] | ClimbsRocks | 5
ageitgey/face_recognition | machine-learning | 1,425 | How can Go and Java call this project, and can it be called natively on-device when developing an Android app? | * face_recognition version:
* Python version:
* Operating System:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
IMPORTANT: If your issue is related to a specific picture, include it so others can reproduce the issue.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
| open | 2022-07-02T03:24:36Z | 2024-07-21T01:44:33Z | https://github.com/ageitgey/face_recognition/issues/1425 | [] | passerbyo | 2 |
JoeanAmier/XHS-Downloader | api | 57 | Why are the images downloaded from some notes in web format? | Some notes are JPEG, while others are in web format. For example: 94 【这是什么?好乖!看一眼👀 - 光明OvO | 小红书 - 你的生活指南】 😆 GgV07a2iDDGIZlX 😆 http://xhslink.com/m1A1BC | open | 2024-03-01T07:43:06Z | 2024-03-01T07:43:06Z | https://github.com/JoeanAmier/XHS-Downloader/issues/57 | [] | 76563 | 0
long2ice/fastapi-cache | fastapi | 185 | Do not make non-GET requests uncacheable | I believe there's a use-case for allowing some non-GET requests to be cacheable.
Given restrictions on the length of HTTP request URLs, it is sometimes necessary to embed data in the request body instead. Elasticsearch is an example that makes extensive use of GET requests with body data, but this pattern is not possible with FastAPI because OpenAPI/Swagger does not support GET requests with embedded data.
As a result, if you want to use FastAPI and have an endpoint that can accept large payloads (for example, because you want some sort of proxy in front of Elasticsearch, or a similar service), it is necessary to use a non-GET method, e.g. POST. However, fastapi-cache does not cache such queries.
I'd like to propose removing these two lines: https://github.com/long2ice/fastapi-cache/blob/main/fastapi_cache/decorator.py#L82-L83 so that it is up to the user of FastAPI to determine whether a method should be cached.
| open | 2023-05-25T17:49:57Z | 2023-07-18T06:21:59Z | https://github.com/long2ice/fastapi-cache/issues/185 | [] | john-tipper | 1 |
JoshuaC215/agent-service-toolkit | streamlit | 146 | can I use azure openai key? how to set the env? | closed | 2025-01-21T09:48:39Z | 2025-01-22T10:14:39Z | https://github.com/JoshuaC215/agent-service-toolkit/issues/146 | [] | xiaolianlian111 | 1 | |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 328 | Nested maps, tuples, enums don't work | **Describe the bug**
When you nest Tuple(Tuple) or Map(Enum), you get an error.
**To Reproduce**
CREATE TABLE color_map (
id UInt32,
colors Map(Enum('hello' = 1, 'world' = 2), String)
) ENGINE = Memory;
Then try to compile the type.
**Expected behavior**
The compiled type should be Map(Enum, String), but we get an error instead.
**Versions**
0.2, but the code is still wrong in newer versions
python 3.10
| open | 2024-07-23T09:33:11Z | 2024-07-29T13:41:30Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/328 | [] | FraterCRC | 2 |
oegedijk/explainerdashboard | plotly | 56 | Error due to check_additivity | Hi,
I am getting the following error message for the random forest model:
Exception: Additivity check failed in TreeExplainer! Please ensure the data matrix you passed to the explainer is the same shape that the model was trained on. If your data shape is correct then please report this on GitHub. Consider retrying with the feature_perturbation='interventional' option. This check failed because for one of the samples the sum of the SHAP values was 0.626204, while the model output was 0.710000. If this difference is acceptable you can set check_additivity=False to disable this check.
Do you have any suggestion to solve the issue?
Thanks,
Saman | closed | 2021-01-05T01:13:05Z | 2021-01-11T11:43:56Z | https://github.com/oegedijk/explainerdashboard/issues/56 | [] | sparvaneh | 3 |
Textualize/rich | python | 3,234 | [BUG] Colorization of STDERR is unexpectedly disabled when STDOUT is redirected | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
Consider the following code:
```python
from rich.console import Console
_print = Console(highlight=False, stderr=True, no_color=False).print
_print("[bright_red]hello[/]")
```
When running this on **Windows 10, 22H2**, colorization is disabled when the standard output is redirected.
Example:

<details>
<summary>Click to expand</summary>
Rich is running in the standard Windows CMD terminal.
```
$ python -m rich.diagnose
┌────────── <class 'rich.console.Console'> ───────────┐
│ A high level console interface. │
│ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ <console width=55 ColorSystem.WINDOWS> │ │
│ └─────────────────────────────────────────────────┘ │
│ │
│ color_system = 'windows' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper │
│ name='<stdout>' mode='w' │
│ encoding='utf-8'> │
│ height = 9 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = True │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions( │
│ width=55, │
│ height=9 │
│ ), │
│ legacy_windows=True, │
│ min_width=1, │
│ max_width=55, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=9, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions( │
│ width=55, │
│ height=9 │
│ ) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 55 │
└─────────────────────────────────────────────────────┘
┌── <class 'rich._windows.WindowsConsoleFeatures'> ───┐
│ Windows features available. │
│ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ WindowsConsoleFeatures( │ │
│ │ │ vt=False, │ │
│ │ │ truecolor=False │ │
│ │ ) │ │
│ └─────────────────────────────────────────────────┘ │
│ │
│ truecolor = False │
│ vt = False │
└─────────────────────────────────────────────────────┘
┌────── Environment Variables ───────┐
│ { │
│ 'TERM': None, │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
└────────────────────────────────────┘
platform="Windows"
```
```
$ pip freeze | grep rich
rich==13.7.0
```
</details>
| closed | 2023-12-17T09:59:54Z | 2023-12-17T10:05:20Z | https://github.com/Textualize/rich/issues/3234 | [
"Needs triage"
] | huettenhain | 5 |
liangliangyy/DjangoBlog | django | 289 | Add documentation for configuring Elasticsearch search | <!--
If you do not check the items below carefully, I may close your issue directly.
Before asking a question, it is recommended to read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**I confirm that I have checked** (mark `[ ]` as `[x]`)
- [ ] [The DjangoBlog readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [ ] [The configuration guide](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [ ] [Other issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am requesting** (mark `[ ]` as `[x]`)
- [ ] Bug report
- [x] Add a new feature or functionality
- [ ] Request technical support
| closed | 2019-07-05T03:16:45Z | 2020-04-06T08:50:50Z | https://github.com/liangliangyy/DjangoBlog/issues/289 | [] | liangliangyy | 0 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,547 | type_annotation_map doesn't seem to apply to `column_property` | ### Describe the bug
I have a materialized column in my model that I want to replace with a `column_property`. I wrote this:
```python
class ...(....):
...
_validation_status_nhh: Mapped[CumValidationNHH] = mapped_column(name="validation_status_nhh")
@hybrid_property
def validation_status_nhh(self) -> CumValidationNHH:
assert self._validation_status_nhh == self.computed_validation_status_nhh # doesn't assert
assert isinstance(self._validation_status_nhh, CumValidationNHH) # doesn't assert
assert isinstance(
self.computed_validation_status_nhh,
CumValidationNHH,
), repr(self.computed_validation_status_nhh) # this asserts, self.computed_validation_status_nhh is a str
return self._validation_status_nhh
@validation_status_nhh.inplace.expression
def _validation_status_nhh_expression(cls) -> coalesce[CumValidationNHH]:
return _computed_validation_status_nhh(cls.id, cls.calculated_at_utc)
computed_validation_status_nhh: Mapped[CumValidationNHH] = column_property(
_computed_validation_status_nhh(id, calculated_at_utc),
)
```
Apart from this everything works correctly - I even confirmed that `computed_validation_status_nhh` and `validation_status_nhh` are equal for all objects in my database, so I'm certain the implementation of `computed_validation_status_nhh` is correct.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.31
### DBAPI (i.e. the database driver)
asyncpg
### Database Vendor and Major Version
postgres 16
### Python Version
3.12
### Operating system
Linux
### To Reproduce
I don't have a self contained example, but there are no tests covering this either. It should be easy to add a test that combines type_annotation_map and column_property.
### Error
N/A
### Additional context
_No response_ | closed | 2024-06-28T21:57:13Z | 2024-06-29T06:42:42Z | https://github.com/sqlalchemy/sqlalchemy/issues/11547 | [] | tamird | 1 |
biolab/orange3 | numpy | 6,710 | Failed to install Orange on Ubuntu 22.04: Illegal instruction (core dumped) | I tried to install Orange on Ubuntu 22.04 (64-bit) using pip, conda, and Anaconda, but whenever I run `python -m Orange.canvas` the following error appears:
Illegal instruction (core dumped)
I tried both PyQt5 and PyQt6; the welcome page with the orange wearing glasses appears, and right after it the same error occurs.
The Orange version is 3.36.2. I tried 3.36.1 too and the error still appears:
Illegal instruction (core dumped)
What can I do? I want to use Orange on my Linux machine, but it seems like an impossible mission. Please help me!

| closed | 2024-01-19T20:53:19Z | 2024-02-16T11:22:35Z | https://github.com/biolab/orange3/issues/6710 | [] | rogeriomalheiros | 9 |
stanfordnlp/stanza | nlp | 1,240 | constituency parse tree as json | Dear stanza developers,
I'm facing difficulties working with the constituency parser output object: `<class 'stanza.models.constituency.parse_tree.Tree'>`.
When I try to convert it to a dictionary (`dict(doc.sentences[0].constituency)`), then the error says:
```
'Tree' object is not iterable
```
The [docs](https://stanfordnlp.github.io/stanza/data_objects.html#parsetree) do not say anything about how to use `parse_tree` objects for further processing.
__So how can I access the nested structure of this Tree?__
Ideally, the tree is converted to a nested dict, where each key is the POS abbreviation suffixed with a number n corresponding to its n-th occurrence. Is it standard practice to define my own function that parses the string representation to do this?
I'm very surprised that other people haven't faced the same problem, so how completely wrong am I?
Used code:
```py
import stanza
nlp_constituency = stanza.Pipeline(lang='en', processors='tokenize,pos,constituency')
doc = nlp_constituency('This is a test')
print(doc.sentences[0].constituency) # type = stanza.models.constituency.parse_tree.Tree
# result # (ROOT (S (NP (DT This)) (VP (VBZ is) (NP (DT a) (NN test)))))
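# The Tree is not a dict, but it can be walked recursively. Sketch below
# (assumption: the Tree exposes `label` and `children` attributes, which
# stanza's parse_tree.Tree does; verify against your stanza version):
def tree_to_dict(tree):
    children = list(getattr(tree, "children", ()))
    # preterminal: a POS node whose single child is the word itself
    if len(children) == 1 and not list(getattr(children[0], "children", ())):
        return children[0].label
    out, seen = {}, {}
    for child in children:
        n = seen[child.label] = seen.get(child.label, 0) + 1
        out[f"{child.label}_{n}"] = tree_to_dict(child)
    return out
# Under that assumption, tree_to_dict(doc.sentences[0].constituency) gives
# {'S_1': {'NP_1': {'DT_1': 'This'}, 'VP_1': {'VBZ_1': 'is', 'NP_1': {'DT_1': 'a', 'NN_1': 'test'}}}}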
``` | closed | 2023-04-28T06:44:15Z | 2023-05-03T05:44:36Z | https://github.com/stanfordnlp/stanza/issues/1240 | [
"question"
] | runfish5 | 2 |
ddbourgin/numpy-ml | machine-learning | 56 | There is no CRF here? Why | **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version:
- NumPy version:
**Describe the current behavior**
**Describe the expected behavior**
**Code to reproduce the issue**
<!-- Provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
**Other info / logs**
<!-- Include any logs or source code that would be helpful to diagnose the problem.
If including tracebacks, please include the full traceback. Large logs and files should be attached. -->
| open | 2020-07-29T12:49:12Z | 2020-08-02T01:42:23Z | https://github.com/ddbourgin/numpy-ml/issues/56 | [
"model request"
] | yishen-zhao | 1 |
aio-libs/aiomysql | asyncio | 436 | Old-style coroutines used in the readthedocs documentation | Hi! I was going through the documentation and found out that the [restructuredtext](https://github.com/aio-libs/aiomysql/blob/master/docs/index.rst) file still has the old-style coroutines instead of the async/await keywords. Is there a particular reason for that?
If not, I can submit a PR to change the code in the [readthedocs](https://aiomysql.readthedocs.io/en/latest/) to the new async/await syntax. | closed | 2019-09-11T10:26:11Z | 2019-09-11T12:57:38Z | https://github.com/aio-libs/aiomysql/issues/436 | [] | Pradhvan | 1
desec-io/desec-stack | rest-api | 920 | Documentation of API error codes | The docs currently do not contain a lot of information on possible error codes and conditions returned by the API. API clients therefore need to implement error handling based on observations and assumptions, and can not rely on documented behaviour.
It would be nice to have a reference of possible error conditions and HTTP status codes for each endpoint.
Furthermore, I noticed that errors seem to contain JSON data with a human-readable error message in the `detail` field. Is this always the case? At a quick glance, it seems to be. But I could not find any reliable information.
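Until that is documented, clients can only handle errors defensively. Here is a small sketch of what that can look like; since the error shape is unconfirmed, this hypothetical helper falls back gracefully when the body is not JSON or has no `detail` field:

```python
import json

def error_detail(status_code, body_text):
    """Best-effort extraction of a human-readable message from an API error."""
    try:
        data = json.loads(body_text)
    except ValueError:
        # not JSON at all: fall back to the raw body or the status code
        return body_text or f"HTTP {status_code}"
    if isinstance(data, dict) and "detail" in data:
        return str(data["detail"])
    return json.dumps(data)
```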
I believe #359 has the potential to naturally provide this kind of information. In the meantime, human-readable documentation would help. | open | 2024-05-04T12:54:05Z | 2024-05-04T12:54:05Z | https://github.com/desec-io/desec-stack/issues/920 | [] | s-hamann | 0
Gerapy/Gerapy | django | 108 | Problem creating a project under project management | 
When using a domain name, no input box appears.
The version is 0.8.6 beta2; in 0.8.5 this part works normally. | closed | 2019-05-29T10:27:48Z | 2019-11-23T14:33:53Z | https://github.com/Gerapy/Gerapy/issues/108 | [] | WuQianyong | 7
alirezamika/autoscraper | automation | 58 | Website Structure | Hello! Thank you so much for sharing your work!
I wanted to ask: if I trained my model on some website and that website later changes its structure and styling, will it still work? Can I get the same data, or will I need to re-train it? | closed | 2021-04-13T04:40:12Z | 2021-12-01T08:22:49Z | https://github.com/alirezamika/autoscraper/issues/58 | [] | sushidelivery | 2
InstaPy/InstaPy | automation | 6,550 | Unable to locate element | Error Message: Unable to locate element: //div/a/time
Stacktrace:
WebDriverError@chrome://remote/content/shared/webdriver/Errors.jsm:183:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.jsm:395:5
element.find/</<@chrome://remote/content/marionette/element.js:300:16
| open | 2022-03-11T18:16:00Z | 2022-03-11T18:16:00Z | https://github.com/InstaPy/InstaPy/issues/6550 | [] | fudojahic | 0 |
databricks/koalas | pandas | 1,414 | Document that we don't support compatibility with non-Koalas APIs yet. | It seems like people want to convert their code directly from pandas to Koalas. One case I often observe is that they want to convert code that works together with other Python standard functions such as `max`, `min`, or list/generator comprehensions, e.g.:
```python
import pandas as pd
data = []
for a in pd.Series([1, 2, 3]):
data.append(a)
pd.DataFrame(data)
```
In Koalas, such an example does not work. We should preemptively document this and guide users to stick to Koalas APIs only. | closed | 2020-04-09T12:04:17Z | 2020-04-15T10:48:30Z | https://github.com/databricks/koalas/issues/1414 | [
"enhancement"
] | HyukjinKwon | 6 |
google-research/bert | nlp | 489 | Separator token for custom QA input (multi paragraph, longer than 512) | Hello!
I'm trying to extract features for a QA task where the document is composed of multiple disparate paragraphs. So my input is:
question ||| document
where document is {para1 SEP para2 SEP para3 SEP}, so overall, it's something like:
question ||| para1 SEP para2 SEP para3 SEP
My question is: Is it okay to use the default BERT [SEP] token for the paragraph separation token as above? Or should I use something like the NULL token instead, or simply remove the paragraph separation token completely?
Secondly, my input is longer than 512, so I'm thinking of doing sliding windows like:
question ||| doc[:512]
question ||| doc[256:768]
and so on, finally merging the overlaps by averaging. Would this be correct?
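To make the merging step concrete, here is a toy sketch of the windowing and overlap-averaging idea in pure Python. The `encode` callback is a stand-in for a BERT forward pass returning one value per token, and the window/stride sizes are placeholders:

```python
def window_spans(n_tokens, window=512, stride=256):
    # half-open token spans [lo, hi) covering the document with overlap
    spans, start = [], 0
    while True:
        spans.append((start, min(start + window, n_tokens)))
        if start + window >= n_tokens:
            return spans
        start += stride

def merge_windows(n_tokens, spans, encode):
    # average per-token features wherever windows overlap
    sums = [0.0] * n_tokens
    counts = [0] * n_tokens
    for lo, hi in spans:
        feats = encode(lo, hi)  # one feature per token in the window
        for i, v in zip(range(lo, hi), feats):
            sums[i] += v
            counts[i] += 1
    return [s / c for s, c in zip(sums, counts)]
```

Tokens covered by two windows get averaged; tokens covered by only one pass through unchanged.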
Thanks!
| open | 2019-03-10T02:34:34Z | 2019-03-12T20:36:28Z | https://github.com/google-research/bert/issues/489 | [] | bugtig | 1 |
DistrictDataLabs/yellowbrick | scikit-learn | 588 | Move _determine_target_color_type from the Manifold visualizer to the utils package | The Manifold visualizer currently has a function called _determine_target_color_type that can be used by other visualizers, as it determines the type of the target variable. It would be great to move this to the utils package.
| closed | 2018-08-26T18:10:12Z | 2019-01-02T02:49:19Z | https://github.com/DistrictDataLabs/yellowbrick/issues/588 | [
"type: technical debt"
] | pdamodaran | 7 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 310 | ValueError: Attempting to unscale FP16 gradients. | ### Detailed problem description
I want to continue pre-training the model on top of the existing Chinese-LLaMA-Plus-7B.
I first merged the original LLaMA with chinese-llama-plus-lora-7b to obtain Chinese-LLaMA-Plus-7B, and then pre-trained the model following the approach in the [pre-training script](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/%E9%A2%84%E8%AE%AD%E7%BB%83%E8%84%9A%E6%9C%AC). I did not use DeepSpeed, but the run eventually failed with ValueError: Attempting to unscale FP16 gradients.
The torch version is 1.12.0 and the transformers version is 4.28.1.
### Screenshot or log

### Required checks
- [ ] Which model has the problem: LLaMA
- [ ] Issue type:
- Model pre-training
| closed | 2023-05-11T06:14:11Z | 2024-06-19T02:57:05Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/310 | [] | klykq111 | 28 |
miguelgrinberg/flasky | flask | 240 | Can't deploy TensorFlow app on an Apache web server using Flask | Hello,
Based on your book, I am trying to deploy my TensorFlow app on an Apache web server using Flask, but I can't.
**Locally**, there is no problem and it works fine.
But on the web server, it keeps displaying "internal server error".
The below is a simple example.py;

And, example.wsgi is

Through googling, many have suggested that this is due to an **LD_LIBRARY_PATH** problem, but it seems there is no clear solution.
(Locally, print(os.environ['LD_LIBRARY_PATH']) shows "/usr/local/cuda/lib64", but on the web server the program doesn't recognize 'LD_LIBRARY_PATH'.)
I also tried manually setting 'LD_LIBRARY_PATH' within httpd.conf using 'SetEnv', but it is not working.
Do you have any ideas to resolve this issue?
Thanks | closed | 2017-02-14T20:53:33Z | 2018-09-02T01:33:13Z | https://github.com/miguelgrinberg/flasky/issues/240 | [
"question"
] | kiminc66 | 3 |
HIT-SCIR/ltp | nlp | 442 | Dependency parsing result is incomplete | - ltp version 4.0.10
- Both the LTP 4.0 base and small models have this problem
text = '今天天气不错'
seg, hidden = ltp.seg([text])
*[['今天', '天气', '不错']]*
dep = ltp.dep(hidden)
*[[(1, 3, 'ADV'), (2, 3, 'SBV')]]*
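A small sketch makes the expectation explicit: a complete parse assigns a head to every token (including a root arc for the head word), so one can list the token indices that received no head. This assumes each arc is (token_index, head_index, relation), as the output above suggests:

```python
def missing_heads(words, arcs):
    # token indices are 1-based, matching the arcs shown above
    covered = {dependent for dependent, head, rel in arcs}
    return [i + 1 for i in range(len(words)) if (i + 1) not in covered]

print(missing_heads(['今天', '天气', '不错'], [(1, 3, 'ADV'), (2, 3, 'SBV')]))  # [3]
```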
The dep result is missing one entry | closed | 2020-12-01T09:07:44Z | 2020-12-01T11:14:47Z | https://github.com/HIT-SCIR/ltp/issues/442 | [] | MachineWei | 1
fugue-project/fugue | pandas | 9 | What do we do about Modin? | Remove or make it work and testable? | closed | 2020-05-11T00:07:19Z | 2021-12-31T07:49:55Z | https://github.com/fugue-project/fugue/issues/9 | [
"bug"
] | goodwanghan | 5 |
babysor/MockingBird | deep-learning | 387 | Which is better: aishell2 or aidatang_200zh? | As the title says: in terms of data volume, aishell2 has more | open | 2022-02-12T18:08:30Z | 2022-02-13T04:02:13Z | https://github.com/babysor/MockingBird/issues/387 | [] | Perlistan | 1
mwaskom/seaborn | data-science | 3,362 | Histogram plotting not working as `pandas` option `use_inf_as_null` has been removed. | I am currently unable to use `histplot` as it appears that the `pandas` option `use_inf_as_null` has been removed. Error log below.
```
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/seaborn/distributions.py:1438, in histplot(data, x, y, hue, weights, stat, bins, binwidth, binrange, discrete, cumulative, common_bins, common_norm, multiple, element, fill, shrink, kde, kde_kws, line_kws, thresh, pthresh, pmax, cbar, cbar_ax, cbar_kws, palette, hue_order, hue_norm, color, log_scale, legend, ax, **kwargs)
1427 estimate_kws = dict(
1428 stat=stat,
1429 bins=bins,
(...)
1433 cumulative=cumulative,
1434 )
1436 if p.univariate:
-> 1438 p.plot_univariate_histogram(
1439 multiple=multiple,
1440 element=element,
1441 fill=fill,
1442 shrink=shrink,
1443 common_norm=common_norm,
1444 common_bins=common_bins,
1445 kde=kde,
1446 kde_kws=kde_kws,
1447 color=color,
1448 legend=legend,
1449 estimate_kws=estimate_kws,
1450 line_kws=line_kws,
1451 **kwargs,
1452 )
1454 else:
1456 p.plot_bivariate_histogram(
1457 common_bins=common_bins,
1458 common_norm=common_norm,
(...)
1468 **kwargs,
1469 )
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/seaborn/distributions.py:431, in _DistributionPlotter.plot_univariate_histogram(self, multiple, element, fill, common_norm, common_bins, shrink, kde, kde_kws, color, legend, line_kws, estimate_kws, **plot_kws)
428 histograms = {}
430 # Do pre-compute housekeeping related to multiple groups
--> 431 all_data = self.comp_data.dropna()
432 all_weights = all_data.get("weights", None)
434 if set(self.variables) - {"x", "y"}: # Check if we'll have multiple histograms
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/seaborn/_oldcore.py:1119, in VectorPlotter.comp_data(self)
1117 grouped = self.plot_data[var].groupby(self.converters[var], sort=False)
1118 for converter, orig in grouped:
-> 1119 with pd.option_context('mode.use_inf_as_null', True):
1120 orig = orig.dropna()
1121 if var in self.var_levels:
1122 # TODO this should happen in some centralized location
1123 # it is similar to GH2419, but more complicated because
1124 # supporting `order` in categorical plots is tricky
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/pandas/_config/config.py:441, in option_context.__enter__(self)
440 def __enter__(self) -> None:
--> 441 self.undo = [(pat, _get_option(pat, silent=True)) for pat, val in self.ops]
443 for pat, val in self.ops:
444 _set_option(pat, val, silent=True)
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/pandas/_config/config.py:441, in <listcomp>(.0)
440 def __enter__(self) -> None:
--> 441 self.undo = [(pat, _get_option(pat, silent=True)) for pat, val in self.ops]
443 for pat, val in self.ops:
444 _set_option(pat, val, silent=True)
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/pandas/_config/config.py:135, in _get_option(pat, silent)
134 def _get_option(pat: str, silent: bool = False) -> Any:
--> 135 key = _get_single_key(pat, silent)
137 # walk the nested dict
138 root, k = _get_root(key)
File ~/miniconda3/envs/tf/lib/python3.9/site-packages/pandas/_config/config.py:121, in _get_single_key(pat, silent)
119 if not silent:
120 _warn_if_deprecated(pat)
--> 121 raise OptionError(f"No such keys(s): {repr(pat)}")
122 if len(keys) > 1:
123 raise OptionError("Pattern matched multiple keys")
OptionError: "No such keys(s): 'mode.use_inf_as_null'"
``` | closed | 2023-05-11T14:32:34Z | 2023-05-15T23:18:18Z | https://github.com/mwaskom/seaborn/issues/3362 | [] | MattWenham | 1 |
OpenBB-finance/OpenBB | machine-learning | 6,790 | [🕹️] Completed Side Quest: 5 Friends Starred Repositories | ### What side quest or challenge are you solving?
1050 Points 🔥 Get 5 friends to star our repos
### Points
150
### Description
I have completed the side quest by having 5 of my friends star the required repositories. Below are the screenshots from their GitHub profiles as proof.
### Provide proof that you've completed the task




 | closed | 2024-10-16T12:07:41Z | 2024-10-16T20:45:46Z | https://github.com/OpenBB-finance/OpenBB/issues/6790 | [] | chrahman | 4 |
lepture/authlib | django | 145 | OAuth2 client: Support expiring refresh tokens | As annoying as it may be, there are services where the refresh tokens expire after a while (in my particular case they are only valid for 2h which sucks even more :angry:).
It would be nice if the OAuth2 client had better support for this:
- handle `refresh_expires_in` (not sure if a `_at` version exists as well in some cases) in `OAuth2Token` and provide a `is_refresh_expired` method
- in case of autorefresh, provide a callback in case of expired refresh tokens, where one can acquire a new one if possible (this is e.g. the case when using a `client_credentials` grant)
---
FWIW, this is what I did in my application for now:
```python
class RefreshingOAuth2Session(OAuth2Session):
def __init__(self, client_id, client_secret, access_token_url, **kwargs):
super(RefreshingOAuth2Session, self).__init__(
client_id, client_secret, refresh_token_url=access_token_url, **kwargs
)
self.access_token_url = access_token_url
def _is_refresh_token_expired(self):
issued_time = self.token['expires_at'] - self.token['expires_in']
refresh_expires_at = issued_time + self.token['refresh_expires_in']
return refresh_expires_at < time.time()
def refresh_token(
self, url=None, refresh_token=None, body='', auth=None, headers=None, **kwargs
):
assert refresh_token is None or refresh_token == self.token['refresh_token']
if self._is_refresh_token_expired():
self.ensure_token(force=True)
return self.token
return super(RefreshingOAuth2Session, self).refresh_token(
url, refresh_token, body, auth, headers, **kwargs
)
def ensure_token(self, force=False):
"""Retrieve a token if none is available.
Call this before using the session to make sure there is a token,
even if none was provided explicitly (e.g. from a cache).
:param force: Whether to get a new token regardless of an existing one.
"""
if self.token is None or force:
self.fetch_access_token(
self.access_token_url, grant_type='client_credentials'
)
if self.token_updater and self.token:
self.token_updater(self.token)
``` | closed | 2019-08-30T14:45:14Z | 2019-10-08T11:47:09Z | https://github.com/lepture/authlib/issues/145 | [
"client"
] | ThiefMaster | 1 |
christabor/flask_jsondash | flask | 28 | Cache request in a single dashboard render if urls are the same | Would be nice to determine if any of the urls are identical, and if so, only request it once.
| closed | 2016-08-22T21:14:29Z | 2016-10-04T03:28:04Z | https://github.com/christabor/flask_jsondash/issues/28 | [
"enhancement",
"performance"
] | christabor | 1 |
pydantic/FastUI | fastapi | 314 | Feature Request: Use a logo image as the navbar `title` (as an alternative to text) | ## Use Case
When designing a site, I often want to use a logo image as the navbar title (instead of text).
## Current Limitations
Right now, the [Navbar](https://github.com/pydantic/FastUI/blob/97c4f07af723e370039e384d240d5517f60f8062/src/python-fastui/fastui/components/__init__.py#L284) component only supports a `title: str` argument.
This results in a nice text title on the navbar, but in my case I would like to use an image. I can't find a way to do that.

## Current Work-around (not ideal)
I can kind of achieve this by not passing in a `title` to `Navbar()` and then setting the first element of the `start_links` list to be a `Link` with an `Image` component inside it. However, the problem with this work-around is that the image gets hidden inside the hamburger menu when the screen width is small.


## Possible Solution
I propose adding support for specifying an image to use instead of text.
Maybe the `Navbar` constructor could take an `title_image: Image` argument -- an [Image](https://github.com/pydantic/FastUI/blob/97c4f07af723e370039e384d240d5517f60f8062/src/python-fastui/fastui/components/__init__.py#L381) component -- which is used instead of the title, `if title_image is not None`.
I'm sure there are other good ways to achieve this. Maybe there's already a solution/work-around, that I don't know about.
Thanks for your feedback and consideration of this use case. | open | 2024-05-19T16:53:50Z | 2024-05-30T12:50:47Z | https://github.com/pydantic/FastUI/issues/314 | [] | jimkring | 5 |
plotly/dash | data-visualization | 2,354 | How to access an Iframe from an external source without uncaught DOMexception? | I have some other website I own and I want to embed html from that website as Iframes. I want to access some properties of the actual element (such as scroll height) to adjust the Iframe in dash.
But I get a `Uncaught DOMException: Blocked a frame with origin "http://localhost:8050" from accessing a cross-origin frame.`, which is to be expected. In flask there's a way to whitelist other sites, is there a way to do this in Dash?
thank you! | closed | 2022-12-06T04:29:40Z | 2024-07-24T17:00:26Z | https://github.com/plotly/dash/issues/2354 | [] | matthewyangcs | 2 |
TheAlgorithms/Python | python | 12,379 | Find:audio_filters/butterworth_filter.py issure | ### Repository commit
fcf82a1eda21dcf36254a8fcaadc913f6a94c8da
### Python version (python --version)
Python 3.10.6
### Dependencies version (pip freeze)
```
absl-py==2.1.0
astunparse==1.6.3
beautifulsoup4==4.12.3
certifi==2024.8.30
charset-normalizer==3.4.0
contourpy==1.3.0
cycler==0.12.1
dill==0.3.9
dom_toml==2.0.0
domdf-python-tools==3.9.0
fake-useragent==1.5.1
flatbuffers==24.3.25
fonttools==4.54.1
gast==0.6.0
google-pasta==0.2.0
grpcio==1.67.0
h5py==3.12.1
idna==3.10
imageio==2.36.0
joblib==1.4.2
keras==3.6.0
kiwisolver==1.4.7
libclang==18.1.1
lxml==5.3.0
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.9.2
mdurl==0.1.2
ml-dtypes==0.3.2
mpmath==1.3.0
namex==0.0.8
natsort==8.4.0
numpy==1.26.4
oauthlib==3.2.2
opencv-python==4.10.0.84
opt_einsum==3.4.0
optree==0.13.0
packaging==24.1
pandas==2.2.3
patsy==0.5.6
pbr==6.1.0
pillow==11.0.0
pip==24.2
protobuf==4.25.5
psutil==6.1.0
Pygments==2.18.0
pyparsing==3.2.0
python-dateutil==2.9.0.post0
pytz==2024.2
qiskit==1.2.4
qiskit-aer==0.15.1
requests==2.32.3
requests-oauthlib==1.3.1
rich==13.9.2
rustworkx==0.15.1
scikit-learn==1.5.2
scipy==1.14.1
setuptools==74.1.2
six==1.16.0
soupsieve==2.6
sphinx-pyproject==0.3.0
statsmodels==0.14.4
stevedore==5.3.0
symengine==0.13.0
sympy==1.13.3
tensorboard==2.16.2
tensorboard-data-server==0.7.2
tensorflow==2.16.2
tensorflow-io-gcs-filesystem==0.37.1
termcolor==2.5.0
threadpoolctl==3.5.0
tomli==2.0.2
tweepy==4.14.0
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
Werkzeug==3.0.4
wheel==0.44.0
wrapt==1.16.0
xgboost==2.1.1
```
### Expected behavior
- Frequency (frequency): It should be ensured that the frequency is a reasonable positive value and does not exceed the Nyquist frequency (i.e., half of the sampling rate). If the frequency is too high, it may lead to an unstable filter.
- Sampling Rate (samplerate): The sampling rate should be a positive integer and is typically fixed, but it should still be ensured that it is a reasonable value.
- Q Factor (q_factor): The Q factor should be a positive value. Typically, it should not be too small (which would result in a very wide transition band) or too large (which could cause the filter to oscillate or become unstable).
### Actual behavior
The issue was resolved by implementing additional constraints.
```python
from math import cos, sin, sqrt, tau

from audio_filters.iir_filter import IIRFilter


def make_highpass(
    frequency: int,
    samplerate: int,
    q_factor: float = 1 / sqrt(2),
) -> IIRFilter:
    """
    Create a second-order IIR high-pass filter (Butterworth design).

    Parameters:
        frequency (int): Cutoff frequency of the high-pass filter.
        samplerate (int): Sampling rate.
        q_factor (float, optional): Quality factor, defaults to 1 / sqrt(2).

    Returns:
        IIRFilter: The resulting IIR high-pass filter object.

    Raises:
        ValueError: If any input parameter is invalid.
    """
    # Input validation
    if not (isinstance(frequency, int) and frequency > 0):
        raise ValueError("Frequency must be a positive integer.")
    if not (isinstance(samplerate, int) and samplerate > 0):
        raise ValueError("Samplerate must be a positive integer.")
    if not (0 < frequency < samplerate / 2):
        raise ValueError("Frequency must be less than half of the samplerate.")
    if q_factor <= 0:
        raise ValueError("Q factor must be positive.")

    # Intermediate variables
    w0 = tau * frequency / samplerate
    _sin = sin(w0)
    _cos = cos(w0)
    alpha = _sin / (2 * q_factor)

    # Filter coefficients
    b0 = (1 + _cos) / 2
    b1 = -1 - _cos

    a0 = 1 + alpha
    a1 = -2 * _cos
    a2 = 1 - alpha

    # Create and configure the IIR filter object
    filt = IIRFilter(2)
    filt.set_coefficients([a0, a1, a2], [b0, b1, b0])
    return filt


# Example usage
if __name__ == "__main__":
    try:
        filter = make_highpass(1000, 48000)
        print(filter.a_coeffs + filter.b_coeffs)
    except ValueError as e:
        print(f"Error: {e}")
```
I do not know the commit hash of the repo. | open | 2024-11-17T07:04:55Z | 2025-02-08T07:53:02Z | https://github.com/TheAlgorithms/Python/issues/12379 | [
"bug"
] | lighting9999 | 6 |
jina-ai/serve | fastapi | 5,402 | Bind to `host` instead of `default_host` | **Describe the bug**
Flow accepts a `host` parameter because it inherits it from the Client and Gateway, but this is confusing, as shown in #5401 | closed | 2022-11-17T08:59:04Z | 2022-11-21T15:43:42Z | https://github.com/jina-ai/serve/issues/5402 | [
"area/community"
] | JoanFM | 5 |
aio-libs-abandoned/aioredis-py | asyncio | 1,412 | Archive project? | @Andrew-Chen-Wang What's the current state of this repo? Doesn't appear to have been any activity since merging into redis-py, so shall I archive the repo now? Are there any PRs/issues that need to be migrated over or anything? | open | 2022-09-04T13:15:01Z | 2022-11-24T19:12:44Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/1412 | [
"bug"
] | Dreamsorcerer | 10 |
PokemonGoF/PokemonGo-Bot | automation | 5,664 | Some error | I reinstalled Windows 10. After installing Python and running the bot, I got this error (see the last line).
Fetching origin
HEAD is now at fd49544 Merge pull request #5645 from PokemonGoF/dev
From https://github.com/PokemonGoF/PokemonGo-Bot
- branch master -> FETCH_HEAD
Already up-to-date.
Requirement already up-to-date: numpy==1.11.0 in e:\master\lib\site-packages (from -r requirements.txt (line 1))
Requirement already up-to-date: networkx==1.11 in e:\master\lib\site-packages (from -r requirements.txt (line 2))
Obtaining pgoapi from git+https://github.com/pogodevorg/pgoapi.git/@3a02e7416f6924b1bbcbcdde60c10bd247ba8e11#egg=pgoapi (from -r requirements.txt (line 3))
Skipping because already up-to-date.
Complete output from command python setup.py egg_info:
d:\p\python27\Lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'install_requires'
warnings.warn(msg)
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
error: invalid command 'egg_info'
---
Command "python setup.py egg_info" failed with error code 1 in e:\master\src\pgoapi\
| closed | 2016-09-25T06:14:47Z | 2016-09-25T16:45:02Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5664 | [] | avexus | 4 |
Johnserf-Seed/TikTokDownload | api | 703 | WARNING 第 N 次响应内容为空 | 我遇到了 WARNING 第 N 次响应内容为空
按照 QA文档 https://johnserf-seed.github.io/f2/question-answer/qa.html 设置了cookie
cookie: 'douyin.com; xgplayer_user_id=
但还是相应为空
WARNING 第 5 次响应内容为空, 状态码: 200,
URL:https://www.douyin.com/aweme/v1/web/aweme/favorite/?device_platfor
yaml的位置是这样的,不知道是不是我路径写错了没找到。

### Is there a way to read the effective configuration from inside the program? Something that would make it easy to check whether there's a problem with the config file.
Do these labels have to be applied? I'm only asking a question here, not reporting a bug or making a suggestion...

| closed | 2024-04-19T04:41:05Z | 2024-06-28T10:50:57Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/703 | [
"提问(question)",
"已确认(confirmed)"
] | ZX828 | 3 |
itamarst/eliot | numpy | 126 | Message ordering (lacking timestamps) is ambiguous | The action counter and action level don't relate to each other at all. Within an ordered file this doesn't matter, but e.g. in Elasticsearch, where the original order is not preserved, there's no way to know if a child action came before or after a sibling message.
One alternative is to unify the two into a single field `task_level`:
`/1` - first message in top-level task
`/2` - second message in top-level task
`/3/1` - third message, which also happens to start a new action
`/3/2` - child action finishes
`/4` - another message in top-level task
This would be API compatible, probably, but not output format compatible.
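A quick sketch of how the unified field would restore ordering after shuffled storage (the message dicts below are hypothetical):

```python
# Hypothetical messages using the proposed unified task_level field.
messages = [
    {"task_level": [3, 2], "msg": "child action finishes"},
    {"task_level": [1], "msg": "first message in top-level task"},
    {"task_level": [4], "msg": "another message in top-level task"},
    {"task_level": [3, 1], "msg": "third message, starts a new action"},
    {"task_level": [2], "msg": "second message in top-level task"},
]

# Lexicographic comparison of the level lists recovers the original order,
# even when the store (e.g. Elasticsearch) returns them shuffled.
ordered = sorted(messages, key=lambda m: m["task_level"])
print([m["task_level"] for m in ordered])
# [[1], [2], [3, 1], [3, 2], [4]]
```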
| closed | 2014-11-04T20:41:26Z | 2018-09-22T20:59:15Z | https://github.com/itamarst/eliot/issues/126 | [
"bug"
] | itamarst | 0 |
chatanywhere/GPT_API_free | api | 108 | Calling the Chat API from an overseas server returns "Sorry, you have been blocked..." | It was my own mistake, sorry. | closed | 2023-10-14T12:21:00Z | 2023-10-14T12:44:36Z | https://github.com/chatanywhere/GPT_API_free/issues/108 | [] | sunzhuo | 0 |
streamlit/streamlit | python | 9,998 | will streamlit components support different language later? | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Recently I upgraded Streamlit to version 1.41.0 and used the st.date_input component. I found that the month and weekday names display in English, but my users can only read Chinese. Is there any way to display Chinese in Streamlit components?
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | closed | 2024-12-11T02:06:04Z | 2024-12-11T18:44:36Z | https://github.com/streamlit/streamlit/issues/9998 | [
"type:enhancement"
] | phoenixor | 3 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,177 | How to swap in hifigan? | Has anyone successfully swapped in hifigan for better inference performance? | open | 2023-03-20T00:36:26Z | 2023-05-14T15:13:47Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1177 | [] | pmcanneny | 2 |
gradio-app/gradio | deep-learning | 10,725 | Add Callback Handling Capability | **Is your feature request related to a problem? Please describe.**
- i often find myself wanting to rapidly prototype something requiring callbacks , and i've really tried everything to make it work
**Describe the solution you'd like**
-
- It would be really fantastic if Gradio would natively handle callback URLs, maybe with a package.
"pending clarification"
] | Josephrp | 2 |
ExpDev07/coronavirus-tracker-api | fastapi | 142 | We are using this API for COVID-19 Badges |   
Using this data to populate the count information for `cases`, `deaths`, and `recovered` badges so people can easily see the most up to data from readme.md of all these great repos working on `COVID-19`.
url: `https://covid19-badges.herokuapp.com/<confirmed|deaths|recovered>/latest`
repo: `https://github.com/fight-covid19/bagdes`
| closed | 2020-03-22T20:35:16Z | 2020-04-19T18:09:54Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/142 | [
"user-created"
] | codedawi | 2 |
SYSTRAN/faster-whisper | deep-learning | 437 | Install Faster Whisper offline | Good day. I'm trying to use Faster Whisper in a Kaggle competition, but I can't install it offline.
I've downloaded an archive with the latest version, but I get errors like this:
Could not find a version that satisfies the requirement av==10.*
Is there any way to resolve it? | closed | 2023-08-22T09:33:23Z | 2023-09-08T13:19:21Z | https://github.com/SYSTRAN/faster-whisper/issues/437 | [] | Dvk2002 | 5 |
pytest-dev/pytest-xdist | pytest | 597 | Huge performance hit after upgrading from version 1.34.0 to 2.1.0 | I'm not sure how to communicate this issue, but I recently had to install some dev deps, and xdist got upgraded from version 1.34.0 to version 2.1.0 in my codebase. I noticed the tests were running much slower than before, and I was able to pin the slowdown down to the xdist upgrade.
Before, my tests took around 4 minutes to run with `-n 4`; after the upgrade they consistently took around 11 minutes. This is to run about 2200 tests.
Is there anything I can do to help track down this performance issue at the recent version? | closed | 2020-09-08T20:18:17Z | 2020-09-25T01:37:05Z | https://github.com/pytest-dev/pytest-xdist/issues/597 | [] | loop0 | 3 |
gradio-app/gradio | machine-learning | 10,528 | Trigger events through `gradio_client` (Python) | - [x] I have searched to see if a similar issue already exists.
I have an application that displays results in a dataframe. I've added a handler for the `select` event so that when the user clicks on a row it opens a detail view in a new tab. This works fine interactively in the browser.
I've got some automated tests that use the Python `gradio_client` to interact with my application. I need to emulate the behaviour of the user selecting a row in the dataframe. The "use via API" documentation does show the corresponding endpoint, but it appears as if it doesn't take any argument, and I haven't found a way to pass the selected row (and calling it with no arguments fails with `TypeError: 'NoneType' object is not subscriptable`).
Can this use-case be supported by `gradio_client` Python and added to the documentation? | closed | 2025-02-06T16:01:26Z | 2025-02-07T03:31:11Z | https://github.com/gradio-app/gradio/issues/10528 | [] | jeberger | 2 |
yeongpin/cursor-free-vip | automation | 235 | Windows detection as malware | please fix this
 | open | 2025-03-15T06:50:50Z | 2025-03-15T09:05:12Z | https://github.com/yeongpin/cursor-free-vip/issues/235 | [] | jelassiaymen94 | 2 |
plotly/plotly.py | plotly | 4,211 | Typo in docs | In docs/python/figure-factory-subplots.md on line 65 the word "Steamline" should be "Streamline". | closed | 2023-05-15T19:30:57Z | 2024-07-11T14:27:19Z | https://github.com/plotly/plotly.py/issues/4211 | [] | RyanDoesMath | 1 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 29 | can not download mmt16_task1_test.tgz | how to solve it? | closed | 2017-10-26T07:34:49Z | 2017-10-26T15:02:25Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/29 | [] | xumin2501 | 1 |
ultralytics/yolov5 | pytorch | 13,092 | limit the detection of classes in YOLOv5 by manipulating the code | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi,
I want to know how to limit detection to specific classes in YOLOv5.
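To illustrate what I mean, here is a minimal sketch of post-filtering raw detections to an allowed class set (the values are made up; the `[x1, y1, x2, y2, conf, cls]` row layout follows YOLOv5's output convention). I believe `detect.py` also exposes a `--classes` flag (e.g. `--classes 0 2`) for exactly this:

```python
# Sketch: keep only detections whose class id is in an allowed set.
# Each row is [x1, y1, x2, y2, confidence, class_id]; the sample values
# below are hypothetical, not real model output.
detections = [
    [10, 10, 50, 50, 0.9, 0],   # person
    [20, 30, 80, 90, 0.8, 2],   # car
    [15, 25, 40, 60, 0.7, 5],   # bus
]
allowed = {0, 2}  # restrict results to person and car

filtered = [d for d in detections if int(d[5]) in allowed]
print([int(d[5]) for d in filtered])  # [0, 2]
```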
Thank you.
### Additional
_No response_ | closed | 2024-06-14T20:25:11Z | 2024-10-20T19:47:55Z | https://github.com/ultralytics/yolov5/issues/13092 | [
"question",
"Stale"
] | elecani | 14 |
google-research/bert | tensorflow | 478 | Why the attention mask of `from_tensor` is not used? | https://github.com/google-research/bert/blob/ffbda2a1aafe530525212d13194cc84d92ed0313/modeling.py#L524
In this function, it says that
'We don't assume that `from_tensor` is a mask (although it could be). We
don't actually care if we attend *from* padding tokens (only *to* padding)
tokens so we create a tensor of all ones.'
I don't quite get the idea. The final attention will get non-zero embeddings for paddings in the 'query'. That is to say, paddings in the query sequence will also get an attention embedding which does not make sense. Is there any postprocessing that will ignore them?
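To illustrate my reading, here is a small numpy sketch (not the actual BERT code) of attention where only the *to* side is masked — the padding *query* row still produces an ordinary embedding, so something downstream must simply never read it:

```python
import numpy as np

def masked_attention(q, k, v, to_mask):
    # Scores are [from_len, to_len]; the adder mask removes attention *to*
    # padding keys (as in modeling.py), and nothing masks the *from* rows.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = scores + (1.0 - to_mask) * -1e9  # broadcasts over query rows
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs = probs / probs.sum(axis=-1, keepdims=True)
    return probs @ v

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
to_mask = np.array([1.0, 1.0, 1.0, 0.0])  # position 3 is padding

out = masked_attention(q, k, v, to_mask)
# out[3], the padding *query* row, is an ordinary embedding -- nothing
# zeroes it. It just has to be ignored downstream (loss / pooling only
# read the real positions), which seems to be the assumption here.
print(out.shape)  # (4, 8)
```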
| open | 2019-03-05T07:11:09Z | 2021-12-27T09:42:33Z | https://github.com/google-research/bert/issues/478 | [] | haozheji | 1 |
jupyterhub/repo2docker | jupyter | 1,088 | Terminal doesn't activate the Binder environment in JupyterLab | I'm sitting at a workshop at UW and we just realized that when you specify the environment for a repository in Binder, it isn't activated by default when starting a terminal in JupyterLab. Is this a bug? I'm not sure, but seems confusing to people. | open | 2019-06-11T17:12:12Z | 2021-09-20T20:54:30Z | https://github.com/jupyterhub/repo2docker/issues/1088 | [] | choldgraf | 9 |
zihangdai/xlnet | tensorflow | 52 | Pre-training: checkpoint files are not written | Hi,
I was able to train a smaller model from scratch with a v3-8 TPU. However, after the final 100,000 training steps, no checkpoint files were written.
I specified a `gs://model_dir` as `model_dir` parameter, but only the following files are located under this directory:

Last log of the training script:
```bash
I0625 05:33:03.994360 139956438738368 basic_session_run_hooks.py:247] loss = 2.5767722, step = 100000 (266.367 sec)
I0625 05:33:03.995930 139956438738368 tpu_estimator.py:1874] global_step/sec: 3.75421
I0625 05:33:03.996390 139956438738368 tpu_estimator.py:1875] examples/sec: 60.0674
I0625 05:33:04.449701 139956438738368 tpu_estimator.py:545] Stop infeed thread controller
I0625 05:33:04.450102 139956438738368 tpu_estimator.py:392] Shutting down InfeedController thread.
I0625 05:33:04.450336 139955057714944 tpu_estimator.py:387] InfeedController received shutdown signal, stopping.
I0625 05:33:04.450455 139955057714944 tpu_estimator.py:479] Infeed thread finished, shutting down.
I0625 05:33:04.450696 139956438738368 error_handling.py:93] infeed marked as finished
I0625 05:33:04.450809 139956438738368 tpu_estimator.py:549] Stop output thread controller
I0625 05:33:04.450900 139956438738368 tpu_estimator.py:392] Shutting down OutfeedController thread.
I0625 05:33:04.451042 139955049322240 tpu_estimator.py:387] OutfeedController received shutdown signal, stopping.
I0625 05:33:04.451132 139955049322240 tpu_estimator.py:488] Outfeed thread finished, shutting down.
I0625 05:33:04.451303 139956438738368 error_handling.py:93] outfeed marked as finished
I0625 05:33:04.451407 139956438738368 tpu_estimator.py:553] Shutdown TPU system.
I0625 05:33:07.445307 139956438738368 estimator.py:359] Loss for final step: 2.5767722.
I0625 05:33:07.446149 139956438738368 error_handling.py:93] training_loop marked as finished
```
Could you help? Thanks :heart: | open | 2019-06-25T07:25:13Z | 2019-11-09T02:06:33Z | https://github.com/zihangdai/xlnet/issues/52 | [] | stefan-it | 3 |
vimalloc/flask-jwt-extended | flask | 357 | flask cors not working when using @jwt_required in before_request | I am working in Flask. I have an api folder with an `__init__.py`, where I include an authentication.py file:
```python
from flask import Blueprint, current_app
from flask_cors import CORS

api = Blueprint('api', __name__)

from . import authentication, errors
from .v1 import users, admin, shipstation, godaddy
```
In authentication.py I have a before_request method where I check jwt_required; it is called before every method in the api folder's files:
```python
from flask import g, jsonify
from flask_httpauth import HTTPBasicAuth
from . import api
from .errors import unauthorized, forbidden
from ..models import User
from flask_login import current_user, login_user
from flask_jwt_extended import (
    jwt_required,
    jwt_refresh_token_required,
    get_jwt_identity, get_raw_jwt
)
from .. import auth


@api.before_request
#@jwt_required
def before_request():
    u = get_jwt_identity()
    if u:
        user = User.query.filter_by(id=u).first()
    else:
        return forbidden('Not Log In')
    g.current_user = user
    if g.current_user.is_anonymous:
        return forbidden('Not Log In')
    if 'current_user' not in g:
        return forbidden('Current user not found')
```
I have a version 1 API whose folder is named v1, and in v1 I have many files like users.py and admin.py. When I use a JWT token in the header for the API, Flask-CORS does not work for any method in admin or users:
```python
from flask import jsonify, request, current_app, url_for, g, abort
from .. import api
from ...models import ProcessListMaster, Permission, SubProcessListMaster
from ... import db
from .. import authentication
# from ..errors import bad_request, unauthorized, forbidden
from sqlalchemy import text
from ...decorators import admin_required, permission_required, custom_jwt_required
from flask_sqlalchemy import get_debug_queries
from ..errors import bad_request, unauthorized, forbidden, error_happen
from flask_jwt_extended import (
    jwt_required,
    jwt_refresh_token_required,
    get_jwt_identity, get_raw_jwt
)


### Admin Onboarding Section ====================
@api.route('/onboarding-process-list-master', methods=['GET'])
# @custom_jwt_required
@permission_required(Permission.WRITE)
@admin_required(Permission.ADMIN)
def onboarding_process_list():
    # list_masters = ProcessListMaster.query.all()
    # return jsonify([list_master.to_json() for list_master in list_masters])
    page = request.args.get('page', 1, type=int)
    pagination = ProcessListMaster.query.paginate(
        page, per_page=current_app.config['ITEMS_PER_PAGE'],
        error_out=False)
    itemlist = pagination.items
    prev = None
    if pagination.has_prev:
        prev = url_for('api.onboarding_process_list', page=page-1)
    next = None
    if pagination.has_next:
        next = url_for('api.onboarding_process_list', page=page+1)
    return jsonify({
        "Message": "",
        "Status": True,
        "Data": {
            'items': [post.to_json() for post in itemlist],
            'prev': prev,
            'next': next,
            'count': pagination.total
        }})
```
When I am not using jwt_required, Flask-CORS works with no problem.
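For reference, here is a sketch of one cause I suspect (this is an assumption, not confirmed): `before_request` also intercepts the browser's CORS preflight OPTIONS requests, which carry no JWT, so the 403 it returns has no CORS headers and the browser reports a CORS failure. The helper name below is hypothetical:

```python
# Hypothetical helper: decide whether the before_request auth check
# should be skipped for a given HTTP method.
def should_skip_auth(method: str) -> bool:
    # CORS preflight requests (OPTIONS) never carry an Authorization
    # header / JWT; rejecting them with 403 here means the response has
    # no CORS headers and every endpoint looks like a CORS failure.
    return method.upper() == "OPTIONS"


# In the real app this would guard the hook, e.g.:
#
# @api.before_request
# def before_request():
#     if should_skip_auth(request.method):
#         return None  # let Flask-CORS answer the preflight
#     u = get_jwt_identity()
#     ...

print(should_skip_auth("OPTIONS"), should_skip_auth("GET"))  # True False
```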
Please solve the issue as soon as possible. I think it is a big issue, because when I use jwt_required directly on the function it works.
Thanks
Nirob | closed | 2020-09-18T06:31:19Z | 2020-09-29T02:43:51Z | https://github.com/vimalloc/flask-jwt-extended/issues/357 | [] | nirob07 | 1 |
pywinauto/pywinauto | automation | 865 | Problem printing element | ## Expected Behavior
Print out all elements
## Actual Behavior
Only a small part is printed, and an error is thrown:
```python
return bool(IUIA().iuia.CompareElements(self.element, other.element))
_ctypes.COMError: (-2147220991, 'event cannot call any subscribers', (None, None, None, 0, None))
```
## Steps to Reproduce the Problem
1. My automation software is a WPF application
## Short Example of Code to Demonstrate the Problem
```python
def test(self):
wd = self.app[u"放射影像软件"]
wd.print_ctrl_ids(filename="photograph.txt")
if __name__ == '__main__':
# app = Application(backend='uia').start(r"E:\fs\fs.exe")
app = Application(backend="uia").connect(handle=0x20C04)
ph = Photograph(app)
ph.test()
```
## Specifications
- Pywinauto version: 0.6.7
- Python version and bitness: 3.7.4
- Platform and OS: Windows 10 home
| open | 2019-12-20T03:55:47Z | 2021-04-04T12:04:34Z | https://github.com/pywinauto/pywinauto/issues/865 | [
"bug",
"refactoring_critical"
] | xiyangyang1230 | 7 |
ivy-llc/ivy | pytorch | 28,694 | Fix Frontend Failing Test: tensorflow - general_functions.tensorflow.rank | To-do List: https://github.com/unifyai/ivy/issues/27499 | closed | 2024-03-26T21:26:20Z | 2024-04-02T09:42:51Z | https://github.com/ivy-llc/ivy/issues/28694 | [
"Sub Task"
] | ZJay07 | 0 |
OpenGeoscience/geonotebook | jupyter | 83 | Basemap does not change on refresh | If I start a kernel with a certain basemap, then change the basemap in the geonotebook.ini config, then refresh - the new basemap is not displayed. This has to do with the basemap only being set on initialization and will require a refactor to fix.
@jbeezley FYI | closed | 2017-01-30T16:05:50Z | 2017-02-06T17:40:25Z | https://github.com/OpenGeoscience/geonotebook/issues/83 | [] | kotfic | 2 |