Dataset schema (field: type, range):

- repo_name: string (length 9 to 75)
- topic: string (30 classes)
- issue_number: int64 (1 to 203k)
- title: string (length 1 to 976)
- body: string (length 0 to 254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38 to 105)
- labels: list (length 0 to 9)
- user_login: string (length 1 to 39)
- comments_count: int64 (0 to 452)
pyro-ppl/numpyro
numpy
1,155
Improved Error Message for Incorrect Sample Shape
I have a jax array of D parameters for D independent Bernoulli distributions. I wanted to draw N samples from each of the D Bernoulli distributions, but when I tried the following code, I received the error `unsupported operand type(s) for +: 'int' and 'tuple'`.

```python
with numpyro.plate('data', num_obs):
    indicators = numpyro.sample(
        'indicators',
        numpyro.distributions.Bernoulli(probs=sticks),
        sample_shape=num_obs)
```

A better error message would clarify what's wrong with the `num_obs` variable that I'm passing to the function.
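The likely root cause is that `sample_shape` must be a tuple: the library concatenates it with the distribution's batch shape, so passing a bare int produces exactly this `int + tuple` TypeError. A minimal sketch of that shape arithmetic (an illustration, not numpyro's actual internals):

```python
def expand_shape(sample_shape, batch_shape):
    # Shape concatenation in the style sampling code uses:
    # both operands must be tuples for `+` to mean "concatenate".
    return sample_shape + batch_shape

# tuple + tuple concatenates as intended
print(expand_shape((5,), (3,)))  # (5, 3)

# a bare int reproduces the reported error
try:
    expand_shape(5, (3,))
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'int' and 'tuple'
```

So writing `sample_shape=(num_obs,)` instead of `sample_shape=num_obs` should avoid the error, though a clearer message at the call site would still help.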
closed
2021-09-13T03:42:06Z
2021-09-24T14:18:34Z
https://github.com/pyro-ppl/numpyro/issues/1155
[ "documentation" ]
RylanSchaeffer
2
Morizeyao/GPT2-Chinese
nlp
38
Wrong argparse type for the gradient_accumulation argument
**Cause of the error** In the `train.py` script, line 57 sets the type of the `gradient_accumulation` argument on the `parser` to `str`: > parser.add_argument('--gradient_accumulation', default=1, type=str, required=False, help='gradient accumulation') This causes a type error when line 139 performs the division, since a `str` cannot be divided: > total_steps = int(full_len / stride * epochs / batch_size / gradient_accumulation) **Suggested fix** On line 57, set the type of the `gradient_accumulation` argument to `int`.
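The suggested fix can be sketched as follows; the values of `full_len`, `stride`, `epochs` and `batch_size` below are placeholders, not the script's real ones:

```python
import argparse

parser = argparse.ArgumentParser()
# type=int (instead of type=str) makes the later division valid
parser.add_argument('--gradient_accumulation', default=1, type=int,
                    required=False, help='gradient accumulation')
args = parser.parse_args(['--gradient_accumulation', '4'])

# placeholder values standing in for the training script's real ones
full_len, stride, epochs, batch_size = 100000, 512, 5, 8
total_steps = int(full_len / stride * epochs / batch_size / args.gradient_accumulation)
print(total_steps)  # 30
```

With `type=str`, the same expression raises `TypeError: unsupported operand type(s) for /: 'float' and 'str'`.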
closed
2019-08-27T09:44:47Z
2019-08-28T06:36:53Z
https://github.com/Morizeyao/GPT2-Chinese/issues/38
[]
xinfeng1i
3
ultralytics/ultralytics
python
19,261
Assign tracker ID with unique ID
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question Hello. Thanks for creating such a rich Ultralytics library. I want to know: is it possible to use a unique ID instead of an incremental one for tracking? I did some modifying, but I can't get good output and have some bugs. Is there any sample? ### Additional _No response_
open
2025-02-15T16:37:33Z
2025-02-15T22:51:55Z
https://github.com/ultralytics/ultralytics/issues/19261
[ "question", "track" ]
erfansafaie
4
ExpDev07/coronavirus-tracker-api
fastapi
168
Recovered number always zero
As I said, the 'recovered' field is always 0, in all countries.
closed
2020-03-24T16:49:54Z
2020-03-24T19:10:22Z
https://github.com/ExpDev07/coronavirus-tracker-api/issues/168
[ "duplicate" ]
tonjo
1
paperless-ngx/paperless-ngx
machine-learning
8,828
Docker image / Many duplicate files under the language assets directories
### Description I came across this by accident while looking in the v2.14.x docker image. Nothing important nor urgent, but it could help to easily reclaim 100M+ in the docker image. Under `/usr/src/paperless/src/documents/static/frontend` there are multiple files for each language that are repeated, even though they are identical (same hashes), and they are not symlinks. Every language contains an `assets/js/` directory of 1.8M with this data:

```
376K pdf.min.mjs
1.4M pdf.worker.min.mjs
```

The size by itself is not much, but being duplicated for the 30+ languages suddenly makes it more significant, for a total of 57M for those 2 files. A quick check with find and shasum will confirm all of these files are identical (cf **Steps to reproduce**). The same situation exists under `/usr/src/paperless/static/frontend/*/assets/js/`. A lot of symlinks are present there, but not for the following files, even though they are identical between the languages:

```
 92K en-US/assets/js/pdf.min.mjs.br
108K en-US/assets/js/pdf.min.mjs.gz
328K en-US/assets/js/pdf.worker.min.mjs.br
404K en-US/assets/js/pdf.worker.min.mjs.gz
```

which are around 0.95M in size, for a total of 31M. I am also unsure about the pertinence of the .gz (gzip) and .br (brotli?) files in the same location, directly under each language directory. They are not symlinks, but seem to be the compressed versions of the already existing files, which are themselves symlinks to their counterparts under `/usr/src/paperless/src/documents/static/frontend`. For example:

```
/usr/src/paperless/static/frontend/sv-SE/
    manifest.webmanifest -> /usr/src/paperless/src/documents/static/frontend/sv-SE/manifest.webmanifest
    manifest.webmanifest.gz
    manifest.webmanifest.br
```

The gz content matches the linked and uncompressed file. I didn't check the .br, but I assume it is similar.
(This said, I am not sure about the pertinence of having both gzipped and brotli'd files present when they are created 30+ times in the image; maybe only one type is needed.) I don't know what is possible here, but as a suggestion: have a tmpl-TMPL/ directory containing the common files and directories, with every file that has no language variation linked to it, and only the tmpl-TMPL directory under `/usr/src/paperless/static/frontend/` linking to its tmpl counterpart under `/usr/src/paperless/src/documents/static/frontend/`. It would be much easier to maintain, as both directories share only a part of the same content. ### Steps to reproduce Open a prompt in the paperless-ngx docker image.

```
# -type f will filter out any links
$ find /usr/src/paperless/src/documents/static/frontend -type f -iname "pdf*.min.mjs" -exec shasum "{}" \;
bfbbfcd8acb15959d20a14251efa33e93db8482f  /usr/src/paperless/src/documents/static/frontend/es-ES/assets/js/pdf.worker.min.mjs
36019a6c68f55a241bca0ddc2bda27db7cee6d1d  /usr/src/paperless/src/documents/static/frontend/es-ES/assets/js/pdf.min.mjs
bfbbfcd8acb15959d20a14251efa33e93db8482f  /usr/src/paperless/src/documents/static/frontend/tr-TR/assets/js/pdf.worker.min.mjs
36019a6c68f55a241bca0ddc2bda27db7cee6d1d  /usr/src/paperless/src/documents/static/frontend/tr-TR/assets/js/pdf.min.mjs
bfbbfcd8acb15959d20a14251efa33e93db8482f  /usr/src/paperless/src/documents/static/frontend/af-ZA/assets/js/pdf.worker.min.mjs
36019a6c68f55a241bca0ddc2bda27db7cee6d1d  /usr/src/paperless/src/documents/static/frontend/af-ZA/assets/js/pdf.min.mjs
(...)

$ du -ch /usr/src/paperless/src/documents/static/frontend/*/assets/js/pdf*.min.*
(...)
57M	total
```

Do the same for the `pdf*.min.mjs*` files under `/usr/src/paperless/static/frontend/`. The result is around 31M. None of them are symlinks.
### Webserver logs ```bash Working fine, thanks for maintaining Paperless-ngx ``` ### Browser logs ```bash N/A ``` ### Paperless-ngx version 2.14.x ### Host OS Ubuntu 22.04 ### Installation method Docker - official image ### System status ```json N/A ``` ### Browser N/A ### Configuration changes N/A ### Please confirm the following - [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation. - [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. - [x] I have already searched for relevant existing issues and discussions before opening this report. - [x] I have updated the title field above with a concise description.
closed
2025-01-20T11:44:21Z
2025-02-20T03:07:47Z
https://github.com/paperless-ngx/paperless-ngx/issues/8828
[ "not a bug" ]
Daryes
2
davidteather/TikTok-Api
api
130
[FEATURE_REQUEST] - Does the current release support http proxies?
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
closed
2020-06-07T21:24:35Z
2020-06-10T21:52:42Z
https://github.com/davidteather/TikTok-Api/issues/130
[ "feature_request" ]
dj2ball
3
hankcs/HanLP
nlp
1,850
"not enough values to unpack (expected 4, got 3)" runtime error from HanLP(['XXXXX'])
<!-- Thanks for finding a bug; please fill in the form below carefully: --> **Describe the bug** A bug occurred when running the SDP semantic dependency parsing model locally. **Code to reproduce the issue**

```python
import hanlp
import torch
xx = hanlp.pretrained.sdp.SEMEVAL15_PSD_BIAFFINE_EN
HanLP = hanlp.load(xx, devices=torch.device('cpu'))
HanLP(['abc def ghk'])
```

**Describe the current behavior**

```
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
d:\Projects\vscode\oj\oj1.ipynb Cell 3 line 6
      4 xx = hanlp.pretrained.sdp.SEMEVAL15_PSD_BIAFFINE_EN
      5 HanLP = hanlp.load(xx, devices=torch.device('cpu'))
----> 6 HanLP(['abc def ghk'])

File c:\Users\dongyuwu\.conda\envs\hanlp\lib\site-packages\hanlp\common\component.py:36, in Component.__call__(self, *args, **kwargs)
     25 def __call__(self, *args, **kwargs):
     26     """
     27     A shortcut for :func:`~hanlp.common.component.predict`.
     28     (...)
     34
     35     """
---> 36     return self.predict(*args, **kwargs)

File c:\Users\dongyuwu\.conda\envs\hanlp\lib\site-packages\hanlp\common\keras_component.py:479, in KerasComponent.predict(self, data, batch_size, **kwargs)
    477 data_is_list = isinstance(data, list)
    478 print(dataset)
--> 479 for idx, batch in enumerate(dataset):
    480     samples_in_batch = tf.shape(
    481         batch[-1] if isinstance(batch[-1], tf.Tensor) else batch[-1][0])[0]
    482     if data_is_list:
...
ValueError: not enough values to unpack (expected 4, got 3)
	 [[{{node PyFunc}}]] [Op:IteratorGetNext] name:
```

**Expected behavior** A clear and concise description of what you expected to happen. **System information** - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): - Python version: 3.8.18 - HanLP version: 2.1.0b51 **Other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. * [x] I've completed this form and searched the web for solutions. <!-- ⬆️ This box must be checked, or your issue will be automatically deleted by the bot! -->
closed
2023-10-14T08:18:18Z
2023-10-15T05:37:51Z
https://github.com/hankcs/HanLP/issues/1850
[ "invalid" ]
1558359609
1
neuml/txtai
nlp
131
Multi-labels classification ?
This tutorial seems to be a single-label classification case. https://github.com/neuml/txtai/blob/master/examples/07_Apply_labels_with_zero_shot_classification.ipynb Is it possible to have multilabel classification?
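The underlying distinction is in how scores are normalized: single-label zero-shot classification applies a softmax across all candidate labels (scores sum to 1), while multi-label scores each label independently with a sigmoid, so any number of labels can apply. Hugging Face zero-shot pipelines typically expose this as a `multi_label` flag; whether txtai's Labels wrapper forwards it is exactly the question here. A minimal sketch of the distinction, in plain Python:

```python
import math

def softmax(scores):
    # single-label: one distribution over all labels, values sum to 1
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def independent_sigmoid(scores):
    # multi-label: each label judged on its own, no sum constraint
    return [1 / (1 + math.exp(-s)) for s in scores]

scores = [2.0, 1.5, -0.5]
print(sum(softmax(scores)))        # approximately 1.0
print(independent_sigmoid(scores)) # each value independently in (0, 1)
```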
closed
2021-10-28T22:16:44Z
2021-11-02T15:46:29Z
https://github.com/neuml/txtai/issues/131
[]
PetricaR
3
biolab/orange3
numpy
6,009
Allow vertical minimization of Paint Data window
The Paint Data widget prevents me from making its window really small. I can resize it as much as I want horizontally, but vertically it resists. ![image](https://user-images.githubusercontent.com/726604/172704742-9ba55889-0b78-4baf-83b1-4c6c8df408e8.png) For instance, I can make the Scatter Plot window much smaller, in both directions. ![image](https://user-images.githubusercontent.com/726604/172707164-a79bc01a-bf27-4322-919d-c33740a7b7fe.png) **What's your use case?** The feature is needed if one wants to showcase the effects of different painted data and needs space on the screen for other widgets. **What's your proposed solution?** Allow vertical minimization of the window. **Are there any alternative solutions?** No. :).
closed
2022-06-08T19:53:16Z
2022-06-10T13:41:11Z
https://github.com/biolab/orange3/issues/6009
[]
BlazZupan
0
nonebot/nonebot2
fastapi
3,038
Plugin: Setu image plugin
### PyPI project name nonebot-plugin-picsetu ### Plugin import package name nonebot_plugin_picsetu ### Tags [] ### Plugin configuration options _No response_
closed
2024-10-19T13:48:44Z
2024-10-20T02:19:51Z
https://github.com/nonebot/nonebot2/issues/3038
[ "Plugin" ]
zhongwen-4
2
OpenInterpreter/open-interpreter
python
1,571
Is it possible to use Gitee AI's model API?
### Is your feature request related to a problem? Please describe. This is the Gitee AI platform, which belongs to the OSChina project. We are currently actively expanding our partner network. As we understand it, your product can be integrated with our Serverless API offering, and we can provide you with a detailed integration guide. We believe that through cooperation we can jointly improve the AI experience for users. Here is the corresponding documentation: [//ai.gitee.com/docs/openapi/serverless](https://ai.gitee.com/docs/openapi/serverless) It can also be accessed in an OpenAI-compatible way: [//ai.gitee.com/docs/openapi/v1](https://ai.gitee.com/docs/openapi/v1) [We would like to confirm our interest in a partnership with your product. If you are interested in cooperating, you can add me on WeChat.] ![产品合作伙伴-马建仓小助手](https://github.com/user-attachments/assets/cd9e06de-2461-44ab-882a-37008a4192f9) Partnership benefits include: Industry partner showcase wall: [https://ai.gitee.com/partnerFuture](https://ai.gitee.com/partner) 1-on-1 technical support. Looking forward to your reply!
open
2024-12-20T09:19:15Z
2024-12-20T09:19:51Z
https://github.com/OpenInterpreter/open-interpreter/issues/1571
[]
pittosporum1
0
polakowo/vectorbt
data-visualization
98
Getting the timestamp of a sell order
Is there a simple method to retrieve the sell order date? Currently, I'm doing this:

```python
import yfinance as yf
import numpy as np
import pandas as pd
import vectorbt as vbt

price = yf.Ticker('BTC-USD').history(period='max')['Close']
size = pd.Series.vbt.empty_like(price, 0.)
size.iloc[0] = np.inf  # go all in
portfolio = vbt.Portfolio.from_orders(price, size, init_cash=100.)
print(portfolio.total_profit())

fast_ma = vbt.MA.run(price, 10)
slow_ma = vbt.MA.run(price, 50)
entries = fast_ma.ma_above(slow_ma, crossed=True)
exits = fast_ma.ma_below(slow_ma, crossed=True)
portfolio = vbt.Portfolio.from_signals(price, entries, exits, size=np.inf, init_cash=100.)
print(portfolio.total_profit())

sale = portfolio.orders.sell
saleVal = sale.values
print(saleVal)
```

This gives me the sell orders, and then I locate the price idx:

```python
i = np.where(saleVal['id'] == 3)
t = saleVal[i[0][0]][1]
```

and from there I finally get the datestamp:

```python
price.axes[0][t]
```

Thanks.
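The final step above boils down to mapping an integer row position from an order record back to the series' DatetimeIndex. Sketched in isolation with toy data (not vectorbt's API, just plain pandas):

```python
import pandas as pd

# toy price series with a DatetimeIndex, standing in for the yfinance data
price = pd.Series([100.0, 101.5, 99.8],
                  index=pd.date_range('2021-02-01', periods=3, freq='D'))

row_pos = 1  # integer position taken from an order record's idx field
timestamp = price.index[row_pos]
print(timestamp)  # 2021-02-02 00:00:00
```

`price.index[t]` is equivalent to the `price.axes[0][t]` used above, since `axes[0]` of a Series is its index.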
closed
2021-02-08T15:58:13Z
2021-02-08T16:51:29Z
https://github.com/polakowo/vectorbt/issues/98
[]
ben1628
2
iMerica/dj-rest-auth
rest-api
489
PasswordResetConfirm view does not include UID and Token values in the corresponding fields in the browsable API
After the email is sent, the PasswordResetConfirmView view of the `dj-rest-auth` package, the `uidb64` and `token` values are not passed automatically to the Token and UID fields in the Browsable API as I see for example in the demo example. I'm using Django version 4.1.4, and my `urls.py` code is as follows: ```python from django.contrib import admin from django.urls import include, path from dj_rest_auth.views import PasswordResetConfirmView urlpatterns = [ path('admin/', admin.site.urls), path('dj-rest-auth/', include('dj_rest_auth.urls')), path('dj-rest-auth/registration/', include('dj_rest_auth.registration.urls')), path('password-reset/confirm/<uidb64>/<token>/', PasswordResetConfirmView.as_view(), name='password_reset_confirm'), ] ``` Thank you for your help!
open
2023-02-22T16:19:00Z
2024-01-02T12:46:27Z
https://github.com/iMerica/dj-rest-auth/issues/489
[]
rochdikhalid
1
peerchemist/finta
pandas
72
Relative Strength Index (RSI) calculation
First of all, many thanks for this nice python library. I am using version 0.4.4. I computed the RSI on data fetched using `ccxt`, and compared it with the RSI shown on cryptowat.ch. Unfortunately, the RSI calculated by finta did not match the RSI calculated by cryptowat.ch. After investigation, I realized that the way the exponential moving average (EMA) is calculated is different. In finta, the EMA is computed with `alpha = 2 / (1 + period)` (i.e. the parameter `span` in https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html ). In cryptowat.ch, the EMA seems to be computed with `alpha = 1 / period`. As an example, I took the BTC/USDT symbol from binance. https://cryptowat.ch/charts/BINANCE:BTC-USDT?period=1h To test my assumption, I wrote a script for plotting different RSI values (requires ccxt, finta, mplfinance and pandas).

```python
#!/usr/bin/env python3
import ccxt
from finta import TA
from datetime import datetime
import mplfinance as mpf
import pandas as pd

# https://cryptowat.ch/charts/BINANCE:BTC-USDT?period=1h
exchange_id = "binance"
symbol_id = "BTC/USDT"
timeframe = "1h"
period = 14

def fetch(exchange_id, symbol_id, timeframe):
    exchange = getattr(ccxt, exchange_id)()
    data = exchange.fetch_ohlcv(symbol_id, timeframe=timeframe)
    columns = ["date", "open", "high", "low", "close", "volume"]
    df = pd.DataFrame.from_records(data, index=["date"], columns=columns)
    df.index = pd.to_datetime(df.index, unit="ms")
    return df

def my_RSI(df, period):
    delta = df["close"].diff()
    up, down = delta.copy(), delta.copy()
    up[up < 0] = 0
    down[down > 0] = 0
    _gain = up.ewm(alpha=1.0 / period).mean()
    _loss = down.abs().ewm(alpha=1.0 / period).mean()
    rs = _gain / _loss
    return pd.Series(100.0 - (100.0 / (1.0 + rs)), name=f"{period} period RSI")

def plot(df):
    plots = [
        mpf.make_addplot(df["TA_RSI_1"], color="r", panel="lower"),
        mpf.make_addplot(df["TA_RSI_2"], color="g", panel="lower"),
        mpf.make_addplot(df["my_RSI"], color="b", panel="lower"),
    ]
    mpf.plot(df, type="candle", addplot=plots)

df = fetch(exchange_id, symbol_id, timeframe)
df["TA_RSI_1"] = TA.RSI(df, period)
df["TA_RSI_2"] = TA.RSI(df, 2 * period - 1)
df["my_RSI"] = my_RSI(df, period)
print(df.tail(20))
plot(df.tail(100))
```

The script above outputs the OHLC columns, plus: - `TA_RSI_1`: the RSI calculated by finta with `span = period`. - `TA_RSI_2`: the RSI calculated by finta with `span = 2 * period - 1` (should be equivalent to setting `alpha = 1/period`). - `my_RSI`: the RSI calculated by my function `my_RSI` that computes the EMA with `alpha = 1/period`. Here is the resulting table:

```
                        open     high      low    close       volume   TA_RSI_1   TA_RSI_2     my_RSI
date
...
2020-07-11 20:00:00  9205.10  9217.24  9201.55  9216.86   635.630615  45.234820  46.166655  46.166655
2020-07-11 21:00:00  9216.86  9224.68  9207.74  9220.21   436.291969  47.639032  47.260102  47.260102
2020-07-11 22:00:00  9220.19  9236.11  9210.36  9218.41   835.240810  46.376785  46.711094  46.711094
2020-07-11 23:00:00  9218.42  9253.00  9218.02  9234.03   718.692534  57.620173  51.929694  51.929694
2020-07-12 00:00:00  9234.02  9268.52  9230.61  9250.99   994.539305  66.436767  56.868698  56.868698
2020-07-12 01:00:00  9250.99  9294.00  9243.48  9268.56  1859.797089  73.120963  61.304329  61.304329
2020-07-12 02:00:00  9268.56  9280.01  9260.09  9270.16   761.173114  73.671899  61.690696  61.690696
2020-07-12 03:00:00  9270.36  9284.15  9266.51  9272.47   858.523274  74.541190  62.276332  62.276332
2020-07-12 04:00:00  9272.47  9286.69  9261.22  9269.11   788.583932  70.627428  60.819928  60.819928
2020-07-12 05:00:00  9269.17  9277.21  9253.48  9264.84   344.009624  65.578534  58.933693  58.933693
```

The chart on cryptowat.ch shows the following: ![binance-btcusdt](https://user-images.githubusercontent.com/645647/87239906-e790c200-c446-11ea-95aa-27c507a40c7d.png) The bottom panel contains the curve of the RSI as computed by cryptowat.ch.
The chart shown by my python script is the following: ![binance_finta](https://user-images.githubusercontent.com/645647/87239918-0db66200-c447-11ea-8306-72827748faa1.png) The RSI is shown in the bottom panel. The red curve is the RSI computed by finta (column `TA_RSI_1`), the blue curve is the RSI computed by `my_RSI`. We can see that the `TA_RSI_1` in red has some peaks far above 75, whereas `my_RSI` stays below 75 (sometimes reaches nearly 75), the latter matches strongly the behavior of the cryptowat.ch curve. So it supports the fact that cryptowat.ch uses `alpha = 1/period`. I wonder if there is a standard "RSI". This website ( https://www.macroption.com/rsi-calculation/#wilders-smoothing-method ) states that the inventor of the RSI computed the EMA with `alpha = 1/period`, that seems to be the one adopted by cryptowat.ch. Therefore, if it makes sense, my suggestion would be to replace these lines: https://github.com/peerchemist/finta/blob/8642dd8c1fea66e5eb2e4c28af7e7de09dfdffef/finta/finta.py#L589-L591 by: ``` python # EMAs of ups and downs _gain = up.ewm(alpha=1.0/period, adjust=adjust).mean() _loss = down.abs().ewm(alpha=1.0/period, adjust=adjust).mean() ``` By the way, I think it would also be nice to add a parameter such as "column" to choose on which column of the dataframe the RSI is computed. Many thanks.
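The `span = 2 * period - 1` equivalence mentioned above follows directly from pandas' definition `alpha = 2 / (span + 1)`: substituting `span = 2 * period - 1` gives `alpha = 2 / (2 * period) = 1 / period`, which is Wilder's original smoothing. A quick numerical check of that identity (toy data, only pandas and numpy assumed):

```python
import numpy as np
import pandas as pd

period = 14
# toy price-like series; any series demonstrates the identity
s = pd.Series(np.random.default_rng(0).normal(size=100)).cumsum()

wilder = s.ewm(alpha=1.0 / period, adjust=False).mean()
span_form = s.ewm(span=2 * period - 1, adjust=False).mean()

# alpha = 2 / (span + 1) = 2 / (2 * period) = 1 / period, so both agree
print(np.allclose(wilder, span_form))  # True
```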
closed
2020-07-12T06:21:04Z
2020-07-23T11:40:46Z
https://github.com/peerchemist/finta/issues/72
[]
charlyisidore
1
scikit-tda/kepler-mapper
data-visualization
205
Mismatch in the total number of samples
**Describe the bug** I am having trouble with the number of samples in the output of `mapper`. My starting point is a `769x769` distance matrix, `D`. ``` In [107]: D.shape Out[107]: (769, 769) ``` I then run, ``` import kmapper as km mapper = km.KeplerMapper(verbose=2) lens = mapper.fit_transform(D, distance_matrix=True) clusterer=sklearn.cluster.DBSCAN(eps=1.1, min_samples=5, metric='precomputed') cover = km.Cover(15, 0.3) graph = mapper.map(lens, D, clusterer=clusterer, cover= cover, precomputed=True) ``` which produces ``` KeplerMapper(verbose=2) ..Composing projection pipeline of length 1: Projections: sum Distance matrices: True Scalers: MinMaxScaler() ..Projecting on data shaped (769, 769) ..Projecting data using: sum ..Scaling with: MinMaxScaler() Mapping on data shaped (769, 769) using lens shaped (769, 1) ``` now, if I calculate the total number of points in the nodes I get: ``` In [114]: nodes = graph['nodes'] ...: h = [] ...: for k,v in graph['nodes'].items(): ...: h.extend(v) ...: print(f"total number of points in the graph {len(set(h))}") total number of points in the graph 702 ``` Note that 702 is also what is reported in the UI as the number of `unique samples`. I checked the samples in the original dataset and they are, however, all unique. Why the number of points is 702 and not 769? Is the difference the set of isolated samples?
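Most likely, yes: DBSCAN labels points that fall in no cluster as noise (`-1`), and mapper nodes only contain clustered points, so samples that end up as noise in every hypercube never appear in `graph['nodes']`. A hedged sketch of counting DBSCAN noise on toy data (not the actual distance matrix, and `precomputed` is not used here):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.1, size=(50, 2)),   # one dense cluster
               rng.uniform(-3, 3, size=(10, 2))])  # scattered outliers

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)
n_noise = int((labels == -1).sum())
# points labelled -1 are noise; they belong to no cluster, hence no node
print(n_noise)
```

Comparing `len(set(h))` against `769 - n_noise` from your clusterer (summed over the cover, taking overlaps into account) should confirm whether the missing 67 samples are all noise points.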
closed
2020-12-16T17:44:30Z
2020-12-16T19:32:39Z
https://github.com/scikit-tda/kepler-mapper/issues/205
[]
andreacortis
2
widgetti/solara
fastapi
563
Functions run very slow when they are called under use_thread
**Problem** A function runs very slowly when it is run under `solara.use_thread`. **How much slower:** On my Mac, about 20 times. On Hugging Face (which runs Linux), 26 times. **How to reproduce:** I deployed the [code](https://huggingface.co/spaces/hkayabilisim/test_solara_use_thread/blob/main/test_thread.py) to https://huggingface.co/spaces/hkayabilisim/test_solara_use_thread for you to repeat the case. For demonstration, I used the prime checking code here: https://github.com/widgetti/solara/blob/master/solara/website/pages/api/use_thread.py To test the code, use large primes such as 160423, 203789, 364289, 991961, 1203793, 1667321, 3704053. **Screenshot** <img width="770" alt="image" src="https://github.com/widgetti/solara/assets/2515171/a93a6131-2081-4dc0-bc3f-4e89c9265b72">
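One plausible source of a flat ~20x slowdown is per-line tracing: some frameworks install a `sys.settrace` hook on worker threads (e.g. to support cooperative cancellation), and line tracing slows pure-Python loops dramatically. Whether solara does this is an assumption here, but the overhead itself is easy to demonstrate:

```python
import sys
import time

def is_prime(n):
    # naive prime check, in the same spirit as the linked demo code
    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

def timed(fn, *args):
    t0 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t0

plain = timed(is_prime, 1203793)

def tracer(frame, event, arg):
    return tracer  # returning the tracer keeps line events firing

sys.settrace(tracer)
traced = timed(is_prime, 1203793)
sys.settrace(None)

print(traced / plain)  # typically a large factor on CPU-bound loops
```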
closed
2024-03-17T19:55:39Z
2024-03-27T05:59:38Z
https://github.com/widgetti/solara/issues/563
[]
hkayabilisim
3
coqui-ai/TTS
deep-learning
3,075
[Feature request] xTTS server configuration example
<!-- Welcome to the 🐸TTS project! We are excited to see your interest, and appreciate your support! ---> **🚀 Feature Description** I'm not sure whether xTTS is supported in the server, but in case it is, could you please provide a model and server config example? **Solution** A model configuration example for multi-speaker models to implement in the server, in the documentation or in the repository.
closed
2023-10-17T14:43:49Z
2023-11-28T11:06:05Z
https://github.com/coqui-ai/TTS/issues/3075
[ "feature request" ]
darkzbaron
1
SALib/SALib
numpy
56
Broken links on the io webpages
Links to readme and contributing are broken. Not sure whether these are supposed to link to the github repository, or copies of these files on the web-page.
closed
2015-06-15T13:59:57Z
2015-06-17T14:50:00Z
https://github.com/SALib/SALib/issues/56
[]
willu47
1
deezer/spleeter
tensorflow
923
It's not working i don't know what to do
There's an error:

```
Progress idle
Starting processing of all songs
Processing: Users/user/Desktop/###mp3
2025-01-05 12:11:40.960984: F tensorflow/stream_executor/cuda/cuda_driver.cc:351] Check failed: CUDA_SUCCESS == cuDevicePrimaryCtxGetState(device, &former_primary_context_flags, &former_primary_context_is_active) (0 vs. 303)
Finished processing all songs
Run complete
```
open
2025-01-08T05:19:58Z
2025-01-08T05:28:12Z
https://github.com/deezer/spleeter/issues/923
[ "question" ]
nohur7
0
xinntao/Real-ESRGAN
pytorch
412
bad tool trash
Your tool does not work; it just creates a video from bad frames, producing a video out of phase with the audio.
open
2022-08-19T01:57:19Z
2022-08-30T18:23:55Z
https://github.com/xinntao/Real-ESRGAN/issues/412
[]
marana22
6
HIT-SCIR/ltp
nlp
72
Cleanup of unused code
The files in the list below are no longer used by LTP:

```
__util/conversion_utf.h
__util/decode_gbk.h
__util/EncodeUtil.cpp
__util/EncodeUtil.h
__util/gbk_u16.h
__util/IniReader.cpp
__util/IniReader.h
__util/Logger.cpp
__util/Logger.h
__util/md5.cpp
__util/md5.h
__util/SBC2DBC.cpp
__util/SBC2DBC.h
__util/TextProcess.cpp
__util/TextProcess.h
__util/Timer.h
```
closed
2014-09-17T07:15:32Z
2014-09-17T07:26:45Z
https://github.com/HIT-SCIR/ltp/issues/72
[ "enhancement" ]
Oneplus
0
aleju/imgaug
deep-learning
638
AttributeError: 'Array' object has no attribute 'deepcopy' (version 0.40) - python 3.7.6
```
~\AppData\Local\Continuum\anaconda3\envs\singan\lib\site-packages\imgaug\augmentables\utils.py in copy_augmentables(augmentables)
     17             result.append(np.copy(augmentable))
     18         else:
---> 19             result.append(augmentable.deepcopy())
     20     return result
     21
AttributeError: 'Array' object has no attribute 'deepcopy'
```
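The `isinstance(augmentable, np.ndarray)` check in `copy_augmentables` only matches real numpy arrays; any other array-like (here an object whose class is literally named `Array`, e.g. from a wrapper library) falls through to the `.deepcopy()` branch, which imgaug only defines on its own augmentable types. Converting inputs with `np.asarray` before augmentation should avoid it. A minimal reproduction of the branch logic, simplified from the traceback rather than copied from imgaug's source:

```python
import numpy as np

def copy_augmentables(augmentables):
    result = []
    for augmentable in augmentables:
        if isinstance(augmentable, np.ndarray):
            result.append(np.copy(augmentable))
        else:
            # imgaug's own augmentable types all implement .deepcopy();
            # a foreign 'Array' wrapper type does not
            result.append(augmentable.deepcopy())
    return result

class Array:  # stand-in for a non-numpy array wrapper
    pass

copy_augmentables([np.zeros(3)])  # fine: takes the np.ndarray branch
try:
    copy_augmentables([Array()])
except AttributeError as e:
    print(e)  # 'Array' object has no attribute 'deepcopy'
```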
open
2020-03-12T04:28:26Z
2020-04-13T06:03:27Z
https://github.com/aleju/imgaug/issues/638
[]
bluetyson
4
microsoft/UFO
automation
116
Why does the page always stop at "Round 1, Step 1, HostAgent: Analyzing the user intent and decomposing the request..."?
I find the page always stops at "Round 1, Step 1, HostAgent: Analyzing the user intent and decomposing the request..." when I use Gemini. Please help me figure out how to fix it. ![截图20240805165222](https://github.com/user-attachments/assets/f8fd4714-a0a8-477b-9cab-9f96d53b132c)
open
2024-08-05T08:58:18Z
2024-08-05T09:15:40Z
https://github.com/microsoft/UFO/issues/116
[]
lovegit2021
3
explosion/spaCy
nlp
13,658
spaCy installation on Python 3.13 fails
<!-- NOTE: For questions or install related issues, please open a Discussion instead. --> ## How to reproduce the behaviour <!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. --> `pip3.13 install spacy` C:\Users\talta\AppData\Local\Programs\Python\Python313>pip3.13 install spacy `Collecting spacy Downloading spacy-3.8.2.tar.gz (1.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 5.0 MB/s eta 0:00:00 Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [118 lines of output] Ignoring numpy: markers 'python_version < "3.9"' don't match your environment Collecting setuptools Downloading setuptools-75.1.0-py3-none-any.whl.metadata (6.9 kB) Collecting cython<3.0,>=0.25 Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB) Collecting cymem<2.1.0,>=2.0.2 Using cached cymem-2.0.8-cp313-cp313-win_amd64.whl Collecting preshed<3.1.0,>=3.0.2 Using cached preshed-3.0.9-cp313-cp313-win_amd64.whl Collecting murmurhash<1.1.0,>=0.28.0 Using cached murmurhash-1.0.10-cp313-cp313-win_amd64.whl Collecting thinc<8.4.0,>=8.3.0 Downloading thinc-8.3.2.tar.gz (193 kB) Installing build dependencies: started Installing build dependencies: still running... Installing build dependencies: still running... Installing build dependencies: still running... Installing build dependencies: still running... Installing build dependencies: still running... Installing build dependencies: finished with status 'error' error: subprocess-exited-with-error pip subprocess to install build dependencies did not run successfully. 
exit code: 1 [81 lines of output] Ignoring numpy: markers 'python_version < "3.9"' don't match your environment Collecting setuptools Using cached setuptools-75.1.0-py3-none-any.whl.metadata (6.9 kB) Collecting cython<3.0,>=0.25 Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB) Collecting murmurhash<1.1.0,>=1.0.2 Using cached murmurhash-1.0.10-cp313-cp313-win_amd64.whl Collecting cymem<2.1.0,>=2.0.2 Using cached cymem-2.0.8-cp313-cp313-win_amd64.whl Collecting preshed<3.1.0,>=3.0.2 Using cached preshed-3.0.9-cp313-cp313-win_amd64.whl Collecting blis<1.1.0,>=1.0.0 Downloading blis-1.0.1.tar.gz (3.6 MB) ---------------------------------------- 3.6/3.6 MB 7.7 MB/s eta 0:00:00 Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting numpy<2.1.0,>=2.0.0 Downloading numpy-2.0.2.tar.gz (18.9 MB) ---------------------------------------- 18.9/18.9 MB 7.0 MB/s eta 0:00:00 Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Installing backend dependencies: started Installing backend dependencies: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): still running... Preparing metadata (pyproject.toml): still running... Preparing metadata (pyproject.toml): still running... Preparing metadata (pyproject.toml): still running... 
  Preparing metadata (pyproject.toml): finished with status 'done'
Using cached setuptools-75.1.0-py3-none-any.whl (1.2 MB)
Using cached Cython-0.29.37-py2.py3-none-any.whl (989 kB)
Building wheels for collected packages: blis, numpy
  Building wheel for blis (pyproject.toml): started
  Building wheel for blis (pyproject.toml): finished with status 'error'

  error: subprocess-exited-with-error

  Building wheel for blis (pyproject.toml) did not run successfully.
  exit code: 1

  [24 lines of output]
  BLIS_COMPILER? None
  running bdist_wheel
  running build
  running build_py
  creating build\lib.win-amd64-cpython-313\blis
  copying blis\about.py -> build\lib.win-amd64-cpython-313\blis
  copying blis\benchmark.py -> build\lib.win-amd64-cpython-313\blis
  copying blis\__init__.py -> build\lib.win-amd64-cpython-313\blis
  creating build\lib.win-amd64-cpython-313\blis\tests
  copying blis\tests\common.py -> build\lib.win-amd64-cpython-313\blis\tests
  copying blis\tests\conftest.py -> build\lib.win-amd64-cpython-313\blis\tests
  copying blis\tests\test_dotv.py -> build\lib.win-amd64-cpython-313\blis\tests
  copying blis\tests\test_gemm.py -> build\lib.win-amd64-cpython-313\blis\tests
  copying blis\tests\__init__.py -> build\lib.win-amd64-cpython-313\blis\tests
  copying blis\cy.pyx -> build\lib.win-amd64-cpython-313\blis
  copying blis\py.pyx -> build\lib.win-amd64-cpython-313\blis
  copying blis\cy.pxd -> build\lib.win-amd64-cpython-313\blis
  copying blis\__init__.pxd -> build\lib.win-amd64-cpython-313\blis
  running build_ext
  Build options win32 msvc
  BUILD ARCH: x86_64
  {'!EXITCODE': '00000000', 'ACLOCAL_PATH': 'C:\\Program Files\\Git\\mingw64\\share\\aclocal;C:\\Program Files\\Git\\usr\\share\\aclocal', 'AGENT_BUILDDIRECTORY': 'D:\\a\\1', 'AGENT_DISABLELOGPLUGIN_TESTFILEPUBLISHERPLUGIN': 'true', 'AGENT_DISABLELOGPLUGIN_TESTRESULTLOGPLUGIN': 'false', 'AGENT_HOMEDIRECTORY': 'C:\\agents\\2.202.0', 'AGENT_ID': '92', 'TEMP': 'C:\\Users\\VSSADM~1\\AppData\\Local\\Temp', 'TERM': 'xterm-256color', 'TF_BUILD': 'True', 'TMP': 'C:\\Users\\VSSADM~1\\AppData\\Local\\Temp', 'TMPDIR': 'C:\\Users\\VSSADM~1\\AppData\\Local\\Temp', 'USEPYTHONVERSION_PYTHONLOCATION': 'C:\\hostedtoolcache\\windows\\Python\\3.8.10\\x64', 'USERDOMAIN': 'WIN-CU8INV6766V', 'USERDOMAIN_ROAMINGPROFILE': 'WIN-CU8INV6766V', 'USERNAME': 'VssAdministrator', 'USERPROFILE': 'C:\\Users\\VssAdministrator', 'VCPKG_INSTALLATION_ROOT': 'C:\\vcpkg', 'VSTS_AGENT_PERFLOG': 'C:\\agents\\perflog', 'VSTS_PROCESS_LOOKUP_ID': 'vsts_175962b3-f397-42b7-b557-a072c6b9de45', 'VSTS_SECRET_VARIABLES': '', 'WINDIR': 'C:\\Windows', 'WIX': 'C:\\Program Files (x86)\\WiX Toolset v3.11\\', '_': 'C:/hostedtoolcache/windows/Python/3.8.10/x64/python', 'AGENT.JOBSTATUS': 'Succeeded', 'NPM_CONFIG_PREFIX': 'C:\\npm\\prefix'}
  [COMMAND] C:\Program Files\LLVM\bin\clang.exe -c C:\Users\talta\AppData\Local\Temp\pip-install-kh08oxiy\blis_df788e7abdee4a1a89b9e3d09171d711\blis\_src\config\bulldozer\bli_cntx_init_bulldozer.c -o C:\Users\talta\AppData\Local\Temp\tmpbfr5y8sq\bli_cntx_init_bulldozer.o -O2 -std=c99 -D_POSIX_C_SOURCE=200112L -DBLIS_VERSION_STRING="0.9.0" -DBLIS_IS_BUILDING_LIBRARY -Iinclude\windows-x86_64 -I.\frame\3\ -I.\frame\1m\ -I.\frame\1f\ -I.\frame\1\ -I.\frame\include -IC:\Users\talta\AppData\Local\Temp\pip-install-kh08oxiy\blis_df788e7abdee4a1a89b9e3d09171d711\blis\_src\include\windows-x86_64
  error: [WinError 2] The system cannot find the file specified
  [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for blis
  Building wheel for numpy (pyproject.toml): started
  Building wheel for numpy (pyproject.toml): finished with status 'done'
  Created wheel for numpy: filename=numpy-2.0.2-cp313-cp313-win_amd64.whl size=6700177 sha256=5263dc88014c220c1ebf5dae21f711e712a047da860c13e47e2a172a15ef19fd
  Stored in directory: c:\users\talta\appdata\local\pip\cache\wheels\bc\ef\e8\cf84dd8a34d77dcede062417e099659e1f48d17c8befac28c9
  Successfully built numpy
  Failed to build blis
  ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (blis)
  [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

pip subprocess to install build dependencies did not run successfully.
exit code: 1
See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
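The spacy install above fails because, on CPython 3.13, pip resolves the blis dependency to a source tarball rather than a prebuilt wheel (note the cp313 wheels for cymem, preshed, and murmurhash, but only blis-1.0.1.tar.gz for blis), and the source build then dies with [WinError 2] when it cannot actually execute the clang compiler it selected. A minimal pre-flight check for this situation, using only the standard library — the helper name and the 3.12 cutoff are illustrative assumptions based on the wheel tags visible in this log, not anything spaCy documents:

```python
import sys

def prebuilt_blis_wheel_expected(py=sys.version_info[:2]):
    # Assumption: prebuilt blis wheels stop at CPython 3.12 (as this log
    # suggests), so on 3.13 pip falls back to a source build that needs a
    # working C compiler (clang.exe or MSVC's cl.exe) on PATH.
    return py <= (3, 12)

if not prebuilt_blis_wheel_expected():
    print("blis will likely build from source on this interpreter; "
          "install a C toolchain first, or use Python <= 3.12.")
```

Under that assumption, the alternative to switching interpreters is making the [COMMAND] line above succeed, i.e. ensuring the compiler it invokes is actually installed and runnable.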
C:\Users\talta\AppData\Local\Programs\Python\Python313>
C:\Users\talta\AppData\Local\Programs\Python\Python313>pip3.13 install jupyter
Collecting jupyter
  Downloading jupyter-1.1.1-py2.py3-none-any.whl.metadata (2.0 kB)
Collecting notebook (from jupyter)
  Downloading notebook-7.2.2-py3-none-any.whl.metadata (10 kB)
Collecting jupyter-console (from jupyter)
  Downloading jupyter_console-6.6.3-py3-none-any.whl.metadata (5.8 kB)
Collecting nbconvert (from jupyter)
  Downloading nbconvert-7.16.4-py3-none-any.whl.metadata (8.5 kB)
Collecting ipykernel (from jupyter)
  Downloading ipykernel-6.29.5-py3-none-any.whl.metadata (6.3 kB)
Collecting ipywidgets (from jupyter)
  Downloading ipywidgets-8.1.5-py3-none-any.whl.metadata (2.3 kB)
Collecting jupyterlab (from jupyter)
  Downloading jupyterlab-4.2.5-py3-none-any.whl.metadata (16 kB)
Collecting comm>=0.1.1 (from ipykernel->jupyter)
  Downloading comm-0.2.2-py3-none-any.whl.metadata (3.7 kB)
Collecting debugpy>=1.6.5 (from ipykernel->jupyter)
  Downloading debugpy-1.8.7-cp313-cp313-win_amd64.whl.metadata (1.1 kB)
Collecting ipython>=7.23.1 (from ipykernel->jupyter)
  Downloading ipython-8.28.0-py3-none-any.whl.metadata (5.0 kB)
Collecting jupyter-client>=6.1.12 (from ipykernel->jupyter)
  Downloading jupyter_client-8.6.3-py3-none-any.whl.metadata (8.3 kB)
Collecting jupyter-core!=5.0.*,>=4.12 (from ipykernel->jupyter)
  Downloading jupyter_core-5.7.2-py3-none-any.whl.metadata (3.4 kB)
Collecting matplotlib-inline>=0.1 (from ipykernel->jupyter)
  Downloading matplotlib_inline-0.1.7-py3-none-any.whl.metadata (3.9 kB)
Collecting nest-asyncio (from ipykernel->jupyter)
  Downloading nest_asyncio-1.6.0-py3-none-any.whl.metadata (2.8 kB)
Collecting packaging (from ipykernel->jupyter)
  Using cached packaging-24.1-py3-none-any.whl.metadata (3.2 kB)
Collecting psutil (from ipykernel->jupyter)
  Downloading psutil-6.0.0-cp37-abi3-win_amd64.whl.metadata (22 kB)
Collecting pyzmq>=24 (from ipykernel->jupyter)
  Downloading pyzmq-26.2.0-cp313-cp313-win_amd64.whl.metadata (6.2 kB)
Collecting jupyter-lsp>=2.0.0 (from jupyterlab->jupyter)
  Downloading jupyter_lsp-2.2.5-py3-none-any.whl.metadata (1.8 kB)
Collecting jupyter-server<3,>=2.4.0 (from jupyterlab->jupyter)
  Downloading jupyter_server-2.14.2-py3-none-any.whl.metadata (8.4 kB)
Collecting jupyterlab-server<3,>=2.27.1 (from jupyterlab->jupyter)
  Downloading jupyterlab_server-2.27.3-py3-none-any.whl.metadata (5.9 kB)
Collecting notebook-shim>=0.2 (from jupyterlab->jupyter)
  Downloading notebook_shim-0.2.4-py3-none-any.whl.metadata (4.0 kB)
Collecting setuptools>=40.1.0 (from jupyterlab->jupyter)
  Using cached setuptools-75.1.0-py3-none-any.whl.metadata (6.9 kB)
Collecting beautifulsoup4 (from nbconvert->jupyter)
  Downloading beautifulsoup4-4.12.3-py3-none-any.whl.metadata (3.8 kB)
Collecting bleach!=5.0.0 (from nbconvert->jupyter)
  Downloading bleach-6.1.0-py3-none-any.whl.metadata (30 kB)
Collecting defusedxml (from nbconvert->jupyter)
  Downloading defusedxml-0.7.1-py2.py3-none-any.whl.metadata (32 kB)
Collecting jupyterlab-pygments (from nbconvert->jupyter)
  Downloading jupyterlab_pygments-0.3.0-py3-none-any.whl.metadata (4.4 kB)
Collecting markupsafe>=2.0 (from nbconvert->jupyter)
  Downloading MarkupSafe-3.0.1-cp313-cp313-win_amd64.whl.metadata (4.1 kB)
Collecting mistune<4,>=2.0.3 (from nbconvert->jupyter)
  Downloading mistune-3.0.2-py3-none-any.whl.metadata (1.7 kB)
Collecting nbclient>=0.5.0 (from nbconvert->jupyter)
  Downloading nbclient-0.10.0-py3-none-any.whl.metadata (7.8 kB)
Collecting nbformat>=5.7 (from nbconvert->jupyter)
  Downloading nbformat-5.10.4-py3-none-any.whl.metadata (3.6 kB)
Collecting pandocfilters>=1.4.1 (from nbconvert->jupyter)
  Downloading pandocfilters-1.5.1-py2.py3-none-any.whl.metadata (9.0 kB)
Collecting tinycss2 (from nbconvert->jupyter)
  Downloading tinycss2-1.3.0-py3-none-any.whl.metadata (3.0 kB)
Requirement already satisfied: six>=1.9.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from bleach!=5.0.0->nbconvert->jupyter) (1.16.0)
Collecting webencodings (from bleach!=5.0.0->nbconvert->jupyter)
  Downloading webencodings-0.5.1-py2.py3-none-any.whl.metadata (2.1 kB)
Collecting anyio (from httpx>=0.25.0->jupyterlab->jupyter)
  Downloading anyio-4.6.0-py3-none-any.whl.metadata (4.6 kB)
Collecting certifi (from httpx>=0.25.0->jupyterlab->jupyter)
  Downloading certifi-2024.8.30-py3-none-any.whl.metadata (2.2 kB)
Collecting httpcore==1.* (from httpx>=0.25.0->jupyterlab->jupyter)
  Downloading httpcore-1.0.6-py3-none-any.whl.metadata (21 kB)
Requirement already satisfied: idna in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from httpx>=0.25.0->jupyterlab->jupyter) (3.8)
Collecting sniffio (from httpx>=0.25.0->jupyterlab->jupyter)
  Using cached sniffio-1.3.1-py3-none-any.whl.metadata (3.9 kB)
Collecting h11<0.15,>=0.13 (from httpcore==1.*->httpx>=0.25.0->jupyterlab->jupyter)
  Using cached h11-0.14.0-py3-none-any.whl.metadata (8.2 kB)
Collecting decorator (from ipython>=7.23.1->ipykernel->jupyter)
  Using cached decorator-5.1.1-py3-none-any.whl.metadata (4.0 kB)
Collecting jedi>=0.16 (from ipython>=7.23.1->ipykernel->jupyter)
  Using cached jedi-0.19.1-py2.py3-none-any.whl.metadata (22 kB)
adata (1.9 kB)
Downloading jupyter-1.1.1-py2.py3-none-any.whl (2.7 kB)
Downloading ipykernel-6.29.5-py3-none-any.whl (117 kB)
Downloading ipywidgets-8.1.5-py3-none-any.whl (139 kB)
Downloading jupyter_console-6.6.3-py3-none-any.whl (24 kB)
Downloading jupyterlab-4.2.5-py3-none-any.whl (11.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.6/11.6 MB 10.0 MB/s eta 0:00:00
Downloading nbconvert-7.16.4-py3-none-any.whl (257 kB)
Downloading notebook-7.2.2-py3-none-any.whl (5.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.0/5.0 MB 15.7 MB/s eta 0:00:00
Downloading async_lru-2.0.4-py3-none-any.whl (6.1 kB)
Downloading bleach-6.1.0-py3-none-any.whl (162 kB)
Downloading comm-0.2.2-py3-none-any.whl (7.2 kB)
Downloading debugpy-1.8.7-cp313-cp313-win_amd64.whl (5.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.2/5.2 MB 15.8 MB/s eta 0:00:00
Downloading httpx-0.27.2-py3-none-any.whl (76 kB)
Downloading httpcore-1.0.6-py3-none-any.whl (78 kB)
Downloading ipython-8.28.0-py3-none-any.whl (819 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 819.5/819.5 kB 13.8 MB/s eta 0:00:00
Downloading jinja2-3.1.4-py3-none-any.whl (133 kB)
Downloading jupyter_client-8.6.3-py3-none-any.whl (106 kB)
Downloading jupyter_core-5.7.2-py3-none-any.whl (28 kB)
Downloading jupyter_lsp-2.2.5-py3-none-any.whl (69 kB)
Downloading jupyter_server-2.14.2-py3-none-any.whl (383 kB)
Downloading jupyterlab_server-2.27.3-py3-none-any.whl (59 kB)
Downloading jupyterlab_widgets-3.0.13-py3-none-any.whl (214 kB)
Downloading MarkupSafe-3.0.1-cp313-cp313-win_amd64.whl (15 kB)
Downloading matplotlib_inline-0.1.7-py3-none-any.whl (9.9 kB)
Downloading mistune-3.0.2-py3-none-any.whl (47 kB)
Downloading nbclient-0.10.0-py3-none-any.whl (25 kB)
Downloading nbformat-5.10.4-py3-none-any.whl (78 kB)
Downloading notebook_shim-0.2.4-py3-none-any.whl (13 kB)
Using cached packaging-24.1-py3-none-any.whl (53 kB)
Downloading pandocfilters-1.5.1-py2.py3-none-any.whl (8.7 kB)
Downloading prompt_toolkit-3.0.48-py3-none-any.whl (386 kB)
Downloading pygments-2.18.0-py3-none-any.whl (1.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 14.0 MB/s eta 0:00:00
Downloading pyzmq-26.2.0-cp313-cp313-win_amd64.whl (637 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 637.6/637.6 kB 10.4 MB/s eta 0:00:00
Using cached setuptools-75.1.0-py3-none-any.whl (1.2 MB)
Downloading tornado-6.4.1-cp38-abi3-win_amd64.whl (438 kB)
Downloading traitlets-5.14.3-py3-none-any.whl (85 kB)
Downloading widgetsnbextension-4.0.13-py3-none-any.whl (2.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 12.8 MB/s eta 0:00:00
Downloading beautifulsoup4-4.12.3-py3-none-any.whl (147 kB)
Downloading defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)
Downloading jupyterlab_pygments-0.3.0-py3-none-any.whl (15 kB)
Downloading nest_asyncio-1.6.0-py3-none-any.whl (5.2 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.5/6.5 MB 15.3 MB/s eta 0:00:00
Downloading requests-2.32.3-py3-none-any.whl (64 kB)
Downloading certifi-2024.8.30-py3-none-any.whl (167 kB)
Downloading Send2Trash-1.8.3-py3-none-any.whl (18 kB)
Using cached sniffio-1.3.1-py3-none-any.whl (10 kB)
Downloading soupsieve-2.6-py3-none-any.whl (36 kB)
Downloading terminado-0.18.1-py3-none-any.whl (14 kB)
Downloading webencodings-0.5.1-py2.py3-none-any.whl (11 kB)
Using cached websocket_client-1.8.0-py3-none-any.whl (58 kB)
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Using cached stack_data-0.6.3-py3-none-any.whl (24 kB)
Downloading wcwidth-0.2.13-py2.py3-none-any.whl (34 kB)
Using cached asttokens-2.4.1-py2.py3-none-any.whl (27 kB)
Downloading charset_normalizer-3.4.0-cp313-cp313-win_amd64.whl (102 kB)
Downloading executing-2.1.0-py2.py3-none-any.whl (25 kB)
Using cached h11-0.14.0-py3-none-any.whl (58 kB)
Downloading jsonschema_specifications-2024.10.1-py3-none-any.whl (18 kB)
Downloading parso-0.8.4-py2.py3-none-any.whl (103 kB)
Downloading python_json_logger-2.0.7-py3-none-any.whl (8.1 kB)
Downloading PyYAML-6.0.2-cp313-cp313-win_amd64.whl (156 kB)
Downloading referencing-0.35.1-py3-none-any.whl (26 kB)
Downloading rfc3986_validator-0.1.1-py2.py3-none-any.whl (4.2 kB)
Downloading rpds_py-0.20.0-cp313-none-win_amd64.whl (214 kB)
Downloading urllib3-2.2.3-py3-none-any.whl (126 kB)
Downloading argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl (30 kB)
Downloading pure_eval-0.2.3-py3-none-any.whl (11 kB)
Downloading rfc3339_validator-0.1.4-py2.py3-none-any.whl (3.5 kB)
Downloading cffi-1.17.1-cp313-cp313-win_amd64.whl (182 kB)
Downloading jsonpointer-3.0.0-py2.py3-none-any.whl (7.6 kB)
Downloading webcolors-24.8.0-py3-none-any.whl (15 kB)
Downloading fqdn-1.5.1-py3-none-any.whl (9.1 kB)
Downloading isoduration-20.11.0-py3-none-any.whl (11 kB)
Downloading uri_template-1.3.0-py3-none-any.whl (11 kB)
Using cached arrow-1.3.0-py3-none-any.whl (66 kB)
Using cached pycparser-2.22-py3-none-any.whl (117 kB)
Downloading types_python_dateutil-2.9.0.20241003-py3-none-any.whl (9.7 kB)
Building wheels for collected packages: pywinpty
  Building wheel for pywinpty (pyproject.toml) ... done
  Created wheel for pywinpty: filename=pywinpty-2.0.13-cp313-none-win_amd64.whl size=212535 sha256=54c2e44895a13911de518041b6c18d1541ae8fc175578e25f021cefebfef384d
  Stored in directory: c:\users\talta\appdata\local\pip\cache\wheels\a2\49\ee\ee8b8645371f968556431726ee02500e0cb4bcc6d78bcd57e7
Successfully built pywinpty

C:\Users\talta\AppData\Local\Programs\Python\Python313>pip3.13 install jupyterlab
Requirement already satisfied: jupyterlab in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (4.2.5)
Requirement already satisfied: async-lru>=1.0.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (2.0.4)
Requirement already satisfied: httpx>=0.25.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (0.27.2)
Requirement already satisfied: ipykernel>=6.5.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (6.29.5)
Requirement already satisfied: jinja2>=3.0.3 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (3.1.4)
Requirement already satisfied: jupyter-core in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (5.7.2)
Requirement already satisfied: jupyter-lsp>=2.0.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (2.2.5)
Requirement already satisfied: jupyter-server<3,>=2.4.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (2.14.2)
Requirement already satisfied: jupyterlab-server<3,>=2.27.1 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (2.27.3)
Requirement already satisfied: notebook-shim>=0.2 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (0.2.4)
Requirement already satisfied: packaging in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (24.1)
Requirement already satisfied: setuptools>=40.1.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (75.1.0)
Requirement already satisfied: tornado>=6.2.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (6.4.1)
Requirement already satisfied: traitlets in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyterlab) (5.14.3)
talta\appdata\local\programs\python\python313\lib\site-packages (from ipykernel>=6.5.0->jupyterlab) (8.28.0)
Requirement already satisfied: jupyter-client>=6.1.12 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipykernel>=6.5.0->jupyterlab) (8.6.3)
Requirement already satisfied: matplotlib-inline>=0.1 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipykernel>=6.5.0->jupyterlab) (0.1.7)
Requirement already satisfied: nest-asyncio in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipykernel>=6.5.0->jupyterlab) (1.6.0)
Requirement already satisfied: psutil in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipykernel>=6.5.0->jupyterlab) (6.0.0)
Requirement already satisfied: pyzmq>=24 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipykernel>=6.5.0->jupyterlab) (26.2.0)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jinja2>=3.0.3->jupyterlab) (3.0.1)
Requirement already satisfied: platformdirs>=2.5 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-core->jupyterlab) (4.3.6)
Requirement already satisfied: pywin32>=300 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-core->jupyterlab) (307)
Requirement already satisfied: argon2-cffi>=21.1 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-server<3,>=2.4.0->jupyterlab) (23.1.0)
Requirement already satisfied: jupyter-events>=0.9.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-server<3,>=2.4.0->jupyterlab) (0.10.0)
Requirement already satisfied: jupyter-server-terminals>=0.4.4 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-server<3,>=2.4.0->jupyterlab) (0.5.3)
Requirement already satisfied: nbconvert>=6.4.4 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-server<3,>=2.4.0->jupyterlab) (7.16.4)
Requirement already satisfied: nbformat>=5.3.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from
Requirement already satisfied: jedi>=0.16 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipython>=7.23.1->ipykernel>=6.5.0->jupyterlab) (0.19.1)
Requirement already satisfied: prompt-toolkit<3.1.0,>=3.0.41 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipython>=7.23.1->ipykernel>=6.5.0->jupyterlab) (3.0.48)
Requirement already satisfied: pygments>=2.4.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipython>=7.23.1->ipykernel>=6.5.0->jupyterlab) (2.18.0)
Requirement already satisfied: stack-data in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipython>=7.23.1->ipykernel>=6.5.0->jupyterlab) (0.6.3)
Requirement already satisfied: colorama in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from ipython>=7.23.1->ipykernel>=6.5.0->jupyterlab) (0.4.6)
Requirement already satisfied: attrs>=22.2.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jsonschema>=4.18.0->jupyterlab-server<3,>=2.27.1->jupyterlab) (24.2.0)
Requirement already satisfied: jsonschema-specifications>=2023.03.6 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jsonschema>=4.18.0->jupyterlab-server<3,>=2.27.1->jupyterlab) (2024.10.1)
Requirement already satisfied: referencing>=0.28.4 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jsonschema>=4.18.0->jupyterlab-server<3,>=2.27.1->jupyterlab) (0.35.1)
Requirement already satisfied: rpds-py>=0.7.1 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jsonschema>=4.18.0->jupyterlab-server<3,>=2.27.1->jupyterlab) (0.20.0)
Requirement already satisfied: python-dateutil>=2.8.2 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-client>=6.1.12->ipykernel>=6.5.0->jupyterlab) (2.9.0.post0)
Requirement already satisfied: python-json-logger>=2.0.4 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->jupyterlab) (2.0.7)
Requirement already satisfied: pyyaml>=5.3 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->jupyterlab) (6.0.2)
Requirement already satisfied: rfc3339-validator in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->jupyterlab) (0.1.4)
Requirement already satisfied: rfc3986-validator>=0.1.1 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->jupyterlab) (0.1.1)
Requirement already satisfied: beautifulsoup4 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->jupyterlab) (4.12.3)
Requirement already satisfied: bleach!=5.0.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->jupyterlab) (6.1.0)
Requirement already satisfied: defusedxml in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->jupyterlab) (0.7.1)
Requirement already satisfied: jupyterlab-pygments in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->jupyterlab) (0.3.0)
Requirement already satisfied: mistune<4,>=2.0.3 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->jupyterlab) (3.0.2)
Requirement already satisfied: nbclient>=0.5.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->jupyterlab) (0.10.0)
Requirement already satisfied: pandocfilters>=1.4.1 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->jupyterlab) (1.5.1)
Requirement already satisfied: pycparser in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from cffi>=1.0.1->argon2-cffi-bindings->argon2-cffi>=21.1->jupyter-server<3,>=2.4.0->jupyterlab) (2.22)
Requirement already satisfied: arrow>=0.15.0 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from isoduration->jsonschema[format-nongpl]>=4.18.0->jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->jupyterlab) (1.3.0)
Requirement already satisfied: types-python-dateutil>=2.8.10 in c:\users\talta\appdata\local\programs\python\python313\lib\site-packages (from
arrow>=0.15.0->isoduration->jsonschema[format-nongpl]>=4.18.0->jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->jupyterlab) (2.9.0.20241003)

C:\Users\talta\AppData\Local\Programs\Python\Python313>pip3.13 install spacy
Collecting spacy
  Using cached spacy-3.8.2.tar.gz (1.3 MB)
  Installing build dependencies ... error
  error: subprocess-exited-with-error

  × pip subprocess to install build dependencies did not run successfully.
  │ exit code: 1
  ╰─> [94 lines of output]
      Ignoring numpy: markers 'python_version < "3.9"' don't match your environment
      Collecting setuptools
        Using cached setuptools-75.1.0-py3-none-any.whl.metadata (6.9 kB)
      Collecting cython<3.0,>=0.25
        Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB)
      Collecting cymem<2.1.0,>=2.0.2
        Using cached cymem-2.0.8-cp313-cp313-win_amd64.whl
      Collecting preshed<3.1.0,>=3.0.2
        Using cached preshed-3.0.9-cp313-cp313-win_amd64.whl
      Collecting murmurhash<1.1.0,>=0.28.0
        Using cached murmurhash-1.0.10-cp313-cp313-win_amd64.whl
      Collecting thinc<8.4.0,>=8.3.0
        Using cached thinc-8.3.2.tar.gz (193 kB)
        Installing build dependencies: started
        Installing build dependencies: finished with status 'error'
        error: subprocess-exited-with-error

        pip subprocess to install build dependencies did not run successfully.
        exit code: 1
        [62 lines of output]
        Ignoring numpy: markers 'python_version < "3.9"' don't match your environment
        Collecting setuptools
          Using cached setuptools-75.1.0-py3-none-any.whl.metadata (6.9 kB)
        Collecting cython<3.0,>=0.25
          Using cached Cython-0.29.37-py2.py3-none-any.whl.metadata (3.1 kB)
        Collecting murmurhash<1.1.0,>=1.0.2
          Using cached murmurhash-1.0.10-cp313-cp313-win_amd64.whl
        Collecting cymem<2.1.0,>=2.0.2
          Using cached cymem-2.0.8-cp313-cp313-win_amd64.whl
        Collecting preshed<3.1.0,>=3.0.2
          Using cached preshed-3.0.9-cp313-cp313-win_amd64.whl
        Collecting blis<1.1.0,>=1.0.0
          Using cached blis-1.0.1.tar.gz (3.6 MB)
          Installing build dependencies: started
          Installing build dependencies: finished with status 'done'
          Getting requirements to build wheel: started
          Getting requirements to build wheel: finished with status 'done'
          Preparing metadata (pyproject.toml): started
          Preparing metadata (pyproject.toml): finished with status 'done'
        Collecting numpy<2.1.0,>=2.0.0
          Using cached numpy-2.0.2-cp313-cp313-win_amd64.whl
        Using cached setuptools-75.1.0-py3-none-any.whl (1.2 MB)
        Using cached Cython-0.29.37-py2.py3-none-any.whl (989 kB)
        Building wheels for collected packages: blis
          Building wheel for blis (pyproject.toml): started
          Building wheel for blis (pyproject.toml): finished with status 'error'
          error: subprocess-exited-with-error

          Building wheel for blis (pyproject.toml) did not run successfully.
          exit code: 1
          [24 lines of output]
          BLIS_COMPILER? None
          running bdist_wheel
          running build
          running build_py
          creating build\lib.win-amd64-cpython-313\blis
          copying blis\about.py -> build\lib.win-amd64-cpython-313\blis
          copying blis\benchmark.py -> build\lib.win-amd64-cpython-313\blis
          copying blis\__init__.py -> build\lib.win-amd64-cpython-313\blis
          creating build\lib.win-amd64-cpython-313\blis\tests
          copying blis\tests\common.py -> build\lib.win-amd64-cpython-313\blis\tests
          copying blis\tests\conftest.py -> build\lib.win-amd64-cpython-313\blis\tests
          copying blis\tests\test_dotv.py -> build\lib.win-amd64-cpython-313\blis\tests
          copying blis\tests\test_gemm.py -> build\lib.win-amd64-cpython-313\blis\tests
          copying blis\tests\__init__.py -> build\lib.win-amd64-cpython-313\blis\tests
          copying blis\cy.pyx -> build\lib.win-amd64-cpython-313\blis
          copying blis\py.pyx -> build\lib.win-amd64-cpython-313\blis
          copying blis\cy.pxd -> build\lib.win-amd64-cpython-313\blis
          copying blis\__init__.pxd -> build\lib.win-amd64-cpython-313\blis
          running build_ext
          Build options win32 msvc
          BUILD ARCH: x86_64
          {'!EXITCODE': '00000000', 'ACLOCAL_PATH': 'C:\\Program Files\\Git\\mingw64\\share\\aclocal;C:\\Program Files\\Git\\usr\\share\\aclocal', 'AGENT_BUILDDIRECTORY': 'D:\\a\\1', 'AGENT_DISABLELOGPLUGIN_TESTFILEPUBLISHERPLUGIN': 'true', 'AGENT_DISABLELOGPLUGIN_TESTRESULTLOGPLUGIN': 'false', 'AGENT_HOMEDIRECTORY': 'C:\\agents\\2.202.0', 'AGENT_ID': '92', 'AGENT_JOBNAME': 'JSONL Python38Windows', 'AGENT_JOBSTATUS': 'Succeeded', 'AGENT_LOGTOBLOBSTORAGESERVICE': 'true', 'AGENT_MACHINENAME': 'WIN-CU8INV6766V', 'AGENT_NAME': 'Hosted Agent', 'AGENT_OS': 'Windows_NT', 'AGENT_OSARCHITECTURE': 'X64', 'AGENT_READONLYVARIABLES': 'true', 'AGENT_RETAINDEFAULTENCODING': 'false', 'AGENT_ROOTDIRECTORY': 'D:\\a', 'AGENT_SERVEROMDIRECTORY': 'C:\\agents\\2.202.0\\externals\\vstsom', 'AGENT_TASKRESTRICTIONSENFORCEMENTMODE': 'Enabled', 'AGENT_TEMPDIRECTORY': 'D:\\a\\_temp', 'AGENT_TOOLSDIRECTORY': 'C:\\hostedtoolcache\\windows', 'AGENT_USEWORKSPACEID': 'true',
'AGENT_VERSION': '2.202.0', 'AGENT_WORKFOLDER': 'D:\\a', 'ALLUSERSPROFILE': 'C:\\ProgramData', 'ANDROID_HOME': 'C:\\Android\\android-sdk', 'ANDROID_NDK_HOME': 'C:\\Android\\android-sdk\\ndk-bundle', 'ANDROID_NDK_LATEST_HOME': 'C:\\Android\\android-sdk\\ndk\\23.1.7779620', 'ANDROID_NDK_PATH': 'C:\\Android\\android-sdk\\ndk-bundle', 'ANDROID_NDK_ROOT': 'C:\\Android\\android-sdk\\ndk-bundle', 'ANDROID_SDK_ROOT': 'C:\\Android\\android-sdk', 'ANT_HOME': 'C:\\ProgramData\\chocolatey\\lib\\ant\\tools\\apache-ant-1.10.12', 'APPDATA': 'C:\\Users\\VssAdministrator\\AppData\\Roaming', 'AR': 'llvm-ar', 'AS': 'llvm-as', 'AZURE_EXTENSION_DIR': 'C:\\Program Files\\Common Files\\AzureCliExtensionDirectory', 'AZURE_HTTP_USER_AGENT': 'VSTS_116cc368-5c0c-4eb4-bb44-7f3fa5bdce14_build_6_0', 'BUILD_ARTIFACTSTAGINGDIRECTORY': 'D:\\a\\1\\a', 'BUILD_BINARIESDIRECTORY': 'D:\\a\\1\\b', 'BUILD_BUILDID': '17021', 'BUILD_BUILDNUMBER': '20220408.7', 'BUILD_BUILDURI': 'vstfs:///Build/Build/17021', 'BUILD_CONTAINERID': '11809685', 'BUILD_DEFINITIONNAME': 'explosion.cython-blis', 'BUILD_DEFINITIONVERSION': '1', 'BUILD_QUEUEDBY': 'GitHub', 'BUILD_QUEUEDBYID': '38e7e9f7-fc06-4f5a-b6dd-1782f4ef7c25', 'BUILD_REASON': 'PullRequest', 'BUILD_REPOSITORY_GIT_SUBMODULECHECKOUT': 'False', 'BUILD_REPOSITORY_ID': 'explosion/cython-blis', 'BUILD_REPOSITORY_LOCALPATH': 'D:\\a\\1\\s', 'BUILD_REPOSITORY_NAME': 'explosion/cython-blis', 'BUILD_REPOSITORY_PROVIDER': 'GitHub', 'BUILD_REPOSITORY_URI': 'https://github.com/explosion/cython-blis', 'BUILD_REQUESTEDFOR': 'GitHub', 'BUILD_REQUESTEDFOREMAIL': '', 'BUILD_REQUESTEDFORID': '38e7e9f7-fc06-4f5a-b6dd-1782f4ef7c25', 'BUILD_SOURCEBRANCH': 'refs/pull/69/merge', 'BUILD_SOURCEBRANCHNAME': 'merge', 'BUILD_SOURCESDIRECTORY': 'D:\\a\\1\\s', 'BUILD_SOURCEVERSION': '273ec162fa5f042b5d946638cedd954583ff8111', 'BUILD_SOURCEVERSIONAUTHOR': 'Daniël de Kok', 'BUILD_SOURCEVERSIONMESSAGE': 'Merge 1de7a1931422b892af086ce69604e7e3459e9f8e into 
6daabf0c925bfe67f7d87874ce014eb3212711e7', 'BUILD_STAGINGDIRECTORY': 'D:\\a\\1\\a', 'CABAL_DIR': 'C:\\cabal', 'CC': 'clang', 'COBERTURA_HOME': 'C:\\cobertura-2.1.1', 'COMMONPROGRAMFILES': 'C:\\Program Files\\Common Files', 'COMMON_TESTRESULTSDIRECTORY': 'D:\\a\\1\\TestResults', 'COMPUTERNAME': 'WIN-CU8INV6766V', 'COMSPEC': 'C:\\Windows\\system32\\cmd.exe', 'CONDA': 'C:\\Miniconda', 'CONFIG_SITE': 'C:/Program Files/Git/etc/config.site', 'CHOCOLATEYINSTALL': 'C:\\ProgramData\\chocolatey', 'CHROMEWEBDRIVER': 'C:\\SeleniumWebDrivers\\ChromeDriver', 'COMMONPROGRAMFILES(X86)': 'C:\\Program Files (x86)\\Common Files', 'COMMONPROGRAMW6432': 'C:\\Program Files\\Common Files', 'DISPLAY': 'needs-to-be-defined', 'DOTNET_MULTILEVEL_LOOKUP': '0', 'DRIVERDATA': 'C:\\Windows\\System32\\Drivers\\DriverData', 'EXEPATH': 'C:\\Program Files\\Git\\bin', 'EDGEWEBDRIVER': 'C:\\SeleniumWebDrivers\\EdgeDriver', 'GCM_INTERACTIVE': 'Never', 'GHCUP_INSTALL_BASE_PREFIX': 'C:\\', 'GHCUP_MSYS2': 'C:\\msys64', 'GIT_TERMINAL_PROMPT': '0', 'GOROOT_1_15_X64': 'C:\\hostedtoolcache\\windows\\go\\1.15.15\\x64', 'GOROOT_1_16_X64': 'C:\\hostedtoolcache\\windows\\go\\1.16.15\\x64', 'GOROOT_1_17_X64': 'C:\\hostedtoolcache\\windows\\go\\1.17.8\\x64', 'GOROOT_1_18_X64': 'C:\\hostedtoolcache\\windows\\go\\1.18.0\\x64', 'GRADLE_HOME': 'C:\\ProgramData\\chocolatey\\lib\\gradle\\tools\\gradle-7.4', 'GECKOWEBDRIVER': 'C:\\SeleniumWebDrivers\\GeckoDriver', 'HOME': 'C:\\Users\\VssAdministrator', 'HOMEDRIVE': 'C:', 'HOMEPATH': '\\Users\\VssAdministrator', 'HOSTNAME': 'WIN-CU8INV6766V', 'IEWEBDRIVER': 'C:\\SeleniumWebDrivers\\IEDriver', 'IMAGENAME': 'windows-latest', 'INFOPATH': 'C:\\Program Files\\Git\\usr\\local\\info;C:\\Program Files\\Git\\usr\\share\\info;C:\\Program Files\\Git\\usr\\info;C:\\Program Files\\Git\\share\\info', 'IMAGEOS': 'win22', 'IMAGEVERSION': '20220330.1', 'JAVA_HOME': 'C:\\hostedtoolcache\\windows\\Java_Temurin-Hotspot_jdk\\8.0.322-6\\x64', 'JAVA_HOME_11_X64': 
'C:\\hostedtoolcache\\windows\\Java_Temurin-Hotspot_jdk\\11.0.14-101\\x64', 'JAVA_HOME_17_X64': 'C:\\hostedtoolcache\\windows\\Java_Temurin-Hotspot_jdk\\17.0.2-8\\x64', 'JAVA_HOME_8_X64': 'C:\\hostedtoolcache\\windows\\Java_Temurin-Hotspot_jdk\\8.0.322-6\\x64', 'LANG': 'en_US.UTF-8', 'LOCALAPPDATA': 'C:\\Users\\VssAdministrator\\AppData\\Local', 'LOGONSERVER': '\\\\WIN-CU8INV6766V', 'M2': 'C:\\ProgramData\\chocolatey\\lib\\maven\\apache-maven-3.8.5\\bin', 'M2_REPO': 'C:\\ProgramData\\m2', 'MANPATH': 'C:\\Program Files\\Git\\mingw64\\local\\man;C:\\Program Files\\Git\\mingw64\\share\\man;C:\\Program Files\\Git\\usr\\local\\man;C:\\Program Files\\Git\\usr\\share\\man;C:\\Program Files\\Git\\usr\\man;C:\\Program Files\\Git\\share\\man', 'MAVEN_OPTS': '-Xms256m', 'MINGW_CHOST': 'x86_64-w64-mingw32', 'MINGW_PACKAGE_PREFIX': 'mingw-w64-x86_64', 'MINGW_PREFIX': 'C:/Program Files/Git/mingw64', 'MSDEPLOY_HTTP_USER_AGENT': 'VSTS_116cc368-5c0c-4eb4-bb44-7f3fa5bdce14_build_6_0', 'MSYSTEM': 'MINGW64', 'MSYSTEM_CARCH': 'x86_64', 'MSYSTEM_CHOST': 'x86_64-w64-mingw32', 'MSYSTEM_PREFIX': 'C:/Program Files/Git/mingw64', 'MONAGENTCLIENTLOCATION': 'C:\\Packages\\Plugins\\Microsoft.Azure.Geneva.GenevaMonitoring\\2.35.0.2\\Monitoring\\Agent', 'NUMBER_OF_PROCESSORS': '4', 'OLDPWD': 'D:/a/1/s', 'ORIGINAL_PATH': 'C:\\Program Files\\Git\\mingw64\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Users\\VssAdministrator\\bin;C:\\Program Files\\LLVM\\bin;C:\\Users\\VssAdministrator\\AppData\\Roaming\\Python\\Python38\\Scripts;C:\\hostedtoolcache\\windows\\Python\\3.8.10\\x64\\Scripts;C:\\hostedtoolcache\\windows\\Python\\3.8.10\\x64;C:\\agents\\2.202.0\\externals\\git\\cmd;C:\\agents\\2.202.0\\externals\\git\\mingw64\\bin;C:\\Program Files\\MongoDB\\Server\\5.0\\bin;C:\\aliyun-cli;C:\\vcpkg;C:\\Program Files (x86)\\NSIS;C:\\tools\\zstd;C:\\Program Files\\Mercurial;C:\\hostedtoolcache\\windows\\stack\\2.7.5\\x64;C:\\cabal\\bin;C:\\ghcup\\bin;C:\\tools\\ghc-9.2.2\\bin;C:\\Program 
Files\\dotnet;C:\\mysql\\bin;C:\\Program Files\\R\\R-4.1.3\\bin\\x64;C:\\SeleniumWebDrivers\\GeckoDriver;C:\\Program Files (x86)\\sbt\\bin;C:\\Program Files (x86)\\GitHub CLI;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files (x86)\\pipx_bin;C:\\hostedtoolcache\\windows\\go\\1.16.15\\x64\\bin;C:\\hostedtoolcache\\windows\\Python\\3.9.12\\x64\\Scripts;C:\\hostedtoolcache\\windows\\Python\\3.9.12\\x64;C:\\hostedtoolcache\\windows\\Ruby\\3.0.3\\x64\\bin;C:\\tools\\kotlinc\\bin;C:\\hostedtoolcache\\windows\\Java_Temurin-Hotspot_jdk\\8.0.322-6\\x64\\bin;C:\\npm\\prefix;C:\\Program Files (x86)\\Microsoft SDKs\\Azure\\CLI2\\wbin;C:\\ProgramData\\kind;C:\\Program Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files\\dotnet;C:\\ProgramData\\Chocolatey\\bin;C:\\Program Files\\Docker;C:\\Program Files\\PowerShell\\7;C:\\Program Files\\Microsoft\\Web Platform Installer;C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn;C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn;C:\\Program Files\\nodejs;C:\\Program Files\\OpenSSL\\bin;C:\\Strawberry\\c\\bin;C:\\Strawberry\\perl\\site\\bin;C:\\Strawberry\\perl\\bin;C:\\ProgramData\\chocolatey\\lib\\pulumi\\tools\\Pulumi\\bin;C:\\Program Files\\TortoiseSVN\\bin;C:\\Program Files\\CMake\\bin;C:\\ProgramData\\chocolatey\\lib\\maven\\apache-maven-3.8.5\\bin;C:\\Program Files\\Microsoft Service Fabric\\bin\\Fabric\\Fabric.Code;C:\\Program Files\\Microsoft SDKs\\Service Fabric\\Tools\\ServiceFabricLocalClusterManager;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Git\\mingw64\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\GitHub CLI;C:\\tools\\php;C:\\Program Files (x86)\\sbt\\bin;C:\\SeleniumWebDrivers\\ChromeDriver;C:\\SeleniumWebDrivers\\EdgeDriver;C:\\Program Files\\Amazon\\AWSCLIV2;C:\\Program Files\\Amazon\\SessionManagerPlugin\\bin;C:\\Program 
Files\\Amazon\\AWSSAMCLI\\bin;C:\\Program Files\\Microsoft SQL Server\\130\\Tools\\Binn;C:\\Program Files\\LLVM\\bin;C:\\Users\\VssAdministrator\\.dotnet\\tools;C:\\Users\\VssAdministrator\\.cargo\\bin;C:\\Users\\VssAdministrator\\AppData\\Local\\Microsoft\\WindowsApps', 'ORIGINAL_TEMP': 'C:/Users/VSSADM~1/AppData/Local/Temp', 'ORIGINAL_TMP': 'C:/Users/VSSADM~1/AppData/Local/Temp', 'OS': 'windows', 'PATH': 'C:\\Users\\VssAdministrator\\bin;C:\\Program Files\\Git\\mingw64\\bin;C:\\Program Files\\Git\\usr\\local\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\Git\\mingw64\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Users\\VssAdministrator\\bin;C:\\Program Files\\LLVM\\bin;C:\\Users\\VssAdministrator\\AppData\\Roaming\\Python\\Python38\\Scripts;C:\\hostedtoolcache\\windows\\Python\\3.8.10\\x64\\Scripts;C:\\hostedtoolcache\\windows\\Python\\3.8.10\\x64;C:\\agents\\2.202.0\\externals\\git\\cmd;C:\\agents\\2.202.0\\externals\\git\\mingw64\\bin;C:\\Program Files\\MongoDB\\Server\\5.0\\bin;C:\\aliyun-cli;C:\\vcpkg;C:\\Program Files (x86)\\NSIS;C:\\tools\\zstd;C:\\Program Files\\Mercurial;C:\\hostedtoolcache\\windows\\stack\\2.7.5\\x64;C:\\cabal\\bin;C:\\ghcup\\bin;C:\\tools\\ghc-9.2.2\\bin;C:\\Program Files\\dotnet;C:\\mysql\\bin;C:\\Program Files\\R\\R-4.1.3\\bin\\x64;C:\\SeleniumWebDrivers\\GeckoDriver;C:\\Program Files (x86)\\sbt\\bin;C:\\Program Files (x86)\\GitHub CLI;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files (x86)\\pipx_bin;C:\\hostedtoolcache\\windows\\go\\1.16.15\\x64\\bin;C:\\hostedtoolcache\\windows\\Python\\3.9.12\\x64\\Scripts;C:\\hostedtoolcache\\windows\\Python\\3.9.12\\x64;C:\\hostedtoolcache\\windows\\Ruby\\3.0.3\\x64\\bin;C:\\tools\\kotlinc\\bin;C:\\hostedtoolcache\\windows\\Java_Temurin-Hotspot_jdk\\8.0.322-6\\x64\\bin;C:\\npm\\prefix;C:\\Program Files (x86)\\Microsoft SDKs\\Azure\\CLI2\\wbin;C:\\ProgramData\\kind;C:\\Program 
Files\\Microsoft\\jdk-11.0.12.7-hotspot\\bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files\\dotnet;C:\\ProgramData\\Chocolatey\\bin;C:\\Program Files\\Docker;C:\\Program Files\\PowerShell\\7;C:\\Program Files\\Microsoft\\Web Platform Installer;C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn;C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn;C:\\Program Files\\nodejs;C:\\Program Files\\OpenSSL\\bin;C:\\Strawberry\\c\\bin;C:\\Strawberry\\perl\\site\\bin;C:\\Strawberry\\perl\\bin;C:\\ProgramData\\chocolatey\\lib\\pulumi\\tools\\Pulumi\\bin;C:\\Program Files\\TortoiseSVN\\bin;C:\\Program Files\\CMake\\bin;C:\\ProgramData\\chocolatey\\lib\\maven\\apache-maven-3.8.5\\bin;C:\\Program Files\\Microsoft Service Fabric\\bin\\Fabric\\Fabric.Code;C:\\Program Files\\Microsoft SDKs\\Service Fabric\\Tools\\ServiceFabricLocalClusterManager;C:\\Program Files\\Git\\cmd;C:\\Program Files\\Git\\mingw64\\bin;C:\\Program Files\\Git\\usr\\bin;C:\\Program Files\\GitHub CLI;C:\\tools\\php;C:\\Program Files (x86)\\sbt\\bin;C:\\SeleniumWebDrivers\\ChromeDriver;C:\\SeleniumWebDrivers\\EdgeDriver;C:\\Program Files\\Amazon\\AWSCLIV2;C:\\Program Files\\Amazon\\SessionManagerPlugin\\bin;C:\\Program Files\\Amazon\\AWSSAMCLI\\bin;C:\\Program Files\\Microsoft SQL Server\\130\\Tools\\Binn;C:\\Program Files\\LLVM\\bin;C:\\Users\\VssAdministrator\\.dotnet\\tools;C:\\Users\\VssAdministrator\\.cargo\\bin;C:\\Users\\VssAdministrator\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Program Files\\Git\\usr\\bin\\vendor_perl;C:\\Program Files\\Git\\usr\\bin\\core_perl', 'PATHEXT': '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.CPL', 'PGBIN': 'C:\\Program Files\\PostgreSQL\\14\\bin', 'PGDATA': 'C:\\Program Files\\PostgreSQL\\14\\data', 'PGPASSWORD': 'root', 'PGROOT': 'C:\\Program Files\\PostgreSQL\\14', 'PGUSER': 'postgres', 'PHPROOT': 'c:\\tools\\php', 
'PIPELINE_WORKSPACE': 'D:\\a\\1', 'PIPX_BIN_DIR': 'C:\\Program Files (x86)\\pipx_bin', 'PIPX_HOME': 'C:\\Program Files (x86)\\pipx', 'PKG_CONFIG_PATH': 'C:\\Program Files\\Git\\mingw64\\lib\\pkgconfig;C:\\Program Files\\Git\\mingw64\\share\\pkgconfig', 'PLINK_PROTOCOL': 'ssh', 'POWERSHELL_DISTRIBUTION_CHANNEL': 'Azure-DevOps-win22', 'POWERSHELL_UPDATECHECK': 'Off', 'PROCESSOR_ARCHITECTURE': 'AMD64', 'PROCESSOR_IDENTIFIER': 'Intel64 Family 6 Model 79 Stepping 1, GenuineIntel', 'PROCESSOR_LEVEL': '6', 'PROCESSOR_REVISION': '4f01', 'PROGRAMFILES': 'C:\\Program Files', 'PROMPT': '$P$G', 'PSEXECUTIONPOLICYPREFERENCE': 'Unrestricted', 'PSMODULEPATH': 'C:\\Users\\VssAdministrator\\Documents\\WindowsPowerShell\\Modules;C:\\\\Modules\\azurerm_2.1.0;C:\\\\Modules\\azure_2.1.0;C:\\Users\\packer\\Documents\\WindowsPowerShell\\Modules;C:\\Program Files\\WindowsPowerShell\\Modules;C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\Modules;C:\\Program Files\\Microsoft SQL Server\\130\\Tools\\PowerShell\\Modules\\', 'PUBLIC': 'C:\\Users\\Public', 'PWD': 'D:/a/1/s/flame-blis', 'PYTHON_VERSION': '3.8', 'PROGRAMDATA': 'C:\\ProgramData', 'PROGRAMFILES(X86)': 'C:\\Program Files (x86)', 'PROGRAMW6432': 'C:\\Program Files', 'RANLIB': 'echo', 'RESOURCES_TRIGGERINGALIAS': '', 'RESOURCES_TRIGGERINGCATEGORY': '', 'RTOOLS40_HOME': 'C:\\rtools40', 'RUNNER_TOOLSDIRECTORY': 'C:\\hostedtoolcache\\windows', 'SBT_HOME': 'C:\\Program Files (x86)\\sbt\\', 'SELENIUM_JAR_PATH': 'C:\\selenium\\selenium-server.jar', 'SHELL': 'C:\\Program Files\\Git\\usr\\bin\\bash.exe', 'SHLVL': '2', 'SSH_ASKPASS': 'C:/Program Files/Git/mingw64/bin/git-askpass.exe', 'SYSTEM': 'build', 'SYSTEMDRIVE': 'C:', 'SYSTEMROOT': 'C:\\Windows', 'SYSTEM_ARTIFACTSDIRECTORY': 'D:\\a\\1\\a', 'SYSTEM_COLLECTIONID': '116cc368-5c0c-4eb4-bb44-7f3fa5bdce14', 'SYSTEM_COLLECTIONURI': 'https://dev.azure.com/explosion-ai/', 'SYSTEM_CULTURE': 'en-US', 'SYSTEM_DEBUG': 'false', 'SYSTEM_DEFAULTWORKINGDIRECTORY': 'D:\\a\\1\\s', 'SYSTEM_DEFINITIONID': 
'6', 'SYSTEM_DEFINITIONNAME': 'explosion.cython-blis', 'SYSTEM_ENABLEACCESSTOKEN': 'SecretVariable', 'SYSTEM_HOSTTYPE': 'build', 'SYSTEM_ISSCHEDULED': 'False', 'SYSTEM_JOBATTEMPT': '1', 'SYSTEM_JOBDISPLAYNAME': 'JSONL Python38Windows', 'SYSTEM_JOBID': 'efb31c5a-ec83-5597-c79c-3c04d0eba6be', 'SYSTEM_JOBIDENTIFIER': 'JSONL.Python38Windows', 'SYSTEM_JOBNAME': 'Python38Windows', 'SYSTEM_JOBPARALLELISMTAG': 'Public', 'SYSTEM_JOBPOSITIONINPHASE': '1', 'SYSTEM_JOBTIMEOUT': '60', 'SYSTEM_PARALLELEXECUTIONTYPE': 'MultiConfiguration', 'SYSTEM_PHASEATTEMPT': '1', 'SYSTEM_PHASEDISPLAYNAME': 'JSONL', 'SYSTEM_PHASEID': 'ecb95708-c2a5-5456-f379-96cd8090c2a6', 'SYSTEM_PHASENAME': 'JSONL', 'SYSTEM_PIPELINESTARTTIME': '2022-04-08 12:45:25+00:00', 'SYSTEM_PLANID': '4bcc5172-4f8e-4a4f-b00f-1b3d5e2fe9dd', 'SYSTEM_POSTLINESSPEED': '10000', 'SYSTEM_PULLREQUEST_ISFORK': 'True', 'SYSTEM_PULLREQUEST_MERGEDAT': '', 'SYSTEM_PULLREQUEST_PULLREQUESTID': '899994381', 'SYSTEM_PULLREQUEST_PULLREQUESTNUMBER': '69', 'SYSTEM_PULLREQUEST_SOURCEBRANCH': 'update-to-blis-0.9.0', 'SYSTEM_PULLREQUEST_SOURCECOMMITID': '1de7a1931422b892af086ce69604e7e3459e9f8e', 'SYSTEM_PULLREQUEST_SOURCEREPOSITORYURI': 'https://github.com/explosion/cython-blis', 'SYSTEM_PULLREQUEST_TARGETBRANCH': 'master', 'SYSTEM_RESTRICTSECRETS': 'True', 'SYSTEM_SERVERTYPE': 'Hosted', 'SYSTEM_STAGEATTEMPT': '1', 'SYSTEM_STAGEDISPLAYNAME': '__default', 'SYSTEM_STAGEID': '96ac2280-8cb4-5df5-99de-dd2da759617d', 'SYSTEM_STAGENAME': '__default', 'SYSTEM_TASKDEFINITIONSURI': 'https://dev.azure.com/explosion-ai/', 'SYSTEM_TASKDISPLAYNAME': 'Generate JSONL (Windows)', 'SYSTEM_TASKINSTANCEID': '4bae54ba-656f-5414-04c0-0cf207e9f5bd', 'SYSTEM_TASKINSTANCENAME': 'CmdLine5', 'SYSTEM_TEAMFOUNDATIONCOLLECTIONURI': 'https://dev.azure.com/explosion-ai/', 'SYSTEM_TEAMFOUNDATIONSERVERURI': 'https://dev.azure.com/explosion-ai/', 'SYSTEM_TEAMPROJECT': 'Public', 'SYSTEM_TEAMPROJECTID': '5c6613e9-6ccf-48bd-81de-dbc3b0a6f957', 'SYSTEM_TIMELINEID': 
'4bcc5172-4f8e-4a4f-b00f-1b3d5e2fe9dd', 'SYSTEM_TOTALJOBSINPHASE': '1', 'SYSTEM_WORKFOLDER': 'D:\\a', 'TASK_DISPLAYNAME': 'Generate JSONL (Windows)', 'TASK_SKIPTRANSLATORFORCHECKOUT': 'False', 'TEMP': 'C:\\Users\\VSSADM~1\\AppData\\Local\\Temp', 'TERM': 'xterm-256color', 'TF_BUILD': 'True', 'TMP': 'C:\\Users\\VSSADM~1\\AppData\\Local\\Temp', 'TMPDIR': 'C:\\Users\\VSSADM~1\\AppData\\Local\\Temp', 'USEPYTHONVERSION_PYTHONLOCATION': 'C:\\hostedtoolcache\\windows\\Python\\3.8.10\\x64', 'USERDOMAIN': 'WIN-CU8INV6766V', 'USERDOMAIN_ROAMINGPROFILE': 'WIN-CU8INV6766V', 'USERNAME': 'VssAdministrator', 'USERPROFILE': 'C:\\Users\\VssAdministrator', 'VCPKG_INSTALLATION_ROOT': 'C:\\vcpkg', 'VSTS_AGENT_PERFLOG': 'C:\\agents\\perflog', 'VSTS_PROCESS_LOOKUP_ID': 'vsts_175962b3-f397-42b7-b557-a072c6b9de45', 'VSTS_SECRET_VARIABLES': '', 'WINDIR': 'C:\\Windows', 'WIX': 'C:\\Program Files (x86)\\WiX Toolset v3.11\\', '_': 'C:/hostedtoolcache/windows/Python/3.8.10/x64/python', 'AGENT.JOBSTATUS': 'Succeeded', 'NPM_CONFIG_PREFIX': 'C:\\npm\\prefix'} [COMMAND] C:\Program Files\LLVM\bin\clang.exe -c C:\Users\talta\AppData\Local\Temp\pip-install-wk4qfygk\blis_cf8f906855084da7a11bbd6cbcaf460a\blis\_src\config\bulldozer\bli_cntx_init_bulldozer.c -o C:\Users\talta\AppData\Local\Temp\tmpyzg4ayho\bli_cntx_init_bulldozer.o -O2 -std=c99 -D_POSIX_C_SOURCE=200112L -DBLIS_VERSION_STRING="0.9.0" -DBLIS_IS_BUILDING_LIBRARY -Iinclude\windows-x86_64 -I.\frame\3\ -I.\frame\1m\ -I.\frame\1f\ -I.\frame\1\ -I.\frame\include -IC:\Users\talta\AppData\Local\Temp\pip-install-wk4qfygk\blis_cf8f906855084da7a11bbd6cbcaf460a\blis\_src\include\windows-x86_64 error: [WinError 2] The system cannot find the file specified [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. 
ERROR: Failed building wheel for blis
Failed to build blis
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (blis)
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

pip subprocess to install build dependencies did not run successfully.
exit code: 1
See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.`

## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Windows 11
* Python Version Used: Python 3.13
* spaCy Version Used: spacy-3.8.2
* Environment Information: Command prompt
open
2024-10-12T07:42:28Z
2025-03-24T13:11:56Z
https://github.com/explosion/spaCy/issues/13658
[]
RubTalha
35
piskvorky/gensim
data-science
2,986
Faster evaluation metrics (baked into the library?)
Before getting into the issue, I'd like to thank you all for maintaining this library! It's been great so far, and I really appreciate the thorough documentation.

---

#### Problem description

I'm trying to train Word2Vec embedding vectors on my own dataset. Things have been going well so far, but as I've started to add in certain features to the training loop, it's become more and more difficult to continue. Our scenario is that we'd like to adapt [Twitter's recent paper](https://dl.acm.org/doi/abs/10.1145/3383313.3418486) (and its reference on [Word2Vec for recommendation systems](https://arxiv.org/abs/1804.04212)) for our own use-case.

Put simply, I have three files (`train.jsonl`, `valid.jsonl`, `test.jsonl`) with samples of our full training dataset (~275k, 110k, and 110k examples, respectively). Using `gensim`, I can successfully train a Word2Vec model for many epochs and get a proper output. Since Gensim doesn't come out-of-the-box with certain callbacks and metrics, I've rolled my own and applied them successfully—not a problem.

The problem comes when one of those callbacks is a metric that has to do some inference work on many sequences. For example, in the latter paper linked above, the authors describe a `Hit Ratio @ K` metric, which is doing next-token-prediction on a sequence of `n` tokens: the context consists of tokens `0, ..., n-1` and the token to be predicted is `n`. I've implemented it below:

```python
from math import inf
from typing import List
from typing import Optional
from typing import Sequence
from typing import Set
from typing import Tuple

import gensim


def hit_ratio_at_k(
    model: gensim.models.Word2Vec, sequence: Sequence[str], k: int, verbose: bool = False
) -> float:
    if verbose:
        logger.debug(f"Called with `k={k}` on `sequence={sequence}`")

    # Exit early: if we don't have enough data to make separate context from target.
    if len(sequence) < 2:
        return inf

    # Isolate tokens `t_0, ..., t_{n-1}` as context, and `t_n` as target.
    context: List[str] = [*sequence[:-1]]
    target: str = sequence[-1]

    # Get our top `k` predictions for the next token.
    # Note: `gensim` returns None if all context words are OOV.
    preds: Optional[List[Tuple[str, float]]] = model.predict_output_word(
        context_words_list=context, topn=k
    )
    if not preds:
        return inf

    # If we have valid predictions, isolate the unordered tokens.
    pred_tokens: Set[str] = {word for word, _ in preds}

    # Hit Ratio is 1 if the target appears in the list of `k` predicted items, else 0.
    hit_ratio: float = 1.0 if target in pred_tokens else 0.0
    return hit_ratio
```

I wanted to track _Hit Ratio @ 1_ on both the training and validation sets after each epoch, so I made a callback that can do that for any of my general metric functions:

```python
from dataclasses import dataclass
from dataclasses import field
from math import isfinite
from typing import Callable
from typing import List
from typing import Sequence
from typing import Tuple

import gensim
import tqdm
from gensim.models.callbacks import CallbackAny2Vec

from myrepo.models.data import DataLoader

logger = ...


@dataclass
class MetricTracker(CallbackAny2Vec):
    dl: DataLoader
    name: str
    func: Callable[[gensim.models.Word2Vec, Sequence[str]], float]
    value: List[float] = field(default_factory=list, init=False)

    def on_epoch_end(self, model: gensim.models.Word2Vec) -> None:
        logger.debug(f"Computing {self.name} via `{self.func.__name__}`")

        # Compute our evaluation metric on each sequence in our data loader.
        per_line_values: Tuple[float, ...] = tuple(
            self.func(model, seq) for seq in tqdm.tqdm(self.dl, total=len(self.dl))
        )

        # Adjust for any invalid values returned by our evaluation metric.
        valid_line_values: Tuple[float, ...] = tuple(
            value for value in per_line_values if isfinite(value)
        )
        avg_value: float = sum(valid_line_values) / len(valid_line_values)
        self.value.append(avg_value)
        logger.debug(f"Finished computing `{self.name}`: {avg_value:.3f}")
```

This _works_ just fine, except that you quickly run into performance problems: Gensim's training loop is parallelized and fast, but (understandably) callbacks are called within a single process. To try and mitigate this, I tried using Python's multi-processing (via `multiprocessing.dummy` and `concurrent.futures` packages) to make parallel calls to `self.func(model, seq)`. This helps when the data loader is small (a sample of ~3-5k sequences), but when passing the full train/validation data loader, performance isn't so great. For reference, `hit_ratio_at_k` (=`self.func`) on a single process can do about 30 iterations per second.

I suppose I'd want to know if you've dealt with this issue before. Ideally, I'd love to have a Gensim-approved way of doing inference on many documents/word sequences.

#### Steps/code/corpus to reproduce

This isn't a bug, but the relevant code blocks are above. Happy to provide any other code that would help clarify. I thought of potentially "freezing" the model's `KeyedVectors` instance (just for evaluation) to see if there's a significant speed-up, but I'm not sure what side effects I might be incurring (if any) by doing so.

#### Versions

Here's what I'm working with:

```python
Linux-4.15.0-65-generic-x86_64-with-debian-9.13
Python 3.7.9 (default, Oct 13 2020, 21:28:14) [GCC 6.3.0 20170516]
Bits 64
NumPy 1.19.2
SciPy 1.5.3
gensim 3.8.3
FAST_VERSION 1
```

Thank you!
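For what it's worth, the per-sequence `predict_output_word` calls are the expensive part here: each one re-scores the whole vocabulary from Python. One way to sidestep that (a sketch only, not gensim API: the `output_weights` matrix and the mean-of-context reduction are assumptions you would wire up to your own model, e.g. the negative-sampling output matrix) is to score a whole batch of sequences against the output embeddings in a single matrix multiply:

```python
import numpy as np


def batched_hit_ratio_at_k(context_vecs, target_ids, output_weights, k):
    """Hit Ratio @ K for a batch of sequences in one shot.

    context_vecs:   (n, d) one context vector per sequence (e.g. mean of context word vectors)
    target_ids:     (n,)   vocabulary index of each true next token
    output_weights: (v, d) output embedding matrix to score against
    """
    scores = context_vecs @ output_weights.T                      # (n, v) logits
    # Unordered indices of the k highest-scoring tokens per row.
    topk = np.argpartition(-scores, kth=k - 1, axis=1)[:, :k]
    hits = (topk == target_ids[:, None]).any(axis=1)
    return hits.mean()


# Toy check with an identity "output embedding" (vocab of 4, dim 4):
W = np.eye(4)
ctx = np.array([[0.1, 0.9, 0.0, 0.0],
                [0.8, 0.1, 0.05, 0.0]])
tgt = np.array([1, 2])
print(batched_hit_ratio_at_k(ctx, tgt, W, k=1))  # 0.5: only the first target is top-1
```

This keeps the heavy work in vectorized NumPy instead of a Python loop, so it parallelizes for free across BLAS threads; only the batching of contexts into `context_vecs` is left to do per epoch.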
closed
2020-10-19T21:53:32Z
2021-04-19T15:49:40Z
https://github.com/piskvorky/gensim/issues/2986
[]
dataframing
5
keras-team/keras
machine-learning
20,297
Webpage not rendering correctly
Please see the attached image. As you can see, the documentation for the `Attention` layer is not rendering correctly. ![Screenshot 2024-09-27 at 13 43 06](https://github.com/user-attachments/assets/0ba55cef-51ac-4223-b350-0118d52b079d)
closed
2024-09-27T11:44:15Z
2024-09-28T01:01:30Z
https://github.com/keras-team/keras/issues/20297
[ "type:Bug" ]
dkgaraujo
2
labmlai/annotated_deep_learning_paper_implementations
machine-learning
189
cannot run ViT (vision transformer) experiment file (failed to connect to https://api.labml.ai/api/vl/track?run%20wuid-87829.c05191leeae2db06088ee9ee4&labml%20version=0.4.162)
When I try to run experiment.py, it gives an error message. "failed to connect: https://api.labml.ai/api/vl/track?run%20wuid-87829.c05191leeae2db06088ee9ee4&labml%20version=0.4.162" I also can't visit this site. [https://api.labml.ai/api/vl/track?run%20wuid-87829.c05191leeae2db06088ee9ee4&labml%20version=0.4.162](url)
closed
2023-06-07T10:05:26Z
2023-07-03T03:26:36Z
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/189
[]
HiFei4869
2
tflearn/tflearn
tensorflow
623
Warning! ***HDF5 library version mismatched error***
When I run the example `lstm_generate_cityname.py` on my PC (Win10 + Anaconda Python 3.5 + TensorFlow 1.0), it shows this error.
closed
2017-02-23T02:41:12Z
2017-02-23T04:07:06Z
https://github.com/tflearn/tflearn/issues/623
[]
chenggui53
1
ResidentMario/geoplot
matplotlib
30
Assigning projections to plt.subplots()-generated axes generates rotated images
First off, this is a great library, thanks for making it!

I'm running into some non-intuitive behavior when using projections with `plt.subplots()`-generated axes. Briefly, passing a `projection` argument to both `plt.subplots` (via `subplot_kw=`) and `gplt.polyplot` generates images which are rotated by ~30 degrees. Minimum working example below:

```
import geoplot as gplt
import geopandas as gpd
import shapely
import matplotlib.pyplot as plt

square = shapely.geometry.Polygon([(41.5, 88), (41.75, 88), (41.75, 87.5), (41.5, 87.5)])

fig, ax = plt.subplots(1, 1, subplot_kw={'projection': gplt.crs.AlbersEqualArea()})
gplt.polyplot(gpd.GeoDataFrame({'geometry': square}, index=[1]),
              projection=gplt.crs.AlbersEqualArea(),
              ax=ax)
```

For me this generates a plot of a rectangle rotated by about 30 degrees. Some notes:

1. Passing the projection argument to `plt.subplots` is mandatory because without it you get back a `matplotlib.axes._subplots.AxesSubplot` instead of a `cartopy.mpl.geoaxes.GeoAxesSubplot`
2. Omitting the projection argument in the `gplt.polyplot` call generates an image that is way too big and extends past the edge of the plot.

Versions:

```
geoplot==0.0.3
geopandas==0.2.1
Shapely==1.5.17.post1
matplotlib==2.0.0
Cartopy==0.15.1
```
closed
2017-04-27T15:38:54Z
2019-07-05T14:53:12Z
https://github.com/ResidentMario/geoplot/issues/30
[]
hinnefe2
3
twopirllc/pandas-ta
pandas
379
How to get eri data in dataframe columns?
I don't understand how I get the eri values in a dataframe. Here is a small example to explain myself better.

```python
import pandas as pd
import pandas_ta as pta
import numpy as np

df_size = 10

data = np.random.random_integers(10000, 30000, size=df_size)
df = pd.DataFrame(data, columns=['volume'])
data = np.random.uniform(1, 3, size=df_size).round(2)
df['open'] = pd.DataFrame(data, columns=['open'])
df['close'] = df['open'].shift(-1)
df['high'] = (df[['open', 'close']].max(axis=1) * np.random.uniform(1.0, 1.1)).round(2)
df['low'] = (df[['open', 'close']].min(axis=1) * np.random.uniform(0.9, 1.0)).round(2)

df['efi_4'] = pta.efi(df['close'], df['volume'], length=4)
df['efi_bullp_4'], df['efi_bearp_4'] = pta.eri(df['high'], df['low'], df['close'], length=4)

print(df.to_string())
```

This is the output.

```sh
   volume  open  close  high   low         efi_4 efi_bullp_4 efi_bearp_4
0   29426  2.66   2.34  2.68  2.25           NaN     BULLP_4     BEARP_4
1   28127  2.34   2.36  2.38  2.25           NaN     BULLP_4     BEARP_4
2   13551  2.36   2.37  2.39  2.26           NaN     BULLP_4     BEARP_4
3   27993  2.37   2.22  2.39  2.13           NaN     BULLP_4     BEARP_4
4   29349  2.22   2.18  2.23  2.09  -1168.715000     BULLP_4     BEARP_4
5   10552  2.18   2.24  2.25  2.09   -447.981000     BULLP_4     BEARP_4
6   14127  2.24   1.50  2.25  1.44  -4450.380600     BULLP_4     BEARP_4
7   26463  1.50   2.79  2.81  1.44  10984.679640     BULLP_4     BEARP_4
8   21106  2.79   1.08  2.81  1.04  -7845.696216     BULLP_4     BEARP_4
9   22907  1.08    NaN  1.09  1.04           NaN     BULLP_4     BEARP_4
```

I expect the last two columns to have the value of eri, just like the efi column, but they contain the label? What am I doing wrong?
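The labels show up because of how pandas iteration works, nothing pandas-ta-specific: `pta.eri()` returns a single two-column DataFrame, and unpacking a DataFrame with `a, b = ...` iterates over it, which yields the column *labels* (here `BULLP_4` / `BEARP_4`) that pandas then broadcasts into whole columns. A minimal sketch with plain pandas (the `eri` frame below is a stand-in for the real `pta.eri` output):

```python
import pandas as pd

# Stand-in for the two-column DataFrame returned by pta.eri(..., length=4).
eri = pd.DataFrame({"BULLP_4": [0.1, 0.2], "BEARP_4": [-0.1, -0.2]})

# Unpacking iterates the DataFrame, and iterating yields column labels:
a, b = eri
print(a, b)  # BULLP_4 BEARP_4

# To get the values, assign the columns (or join the whole frame):
df = pd.DataFrame({"close": [1.0, 2.0]})
df["efi_bullp_4"] = eri["BULLP_4"]
df["efi_bearp_4"] = eri["BEARP_4"]
# or equivalently: df = df.join(eri)
print(df["efi_bullp_4"].tolist())  # [0.1, 0.2]
```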
closed
2021-08-26T13:19:18Z
2021-08-26T19:06:43Z
https://github.com/twopirllc/pandas-ta/issues/379
[ "info" ]
BillGatesIII
2
huggingface/transformers
deep-learning
36,022
Transformers are untraceable with FX after 4.38
### System Info

- `transformers` version: 4.48.0
- Platform: Linux-6.8.0-1020-gcp-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0a0+df5bbc09d1.nv24.12 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA H100 80GB HBM3

### Who can help?

@michaelbenayoun since I've seen them on some FX fixes

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Use `transformers.utils.fx.symbolic_trace` on any AutoModelForCausalLM:

```
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.utils.fx import symbolic_trace

model = AutoModelForCausalLM.from_pretrained('deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B')
symbolic_trace(model)
```

You'll get an error about unpacking a Proxy object:

```
Traceback (most recent call last):
  File "...", line 5, in <module>
    symbolic_trace(model)
  File "/usr/local/lib/python3.12/dist-packages/transformers/utils/fx.py", line 1506, in symbolic_trace
    traced_graph = tracer.trace(model, concrete_args=concrete_args)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/utils/fx.py", line 1329, in trace
    self.graph = super().trace(root, concrete_args=concrete_args)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 823, in trace
    (self.create_arg(fn(*args)),),
                     ^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/models/qwen2/modeling_qwen2.py", line -1, in forward
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 801, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/utils/fx.py", line 1193, in call_module
    return super().call_module(m, forward, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 519, in call_module
    ret_val = forward(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 794, in forward
    return _orig_module_call(mod, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/models/qwen2/modeling_qwen2.py", line 574, in forward
    layer_outputs = decoder_layer(
                    ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 801, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/utils/fx.py", line 1193, in call_module
    return super().call_module(m, forward, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 519, in call_module
    ret_val = forward(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 794, in forward
    return _orig_module_call(mod, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/models/qwen2/modeling_qwen2.py", line 259, in forward
    hidden_states, self_attn_weights = self.self_attn(
                                       ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 801, in module_call_wrapper
    return self.call_module(mod, forward, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/utils/fx.py", line 1193, in call_module
    return super().call_module(m, forward, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 519, in call_module
    ret_val = forward(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/_symbolic_trace.py", line 794, in forward
    return _orig_module_call(mod, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/models/qwen2/modeling_qwen2.py", line 159, in forward
    hidden_shape = (*input_shape, -1, self.head_dim)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/proxy.py", line 461, in __iter__
    return self.tracer.iter(self)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/transformers/utils/fx.py", line 901, in iter
    return super().iter(prxy)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/proxy.py", line 330, in iter
    raise TraceError('Proxy object cannot be iterated. This can be '
torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
```

Blame on that line shows that it was due to this refactor commit on a ton of models: https://github.com/huggingface/transformers/commit/2c47618c1a282f925446506d53108dc6e82d9ef0 that introduced `*splat` in a bunch of places. This breaks a lot of tracing, seems related to https://github.com/huggingface/transformers/issues/35622

### Expected behavior

We should be supporting FX tracing. I came up with a shoddy fix since it seems like `(*a_tuple, ...)` is used in a lot of places. This corresponds to a `LIST_EXTEND` operation.
We can implement `HFTracer.iter` to explicitly index the tuple instead of iterating over it (lifted pretty much from the `torch.fx.Proxy.__iter__` implementation):

```py
def iter(self, proxy):
    import dis
    frame = inspect.currentframe()
    assert frame is not None
    calling_frame = frame.f_back.f_back  # we need to go back two frames, because it goes through Proxy.__iter__ first
    assert calling_frame is not None
    inst_list = list(dis.get_instructions(calling_frame.f_code))
    if sys.version_info >= (3, 11):
        from bisect import bisect_left
        inst_idx = bisect_left(inst_list, calling_frame.f_lasti, key=lambda x: x.offset)
    else:
        inst_idx = calling_frame.f_lasti // 2
    inst = inst_list[inst_idx]
    if inst.opname == 'LIST_EXTEND':
        return (proxy[i] for i in range(len(proxy)))
    return super().iter(proxy)
```

This seems to work for the few models I've spot checked. I'm not familiar with Python bytecode, so I'm not sure if `LIST_EXTEND` has other use cases.
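The opcode check can be exercised without torch at all. Here is a self-contained toy (the `OpcodeSpy` name is made up, not anything from transformers or torch) that peeks at the caller's currently executing instruction from inside `__iter__`, the same trick the `iter` override above relies on, so you can confirm that a `(*splat, ...)` really does show up as `LIST_EXTEND` on your interpreter:

```python
import dis
import inspect
from bisect import bisect_left


class OpcodeSpy:
    """Toy iterable that records which opcode in the caller asked to iterate it."""

    def __init__(self, items):
        self._items = list(items)
        self.last_opname = None

    def __iter__(self):
        frame = inspect.currentframe().f_back  # frame whose bytecode triggered iter()
        insts = list(dis.get_instructions(frame.f_code))
        offsets = [i.offset for i in insts]
        # f_lasti is the byte offset of the caller's currently executing instruction;
        # on 3.11+ it can land in an inline CACHE slot, so fall back to the
        # preceding real instruction when there is no exact match.
        idx = bisect_left(offsets, frame.f_lasti)
        if idx == len(offsets) or offsets[idx] != frame.f_lasti:
            idx -= 1
        self.last_opname = insts[idx].opname
        return iter(self._items)


spy = OpcodeSpy([1, 2])
unpacked = (*spy, -1)  # tuple splats compile to LIST_EXTEND on CPython 3.9+
print(unpacked, spy.last_opname)
```

A real `HFTracer.iter` override would go two frames up (through `Proxy.__iter__`) and, for the `LIST_EXTEND` case, return `(proxy[i] for i in range(len(proxy)))` as in the snippet above, so the unpacking is recorded as explicit `getitem` nodes in the FX graph.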
closed
2025-02-03T20:43:11Z
2025-03-14T11:29:58Z
https://github.com/huggingface/transformers/issues/36022
[ "bug" ]
Li357
7
ultralytics/ultralytics
machine-learning
19,235
Val mode - No json predictions & plots in return
### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.

### Question

Hello,

When I run a YOLO val mode on custom data, I get my confusion matrix and my personal statistics file. However, **I no longer get my 'predict.json' and my plots in return**. Why is that? I should point out that I used to get the prediction file and the plots in my previous experiments with val mode. I did specify 'save_json=True'.

My script:

```
from ultralytics import YOLO
import pandas as pd
import os
import inspect
from pathlib import Path
from types import MethodType


def get_image_name(path):
    """Extract base name from image path without extension"""
    return Path(path).stem


def custom_plot_val_samples(self, batch, ni):
    """Custom plot validation samples function"""
    image_name = get_image_name(batch["im_file"][0])
    fname = self.save_dir / f"{image_name}_label.jpg"
    plot_images(
        batch["img"],
        batch["batch_idx"],
        batch["cls"].squeeze(-1),
        batch["bboxes"],
        paths=batch["im_file"],
        fname=fname,
        names=self.names,
        on_plot=self.on_plot,
    )


def custom_plot_predictions(self, batch, preds, ni):
    """Custom plot predictions function"""
    image_name = get_image_name(batch["im_file"][0])
    fname = self.save_dir / f"{image_name}_pred.jpg"
    plot_images(
        batch["img"],
        *output_to_target(preds, max_det=self.args.max_det),
        paths=batch["im_file"],
        fname=fname,
        names=self.names,
        on_plot=self.on_plot,
    )


def plot_samples(validator):
    frame = inspect.currentframe().f_back.f_back
    v = frame.f_locals

    # Override the original plotting methods with our custom ones
    original_plot_val = validator.plot_val_samples
    original_plot_pred = validator.plot_predictions
    try:
        # Bind our custom methods to the validator instance
        validator.plot_val_samples = MethodType(custom_plot_val_samples, validator)
        validator.plot_predictions = MethodType(custom_plot_predictions, validator)

        # Call the validation plotting
        validator.plot_val_samples(v["batch"], v["batch_i"])
        validator.plot_predictions(v["batch"], v["preds"], v["batch_i"])

        current_image_path = v["batch"]["im_file"][0]
        print(f"Plotting for image: {get_image_name(current_image_path)}")
    finally:
        # Restore original methods
        validator.plot_val_samples = original_plot_val
        validator.plot_predictions = original_plot_pred


# Initialization of validation
model = YOLO("P:/AGO/Developpement/Models/YOLO_EVALUATION_MAYOTTE/Inference/Pretrained/coco/yolov8n.pt")
save_dir = 'P:/AGO/Developpement/Models/YOLO_EVALUATION_MAYOTTE/runs/detect'
os.makedirs(save_dir, exist_ok=True)

# Don't forget to import plot_images and output_to_target at the top of the file
from ultralytics.utils.plotting import plot_images
from ultralytics.utils.plotting import output_to_target

model.add_callback("on_val_batch_end", plot_samples)

if __name__ == '__main__':
    metrics = model.val(
        data='N:/IA/Mayotte_2024/split_coco_to_Mayotte/data_coco_to_mayotte.yaml',
        batch=1,
        device='cuda',
        conf=0.01,
        epochs=100,
        imgsz=1920,
        project=save_dir,
        save_json=True,
        visualize=True
    )

    print(metrics.maps)  # map50-95
    print(type(metrics))
    print(metrics.confusion_matrix.matrix)
    #print(metrics.confusion_matrix.matches)

    # Extract relevant confusion matrix values for global evaluation
    df_confusion = pd.DataFrame(metrics.confusion_matrix.matrix)
    df_confusion.to_csv('P:/AGO/Developpement/Models/YOLO_EVALUATION_MAYOTTE/runs/detect/confusion_matrix.csv')

    # Prepare the global stats text
    # Extract the confusion matrix and TP, FP, FN values
    tp, fp, fn = metrics.confusion_matrix.tp_fp()
    bateau_tp = tp[8]
    bateau_fp = fp[8]
    bateau_fn = fn[8]
    valeur_abscence_detect = df_confusion.iloc[-1, 8]
    valeur_mauvaise_classif = df_confusion.iloc[:, 8].sum() - (valeur_abscence_detect + bateau_tp)

    stats = f"""
    Global Evaluation:
    Le nombre de bateaux vrais positifs (TP) est : {bateau_tp}
    Le nombre de bateaux faux positifs (FP) est : {bateau_fp}
    Le nombre de bateaux faux négatifs (FN) est : {bateau_fn}
    Le nombre de bateaux non détectés est : {valeur_abscence_detect}
    Le nombre de bateaux mal classifiés est : {valeur_mauvaise_classif}
    """

    # Save the global stats to a txt file
    with open(os.path.join(save_dir, 'evaluation_stats_global.txt'), 'w') as f:
        f.write(stats)
```

And my yaml file:

```
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
#path: N:/IA/SMD_0/SMD_IR_v0/SMD_IR/nir2nir  # dataset root dir
path: N:/IA/Mayotte_2024/split_coco_to_Mayotte
train:  # train images (relative to 'path') 4 images
val: images/test  # val images (relative to 'path') 4 images
test:  # test images (optional)

# Classes
names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  4: airplane
  5: bus
  6: train
  7: truck
  8: boat
  9: traffic light
  10: fire hydrant
  11: stop sign
  12: parking meter
  13: bench
  14: bird
  15: cat
  16: dog
  17: horse
  18: sheep
  19: cow
  20: elephant
  21: bear
  22: zebra
  23: giraffe
  24: backpack
  25: umbrella
  26: handbag
  27: tie
  28: suitcase
  29: frisbee
  30: skis
  31: snowboard
  32: sports ball
  33: kite
  34: baseball bat
  35: baseball glove
  36: skateboard
  37: surfboard
  38: tennis racket
  39: bottle
  40: wine glass
  41: cup
  42: fork
  43: knife
  44: spoon
  45: bowl
  46: banana
  47: apple
  48: sandwich
  49: orange
  50: broccoli
  51: carrot
  52: hot dog
  53: pizza
  54: donut
  55: cake
  56: chair
  57: couch
  58: potted plant
  59: bed
  60: dining table
  61: toilet
  62: tv
  63: laptop
  64: mouse
  65: remote
  66: keyboard
  67: cell phone
  68: microwave
  69: oven
  70: toaster
  71: sink
  72: refrigerator
  73: book
  74: clock
  75: vase
  76: scissors
  77: teddy bear
  78: hair drier
  79: toothbrush
```

### Additional

_No response_
closed
2025-02-13T16:54:00Z
2025-02-14T15:44:07Z
https://github.com/ultralytics/ultralytics/issues/19235
[ "question", "detect" ]
adriengoleb
2
2noise/ChatTTS
python
243
What is each of the .pt files on HF used for?
<img width="655" alt="image" src="https://github.com/2noise/ChatTTS/assets/3038472/84351770-176f-4fc9-9fc3-040a541ad031">
closed
2024-06-04T03:52:17Z
2024-07-20T04:01:27Z
https://github.com/2noise/ChatTTS/issues/243
[ "stale" ]
zh794390558
1
nolar/kopf
asyncio
1,019
Client-side throttling
### Problem I want to be able to use kopf to watch resources and update `last-handled-configuration` annotations for a large number of resources. (We are "migrating" from `kubectl apply --patch` to using kopf as our k8s "reconciler" and so for all previously managed k8s resources we need to patch those resources w/ our new kopf-specific annotation.) Since we are sending requests for something like 50k+ k8s resources individually, our apiserver is responding w/ 429s. Most k8s clients in other ecosystems (Go comes to mind, `kubectl` being a good example) implement client-side throttling so that controllers, operators, etc., do not DoS the k8s apiserver. tl;dr: I want to be able to start kopf and "migrate" a large number of k8s resources to be managed by kopf without having to worry about DoS'ing our apiserver. ### Proposal Implement a configurable, naive semaphore that wraps the `aiohttp` client so that a maximum number of requests can be sent from kopf to the k8s apiserver, i.e. implement a request queue client-side. ### Code ```python sem = asyncio.Semaphore(10) # ... later async with sem: # do kopf PATCH request as soon as a slot is available; otherwise, wait for a slot to open up ``` ### Additional information Related to https://github.com/nolar/kopf/pull/963.
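A framework-agnostic sketch of the proposed throttle — `ThrottledClient` and the fake patch function are illustrative stand-ins, not kopf or aiohttp API:

```python
import asyncio

class ThrottledClient:
    """Wrap an async request function with a concurrency cap
    (a naive client-side request queue)."""

    def __init__(self, request_fn, max_in_flight=10):
        self._request_fn = request_fn
        self._sem = asyncio.Semaphore(max_in_flight)

    async def request(self, *args, **kwargs):
        async with self._sem:  # wait for a free slot instead of hammering the apiserver
            return await self._request_fn(*args, **kwargs)

async def demo():
    peak = 0
    in_flight = 0

    async def fake_patch(name):
        nonlocal peak, in_flight
        in_flight += 1
        peak = max(peak, in_flight)
        await asyncio.sleep(0)  # yield, simulating network I/O
        in_flight -= 1
        return name

    client = ThrottledClient(fake_patch, max_in_flight=10)
    await asyncio.gather(*(client.request(f"res-{i}") for i in range(100)))
    return peak

peak = asyncio.run(demo())
print(peak)  # never exceeds 10
```

With 50k+ resources this caps concurrent PATCHes; a token-bucket rate limiter (requests per second rather than in flight) would be the other common shape for the same idea.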
open
2023-03-28T14:20:18Z
2023-08-09T07:12:13Z
https://github.com/nolar/kopf/issues/1019
[ "enhancement" ]
mecampbellsoup
2
PokeAPI/pokeapi
api
704
Pokemon locations for Galar and Hisui
Galar and Hisui locations are missing. When might these be added? Thank you!
closed
2022-03-22T19:11:49Z
2022-03-28T17:52:00Z
https://github.com/PokeAPI/pokeapi/issues/704
[]
684efs3
1
xlwings/xlwings
automation
2,422
xlwings Server: respect the sync/async definition of custom functions
Currently, all custom functions run on an async server endpoint, even if they are defined as a sync function, which will block the event loop. Instead, sync functions should behave like a sync fastapi endpoint, possibly via `run_in_threadpool`: https://github.com/tiangolo/fastapi/discussions/10768 Might need to be solved on the backend implementation though to not make this framework-dependent.
closed
2024-03-25T10:31:38Z
2024-07-19T09:10:56Z
https://github.com/xlwings/xlwings/issues/2422
[ "bug", "Server" ]
fzumstein
1
biolab/orange3
scikit-learn
6,480
Forward selection in feature suggestion
I really like the feature suggestion in _Linear projection_ and _Radviz_, but finding a combination of as few as 4 features among 100+ attributes takes forever. I tried to use _Rank_ to reduce their number, but often cannot go low enough for the search to be worth the wait (measuring feature importance independently also does not give the same result as measuring it in combination). My idea is that a forward selection strategy would be a good alternative to brute-forcing all combinations. It can easily find a combination of 10-20 features much faster.
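The proposed forward-selection strategy can be sketched generically; the `score` function below is a toy stand-in for whatever projection-quality measure the widgets actually use, not Orange code:

```python
def forward_select(features, score, k):
    """Greedy forward selection: grow the subset one feature at a time,
    keeping the addition that maximizes score(subset).
    Evaluates O(k * n) subsets instead of all C(n, k) combinations."""
    selected = []
    remaining = list(features)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy additive score: each feature has a fixed utility.
utility = {"a": 5, "b": 3, "c": 9, "d": 1}
score = lambda subset: sum(utility[f] for f in subset)
print(forward_select(list(utility), score, 2))  # ['c', 'a']
```

Greedy selection is not guaranteed to find the global optimum when features interact, but it turns an exponential search into a linear number of passes.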
closed
2023-06-17T14:03:00Z
2023-06-23T10:36:51Z
https://github.com/biolab/orange3/issues/6480
[]
processo
1
neuml/txtai
nlp
678
Move vector model caching from Embeddings to Vectors
Currently, `Vectors` instances are cached in an `Embeddings` when a models cache is set. This works well in most cases but it would be better if only the actual underlying model was cached and not the configuration parameters. For example, with the new `dimensionality` parameter, it would be nice to have that be variable per subindex.
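A minimal sketch of the idea — cache only the heavy model object, keyed by its path, while per-instance configuration such as `dimensionality` stays outside the cache. `load_model` and this `Vectors` class are hypothetical stand-ins, not txtai's actual implementation:

```python
# Module-level cache keyed by model path; configuration is NOT part of the key.
_model_cache = {}
load_calls = []

def load_model(path):
    load_calls.append(path)  # pretend this is an expensive model load
    return {"path": path}

class Vectors:
    def __init__(self, path, dimensionality=None):
        if path not in _model_cache:          # share the heavy object...
            _model_cache[path] = load_model(path)
        self.model = _model_cache[path]
        self.dimensionality = dimensionality  # ...but keep config per instance

a = Vectors("bert-base", dimensionality=256)
b = Vectors("bert-base", dimensionality=64)   # reuses the cached model
print(len(load_calls), a.model is b.model)    # 1 True
```

This lets two subindexes vary `dimensionality` (or other parameters) while loading the underlying model only once.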
closed
2024-02-28T01:30:25Z
2024-02-28T02:22:17Z
https://github.com/neuml/txtai/issues/678
[]
davidmezzetti
0
rthalley/dnspython
asyncio
587
Setting edns options in a query
Hi, I am trying to set EDNS options on a DNS query that should be equivalent to setting `+ednsopt=100:test_data` in `dig` options. I tried the following: ```python request = dns.message.make_query(domain, 'A') opt = dns.edns.GenericOption(100, b'test_data') request.use_edns(edns=True, options=[opt]) response = dns.query.udp(request, name_server) ``` But when I check the resulting query via wireshark, the `Option Data` field has a different value (e.g. `303030313434346164343030`) instead of `test_data`. Is there a way that I could send the actual data without it being changed? Thank you.
closed
2020-09-24T03:16:09Z
2020-09-25T22:45:09Z
https://github.com/rthalley/dnspython/issues/587
[ "Cannot Reproduce" ]
ChamaraG
6
jupyter/nbgrader
jupyter
927
`nbgrader feedback` flag to add new feedback file instead of overwriting
Sometimes, I have a back and forth exchanging an assignment with students as they're resolving various issues. In these cases, it would be really helpful to have a flag for `nbgrader feedback` to generate a new feedback file while keeping the previous one (with the original version of the code and my comments) intact. Currently, the only option is to overwrite with `--force`. Here's roughly the workflow I have in mind: ```sh # student submits first attempt at notebook.ipynb in ps1 # I provide comments and generate feedback $ nbgrader feedback ps1 $ find feedback/student/ps1 -name \*.html notebook.html # student resolves some of the issues raised in comments, submits new version # I provide new comments and generate additional feedback $ nbgrader feedback --add ps1 $ find feedback/student/ps1 -name \*.html notebook.html notebook(1).html # etc. ``` Ideally, when the comments haven't changed, this should be a no-op, so that students don't get confused by having multiple feedback files with the same content. So this would involve always generating the HTML and comparing with the contents of the previous feedback file on disk; if they're the same, no output should be written.
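The `--add` behavior described above could be sketched roughly as follows — a hypothetical helper, not nbgrader code: pick the next free `notebook(N).html` name, but no-op when the newest existing file already has identical content:

```python
from pathlib import Path
import tempfile

def write_feedback(directory, stem, html):
    """Write feedback as stem.html, stem(1).html, stem(2).html, ...
    Return the newest existing file unchanged (a no-op) if its content
    already equals `html`, so unchanged feedback is not duplicated."""
    directory = Path(directory)
    directory.mkdir(parents=True, exist_ok=True)
    candidates = [directory / f"{stem}.html"]
    n = 1
    while candidates[-1].exists():
        candidates.append(directory / f"{stem}({n}).html")
        n += 1
    target = candidates[-1]          # first non-existing name
    if len(candidates) > 1:
        latest = candidates[-2]      # newest existing feedback file
        if latest.read_text() == html:
            return latest            # identical content -> no-op
    target.write_text(html)
    return target

d = tempfile.mkdtemp()
print(write_feedback(d, "notebook", "<p>v1</p>").name)  # notebook.html
print(write_feedback(d, "notebook", "<p>v1</p>").name)  # notebook.html (no-op)
print(write_feedback(d, "notebook", "<p>v2</p>").name)  # notebook(1).html
```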
open
2018-02-07T11:10:10Z
2022-06-23T10:21:07Z
https://github.com/jupyter/nbgrader/issues/927
[ "enhancement" ]
dlukes
1
fastapi/sqlmodel
fastapi
31
How to deal with Postgres Enum columns?
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options 👆 ### Example Code ```python from typing import List, Optional from enum import Enum import sqlalchemy as sa import sqlmodel class Status(str, Enum): active = "active" banned = "banned" status_enum = postgresql.ENUM("active", "banned", "c", name="status_enum") class User(sqlmodel.SQLModel, table=True): __tablename__ = "auth_user" id: int = sqlmodel.Field(primary_key=True) status: Status = sqlmodel.Field(status_enum) password: str ``` ### Description How to use Postgres Enums together with SQLModel? ### Operating System Linux ### Operating System Details _No response_ ### SQLModel Version 0.0.3 ### Python Version 3.8.11 ### Additional Context _No response_
closed
2021-08-26T09:15:44Z
2022-09-10T00:13:19Z
https://github.com/fastapi/sqlmodel/issues/31
[ "question", "answered" ]
gregsifr
7
errbotio/errbot
automation
986
PySide not Working with Python 3.5
In order to let us help you better, please fill out the following fields as best you can: ### I am... ![screenshot from 2017-04-04 23-56-31](https://cloud.githubusercontent.com/assets/19478013/24678316/b804d632-1992-11e7-938f-da35a8034064.png) * [ ] Reporting a bug * [ ] Suggesting a new feature * [ ] Requesting help with running my bot * [ ] Requesting help writing plugins * [ ] Here about something else ### I am running... * Errbot version: 9.9.9 * OS version: Ubuntu 16 * Python version: 3.5 * Using a virtual environment: yes/no : No ### Issue description Hello, PySide is not working with Python 3.5, so I cannot run the "Graphic" backend. Kindly help.
closed
2017-04-04T21:06:50Z
2021-07-23T05:22:27Z
https://github.com/errbotio/errbot/issues/986
[ "type: bug", "backend: Common", "backend: GUI" ]
gyleodhis
3
serengil/deepface
machine-learning
828
How to change max_threshold_to_verify
Currently, the default `max_threshold_to_verify` of `DeepFace.verify()` is 0.4, how do I change that value?
closed
2023-08-20T07:24:10Z
2023-08-20T07:29:13Z
https://github.com/serengil/deepface/issues/828
[ "question" ]
brownfox2k6
2
junyanz/pytorch-CycleGAN-and-pix2pix
computer-vision
1,575
Issues with running repository on custom dataset
Hi, I am training the CycleGAN on a custom dataset that contains images of people wearing jewellery and not wearing jewellery. The images in trainA and trainB are of the same people with and without the jewellery. The fakes generated during training gradually show people wearing the jewellery. But when I use any of the training images at test time, it doesn't generate an image of the person wearing the jewellery. My dataset is very small, with 16 images in both directories. I am not looking for a very accurate model overall, just something that can work on selected images as well. It would be really helpful if someone could help out soon; I am relatively new to GANs. Command for test: !python test.py --dataroot /content/drive/My\ Drive/dataset/test/testA --name my_model --preprocess resize_and_crop --model test --num_test 1 --direction AtoB --no_dropout Command for train: !python train.py --dataroot /content/drive/My\ Drive/dataset/train --name my_model --n_epochs 500 --preprocess resize_and_crop --display_id -1 --model cycle_gan --direction AtoB I used this line after training and prior to testing: !cp ./checkpoints/my_model/latest_net_G_A.pth ./checkpoints/my_model/latest_net_G.pth
open
2023-05-18T11:31:52Z
2023-07-11T11:41:40Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1575
[]
vandana-sreenivasan3
1
kubeflow/katib
scikit-learn
2,422
SDK is broken when installed by `git+https`
### What happened? When I run the following code with `kubeflow-katib` installed by: `pip install git+https://github.com/kubeflow/katib.git@master#subdirectory=sdk/python/v1beta1` ```Python import kubeflow.katib as katib # Step 1. Create an objective function with push-based metrics collection. def objective(parameters): # Import required packages. import time import kubeflow.katib as katib time.sleep(5) # Calculate objective function. result = 4 * int(parameters["a"]) - float(parameters["b"]) ** 2 # Push metrics to Katib DB. katib.report_metrics({"result": result}) # Step 2. Create HyperParameter search space. parameters = { "a": katib.search.int(min=10, max=20), "b": katib.search.double(min=0.1, max=0.2) } # Step 3. Create Katib Experiment with 4 Trials and 2 CPUs per Trial. # We choose to install the latest changes of Python SDK because `report_metrics` has not been # supported yet. Thus, the base image must have `git` command to download the package. katib_client = katib.KatibClient(namespace="kubeflow") name = "tune-experiment" katib_client.tune( name=name, objective=objective, parameters=parameters, base_image="electronicwaste/push-metrics-collector:v0.0.9", # python:3.11-slim + git objective_metric_name="result", max_trial_count=4, resources_per_trial={"cpu": "2"}, packages_to_install=["git+https://github.com/kubeflow/katib.git@master#subdirectory=sdk/python/v1beta1"], # packages_to_install=["kubeflow-katib==0.18.0"], metrics_collector_config={"kind": "Push"}, ) # Step 4. Wait until Katib Experiment is complete katib_client.wait_for_experiment_condition(name=name) # Step 5. Get the best HyperParameters. 
print(katib_client.get_optimal_hyperparameters(name)) ``` An error occurred: ``` Traceback (most recent call last): File "/home/ws/katib-example/push.py", line 1, in <module> import kubeflow.katib as katib File "/home/ws/miniconda3/envs/katib/lib/python3.10/site-packages/kubeflow/katib/__init__.py", line 73, in <module> from kubeflow.katib.api.katib_client import KatibClient File "/home/ws/miniconda3/envs/katib/lib/python3.10/site-packages/kubeflow/katib/api/katib_client.py", line 27, in <module> from kubeflow.katib.types.trainer_resources import TrainerResources ModuleNotFoundError: No module named 'kubeflow.katib.types' ``` And I went to dir `/home/ws/miniconda3/envs/katib/lib/python3.10/site-packages/kubeflow/katib/`, finding that dir `types` was missing: ``` (katib) ws@master1  ~/miniconda3/envs/katib/lib/python3.10/site-packages/kubeflow/katib  ls api configuration.py exceptions.py katib_api_pb2_grpc.py models rest.py api_client.py constants __init__.py katib_api_pb2.py __pycache__ utils ``` ### What did you expect to happen? 
The code should be executed without error when I installed the SDK with: ``` pip install git+https://github.com/kubeflow/katib.git@a524f33830e02189476efaf6d9045cbd2ce605f0#subdirectory=sdk/python/v1beta1 ``` ### Environment Kubernetes version: ```bash $ kubectl version Client Version: v1.30.2 Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3 Server Version: v1.30.1 ``` Katib controller version: ```bash $ kubectl get pods -n kubeflow -l katib.kubeflow.org/component=controller -o jsonpath="{.items[*].spec.containers[*].image}" docker.io/kubeflowkatib/katib-controller:latest ``` Katib Python SDK version: ```bash $ pip show kubeflow-katib Name: kubeflow-katib Version: 0.17.0 Summary: Katib Python SDK for APIVersion v1beta1 Home-page: https://github.com/kubeflow/katib/tree/master/sdk/python/v1beta1 Author: Kubeflow Authors Author-email: premnath.vel@gmail.com License: Apache License Version 2.0 Location: /home/ws/miniconda3/envs/katib/lib/python3.10/site-packages Requires: certifi, grpcio, kubernetes, protobuf, setuptools, six, urllib3 Required-by: ``` ### Impacted by this bug? Give it a 👍 We prioritize the issues with most 👍
closed
2024-09-05T06:30:06Z
2024-09-05T16:09:17Z
https://github.com/kubeflow/katib/issues/2422
[ "help wanted", "good first issue", "kind/bug", "area/sdk", "lifecycle/needs-triage" ]
Electronic-Waste
4
WZMIAOMIAO/deep-learning-for-image-processing
pytorch
642
Errors after loading the object-detection json file in HRNet
The environment was set up strictly following the readme, and everything works fine when using GT, so I don't think it is an environment issue. When I replace the GT information with the provided object-detection json file, both train.py and validation.py raise errors while running. Below I describe what I ran into in validation.py. I noticed that in the validation code you seem to have forgotten to pass in the args.person_det parameter, so I first changed: val_dataset = CocoKeypoint(data_root, "val", transforms=data_transform["val"], det_json_path=args.person_det) Then I added: parser.add_argument('--person-det', type=str, default="./COCO_val2017_detections_AP_H_56_person.json") But it always errors out at runtime. After debugging, I found the error occurs at this line: key_metric.evaluate() And the final output file key_results.json is an empty list ![QQ图片20220919145932](https://user-images.githubusercontent.com/66052316/190986814-1e55bd52-c4a7-4ee4-81d2-518ccedbaf83.png) It errors on both Windows and Linux; changing det_json_path=args.person_det back to det_json_path=None and using the GT information runs fine. I searched online for a long time without finding a solution. If you have time, could you help take a look at this problem? Thank you very much!
closed
2022-09-19T09:20:23Z
2022-09-20T15:40:29Z
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/642
[]
Caicaizi-cdy
4
feature-engine/feature_engine
scikit-learn
449
add inverse_transform method and functionality to BoxCoxTransformer
We can use the inv_boxcox functionality from scipy.special: https://docs.scipy.org/doc/scipy-1.8.0/html-scipyorg/reference/generated/scipy.special.inv_boxcox.html
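The inverse scipy implements is simple enough to state directly; a stdlib sketch mirroring `scipy.special.inv_boxcox` (x = exp(y) for lambda = 0, else x = (y·lambda + 1)^(1/lambda)):

```python
import math

def boxcox(x, lmbda):
    """Forward Box-Cox transform (x > 0)."""
    return math.log(x) if lmbda == 0 else (x ** lmbda - 1) / lmbda

def inv_boxcox(y, lmbda):
    """Inverse Box-Cox, mirroring scipy.special.inv_boxcox."""
    return math.exp(y) if lmbda == 0 else (y * lmbda + 1) ** (1 / lmbda)

# Round-trip check for several lambdas and values.
for lmbda in (0, 0.5, 2):
    for x in (0.5, 1.0, 7.3):
        assert math.isclose(inv_boxcox(boxcox(x, lmbda), lmbda), x)
print("round-trip ok")
```

In the transformer, `inverse_transform` would apply this per column using the per-variable lambdas fitted during `fit`.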
closed
2022-05-10T08:51:52Z
2022-06-12T10:28:28Z
https://github.com/feature-engine/feature_engine/issues/449
[ "good first issue", "enhancement", "easy" ]
solegalli
3
python-restx/flask-restx
api
176
flask_restx is not compatible with gunicorn
I am trying to use gunicorn to run the flask API that I just developed. It worked well when I used flask_restplus, but it raised `No module named 'flask_restx'` when using flask_restx, even though I have flask_restx installed in my virtual environment. I think it might be because gunicorn does not support flask_restx. Has anyone had this issue before, or any idea about this? ### Below is my run.sh script ``` TIMEOUT=10000 echo "Starting gunicorn" gunicorn \ -b 0.0.0.0:80 \ -t $TIMEOUT \ -k gevent --worker-connections 10 \ --keep-alive 3600 \ --log-level info \ bai.wsgi:app ``` When I try to run this script, it complains about ModuleNotFoundError: No module named 'flask_restx'. Here is the traceback: ``` Starting gunicorn [2020-07-19 11:43:11 -0400] [44134] [INFO] Starting gunicorn 19.9.0 [2020-07-19 11:43:11 -0400] [44134] [INFO] Listening at: http://0.0.0.0:80 (44134) [2020-07-19 11:43:11 -0400] [44134] [INFO] Using worker: gevent [2020-07-19 11:43:11 -0400] [44137] [INFO] Booting worker with pid: 44137 /Users/serenaxu/flask-template/BAI/bai/db/config.py:11: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
__baseconfig__ = yaml.load(open(CONFIG_PATH)) [2020-07-19 11:43:12 -0400] [44137] [ERROR] Exception in worker process Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker worker.init_process() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gunicorn/workers/ggevent.py", line 203, in init_process super(GeventWorker, self).init_process() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gunicorn/workers/base.py", line 129, in init_process self.load_wsgi() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi self.wsgi = self.app.wsgi() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load return self.load_wsgiapp() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp return util.import_app(self.app_uri) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app __import__(module) File "/Users/serenaxu/flask-template/BAI/bai/wsgi.py", line 2, in <module> from bai.app import create_app, create_api File "/Users/serenaxu/flask-template/BAI/bai/app.py", line 7, in <module> from bai import apis File "/Users/serenaxu/flask-template/BAI/bai/apis/__init__.py", line 2, in <module> from .namespace1 import main File "/Users/serenaxu/flask-template/BAI/bai/apis/namespace1.py", line 4, in <module> from flask_restx import Api, Resource, Namespace ModuleNotFoundError: No module named 'flask_restx'
closed
2020-07-20T12:46:09Z
2022-01-09T23:02:41Z
https://github.com/python-restx/flask-restx/issues/176
[ "bug" ]
Serena-Xu
1
abhiTronix/vidgear
dash
159
processed frames web streaming
Hey, first of all, thanks for the great repo. I am connecting to an IP camera via VidGear, grabbing frames one by one, processing them on my local computer (e.g. object detection), and I want to stream those frames in real time to an AWS EC2 machine so that my clients (a React app) can see the processed frames (e.g. frames with bounding boxes). I followed the whole documentation and see that there are modules for specific tasks in my pipeline, like WebGear, but nothing that combines the whole pipeline together. Can you please guide me? The pipeline: grab frame -> perform object detection -> stream the frame to an HTML page on the AWS server
closed
2020-09-09T09:24:31Z
2020-09-10T15:38:05Z
https://github.com/abhiTronix/vidgear/issues/159
[ "QUESTION :question:", "SOLVED :checkered_flag:" ]
idanmosh
5
RomelTorres/alpha_vantage
pandas
109
Forex and pandas
Hi Romel, Congrats for the module, it is really helpful and user friendly. I would like to point out that the info in the README about > Foreign Exchange (FX) needs to be fixed a little bit because it is written that > The foreign exchange is just metadata, thus only available as json format (using the **'csv'** or **'pandas'** format **will raise an Error**) but this is true only for the `get_currency_exchange_rate()` function. Calling the other four functions: `get_currency_exchange_intraday()` `get_currency_exchange_daily()` `get_currency_exchange_weekly()` `get_currency_exchange_monthly()` with `cc = ForeignExchange(key='YOUR_API_KEY', output_format='pandas')` works fine to get back pandas dataframes. So I think that it would be better to clarify this in the README section about FX. Thanks
closed
2019-01-13T15:45:47Z
2019-02-11T21:57:59Z
https://github.com/RomelTorres/alpha_vantage/issues/109
[]
NTavou
4
ultralytics/ultralytics
python
19,438
After training and exporting a model with ultralytics, can it be deployed in a lightweight way?
### Search before asking - [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests. ### Description First of all, this repo is very convenient for training and exporting models, because it takes into account many frameworks on the market, but this also leads to a lot of dependencies. However, when deploying, for example, an openvino model running on the CPU, it does not require many dependencies, such as torch and cuda-related dependencies. Is there any way I can still use this repo for deployed inference without having to install dependencies related to training and other unrelated things? That would reduce the size of the docker image. ### Use case _No response_ ### Additional _No response_ ### Are you willing to submit a PR? - [x] Yes I'd like to help by submitting a PR!
open
2025-02-26T06:34:32Z
2025-02-27T16:33:30Z
https://github.com/ultralytics/ultralytics/issues/19438
[ "enhancement", "dependencies" ]
qianyue76
5
plotly/dash-table
dash
836
When some rows have dropdown other rows are not editable
If I use dropdowns in a dash table with Editable = True, I cannot edit values in the other rows. This has been reported by several people, but I have not found a fix yet. Some people say that it used to work, but not any longer. Any ideas?
open
2020-10-06T19:30:30Z
2023-11-22T02:56:12Z
https://github.com/plotly/dash-table/issues/836
[]
Bastituta
6
scrapy/scrapy
web-scraping
6,378
Edit Contributing.rst document to specify how to propose documentation suggestions
There are multiple types of contributions that the community can suggest, including bug reports, feature requests, code improvements, security vulnerability reports, and documentation changes. For the Scrapy project it was difficult to discern what process to follow to make a documentation improvement suggestion. I want to suggest an additional section to the documentation that clearly explains how to propose a non-code-related change. This section will follow the guidelines outlined in the Contributing.rst file of another open source project, https://github.com/beetbox/beets
closed
2024-05-26T15:43:40Z
2024-07-10T07:37:32Z
https://github.com/scrapy/scrapy/issues/6378
[]
jtoallen
11
tensorflow/tensor2tensor
machine-learning
1,142
Can one register callback functions for certain events (e.g. after eval)?
In `T2TModel` there is ```python # Replace the two methods below in order to add custom SessionRunHooks to # the training procedure. @staticmethod def train_hooks(): return [] @staticmethod def eval_hooks(): return [] ``` However, the problem is that these hooks are already living in a session. I want to quantize a graph after it has been evaluated in order to re-evaluate the quantized version of that graph. For this I need to slightly modify the original graph which means I have to create another session which is why `SessionRunHooks` are not the way to go here since their callbacks are always within a session context. So is there a clean way to register a callback in the `t2t-trainer` which gives me control after the model evaluation stage (or something like this)?
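Generically, the pattern being asked for is an event registry whose callbacks fire *between* stages of the loop, outside any session context; a minimal sketch (names like `after_eval` are illustrative, not t2t API):

```python
class Hooks:
    """Tiny event registry: register callbacks, fire them between stages."""

    def __init__(self):
        self._callbacks = {}

    def register(self, event, fn):
        self._callbacks.setdefault(event, []).append(fn)

    def fire(self, event, **kwargs):
        for fn in self._callbacks.get(event, []):
            fn(**kwargs)

hooks = Hooks()
log = []
# E.g. quantize the graph and re-evaluate in a fresh session here.
hooks.register("after_eval", lambda metrics: log.append(("quantize+re-eval", metrics)))

def train_and_evaluate(epochs):
    for epoch in range(epochs):
        metrics = {"epoch": epoch, "loss": 1.0 / (epoch + 1)}  # stand-in eval
        hooks.fire("after_eval", metrics=metrics)              # outside any session

train_and_evaluate(2)
print(log)  # one "quantize+re-eval" entry per eval pass
```

Because the callback runs between eval passes rather than inside a `SessionRunHook`, it is free to build a modified graph and open its own session.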
closed
2018-10-15T08:46:36Z
2018-10-23T11:00:08Z
https://github.com/tensorflow/tensor2tensor/issues/1142
[]
stefan-falk
1
sczhou/CodeFormer
pytorch
71
How long does it take to train stage one?
Hi, I am trying to reimplement the training code. I am wondering how long it takes to train stage one. Best
open
2022-11-24T11:05:23Z
2022-11-24T11:05:23Z
https://github.com/sczhou/CodeFormer/issues/71
[]
henanjun
0
serengil/deepface
machine-learning
774
how to analyze age and gender in real time ?
how to analyze age and gender in real time ?
closed
2023-06-10T19:36:32Z
2023-06-11T10:01:11Z
https://github.com/serengil/deepface/issues/774
[ "question" ]
Rasantis
1
tensorpack/tensorpack
tensorflow
1,066
[FasterRCNN] Understanding clip_boxes in generate_fpn_proposals
I'm working with the Mask/Faster RCNN code and I'm confused about why the ROI proposals are being generated on a per-FPN-level basis, but the call to `generate_rpn_proposals` for each level passes in the same `image_shape2d`. In the RPN code, you call `generate_fpn_proposals`. https://github.com/tensorpack/tensorpack/blob/30ead05beea3a8cc4ea3c9af7872a43aa7e9491c/examples/FasterRCNN/train.py#L248-L249 In `generate_fpn_proposals`, you call `generate_rpn_proposals` for each level. https://github.com/tensorpack/tensorpack/blob/30ead05beea3a8cc4ea3c9af7872a43aa7e9491c/examples/FasterRCNN/model_fpn.py#L186-L193 `image_shape2d` is the shape (h,w,) of the 'image' input tensor. In `generate_rpn_proposals`, `image_shape2d` is used in the call to `clip_boxes`, which clips each proposal so all corners are between coordinates (0,0) and (h,w). https://github.com/tensorpack/tensorpack/blob/30ead05beea3a8cc4ea3c9af7872a43aa7e9491c/examples/FasterRCNN/model_rpn.py#L129 Won't the proposals be using the coordinate system of the feature map, which will be much smaller than the original size of the image? In that case, won't `clip_boxes` only work correctly on corners that exist in the negative coordinate space? To give an example, if there is an image that is 1024x1024, the backbone gives you layers with sizes 256x256, 128x128, 64x64, etc. When you run the RPN head over the 256x256 feature map, if you were to get a proposal with (x1, y1), (x2, y2) = (-10, -20), (200, 300), you want `clip_boxes` to convert that to (0, 0), (200, 256) so that it exists fully within the feature map of this layer. However, it looks like `clip_boxes` will only ensure that the corners are between (0,0) and (1024, 1024) so you get (0,0), (200, 300). Am I misunderstanding something here?
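To make the two coordinate frames in the question concrete, here is a minimal pure-Python `clip_boxes` (the real code operates on tensors), applied to the example box in both frames:

```python
def clip_boxes(boxes, shape_hw):
    """Clip (x1, y1, x2, y2) boxes to the rectangle [0, w] x [0, h]."""
    h, w = shape_hw
    return [(max(0, min(x1, w)), max(0, min(y1, h)),
             max(0, min(x2, w)), max(0, min(y2, h)))
            for (x1, y1, x2, y2) in boxes]

box = [(-10, -20, 200, 300)]
print(clip_boxes(box, (256, 256)))    # [(0, 0, 200, 256)] -- feature-map frame
print(clip_boxes(box, (1024, 1024)))  # [(0, 0, 200, 300)] -- input-image frame
```

Which frame is correct depends on where the proposals live: if the per-level anchors are generated at image scale (using each level's stride), the decoded proposals are already in input-image coordinates, and clipping against `image_shape2d` is consistent.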
closed
2019-01-30T00:35:44Z
2019-01-30T22:08:15Z
https://github.com/tensorpack/tensorpack/issues/1066
[ "examples" ]
armandmcqueen
4
python-visualization/folium
data-visualization
1,687
Click twice to exit full screen
**Describe the bug** When I enter full screen mode, I need to click twice to exit full screen mode
closed
2022-12-30T06:35:32Z
2023-10-17T08:33:17Z
https://github.com/python-visualization/folium/issues/1687
[]
Winky678
4
jupyter-book/jupyter-book
jupyter
1,810
Cell error traceback incorrectly lexed as IPython
### Describe the bug I'm using Jupyter Book to produce a book with executable OCaml cells, including demoing some OCaml code that deliberately does not compile. I tag those cells with raises-exception of course. Some (not all) OCaml compiler error messages cause Sphinx to produce a lexer warning, Could not lex literal_block as "ipythontb". This bug was reported in MyST-NB as https://github.com/executablebooks/MyST-NB/issues/341. Apparently it was fixed there but Jupyter Book has not yet adopted that fix. So I am now reporting the bug here. ### Reproduce the bug See https://github.com/executablebooks/MyST-NB/issues/341 ### List your environment See https://github.com/executablebooks/MyST-NB/issues/341
open
2022-08-17T01:11:24Z
2023-12-29T15:17:33Z
https://github.com/jupyter-book/jupyter-book/issues/1810
[ "bug" ]
clarksmr
2
numba/numba
numpy
9,722
optional type in `if x is not None` branch is still optional
<!-- Thanks for opening an issue! To help the Numba team handle your information efficiently, please first ensure that there is no other issue present that already describes the issue you have (search at https://github.com/numba/numba/issues?&q=is%3Aissue). --> ## Reporting a bug <!-- Before submitting a bug report please ensure that you can check off these boxes: --> - [x] I have tried using the latest released version of Numba (most recent is visible in the release notes (https://numba.readthedocs.io/en/stable/release-notes-overview.html). - [x] I have included a self contained code sample to reproduce the problem. i.e. it's possible to run as 'python bug.py'. <!-- Please include details of the bug here, including, if applicable, what you expected to happen! --> ```python from numba.core.types import b1, i8, unicode_type, optional from numba import njit @njit(b1(optional(i8))) def f(x): if x is not None: return isinstance(x, i8) else: return False ``` This gives the following error: ``` TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<built-in function isinstance>) found for signature: >>> isinstance(OptionalType(int64), class(int64)) There are 2 candidate implementations: - Of which 2 did not match due to: Overload in function 'ol_isinstance': File: numba/cpython/builtins.py: Line 755. With argument(s): '(OptionalType(int64), class(int64))': Rejected as the implementation raised a specific error: NumbaTypeError: isinstance cannot handle optional types. Found: "OptionalType(int64)" raised from /home/auderson/mambaforge/envs/py3.10/lib/python3.10/site-packages/numba/cpython/builtins.py:768 During: resolving callee type: Function(<built-in function isinstance>) During: typing of call at /tmp/ipykernel_2498352/3053206967.py (4) File "../../../../../../../tmp/ipykernel_2498352/3053206967.py", line 4: <source missing, REPL/exec in use?> ```
open
2024-09-12T03:04:13Z
2024-11-26T05:51:18Z
https://github.com/numba/numba/issues/9722
[ "bug - typing" ]
auderson
5
CorentinJ/Real-Time-Voice-Cloning
pytorch
598
Someone is selling this software!?
Hi, Just found this: https://realtimevoicecloning.com Is he selling your work as his own?
closed
2020-11-19T02:56:36Z
2020-12-05T08:32:15Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/598
[]
bostjan39
5
JoeanAmier/XHS-Downloader
api
118
Downloading profile videos from Xiaohongshu often gets stuck
The problem: when using the Xiaohongshu downloader to batch-download a user's profile content, the progress frequently gets stuck. Either it stays at "ready", or it stops after a few dozen items and nothing more downloads. After stopping and restarting, the download works again for a while and then breaks in the same way. I hope this can be optimized. Also, the new version sometimes reports problems such as losing the connection to the signature server during downloads. Environment: Windows 11.
open
2024-07-16T02:17:33Z
2024-07-16T02:17:33Z
https://github.com/JoeanAmier/XHS-Downloader/issues/118
[]
mizi6654
0
aimhubio/aim
data-visualization
3,175
AIM Client Process Termination Leaves Run in Active State
## 🐛 Bug When the AIM client process is killed, the corresponding run remains in the "In Progress" state indefinitely. I expect the run to transition to the "Finished" state upon client termination. ### To reproduce 1. Create a Python script `test-aim.py` with the following content: ``` import time from aim import Run run = Run(repo='aim://10.66.142.35:8082', experiment='default') run['hparams'] = { 'learning_rate': 0.001, 'batch_size': 32, } for i in range(1000): time.sleep(1) print(f'{i}') run.track(i+2, step=i, epoch=i%2, name='metrics-1') ``` 2. Run the script in the background: ``` $ nohup python test-aim.py & ``` 3. Kill the process: ``` $ kill 365103 [1]+ Terminated nohup python test-aim.py ``` ![image](https://github.com/aimhubio/aim/assets/11384038/6f49f937-f400-4bda-99cc-5ab0da796c5f) ### Expected behavior The run should transition to the "Finished" state after the client process is terminated. ### Environment - Aim Version: 3.22.0 - Python version: 3.10.9 - pip version: 24.0 - OS (e.g., Linux): Ubuntu 20.04.4 LTS ### Additional context Or is there any workaround for this issue?
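One possible workaround (a sketch under my assumptions, not an official Aim recommendation): install a SIGTERM handler that closes the run before the process exits, so the server can mark it finished. I am assuming the `Run` object exposes a `close()` method; check the exact finalization API in your Aim version.

```python
import signal

def install_graceful_close(run):
    """Close `run` on SIGTERM so it is not left "In Progress" forever.

    Assumes the run object has a close() method (an assumption here);
    aim's actual finalization API may differ by version.
    """
    def _handler(signum, frame):
        run.close()          # flush and finalize the run
        raise SystemExit(0)  # then let the process exit
    signal.signal(signal.SIGTERM, _handler)
    return _handler
```

`kill <pid>` sends SIGTERM by default, so this covers the reproduction above; it cannot help with SIGKILL (`kill -9`).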
open
2024-06-25T07:28:48Z
2025-01-10T14:59:06Z
https://github.com/aimhubio/aim/issues/3175
[ "type / bug", "help wanted" ]
zhiyxu
8
httpie/cli
api
975
Support some way to specify an header that shouldn't be persisted to the session
Although not the most common case, some services have HTTP headers that should be sent only with specific requests, not with all of them, and as such shouldn't be persisted in the session. httpie currently persists all headers except the ones starting with `Content-` or `If-`. That means such headers will also be persisted to the session, unless you take care to use `--session-read-only` for those requests. It would be convenient if there were a syntax to mark headers that shouldn't be persisted to the session, and/or a configuration setting listing additional headers that shouldn't be persisted.
open
2020-10-19T18:04:48Z
2020-12-21T15:33:44Z
https://github.com/httpie/cli/issues/975
[ "enhancement" ]
segevfiner
0
jazzband/django-oauth-toolkit
django
1,140
How to skip oauth2_views.AuthorizationView's LoginRequiredMixin
I want to use the Authorization Code grant type, but when I invoke /o/authorize/ it redirects to the admin login. How can I skip the admin login?
closed
2022-04-06T13:23:36Z
2023-10-04T14:42:33Z
https://github.com/jazzband/django-oauth-toolkit/issues/1140
[ "question" ]
luohaoGit
3
freqtrade/freqtrade
python
10,558
Freqtrade no longer working with Mexc due to recent API changes
OS Ubuntu 22.04.4 CCXT 4.3.79 Freqtrade docker-2024.8-dev-4ca6e617 Freqtrade has not been working with Mexc for a number of days now due to changes to their API: https://mexcdevelop.github.io/apidocs/spot_v3_en/#change-log The change on 16/08/2024 is what seems to have broken it.
closed
2024-08-19T12:00:33Z
2024-08-21T06:38:21Z
https://github.com/freqtrade/freqtrade/issues/10558
[ "Wont fix / Not a bug", "CCXT", "unsupported exchange" ]
rmtucker
3
graphistry/pygraphistry
pandas
522
[BUG] chain has excess edges
**Describe the bug** In the new chain tutorial, we get excess edges during: ``` g2.chain([ n({'community_infomap': 2}), e_undirected(), n({'community_infomap': 2}) ]).plot() ``` While the nodes are right, the edges have excess, and the backend will materialize a bunch of nodes we don't actually want. **To Reproduce** https://github.com/graphistry/pygraphistry/blob/238cb1daa6904ab3316ff9cf6445334f9f7890fd/demos/more_examples/graphistry_features/hop_and_chain_graph_pattern_mining.ipynb In particular, rewriting currently working ```python g2.hop(source_node_match={'community_infomap': 2}, destination_node_match={'community_infomap': 2}) ``` as equiv chain fails with excess edges => excess synthetic nodes: ```python g3.chain([ n({'community_infomap': 2}), e_undirected(), n({'community_infomap': 2}) ]) ``` **Expected behavior** Compare - good ```python g_good = g2.hop(source_node_match={'community_infomap': 2}, destination_node_match={'community_infomap': 2}) g_good.plot() ``` 214 nodes, 4993 edges, all nodes community_infomap=2 (checked via histograms) **Actual behavior** Compare - bad ```python g3 = g2.nodes( lambda g: g._nodes.assign(community_infomap=g._nodes.community_infomap.map({0: 20, 1:1, 2:2})) ) print(set(g3._nodes.community_infomap)) zz = g3.chain([ n({'community_infomap': 2}), e_undirected(), n({'community_infomap': 2}) ]).nodes(lambda g: g._nodes.sample(frac=1).reset_index(drop=True)) print(set(zz._nodes.community_infomap)) zz.plot(as_files=True, memoize=False) ``` => ``` {1, 2, 20} ## pre-filtering {2} ## post-filter ``` Graph is 464 nodes (too many) and 7092 edges (too many) Histograms show 250 entries for community 0 (which shouldn't exist at all as remapped '20') and 214 for 2 (expected) So it seems like we have excess edges, who reference unexpected nodes... and they materialize with nan / 0 community.
closed
2023-12-04T06:51:48Z
2023-12-23T00:21:37Z
https://github.com/graphistry/pygraphistry/issues/522
[ "bug", "p3" ]
lmeyerov
1
influxdata/influxdb-client-python
jupyter
119
Creating buckets using BucketsApi
When using BucketsApi from influxdb_client, if `org_id` is not passed, the API fills `org_id` with the `org` field from influxdb_client (which is a name, not an ID). Ref: https://github.com/influxdata/influxdb-client-python/blob/91ffed11c270c295a93b3e7d0b94a69b4657a917/influxdb_client/client/bucket_api.py#L39 A possible option would be to load influxdb_client with the `org_id` during initialization, so that the `org_id` can be used instead of the org name.
closed
2020-06-29T03:13:29Z
2020-07-20T05:21:04Z
https://github.com/influxdata/influxdb-client-python/issues/119
[ "question" ]
MajorCarrot
3
jowilf/starlette-admin
sqlalchemy
267
Bug: admin panel static assets are loaded over HTTP on an HTTPS domain
**Describe the bug** I am deploying my FastAPI project, which uses starlette-admin as the admin panel. My host is served from an HTTPS-secured domain, but starlette-admin requests its static files and scripts over HTTP, so the page cannot load on my website. **To Reproduce** Try to access the admin panel on an HTTPS-secured domain **Environment (please complete the following information):** - Starlette-Admin version: [e.g. 0.3.2] - ORM/ODMs: [SQLAlchemy, PostgreSQL, FastAPI] **Additional context** Add any other context about the problem here. ![image](https://github.com/jowilf/starlette-admin/assets/81633779/cc1b217e-fddf-45a6-8fe8-f18031cd4b06)
closed
2023-08-24T19:47:42Z
2023-08-24T19:58:25Z
https://github.com/jowilf/starlette-admin/issues/267
[ "bug" ]
xorwise
3
521xueweihan/HelloGitHub
python
2,630
[Open-source self-recommendation] TestAgent: the first large-model tool for the testing industry in China
## Recommended project - Project URL: https://github.com/codefuse-ai/Test-Agent - Category: Machine learning - Project title: TestAgent, the first large-model tool for the software-testing industry in China - Project description: TestAgent, developed and open-sourced by Ant Group, aims to build an intelligent agent for the testing domain, combining large models with quality-engineering techniques to drive a generational upgrade of quality technology. We hope to work with the community to create innovative testing-domain solutions and build a 24/7 online testing-assistant service that makes testing silky smooth. In this release we open-sourced the testing-domain model TestGPT-7B. The model uses CodeLlama-7B as its base and is fine-tuned for downstream tasks: multi-language test-case generation and test-case assert completion. - Highlights: - Sample code: (optional) - Screenshots: (optional) gif/png/jpg Experience running locally on a Mac M1 ![image](https://github.com/codefuse-ai/Test-Agent/assets/103973989/8dba860f-c1bb-49d5-b9dd-a58e541562a6) ModelScope demo, available at: https://modelscope.cn/studios/codefuse-ai/TestGPT-7B-demo/summary ![MS](https://github.com/codefuse-ai/Test-Agent/assets/103973989/0e50b258-44f9-4dc6-8e30-0a01cf62d02b) - Upcoming plans: keep adding exciting testing-domain application scenarios, such as domain knowledge Q&A and test-scenario analysis; support a copilot for testing scenarios; open up the engineering framework, e.g. intelligent embedding of testing-domain knowledge, a general testing tool API system, and intelligent testing agents; stay tuned! Starting from 7B, gradually expand to 13B and 34B models. Welcome to follow!
open
2023-10-25T11:40:58Z
2023-11-26T03:20:27Z
https://github.com/521xueweihan/HelloGitHub/issues/2630
[ "机器学习" ]
hailianzhl
0
zihangdai/xlnet
tensorflow
255
Docker support
# Feature Request Docker support is needed.
open
2020-01-01T14:53:09Z
2020-01-01T14:53:09Z
https://github.com/zihangdai/xlnet/issues/255
[]
sanjibnarzary
0
AUTOMATIC1111/stable-diffusion-webui
deep-learning
15,398
[Bug]: inpaint zoom is broken on firefox
### Checklist - [X] The issue exists after disabling all extensions - [X] The issue exists on a clean installation of webui - [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [X] The issue exists in the current version of the webui - [X] The issue has not been reported before recently - [X] The issue has been reported before but has not been fixed yet ### What happened? Alt+wheel up makes the image either zoom in so far that it cannot be navigated, or shrink to a tiny size; there are no intermediate zoom levels. ### Steps to reproduce the problem Alt+wheel up on the inpaint canvas ### What should have happened? The zoom should be incremental and controllable ### What browsers do you use to access the UI ? Mozilla Firefox ### Sysinfo :__ ### Console logs ```Shell ____ ``` ### Additional information _No response_
open
2024-03-27T22:37:07Z
2024-03-31T15:21:20Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15398
[ "bug-report" ]
BismutoGanymedes
2
Netflix/metaflow
data-science
1,812
Update extras_require for tracing dependencies?
From the [2.10.4 release notes](https://github.com/Netflix/metaflow/releases/tag/2.10.4): > Some additional dependencies are required for the tracing functionality in the execution environment. These can be installed in the base Docker image, or supplied through a conda environment. The relevant packages are > `opentelemetry-sdk, opentelemetry-api, opentelemetry-instrumentation, opentelemetry-instrumentation-requests` > and depending on your endpoint, either `opentelemetry-exporter-otlp` or `opentelemetry-exporter-zipkin` Would it be possible to add these to the `extras_require` section of [setup.py](https://github.com/Netflix/metaflow/blob/e64c0d2e6e5913e67bcf97e98ce8b4b704726fd4/setup.py#L56C1-L58C7)? That would let a user do something like `pip install metaflow[tracing-otel]`, and it also makes things a lot easier for tools that build lockfiles (for example [pip-tools](https://github.com/jazzband/pip-tools)) to understand the transitive dependencies. If you think this is a worthwhile change, I'll submit a PR.
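A sketch of what the `extras_require` addition might look like (the extra names `tracing-otel` and `tracing-zipkin` are my invention; only the package names come from the release notes quoted above):

```python
# Hypothetical extras_require entries for setup.py; the package lists are
# taken from the 2.10.4 release notes, the extra names are illustrative.
TRACING_BASE = [
    "opentelemetry-sdk",
    "opentelemetry-api",
    "opentelemetry-instrumentation",
    "opentelemetry-instrumentation-requests",
]

extras_require = {
    "tracing-otel": TRACING_BASE + ["opentelemetry-exporter-otlp"],
    "tracing-zipkin": TRACING_BASE + ["opentelemetry-exporter-zipkin"],
}
```

With something like this in place, `pip install metaflow[tracing-otel]` would pull the OTLP variant, and lockfile tools could resolve the transitive dependencies.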
open
2024-04-25T00:44:58Z
2024-04-25T00:44:58Z
https://github.com/Netflix/metaflow/issues/1812
[]
chriselion
0
davidsandberg/facenet
computer-vision
1,252
Question: Has anybody had luck converting to Core ML?
Has anybody had luck converting this model to Core ML? Thanks
open
2024-06-18T02:38:59Z
2024-06-18T02:38:59Z
https://github.com/davidsandberg/facenet/issues/1252
[]
x4080
0
wkentaro/labelme
computer-vision
421
[Suggestion] Add support for importing existing annotations
Hello, I really appreciate your work; the software helps me a lot. Here are some suggestions: 1. Add support for **importing existing annotations**. In some cases annotations are first generated by beginners or simple algorithms, then inspected or corrected by experienced annotators. I work in medical image processing; in my case, medical annotations labeled by various workers need to be reviewed by experienced doctors to ensure accuracy. This requires support for importing other people's annotation files and modifying them. 2. Add **spline curves** for segmentation annotation, to get smoothed boundaries; e.g. organ and cell boundaries are not rigid.
closed
2019-06-20T02:46:21Z
2020-01-27T01:34:40Z
https://github.com/wkentaro/labelme/issues/421
[]
coffeehat
3
microsoft/nni
data-science
5,740
dispatcher command (globals.args.pythonInterpreter, '-m', 'nni', '--exp_params') uses too much memory
**Describe the issue**: ![image](https://github.com/microsoft/nni/assets/106226695/6d53fe36-5930-455d-96a6-97ce605b6b4b) **Environment**: - NNI version: 2.9 - Training service (local|remote|pai|aml|etc): local - Client OS: - Server OS (for remote mode only): - Python version: - PyTorch/TensorFlow version: - Is conda/virtualenv/venv used?: - Is running in Docker?: **Configuration**: - Experiment config (remember to remove secrets!): ``` assessor: classArgs: earlystop: true optimize_mode: maximize start_step: 5 name: PAIAssessor experimentName: "\u65B0\u589E-deepfm-\u6FC0\u6D3B\u6A21\u578B\u4ED8\u8D39\u7387\u9884\ \u4F30_0117_copy_copy_copy_copy_copy_copy_copy" experimentWorkingDirectory: ../expdir maxTrialNumber: 200 searchSpaceFile: search_space.json trainingService: platform: local trialCommand: python3 -m hpo_tools.core.utils.run --config=./config.ini trialConcurrency: 2 tuner: classArgs: optimize_mode: maximize name: TPE ``` - Search space: ``` {"${batch_size}":{"_type":"choice","_value":["64","256","512","1024","5000","1500","2500","7500","10000"]},"${learning_rate}":{"_type":"choice","_value":["1e-5","1e-4","1e-3","5e-5","5e-4","5e-3"]},"${deep_dnn1_units}":{"_type":"randint","_value":[10,1000]},"${deep_dnn2_units}":{"_type":"randint","_value":[10,1000]},"${fm_dnn_units}":{"_type":"randint","_value":[10,1000]},"${combine_dnn_units}":{"_type":"randint","_value":[10,100]},"${seed}":{"_type":"randint","_value":[1,4294967290]},"${epochs}":{"_type":"randint","_value":[3,50]}} ``` **Log message**: - nnimanager.log: ``` placementConstraint: { type: 'None', gpus: [] } } [2024-01-24 15:37:33] INFO (NNIManager) Trial job zIKH1 status changed from WAITING to RUNNING [2024-01-24 15:49:24] INFO (NNIManager) Trial job ndLXi status changed from RUNNING to SUCCEEDED [2024-01-24 15:50:01] ERROR (tuner_command_channel.WebSocketChannel) Error: Error: tuner_command_channel: Tuner closed connection at WebSocket.<anonymous> 
(/usr/lib/python3.7/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:41:49) at WebSocket.emit (node:events:538:35) at WebSocket.emitClose (/usr/lib/python3.7/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:246:10) at Socket.socketOnClose (/usr/lib/python3.7/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:1127:15) at Socket.emit (node:events:526:28) at TCP.<anonymous> (node:net:687:12) [2024-01-24 15:50:01] ERROR (NNIManager) Dispatcher error: tuner_command_channel: Tuner closed connection [2024-01-24 15:50:01] ERROR (NNIManager) Error: Dispatcher stream error, tuner may have crashed. at EventEmitter.<anonymous> (/usr/lib/python3.7/site-packages/nni_node/core/nnimanager.js:647:32) at EventEmitter.emit (node:events:526:28) at WebSocketChannelImpl.handleError (/usr/lib/python3.7/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:107:22) at WebSocket.<anonymous> (/usr/lib/python3.7/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:41:37) at WebSocket.emit (node:events:538:35) at WebSocket.emitClose (/usr/lib/python3.7/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:246:10) at Socket.socketOnClose (/usr/lib/python3.7/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:1127:15) at Socket.emit (node:events:526:28) at TCP.<anonymous> (node:net:687:12) [2024-01-24 15:50:01] INFO (NNIManager) Change NNIManager status from: RUNNING to: ERROR [2024-01-24 16:05:49] INFO (NNIManager) User cancelTrialJob: zIKH1 [2024-01-24 16:05:49] INFO (ShutdownManager) Initiate shutdown: REST request [2024-01-24 16:05:49] INFO (RestServer) Stopping REST server. [2024-01-24 16:05:49] INFO (NNIManager) Change NNIManager status from: ERROR to: STOPPING [2024-01-24 16:05:49] INFO (NNIManager) Stopping experiment, cleaning up ... [2024-01-24 16:05:49] INFO (RestServer) REST server stopped. 
[2024-01-24 16:05:49] INFO (LocalTrainingService) Stopping local machine training service... [2024-01-24 16:05:49] INFO (NNIManager) Change NNIManager status from: STOPPING to: STOPPED [2024-01-24 16:05:49] INFO (NNIManager) Experiment stopped. [2024-01-24 16:05:49] INFO (NNITensorboardManager) Forced stopping all tensorboard task. [2024-01-24 16:05:49] INFO (NNITensorboardManager) All tensorboard task stopped. [2024-01-24 16:05:49] INFO (NNITensorboardManager) Tensorboard manager stopped. [2024-01-24 16:05:49] INFO (ShutdownManager) Shutdown complete. ``` - dispatcher.log: no errors - nnictl stdout and stderr: no errors <!-- Where can you find the log files: LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout --> **How to reproduce it?**:
closed
2024-01-25T03:42:39Z
2024-01-26T01:51:23Z
https://github.com/microsoft/nni/issues/5740
[]
yjjinjie
1
huggingface/datasets
deep-learning
7,458
Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0
### Describe the bug Loading https://huggingface.co/datasets/laion/filtered-wit in streaming mode fails after update to `datasets==3.4.0`. The dataset loads fine on v3.3.2. ### Steps to reproduce the bug Steps to reproduce: ``` pip install datasets==3.4.0 python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)" ``` Results in: ``` $ python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)" Repo card metadata block was not found. Setting CardData to empty. Resolving data files: 100%|█████████████████████████████████████████████| 560/560 [00:00<00:00, 2280.24it/s] Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/load.py", line 2080, in load_dataset return builder_instance.as_streaming_dataset(split=split) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/builder.py", line 1265, in as_streaming_dataset splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 49, in _split_generators data_files = dl_manager.download_and_extract(self.config.data_files) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 169, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 121, in extract urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 496, in map_nested mapped = [ File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 497, in <listcomp> map_nested( File
"/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 513, in map_nested mapped = [ File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 514, in <listcomp> _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 375, in _single_map_nested return function(data_struct) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 131, in _extract raise NotImplementedError( NotImplementedError: Extraction protocol for TAR archives like 'hf://datasets/laion/filtered-wit@c38ca7464e9934d9a49f88b3f60f5ad63b245465/data/00000.tar' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. Example usage: url = dl_manager.download(url) tar_archive_iterator = dl_manager.iter_archive(url) for filename, file in tar_archive_iterator: ... ``` ### Expected behavior Dataset loads successfully. ### Environment info Ubuntu 20.04.6. Python 3.9. Datasets 3.4.0. pip freeze: ``` aiohappyeyeballs==2.6.1 aiohttp==3.11.14 aiosignal==1.3.2 async-timeout==5.0.1 attrs==25.3.0 certifi==2025.1.31 charset-normalizer==3.4.1 datasets==3.4.0 dill==0.3.8 filelock==3.18.0 frozenlist==1.5.0 fsspec==2024.12.0 huggingface-hub==0.29.3 idna==3.10 multidict==6.1.0 multiprocess==0.70.16 numpy==2.0.2 packaging==24.2 pandas==2.2.3 propcache==0.3.0 pyarrow==19.0.1 python-dateutil==2.9.0.post0 pytz==2025.1 PyYAML==6.0.2 requests==2.32.3 six==1.17.0 tqdm==4.67.1 typing_extensions==4.12.2 tzdata==2025.1 urllib3==2.3.0 xxhash==3.5.0 yarl==1.18.3 ```
closed
2025-03-17T14:54:02Z
2025-03-17T16:02:04Z
https://github.com/huggingface/datasets/issues/7458
[]
nikita-savelyevv
1
elliotgao2/gain
asyncio
19
Add homepage.
Add homepage.
closed
2017-06-09T01:22:40Z
2017-06-12T01:47:51Z
https://github.com/elliotgao2/gain/issues/19
[]
elliotgao2
1
allure-framework/allure-python
pytest
743
Fix historyId to be dependent on dynamic allure parameters
#### I'm submitting a ... - [x] bug report - [ ] feature request - [ ] support request => Please do not submit support request here, see note at the top of this template. #### What is the current behavior? Parameters added via the `allure.dynamic.parameter` inside a test body don't affect allure history of the test. #### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem Run the following example multiple times to produce multiple `*-result.json` files: ```python import allure import time def test_issue743_reproduction(): allure.dynamic.parameter("time", time.perf_counter()) ``` These results all have the same `historyId`. In such a case allure reporter shows us only one test case with several retries: ![image](https://user-images.githubusercontent.com/17935127/233038969-356b5bc2-d1ba-4177-a8e6-30b1e1c187c9.png) #### What is the expected behavior? There should exists one test case per run, each with no retries in a way, similar to native pytest parameters. 1. Take the `nodeid` of a test. 2. Take all dynamic parameters of the test with `excluded` set to `False`. 3. Sort the parameters alphabetically by their names (`historyId` should not depend on parameters order). 4. Append a string representation of the values to the `nodeid` (use some separator to prevent collisions with other tests). 5. Calculate hash of the resulting string and use it as `historyId`. Related code: https://github.com/allure-framework/allure-python/blob/12085cd76d1c0ec78ef90a4981a31e7f8b4546b4/allure-pytest/src/listener.py#L102 #### Please tell us about your environment: - Allure version: 2.20.1 - Test framework: pytest@7.3.1 - Allure adaptor: allure-pytest@2.13.1
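The five steps above can be sketched as follows (a proposal sketch, not allure-pytest's actual implementation; the field names on the parameter objects are assumed):

```python
import hashlib

def history_id(nodeid, parameters):
    """Compute a historyId from a test's nodeid plus its dynamic parameters.

    `parameters` is assumed to be a list of dicts like
    {"name": ..., "value": ..., "excluded": bool}.
    """
    # Step 2: keep only parameters with excluded=False.
    included = [p for p in parameters if not p.get("excluded", False)]
    # Step 3: sort alphabetically by name so ordering does not matter.
    included.sort(key=lambda p: p["name"])
    # Step 4: append value representations to the nodeid; "\x1f" is a
    # separator unlikely to collide with ordinary characters.
    parts = [nodeid] + ["%s=%r" % (p["name"], p["value"]) for p in included]
    # Step 5: hash the resulting string.
    return hashlib.md5("\x1f".join(parts).encode("utf-8")).hexdigest()
```

Reordering parameters leaves the id unchanged, while changing any included value produces a new id, which is exactly the behavior native pytest parameters get.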
closed
2023-04-19T10:04:13Z
2023-04-26T09:02:03Z
https://github.com/allure-framework/allure-python/issues/743
[ "bug", "theme:pytest", "contribute" ]
delatrie
1
FlareSolverr/FlareSolverr
api
224
closes suddenly on Windows 10
### How to enable debug and html traces [Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace) ### Environment * **FlareSolverr version**: 2.0.2 * **Last working FlareSolverr version**: * **Operating system**: windows * **Are you using Docker**: [no] * **FlareSolverr User-Agent (see log traces or / endpoint)**: * **Are you using a proxy or VPN?** [no] * **Are you using Captcha Solver:** [no] * **If using captcha solver, which one:** * **URL to test this issue:** ### Description ### Logged Error Messages ### Screenshots ![(03;01;03)-20-November-2021-WIN10-SKYLAKE](https://user-images.githubusercontent.com/19517680/142670332-2655a497-c84a-444d-88ed-e32bc02642a6.gif)
closed
2021-11-19T18:08:50Z
2021-12-12T16:13:04Z
https://github.com/FlareSolverr/FlareSolverr/issues/224
[ "more information needed" ]
3xploiton3
10
NullArray/AutoSploit
automation
901
Divided by zero exception277
Error: Attempted to divide by zero.277
closed
2019-04-19T16:03:08Z
2019-04-19T16:37:01Z
https://github.com/NullArray/AutoSploit/issues/901
[]
AutosploitReporter
0
deeppavlov/DeepPavlov
tensorflow
927
Using the ranking module with pretrained BERT for prediction
How can I use the BERT model (representation-based) from the ranking module, pretrained on the Ubuntu dataset, in the following scenario: I have a list of possible responses (in a file) and I would like to load the pretrained model in Python and get an ordering of the possible responses for a given context. My goal is to define a function rank(possible_answers, context, model) which returns the ordering (not the responses themselves, but numbers representing the ordering: a list in which each number is the position of the corresponding response).
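A sketch of the function described above, with `model` assumed to be any callable `model(context, answer) -> score` where a higher score means more relevant (how you obtain that score from the pretrained ranking model depends on the DeepPavlov config you load):

```python
def rank(possible_answers, context, model):
    """Return, for each answer, its position in the relevance ordering.

    ranks[i] == 0 means possible_answers[i] is ranked best for `context`.
    `model` is a hypothetical scoring callable, not a DeepPavlov object.
    """
    scores = [model(context, answer) for answer in possible_answers]
    # Indices of answers sorted from most to least relevant.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranks = [0] * len(scores)
    for position, index in enumerate(order):
        ranks[index] = position
    return ranks
```

With a real DeepPavlov ranking model, `model` would wrap a call that scores a (context, candidate) pair.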
closed
2019-07-15T09:27:38Z
2020-05-13T09:44:28Z
https://github.com/deeppavlov/DeepPavlov/issues/927
[]
norbertryc
1
microsoft/qlib
deep-learning
1,713
how to convert 3-seconds market data to qlib bin?
I have my own 3-second market data in a local MySQL database and want to convert it to qlib bin format. How do I do this? I previously converted CSV files (3-second data) to qlib bin using dump_bin.py, and the resulting bin files look like this: ![image](https://github.com/microsoft/qlib/assets/578118/6917e224-87be-4bbc-a024-a8f005522ac8) with the postfix day.bin. My CSV files actually look like this, with one row per 3-second interval: ![image](https://github.com/microsoft/qlib/assets/578118/2a00d4b4-183a-42e4-99c7-15f897175829) Can qlib recognize this format? I used the command below to do the conversion: python scripts\dump_bin.py dump_all --freq 3sec --csv_path F:\CB\csv --qlib_dir F:\CB\ckdata_qlib --symbol_field_name stock_code --date_field_name date --include_fields open,high,low,close,volume,money,factor,vwap,change I know dump_bin.py can take --freq 1min or 1d as input. How should I handle the 3-second interval data format? If I use XXX.3sec.bin as a local dataset and then run workflow_by_code.py, it reports "ValueError: freq format is not supported, the freq should be like (n)month/mon, (n)week/w, (n)day/d, (n)minute/min". I really want to analyze 3-second interval market data. Can anyone help?
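One workaround (my suggestion, not a qlib feature): since dump_bin.py only understands frequencies like 1min or 1d, aggregate the 3-second rows into 1-minute bars with pandas before dumping. The column names below match the CSV fields shown above; how to aggregate `money`, `factor`, and `vwap` is an assumption you should adapt to your data.

```python
import pandas as pd

def to_1min_bars(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate 3-second rows (indexed by a DatetimeIndex) into 1-minute
    OHLCV bars, ready for dump_bin.py with --freq 1min."""
    return df.resample("1min").agg({
        "open": "first",   # first trade price of the minute
        "high": "max",
        "low": "min",
        "close": "last",   # last trade price of the minute
        "volume": "sum",
    })
```

The 1-minute output can then be dumped with `--freq 1min` as usual; the raw 3-second data stays in MySQL for any analysis qlib cannot express.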
open
2023-12-20T03:16:36Z
2024-01-09T05:36:25Z
https://github.com/microsoft/qlib/issues/1713
[ "question" ]
DanielKui
1
sngyai/Sequoia
pandas
56
main.py throws an error when run
<img width="1297" alt="image" src="https://github.com/sngyai/Sequoia/assets/7651267/4d1cfbf5-703c-4f0b-912d-c7f66b0478c1"> Strange error; has anyone encountered this?
open
2023-08-09T07:18:27Z
2024-10-29T11:40:13Z
https://github.com/sngyai/Sequoia/issues/56
[]
mtf7101520
8
AUTOMATIC1111/stable-diffusion-webui
pytorch
15,426
[Bug]: Preparing metadata (pyproject.toml): finished with status 'error'
### Checklist - [ ] The issue exists after disabling all extensions - [ ] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [ ] The issue exists in the current version of the webui - [ ] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? python is 3.12.2, torch 2.2.2+cu121 torchaudio 2.2.2+cu121 torchvision 0.17.2+cu121 ### Steps to reproduce the problem when I start the webui-user ### What should have happened? Preparing metadata (pyproject.toml): finished with status 'error' stderr: error: subprocess-exited-with-error Preparing metadata (pyproject.toml) did not run successfully. exit code: 1 [21 lines of output] + meson setup C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3 C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox\meson-python-native-file.ini The Meson build system Version: 1.4.0 Source dir: C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3 Build dir: C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox Build type: native build Project name: scikit-image Project version: 0.21.0 WARNING: Failed to activate VS environment: Could not parse vswhere.exe output ..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']] The following exception(s) were encountered: Running `icl ""` gave "[WinError 2] ϵͳҲָļ" Running `cl /?` gave "[WinError 2] ϵͳҲָļ" Running `cc --version` gave "[WinError 2] ϵͳҲָļ" Running `gcc 
--version` gave "[WinError 2] ϵͳҲָļ" Running `clang --version` gave "[WinError 2] ϵͳҲָļ" Running `clang-cl /?` gave "[WinError 2] ϵͳҲָļ" Running `pgcc --version` gave "[WinError 2] ϵͳҲָļ" A full log can be found at C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox\meson-logs\meson-log.txt [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Encountered error while generating package metadata. See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. ### What browsers do you use to access the UI ? _No response_ ### Sysinfo Preparing metadata (pyproject.toml): finished with status 'error' stderr: error: subprocess-exited-with-error Preparing metadata (pyproject.toml) did not run successfully. exit code: 1 [21 lines of output] + meson setup C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3 C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox\meson-python-native-file.ini The Meson build system Version: 1.4.0 Source dir: C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3 Build dir: C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox Build type: native build Project name: scikit-image Project version: 0.21.0 WARNING: Failed to activate VS environment: Could not parse vswhere.exe output ..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], ['clang'], ['clang-cl'], ['pgcc']] The following exception(s) were encountered: Running 
`icl ""` gave "[WinError 2] ϵͳҲָļ" Running `cl /?` gave "[WinError 2] ϵͳҲָļ" Running `cc --version` gave "[WinError 2] ϵͳҲָļ" Running `gcc --version` gave "[WinError 2] ϵͳҲָļ" Running `clang --version` gave "[WinError 2] ϵͳҲָļ" Running `clang-cl /?` gave "[WinError 2] ϵͳҲָļ" Running `pgcc --version` gave "[WinError 2] ϵͳҲָļ" A full log can be found at C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox\meson-logs\meson-log.txt [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Encountered error while generating package metadata. See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. ### Console logs ```Shell Preparing metadata (pyproject.toml): finished with status 'error' stderr: error: subprocess-exited-with-error Preparing metadata (pyproject.toml) did not run successfully. exit code: 1 [21 lines of output] + meson setup C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3 C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox\meson-python-native-file.ini The Meson build system Version: 1.4.0 Source dir: C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3 Build dir: C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox Build type: native build Project name: scikit-image Project version: 0.21.0 WARNING: Failed to activate VS environment: Could not parse vswhere.exe output ..\meson.build:1:0: ERROR: Unknown compiler(s): [['icl'], ['cl'], ['cc'], ['gcc'], 
['clang'], ['clang-cl'], ['pgcc']] The following exception(s) were encountered: Running `icl ""` gave "[WinError 2] The system cannot find the file specified" Running `cl /?` gave "[WinError 2] The system cannot find the file specified" Running `cc --version` gave "[WinError 2] The system cannot find the file specified" Running `gcc --version` gave "[WinError 2] The system cannot find the file specified" Running `clang --version` gave "[WinError 2] The system cannot find the file specified" Running `clang-cl /?` gave "[WinError 2] The system cannot find the file specified" Running `pgcc --version` gave "[WinError 2] The system cannot find the file specified" A full log can be found at C:\Users\ABCD\AppData\Local\Temp\pip-install-r6e3aqpo\scikit-image_18067113e36e41a0be89446643b718d3\.mesonpy-2xf8h2ox\meson-logs\meson-log.txt [end of output] ``` ### Additional information _No response_
open
2024-04-01T14:08:08Z
2024-04-01T14:08:08Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15426
[ "bug-report" ]
zutim
0
python-gino/gino
asyncio
15
Delegate more asyncpg API
In #9 only `fetch` and `cursor` are delegated, we may still need to delegate `fetchrow` and `execute`.
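A minimal sketch of what the extra delegation could look like (hypothetical wrapper, not GINO's actual internals; the method shapes follow asyncpg's `Connection.fetchrow` and `Connection.execute`):

```python
class DelegatingConnection:
    """Forward the not-yet-delegated asyncpg-style calls through a thin wrapper."""

    def __init__(self, conn):
        self._conn = conn  # an asyncpg.Connection in real use

    async def fetchrow(self, query, *args, timeout=None):
        # Returns a single record, like asyncpg's Connection.fetchrow
        return await self._conn.fetchrow(query, *args, timeout=timeout)

    async def execute(self, query, *args, timeout=None):
        # Returns the status string, like asyncpg's Connection.execute
        return await self._conn.execute(query, *args, timeout=timeout)
```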
closed
2017-07-23T14:14:37Z
2017-07-24T11:44:07Z
https://github.com/python-gino/gino/issues/15
[ "help wanted", "task" ]
fantix
0
ultralytics/yolov5
machine-learning
12,768
How to load models without pip install?
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question I need to execute this model on a different device as a pytorch model where I can't use pip install and should use the source files only. I discovered an approach to run the model, but it's throwing an error `ModuleNotFoundError: No module named 'ultralytics.yolo'` ``` from models.common import DetectMultiBackend model = DetectMultiBackend("yolov5nu.pt") ``` I've pinpointed the issue, and it occurs at this file. https://github.com/ultralytics/yolov5/blob/574331f984c0aa9c26c4ea78dac90133cfe6b2d0/models/experimental.py#L97-L97 I also attempted to execute the code manually, but encountered similar outcomes. ``` from models.experimental import attempt_download from models.yolo import Detect, Model import torch ckpt = torch.load("yolov5nu.pt", map_location="cpu") ``` ### Additional _No response_
closed
2024-02-27T02:47:48Z
2024-04-08T02:16:25Z
https://github.com/ultralytics/yolov5/issues/12768
[ "question", "Stale" ]
useruser2023
4
JaidedAI/EasyOCR
deep-learning
854
easyOCR still uses GPU although not asked
Hi, When I create the `reader` object with `gpu=False`, it doesn't directly use the GPU device, but upon calling `readtext`, it captures the first available GPU device and apparently uses it, according to `nvidia-smi`. I verified that it's not `detect()` that triggers the GPU use, so it's probably the recognizer pipeline. I didn't have time to debug further.
open
2022-09-15T13:42:32Z
2022-09-29T13:31:02Z
https://github.com/JaidedAI/EasyOCR/issues/854
[]
ozancaglayan
4
abhiTronix/vidgear
dash
254
Release of Vidgear v0.2.3
<!--- Add a brief but descriptive title for your issue above --> # Release of Vidgear v0.2.3 ## Question <!--- Provide your question description here --> When will this new version containing some hotfixes be released? I need version 0.2.3 to complete what I am working on currently. I can always help with the release process if it is needed. ### Acknowledgment <!--- By posting an issue you acknowledge the following: (Put an `x` in all the boxes that apply(important)) --> - [x] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful. - [x] I have read the [FAQs](https://abhitronix.github.io/vidgear/latest/help/get_help/#frequently-asked-questions). - [x] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest). ### Context <!--- How has this issue affected you? What are you trying to accomplish? --> I want to use the new version of StreamGear which works properly when distributing only one version for a given stream (#248). Right now, I have to install directly from the upstream main branch of this repository instead of specifying a version. <!--- Providing context helps us come up with a solution that is most useful in the real world -->
closed
2021-10-19T16:26:48Z
2021-10-27T04:00:06Z
https://github.com/abhiTronix/vidgear/issues/254
[ "QUESTION :question:", "SOLVED :checkered_flag:", "NEW RELEASE :fire:" ]
Vboivin
3
Miksus/rocketry
pydantic
31
Why do we need Red Engine?
What are its advantages over Airflow and other mature scheduling frameworks?
closed
2022-07-04T12:30:43Z
2022-07-04T19:55:24Z
https://github.com/Miksus/rocketry/issues/31
[]
lidh15
1
ivy-llc/ivy
numpy
28,607
Fix Frontend Failing Test: tensorflow - operators.jax.lax.real
closed
2024-03-14T22:09:57Z
2024-03-16T12:27:15Z
https://github.com/ivy-llc/ivy/issues/28607
[ "Sub Task" ]
samthakur587
0
google/trax
numpy
848
PyTorch backend
I notice that so far there are JAX, TensorFlow, and NumPy backends. Are there any plans to have a PyTorch backend in the future?
open
2020-07-17T14:43:40Z
2020-10-14T07:15:40Z
https://github.com/google/trax/issues/848
[ "enhancement" ]
briankosw
2
jupyterhub/zero-to-jupyterhub-k8s
jupyter
3,623
Ingress with a subpath has a hard-coded trailing slash
<!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! --> ### Bug description I am trying to deploy JH to a subpath, e.g. `<mydomain>/jupyter`. I set `hub.baseUrl=/jupyter` and enable ingress, but the `Ingress` object is not correctly configured, it has a trailing slash: `path: /jupyter/`. ### How to reproduce <!-- Use this section to describe the steps that a user would take to experience this bug. --> Use `helm` to install jupyter with ingress enabled and a subpath on `hub.baseUrl`. ```shell helm upgrade --install \ jupyter jupyterhub/jupyterhub \ --namespace jupyter \ --version=4.1.0 \ --set proxy.service.type=ClusterIP \ --set hub.baseUrl=/jupyter \ --set ingress.enabled=true,ingress.hosts="{<mydomain>}" ``` #### Expected behaviour Jupyter is served from `<mydomain>/jupyter` #### Actual behaviour - `<mydomain>/jupyter` is a 404 served by traefik (my `IngressController`), indicating that traefik does not route that path to Jupyter. - If I instead navigate to `<mydomain>/jupyter/` with a trailing slash, it does route correctly and JH loads as expected. 
The `Ingress` that gets created by `helm` looks like this: ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: meta.helm.sh/release-name: jupyter meta.helm.sh/release-namespace: jupyter creationTimestamp: "2025-02-13T21:02:34Z" generation: 11 labels: app: jupyterhub app.kubernetes.io/component: ingress app.kubernetes.io/instance: jupyter app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: jupyterhub chart: jupyterhub-4.1.0 component: ingress helm.sh/chart: jupyterhub-4.1.0 heritage: Helm release: jupyter name: jupyterhub namespace: jupyter resourceVersion: "2407046" uid: a3b6128c-bef4-4997-be30-b1c70320e876 spec: ingressClassName: traefik rules: - host: <mydomain> http: paths: - backend: service: name: proxy-public port: name: http path: /jupyter/ pathType: Prefix ``` Note that the path is `path: /jupyter/` with a trailing slash. That is because of [line 20 in the ingress template](https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/c7de0d578498b31e5ee98b59455e02a26598c814/jupyterhub/templates/ingress.yaml#L20): ``` - path: {{ $.Values.hub.baseUrl | trimSuffix "/" }}/{{ $.Values.ingress.pathSuffix }} ``` Since I have set `hub.baseUrl` but haven't set anything for `ingress.pathSuffix`, this has the effect of appending a slash to the value I set in `hub.baseUrl`. This prevents routing to the slash-free subpath. If I patch the ingress with this file: ```yaml spec: ingressClassName: traefik rules: - host: <mydomain> http: paths: - backend: service: name: proxy-public port: name: http path: /jupyter pathType: Prefix ``` using the command ``` kubectl -n jupyter patch ingress jupyterhub --type strategic --patch-file jupyterhub-ingress.yaml ``` Then the `Ingress` path is set to `/jupyter`, and navigating to `<mydomain>/jupyter` works as expected.
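Given that template line, one possible fix (a sketch, not a tested patch against the chart, and assuming a non-root `hub.baseUrl`) is to append the slash only when a `pathSuffix` is actually set:

```yaml
# jupyterhub/templates/ingress.yaml (hypothetical revision)
- path: {{ $.Values.hub.baseUrl | trimSuffix "/" }}{{ with $.Values.ingress.pathSuffix }}/{{ . }}{{ end }}
```

With `hub.baseUrl=/jupyter` and no `pathSuffix`, this renders `path: /jupyter`, matching the manually patched `Ingress` above.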
open
2025-02-14T16:28:51Z
2025-02-14T16:28:51Z
https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3623
[ "bug" ]
johnflavin
0
marimo-team/marimo
data-science
3,828
Programs containing marimo cannot be frozen anymore
### Describe the bug When freezing a program containing marimo, it crashes during startup/module loading. I tried cx-freeze and pyinstaller, and it worked on version 0.10.6 and breaks starting with 0.10.7. To freeze it, I'm using fastapi and uvicorn, and it worked beautifully to deploy. (side note: marimo is great!) ``` Traceback (most recent call last): File "__startup__.py", line 140, in run File "console.py", line 25, in run File "run.py", line 8, in <module> File "marimo\__init__.py", line 89, in <module> File "marimo\_islands\__init__.py", line 9, in <module> File "marimo\_islands\island_generator.py", line 11, in <module> File "marimo\_ast\app.py", line 29, in <module> File "marimo\_ast\cell_manager.py", line 21, in <module> File "marimo\_ast\pytest.py", line 17, in <module> File "inspect.py", line 1285, in getsource File "inspect.py", line 1267, in getsourcelines File "inspect.py", line 1096, in findsource OSError: could not get source code ``` ### Environment <details> ``` { "marimo": "0.10.7", "OS": "Windows", "OS Version": "10", "Processor": "Intel64 Family 6 Model 154 Stepping 4, GenuineIntel", "Python Version": "3.12.9", "Binaries": { "Browser": "127.0.6533.120", "Node": "v23.2.0" }, "Dependencies": { "click": "8.1.8", "docutils": "0.21.2", "itsdangerous": "2.2.0", "jedi": "0.19.2", "markdown": "3.7", "narwhals": "1.27.1", "packaging": "24.2", "psutil": "7.0.0", "pygments": "2.19.1", "pymdown-extensions": "10.14.3", "pyyaml": "6.0.2", "ruff": "0.9.6", "starlette": "0.45.3", "tomlkit": "0.13.2", "typing-extensions": "4.12.2", "uvicorn": "0.34.0", "websockets": "15.0" }, "Optional Dependencies": {} } ``` </details> ### Code to reproduce This file **when frozen** will throw a FileNotFoundError if it is working and no mofile.py is in the directoy, otherwise throw the error above. 
```python import sys import webbrowser from pathlib import Path from fastapi import FastAPI from marimo import create_asgi_app from marimo._server.asgi import ASGIAppBuilder # Create a marimo asgi app server: ASGIAppBuilder = create_asgi_app().with_app( path="/", root="./mofile.py", ) app = FastAPI() app.mount("/", server.build()) # Run the server if __name__ == "__main__": import uvicorn webbrowser.open_new("http://localhost:22810/") uvicorn.run(app, host="localhost", port=22810) ```
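The traceback bottoms out in `inspect.getsource`, which needs the original source files on disk; frozen executables typically ship only bytecode. The failure mode can be reproduced with plain Python (generic interpreter behavior, not marimo-specific):

```python
import inspect

# Compile a function whose "file" does not exist on disk, as is the
# case for modules bundled into a frozen executable.
namespace = {}
exec(compile("def f():\n    return 1\n", "<frozen-like>", "exec"), namespace)

try:
    inspect.getsource(namespace["f"])
except OSError as exc:
    print("getsource failed:", exc)
```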
closed
2025-02-18T09:47:23Z
2025-02-18T18:29:23Z
https://github.com/marimo-team/marimo/issues/3828
[ "bug" ]
ABChristian
3
dynaconf/dynaconf
django
403
[bug] Cannot get secrets from vault
**Describe the bug** When trying to retrieve secrets from HashiCorp Vault without setting OS envs, the settings only contain the default dynaconf settings. **To Reproduce** Steps to reproduce the behavior: 1. Run a local Vault server as described in the documentation 2. Store some secrets in Vault 3. Run this code: <details> <summary> Python Code </summary> ```python from dynaconf import Dynaconf config = Dynaconf( environment=True, vault_enabled=True, vault_url="http://localhost:8200", vault_token="myroot" ) print(config.as_dict()) ``` </details> **Expected behavior** The config should include the stored secrets. **Environment (please complete the following information):** - OS: [OSX] - Dynaconf Version [3.1.0]
closed
2020-09-02T11:17:43Z
2020-10-23T18:57:28Z
https://github.com/dynaconf/dynaconf/issues/403
[ "hacktoberfest", "Docs" ]
clanzett
4
open-mmlab/mmdetection
pytorch
11,572
The same image and the same model, after being deployed with C++, sometimes produce results and sometimes nothing. Why?
The same image and the same model, when deployed with C++, sometimes produce results and sometimes nothing. Why? The error is: "Unexpected: dets.size() == 0". The code is below:

```cpp
#include "mmdeploy/detector.hpp"
#include "opencv2/imgcodecs/imgcodecs.hpp"
#include "opencv2/core/utility.hpp"
#include "utils/visualize.h"
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

std::string ARGS_model = "../../../../infer_file/model/cascade_mask-rcnn";
std::string ARGS_image_dir = "../../../../infer_file/img";
std::string FLAGS_device = "cpu";
std::string ARGS_output_dir = "detector_output";
double FLAGS_det_thr = 0.8;
int prediction_count = 10000;

int main(int argc, char* argv[]) {
  std::ofstream result_file("prediction_results.txt");
  try {
    if (!std::filesystem::exists(ARGS_image_dir)) {
      throw std::runtime_error("Input image directory does not exist: " + ARGS_image_dir);
    }
    mmdeploy::Detector detector(mmdeploy::Model{ ARGS_model }, mmdeploy::Device{ FLAGS_device });
    for (const auto& entry : std::filesystem::directory_iterator(ARGS_image_dir)) {
      if (!entry.is_regular_file() || entry.path().extension() != ".jpg") {
        continue;
      }
      std::string image_path = entry.path().string();
      cv::Mat img = cv::imread(image_path);
      if (img.empty()) {
        throw std::runtime_error("Failed to load image: " + image_path);
      }
      for (int i = 0; i < prediction_count; ++i) {
        mmdeploy::Detector::Result dets = detector.Apply(img);
        utils::Visualize v;
        auto sess = v.get_session(img);
        int count = 0;
        for (const mmdeploy_detection_t& det : dets) {
          if (det.score > FLAGS_det_thr) {
            sess.add_det(det.bbox, det.label_id, det.score, det.mask, count++);
          }
        }
        std::filesystem::create_directory(ARGS_output_dir);
        std::string output_image_path = ARGS_output_dir + "/" + entry.path().filename().string();
        cv::imwrite(output_image_path, sess.get());
        if (dets.size() == 0) {
          std::cout << "Unexpected: dets.size() == 0" << std::endl;
          std::cin.get();
        }
        int mask_index = 0;
        for (const mmdeploy_detection_t& det : dets) {
          if (det.score > FLAGS_det_thr && det.mask) {
            std::string unique_identifier = ARGS_output_dir + "/" + entry.path().stem().string() +
                                            "_mask_" + std::to_string(mask_index++) + ".png";
            cv::Mat mask_img(img.size(), CV_8UC1, cv::Scalar(0));
            cv::Rect roi(static_cast<int>(det.bbox.left), static_cast<int>(det.bbox.top),
                         static_cast<int>(det.bbox.right - det.bbox.left),
                         static_cast<int>(det.bbox.bottom - det.bbox.top));
            cv::Mat mask(det.mask->height, det.mask->width, CV_8UC1, det.mask->data);
            cv::resize(mask, mask, roi.size());
            mask.copyTo(mask_img(roi));
            cv::imwrite(unique_identifier, mask_img);
            result_file << "Mask File: " << unique_identifier << std::endl;
          }
        }
        // Write prediction results to the text file
        result_file << "Image Path: " << image_path << std::endl;
        for (const mmdeploy_detection_t& det : dets) {
          result_file << "Label ID: " << det.label_id << ", Score: " << det.score << std::endl;
        }
        result_file << "----" << std::endl;
      }
    }
  } catch (const std::exception& e) {
    result_file << "Exception occurred: " << e.what() << std::endl;
  }
  result_file.close();
  std::cout << "Program finished. Press Enter to close the program..." << std::endl;
  std::cin.get();
  return 0;
}
```
open
2024-03-20T05:18:04Z
2024-03-20T05:18:04Z
https://github.com/open-mmlab/mmdetection/issues/11572
[]
happybear1015
0
streamlit/streamlit
streamlit
10,514
toml.decoder.TomlDecodeError: Key name found without value. Reached end of line.
### Summary Hello, I created a Google OpenID Connect client and tried to implement it in Streamlit, but the following error occurred ``` Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/__main__.py", line 20, in <module> main(prog_name="streamlit") File "/usr/lib/python3/dist-packages/click/core.py", line 1157, in __call__ return self.main(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/click/core.py", line 1078, in main rv = self.invoke(ctx) ^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/click/core.py", line 1688, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/click/core.py", line 1434, in invoke return ctx.invoke(self.callback, **ctx.params) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/click/core.py", line 783, in invoke return __callback(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/cli.py", line 240, in main_run _main_run(target, args, flag_options=kwargs) File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/cli.py", line 276, in _main_run bootstrap.run(file, is_hello, args, flag_options) File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/bootstrap.py", line 349, in run asyncio.run(main()) File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/bootstrap.py", line 341, in main await
run_server() File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/bootstrap.py", line 319, in run_server await server.start() File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/server/server.py", line 295, in start app = self._create_app() ^^^^^^^^^^^^^^^^^^ File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/server/server.py", line 451, in _create_app cookie_secret=get_cookie_secret(), ^^^^^^^^^^^^^^^^^^^ File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/web/server/server_util.py", line 82, in get_cookie_secret if secrets_singleton.load_if_toml_exists(): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/runtime/secrets.py", line 222, in load_if_toml_exists self._parse() File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/runtime/secrets.py", line 378, in _parse path_secrets, found_secrets_file_in_path = self._parse_file_path(path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/runtime/secrets.py", line 336, in _parse_file_path return self._parse_toml_file(path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ugroon/.local/lib/python3.12/site-packages/streamlit/runtime/secrets.py", line 276, in _parse_toml_file secrets.update(toml.loads(secrets_file_str)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ugroon/.local/lib/python3.12/site-packages/toml/decoder.py", line 213, in loads raise TomlDecodeError("Key name found without value." toml.decoder.TomlDecodeError: Key name found without value. Reached end of line. 
(line 9 column 67 char 346) ``` Config file that I used (.streamlit/secrets.toml) ``` [auth] redirect_uri = "http://localhost:8501/oauth2callback" cookie_secret = "hebelehubelecartcurt" [auth.google] client_id = "7760*********-k**************************q.apps.googleusercontent.com" client_secret = "G*****-*r*__-***********************A" server_metadata_url = ( "https://accounts.google.com/.well-known/openid-configuration" ) ``` Code that I used ```Python import streamlit as st st.button("Log in with Google", on_click=st.login, args=["google"]) ```
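The parse error points at the `server_metadata_url = (` line: wrapping a value in parentheses across lines is Python syntax, and the TOML format has no such construct, so the parser sees a key name with no value. A single-line assignment is likely what was intended:

```toml
[auth.google]
client_id = "..."
client_secret = "..."
server_metadata_url = "https://accounts.google.com/.well-known/openid-configuration"
```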
closed
2025-02-19T11:56:06Z
2025-02-25T18:24:57Z
https://github.com/streamlit/streamlit/issues/10514
[]
ug0x01
3
httpie/cli
rest-api
1,126
Session file with cookie cannot be parsed
**Checklist** - [Y] I've searched for similar issues. - [Y] I'm using the latest version of HTTPie. - httpie version, 2.4.0 - python version, 3.9 I prepared the **my-session-cookie.json** file, the request parameters are placed in **query.json**, and the expected response results are placed in the **result.json** file. **my-session-cookie.json** looks like this: ` { "__meta__": { "about": "HTTPie session file", "help": "https://httpie.org/doc#sessions", "httpie": "2.4.0" }, "headers": { "Content-Type": "application/json", "cookie": "12345" } } ` And the command looks like this: ![2021813-22044](https://user-images.githubusercontent.com/26968137/129248791-748008ad-9ce5-4fae-bcdf-c9ed312a8997.png) <img width="816" alt="Xnip2021-08-13_02-22-22" src="https://user-images.githubusercontent.com/26968137/129248846-2614c7bb-afd3-472c-86f8-0429d183cf60.png"> `http --verify=no -v --session-read-only=~/Desktop/my-session-cookie.json POST http://localhost:8301/test < ~/Desktop/query.json -d >>~/Desktop/result.json` And I got an error like this: `http: error: RuntimeError: OrderedDict mutated during iteration` Then I did some tests, and the request can be executed if I: - remove the cookie field - OR remove `-d >>~/Desktop/result.json` from the command So what's wrong?
closed
2021-08-12T18:24:04Z
2021-08-17T14:05:30Z
https://github.com/httpie/cli/issues/1126
[ "bug", "new" ]
caofanCPU
6
ymcui/Chinese-LLaMA-Alpaca
nlp
502
Questions about continuing fine-tuning of Chinese Plus Alpaca
I want to continue training Chinese Plus Alpaca. --model_name_or_path: the Chinese-LLaMA model obtained after merging Chinese-LLaMA-Plus-LoRA. Question 1: --peft_path: should this be the weight directory of Chinese-Alpaca or of Chinese-Plus-Alpaca? Question 2: For continued fine-tuning, do I only need to provide a JSON of the new data, and after fine-tuning will I get the effect of the original Chinese-Plus-Alpaca merged with the new data? Question 3: When merging the New-Alpaca-Lora obtained after training, should I use Chinese-LLaMA + Chinese-LLaMA-Plus-LoRA + Chinese-Alpaca-Plus-LoRA + NewAlpaca-Lora, or Chinese-LLaMA + Chinese-LLaMA-Plus-LoRA + NewAlpaca-Lora? *Put an x inside the [ ] to mark a check. Delete this line when asking. For the first three items, keep the options that match your question type.* - [x] **Base model**: Alpaca-Plus 7B - [x] **Operating system**: Linux - [x] **Issue category**: Model training and fine-tuning - [x] **Model correctness check**: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); if the model is wrong, correct results and normal operation cannot be guaranteed. - [x] (Required) Since the related dependencies are updated frequently, make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki) - [x] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues, and found no similar problem or solution - [x] (Required) Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding project
closed
2023-06-03T04:17:11Z
2023-06-05T05:13:28Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/502
[]
Damonproto
3
plotly/dash-bio
dash
421
AlignmentChart Component Improvements
The AlignmentChart component currently lacks a few features which are relevant to ease of use in creating app layouts and in specifying which data is displayed. * One issue is the lack of dynamic auto-resizing to fit a container based on the size of the display. Although the `height` and `width` of the chart can be manually set with their respective properties, they remain static and disrupt the layout when viewing the application at different resolutions. An example is shown below with an excess of whitespace. This can be alleviated with CSS workarounds, but the chart remains at the set height and width even if the container around it will resize. ![g](https://user-images.githubusercontent.com/30986043/65616444-0e9a0800-df89-11e9-9ca9-47ae374cba7a.PNG) * The `tickstart` and `tickstep` optional properties don't seem to be working at all, or at least not as described. Changing the value of these props, even outside the `Slider` mode which is a known bug, results in no change to the tile annotations or intervals. I'm assuming this is relevant to ORFs based on the description of the property, so it would be nice-to-have for certain sequences. * A feature which would be helpful for comparing alignments on the chart would be a property or even a dropdown on the chart itself which allows a user to select or deselect which sequences are included in the sequence set aligned on the chart itself. It could include the sequence header as the `label` and the sequence displayed as the `value`. Currently this can be done outside the component by modifying the FASTA or CLUSTAL file itself with callbacks, but this may be nice for ease of use.
closed
2019-09-25T15:50:44Z
2021-10-27T01:44:00Z
https://github.com/plotly/dash-bio/issues/421
[ "nice-to-have" ]
HammadTheOne
2