| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pydata/pandas-datareader | pandas | 332 | Having weird issues with the data reader fetching yahoo finance |
```python
import pandas_datareader as pdr
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt

start_date = dt.datetime(2016,1,1)
end_date = dt.datetime(2016,12,31)
df_bit = pdr.get_data_yahoo('GBTC', start_date, end_date)
df_gd = pdr.get_data_yahoo('IAU', start_date, end_date)
df_com = pdr.get_data_yahoo('IYZ', start_date, end_date)
df_bio = pdr.get_data_yahoo('RYOIX', start_date, end_date)
df_tao = pdr.get_data_yahoo('TAO', start_date, end_date)
df_sp = pdr.get_data_yahoo('^GSPC', start_date, end_date)
```
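Note: the `ichart.finance.yahoo.com` endpoint this version of pandas-datareader relies on was retired by Yahoo in 2017, so every call fails with `RemoteDataError`. Until a fixed reader is released, one pattern is to try alternative fetchers in turn; a minimal sketch (the fetcher names are illustrative, and `yfinance`'s `pdr_override()` was a commonly cited workaround at the time):

```python
def fetch_with_fallback(fetchers, *args, **kwargs):
    """Try each data-fetching callable in turn; return the first success.

    `fetchers` holds callables with the same signature, e.g.
    [pdr.get_data_yahoo, some_other_source] (names illustrative).
    """
    last_err = None
    for fetch in fetchers:
        try:
            return fetch(*args, **kwargs)
        except Exception as err:  # e.g. RemoteDataError
            last_err = err
    raise last_err if last_err else ValueError("no fetchers given")
```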
Then it returns:
`---------------------------------------------------------------------------
RemoteDataError Traceback (most recent call last)
<ipython-input-4-3773d2c3f242> in <module>()
----> 1 df_bit = pdr.get_data_yahoo('GBTC', start_date, end_date)
2 df_gd = pdr.get_data_yahoo('IAU', start_date, end_date)
3 df_com = pdr.get_data_yahoo('IYZ', start_date, end_date)
4 df_bio = pdr.get_data_yahoo('RYOIX', start_date, end_date)
5 df_tao = pdr.get_data_yahoo('TAO', start_date, end_date)
/Users/annaxlu/anaconda3/lib/python3.6/site-packages/pandas_datareader/data.py in get_data_yahoo(*args, **kwargs)
38
39 def get_data_yahoo(*args, **kwargs):
---> 40 return YahooDailyReader(*args, **kwargs).read()
41
42
/Users/annaxlu/anaconda3/lib/python3.6/site-packages/pandas_datareader/yahoo/daily.py in read(self)
74 def read(self):
75 """ read one data from specified URL """
---> 76 df = super(YahooDailyReader, self).read()
77 if self.ret_index:
78 df['Ret_Index'] = _calc_return_index(df['Adj Close'])
/Users/annaxlu/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py in read(self)
153 # If a single symbol, (e.g., 'GOOG')
154 if isinstance(self.symbols, (compat.string_types, int)):
--> 155 df = self._read_one_data(self.url, params=self._get_params(self.symbols))
156 # Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT'])
157 elif isinstance(self.symbols, DataFrame):
/Users/annaxlu/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py in _read_one_data(self, url, params)
72 """ read one data from specified URL """
73 if self._format == 'string':
---> 74 out = self._read_url_as_StringIO(url, params=params)
75 elif self._format == 'json':
76 out = self._get_response(url, params=params).json()
/Users/annaxlu/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py in _read_url_as_StringIO(self, url, params)
83 Open url (and retry)
84 """
---> 85 response = self._get_response(url, params=params)
86 text = self._sanitize_response(response)
87 out = StringIO()
/Users/annaxlu/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py in _get_response(self, url, params)
118 if params is not None and len(params) > 0:
119 url = url + "?" + urlencode(params)
--> 120 raise RemoteDataError('Unable to read URL: {0}'.format(url))
121
122 def _read_lines(self, out):
RemoteDataError: Unable to read URL: http://ichart.finance.yahoo.com/table.csv?s=GBTC&a=0&b=1&c=2016&d=11&e=31&f=2016&g=d&ignore=.csv
` | closed | 2017-05-17T20:19:19Z | 2017-07-02T15:09:57Z | https://github.com/pydata/pandas-datareader/issues/332 | [] | xrlu0929 | 3 |
matplotlib/mplfinance | matplotlib | 116 | Feature Request: show_nontrading: differentiating between market off days vs no transactions | Hi,
**Is your feature request related to a problem? Please describe.**
Non-trading days have two possible causes:
1. the market is closed, or
2. there were no transactions for that day.
Skipping the days when the market is closed is fine, but showing the gaps when there are no transactions could be useful to indicate the liquidity of the entity.
**Describe the solution you'd like**
I propose to add a callback parameter in the `plot` function so that the user can define (customize) the days that could be skipped. The parameter to the callback will be the datetime value.
```python
def is_nontrading(dt: np.datetime64) -> bool:
    dates = get_days_of_a_market_index()
    return dt not in dates


def plot(..., show_nontrading=False, nontrading_callback=is_nontrading):
    ...
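
# Hypothetical usage (these names illustrate the proposal, not a final API):
# mpf.plot(df, type='candle', show_nontrading=True,
#          nontrading_callback=is_nontrading)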
``` | closed | 2020-05-01T07:17:47Z | 2020-05-05T14:36:12Z | https://github.com/matplotlib/mplfinance/issues/116 | [
"enhancement"
] | char101 | 14 |
pydata/xarray | pandas | 9,789 | Support DataTree in apply_ufunc | Sub-issue of https://github.com/pydata/xarray/issues/9106 | open | 2024-11-16T21:41:20Z | 2024-11-16T21:41:32Z | https://github.com/pydata/xarray/issues/9789 | [
"contrib-help-wanted",
"topic-DataTree"
] | shoyer | 0 |
dsdanielpark/Bard-API | api | 18 | SNlM0e value not found in response | I tried to port the core Python code to C# with the help of ChatGPT, as I'm just a newbie in this field. It generated the full code for integration with Unity, but I got the error "SNlM0e value not found in response".
I don't know what the issue is; might it be related to my location?
Here is the code:
```c#
using UnityEngine;
using System.Collections.Generic;
using System.Text.RegularExpressions;
using System.Net.Http;
using System;
using UnityEngine.Networking;
using System.Linq;
public class Bard
{
private Dictionary<string, string> proxies;
private int timeout;
private Dictionary<string, string> headers;
private int _reqid;
private string conversation_id;
private string response_id;
private string choice_id;
private HttpClient httpClient;
private string SNlM0e;
public Bard(int timeout = 6, Dictionary<string, string> proxies = null)
{
this.proxies = proxies;
this.timeout = timeout;
this.headers = new Dictionary<string, string>
{
{ "Host", "bard.google.com" },
{ "X-Same-Domain", "1" },
{ "User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" },
{ "Content-Type", "application/x-www-form-urlencoded;charset=UTF-8" },
{ "Origin", "https://bard.google.com" },
{ "Referer", "https://bard.google.com/" }
};
this._reqid = int.Parse(string.Join("", UnityEngine.Random.Range(0, 10), UnityEngine.Random.Range(0, 10), UnityEngine.Random.Range(0, 10), UnityEngine.Random.Range(0, 10)));
this.conversation_id = "";
this.response_id = "";
this.choice_id = "";
this.httpClient = new HttpClient();
this.httpClient.DefaultRequestHeaders.Clear();
this.httpClient.DefaultRequestHeaders.Host = "bard.google.com";
this.httpClient.DefaultRequestHeaders.Add("X-Same-Domain", "1");
this.httpClient.DefaultRequestHeaders.UserAgent.ParseAdd("Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36");
this.httpClient.DefaultRequestHeaders.Accept.ParseAdd("application/x-www-form-urlencoded;charset=UTF-8");
this.httpClient.DefaultRequestHeaders.Referrer = new Uri("https://bard.google.com/");
// this.httpClient.DefaultRequestHeaders.Add("Cookie", "__Secure-1PSID=" + Environment.GetEnvironmentVariable("_BARD_API_KEY"));
this.httpClient.DefaultRequestHeaders.Add("Cookie", "__Secure-1PSID=Wgi8wUobo4-2WaJ29Y45F_fGhfjh4GjtcOMkBamz5_dJ4gogfwcocBxvS2PRxuEiJsS4ww.");
this.SNlM0e = this._get_snim0e();
}
private string _get_snim0e()
{
var requestUri = "https://bard.google.com/";
var response = httpClient.GetAsync(requestUri).Result;
if (!response.IsSuccessStatusCode)
{
throw new Exception($"Response Status: {response.StatusCode}");
}
var responseContent = response.Content.ReadAsStringAsync().Result;
Debug.Log("responseContent" + responseContent);
var match = Regex.Match(responseContent, "SNlM0e\":\"(.*?)\"");
if (!match.Success)
{
throw new Exception("SNlM0e value not found in response.");
}
return match.Groups[1].Value;
}
public Dictionary<string, object> GetAnswer(string inputText)
{
var url = "https://bard.google.com/_/BardChatUi/data/assistant.lamda.BardFrontendService/StreamGenerate";
var paramsDict = new Dictionary<string, string>
{
{ "bl", "boq_assistant-bard-web-server_20230419.00_p1" },
{ "_reqid", this._reqid.ToString() },
{ "rt", "c" }
};
var inputTextStruct = new List<object[]>
{
new object[] { inputText },
null,
new object[] { this.conversation_id, this.response_id, this.choice_id }
};
var dataDict = new Dictionary<string, string>
{
{ "f.req", JsonUtility.ToJson(new object[] { null, JsonUtility.ToJson(inputTextStruct) }) },
{ "at", this.SNlM0e }
};
var formData = new WWWForm();
foreach (var kvp in dataDict)
{
formData.AddField(kvp.Key, kvp.Value);
}
var request = UnityWebRequest.Post(url, formData);
request.SetRequestHeader("Host", "bard.google.com");
request.SetRequestHeader("X-Same-Domain", "1");
request.SetRequestHeader("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36");
request.SetRequestHeader("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8");
request.SetRequestHeader("Origin", "https://bard.google.com");
request.SetRequestHeader("Referer", "https://bard.google.com/");
var response = request.SendWebRequest();
while (!response.isDone) { }
if (request.result != UnityWebRequest.Result.Success)
{
return new Dictionary<string, object> { { "content", $"Response Error: {request.error}." } };
}
var responseContent = request.downloadHandler.text;
var responseLines = responseContent.Split('\n');
var respDict = JsonUtility.FromJson<DictionaryResponse>(responseLines[3]);
var respJson = JsonUtility.ToJson(new DictionaryResponse { Content = respDict.Content });
var parsedAnswer = JsonUtility.FromJson<DictionaryAnswer>(respJson);
var bardAnswer = new Dictionary<string, object>
{
{ "content", parsedAnswer.Content[0][0] },
{ "conversation_id", parsedAnswer.Content[1][0] },
{ "response_id", parsedAnswer.Content[1][1] },
{ "factualityQueries", parsedAnswer.Content[3] },
{ "textQuery", parsedAnswer.Content[2][0] != null ? parsedAnswer.Content[2][0] : "" },
{ "choices", parsedAnswer.Content[4].Select(item => new Dictionary<string, object> { { "id", item[0] }, { "content", item[1] } }).ToList() }
};
this.conversation_id = (string)bardAnswer["conversation_id"];
this.response_id = (string)bardAnswer["response_id"];
choice_id = ((List<Dictionary<string, object>>)bardAnswer["choices"])[0]["id"].ToString(); this._reqid += 100000;
return bardAnswer;
}
[System.Serializable]
public class DictionaryResponse
{
public object[][] Content;
}
[System.Serializable]
public class DictionaryAnswer
{
public object[][][] Content;
}
}
``` | closed | 2023-05-18T08:56:28Z | 2023-05-18T13:48:08Z | https://github.com/dsdanielpark/Bard-API/issues/18 | [] | zicas2000 | 6 |
apachecn/ailearning | nlp | 426 | Chapter 3: Decision Tree Algorithm - ApacheCN | https://ailearning.apachecn.org/#/docs/ml/3.决策树
Python implementation of selecting the optimal K value for k-means clustering:
https://blog.csdn.net/xyisv/article/details/82430107 | closed | 2018-08-24T07:05:47Z | 2021-09-07T17:42:20Z | https://github.com/apachecn/ailearning/issues/426 | [
"Gitalk",
"679ec82e812919d2912f4dc95337167c"
] | jiangzhonglian | 12 |
mouredev/Hello-Python | fastapi | 425 | Cheated by an online gambling site? What to do about failed withdrawals, bet-slip audits, system maintenance, or a frozen account? | On the latest ways to deal with online gambling entertainment platforms that use all kinds of excuses not to pay out your winnings
Recovery consultation, WeChat: xiaolu460570, Telegram: lc15688. If you are currently in this situation, contact our professional recovery team right away; using techniques such as score-hiding to withdraw in batches, the problem can be resolved. In the internet era many people like to gamble online, but many do not realize there are now lots of scam platforms. Scam platforms refuse payouts with excuses such as abnormal account login, website withdrawal-channel maintenance, suspected arbitrage, automatic system spot-check audits, URL audits, betting violations, bank system maintenance, and so on; they stall, refuse to pay, and even freeze your account. What to do when an online betting site will not pay out, and how to get your money back.
Remember: once you have won money, any excuse for not letting you withdraw basically means you have been scammed.
First: withdrawals are rejected with all kinds of excuses; they simply will not pay out, asking you to wager turnover multiples and so on!
Second: some functions of your account are restricted! Withdrawal and deposit channels are closed, or you are asked to top up to unlock the withdrawal channel, and so on!
Third: customer service makes excuses such as system maintenance or risk-control review; they just will not let you withdraw!
Fourth: once you have confirmed you have been scammed, what should you do? (Find a professional team to show you how to recover your losses; no upfront fees unless the withdrawal succeeds.)
open-mmlab/mmdetection | pytorch | 11,443 | MM Grounding dino | After running the command below, why are the detection results so poor? Is something misconfigured?
Command: python demo/image_demo.py test_images/two_human/ configs/mm_grounding_dino/grounding_dino_swin-t_pretrain_obj365.py --weights grounding_dino_swin-t_pretrain_obj365_goldg_grit9m_v3det_20231204_095047-b448804b.pth --texts 'The person standing on the left.' --tokens-positive -1 --out-dir /home/wh/sjx/mm/mmdetection/outputs/ --device 'cpu'
result:



| open | 2024-01-30T08:01:54Z | 2024-01-31T02:23:40Z | https://github.com/open-mmlab/mmdetection/issues/11443 | [] | 12cyan | 1 |
sebp/scikit-survival | scikit-learn | 11 | GradientBoostingSurvivalAnalysis needs to use min_impurity_decrease instead of min_impurity_split | Here's the error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-89-df62d5a185bd> in <module>()
----> 1 estimator = GradientBoostingSurvivalAnalysis(n_estimators=1000, random_state=0)
~/.venv/201710_price_opt_fake_data/lib/python3.4/site-packages/sksurv/ensemble/boosting.py in __init__(self, loss, learning_rate, n_estimators, criterion, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_depth, min_impurity_split, random_state, max_features, max_leaf_nodes, subsample, dropout_rate, verbose)
486 max_features=max_features,
487 max_leaf_nodes=max_leaf_nodes,
--> 488 verbose=verbose)
489 self.dropout_rate = dropout_rate
490
TypeError: __init__() missing 1 required positional argument: 'min_impurity_decrease'
```
Seems that the call to Tree-based models must now use min_impurity_decrease instead of min_impurity_split. See the release note here: [http://scikit-learn.org/stable/whats_new.html](http://scikit-learn.org/stable/whats_new.html) | closed | 2017-10-15T00:24:40Z | 2017-11-18T23:34:05Z | https://github.com/sebp/scikit-survival/issues/11 | [] | dfd | 1 |
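Context, hedged: the rename is a scikit-learn 0.19 API change (`min_impurity_split` was superseded by `min_impurity_decrease`), so either pin an older scikit-learn or wait for a scikit-survival update. A version-agnostic sketch of choosing whichever keyword the installed estimator accepts (illustrative only, not the library's actual fix):

```python
import inspect

def impurity_kwarg(estimator_cls, value):
    """Return the impurity keyword this estimator's __init__ accepts."""
    params = inspect.signature(estimator_cls.__init__).parameters
    if "min_impurity_decrease" in params:
        return {"min_impurity_decrease": value}
    return {"min_impurity_split": value}
```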
plotly/dash-core-components | dash | 181 | "RangeError: Source is too large" with repeated re-drawing of Scattergl subplots | I have a Dash app with a number of Scattergl plots, one of which can have tens of subplots each containing hundreds-thousands of points. After re-drawing the plot containing Scattergl subplots a few times, I get the following error from Plotly.js:
```
plotly-1.35.2.min.js:7 Uncaught RangeError: Source is too large
at Float64Array.set (<anonymous>)
at plotly-1.35.2.min.js:7
at Array.forEach (<anonymous>)
at Function.b [as update] (plotly-1.35.2.min.js:7)
at Object.plot (plotly-1.35.2.min.js:7)
at ha (plotly-1.35.2.min.js:7)
at Object.ua.plot (plotly-1.35.2.min.js:7)
at plotly-1.35.2.min.js:7
at Object.oe.syncOrAsync (plotly-1.35.2.min.js:7)
at ay.plot (plotly-1.35.2.min.js:7)
```
Not sure if this is an issue for Plotly.js or for here, but seeing as the version of Plotly.js is managed by dash-core-components and that's the package I'm directly using, I'm submitting it here! | closed | 2018-04-06T18:32:45Z | 2018-10-31T10:37:28Z | https://github.com/plotly/dash-core-components/issues/181 | [] | slishak | 4 |
roboflow/supervision | pytorch | 679 | May I know where I can download tracking_result.mp4? | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
examples/tracking$ python script.py \
--source_weights_path yolov8s.pt \
--source_video_path input.mp4 \
--target_video_path tracking_result.mp4
Traceback (most recent call last):
File "/media/ak/HD/supervision/supervision/examples/tracking/script.py", line 77, in <module>
process_video(
File "/media/ak/HD/supervision/supervision/examples/tracking/script.py", line 22, in process_video
video_info = sv.VideoInfo.from_video_path(video_path=source_video_path)
File "/home/ak/anaconda3/envs/supervision/lib/python3.10/site-packages/supervision/utils/video.py", line 48, in from_video_path
raise Exception(f"Could not open video at {video_path}")
Exception: Could not open video at input.mp4
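Worth noting: `tracking_result.mp4` is the *output* file the script creates; the traceback only says that the source file `input.mp4` does not exist, so any local video passed as `--source_video_path` will do. A small sketch of failing early with a clearer message (the function name is illustrative):

```python
from pathlib import Path

def resolve_source_video(path_str):
    """Return the video path, or raise a descriptive error if it is missing."""
    path = Path(path_str)
    if not path.is_file():
        raise FileNotFoundError(
            f"{path} not found - pass the path to any local video "
            "(e.g. one you recorded yourself)"
        )
    return path
```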
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2023-12-17T15:35:31Z | 2023-12-17T15:42:27Z | https://github.com/roboflow/supervision/issues/679 | [
"bug"
] | AK51 | 0 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,254 | Proposal: Enhance Accessibility with a Gradio Demo Hosted on Huggingface Spaces | Hi,
This repo is awesome ! Congratulations 🙌 on this cool work.
You can also build a Gradio demo for this tool and host it on Huggingface Spaces, which would be very helpful for the open-source community and users.
Some of the benefits of sharing your models through the Hub would be:
- A wider reach of your work to the ecosystem
- Seamless integration with popular libraries and frameworks, enhancing usability
- Real-time feedback and collaboration opportunities with a global community of researchers and developers
A list of useful resources:
- As a reference [Coqui - TTS](https://github.com/coqui-ai/TTS) is a highly popular and trending library for advanced Text-to-Speech generation and voice-cloning. They have a gradio demo linked locally as well as hosted on Spaces and can be accessed here - https://huggingface.co/spaces/coqui/xtts
- You can build a Gradio demo on Spaces following this step-by-step guide - https://huggingface.co/docs/hub/spaces-sdks-gradio
- Huggingface provides GPU grants for exciting demos. Read more [here](https://huggingface.co/docs/hub/spaces-gpus#community-gpu-grants)
- Gradio docs to build your UI - Gradio.app
My team at Gradio and I would be very happy to help with the Gradio demo, the Spaces build, and any issues.
| closed | 2023-09-25T13:17:02Z | 2024-11-04T10:33:39Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1254 | [] | yvrjsharma | 1 |
aleju/imgaug | deep-learning | 735 | Augment image as if somebody took a photo of the same image | We have a situation where we want to distinguish "real" photos from photos taken of other photos.
Wonder if there's a way to simulate taking a photo of a photo. Perhaps even taking photos from monitor screens.
Screen glare/spectral effects/monitor pixel effects/matte effect... etc. All of this could be useful as image augmentations.
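I haven't seen a ready-made augmenter for this, but a couple of the effects above are cheap to approximate with plain NumPy. A rough sketch (parameters are guesses, not tuned values): darken every few columns to mimic an LCD subpixel grid, then add a diagonal glare band:

```python
import numpy as np

def add_screen_artifacts(img, rng=None):
    """Crudely simulate a photo of a screen: subpixel grid + diagonal glare.

    A sketch with guessed parameters, not a validated augmentation.
    """
    rng = np.random.default_rng(rng)
    out = img.astype(np.float32)
    h, w = out.shape[:2]
    # darken every 3rd pixel column slightly, mimicking an LCD subpixel grid
    out[:, ::3] *= 0.85
    # add a soft diagonal glare band at a random offset
    yy, xx = np.mgrid[0:h, 0:w]
    offset = rng.integers(0, h + w)
    band = np.exp(-((xx + yy - offset) ** 2) / (2 * (0.2 * w) ** 2))
    out += 60.0 * (band[..., None] if out.ndim == 3 else band)
    return np.clip(out, 0, 255).astype(np.uint8)
```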
Any ideas? Has anybody worked on anything like this? Perhaps as an academic paper on this. | open | 2020-12-02T23:58:44Z | 2021-11-04T09:47:56Z | https://github.com/aleju/imgaug/issues/735 | [] | CMCDragonkai | 1 |
graphql-python/graphene-sqlalchemy | graphql | 416 | Support for python 3.12 | From the `pkg_resources` docs: [setuptools.pypa.io/en/latest/pkg_resources.html](https://setuptools.pypa.io/en/latest/pkg_resources.html)
> Use of pkg_resources is deprecated in favor of [importlib.resources](https://docs.python.org/3.11/library/importlib.resources.html#module-importlib.resources), [importlib.metadata](https://docs.python.org/3.11/library/importlib.metadata.html#module-importlib.metadata) and their backports ([importlib_resources](https://pypi.org/project/importlib_resources), [importlib_metadata](https://pypi.org/project/importlib_metadata)). Some useful APIs are also provided by [packaging](https://pypi.org/project/packaging) (e.g. requirements and version parsing). Users should refrain from new usage of pkg_resources and should work to port to importlib-based solutions.
Python 3.12 has removed `pkg_resources` from the standard library: [docs.python.org/3/whatsnew/3.12.html](https://docs.python.org/3/whatsnew/3.12.html) and this project makes use of it in the `graphene_sqlalchemy/utils.py` file:
https://github.com/graphql-python/graphene-sqlalchemy/blob/eb9c663cc0e314987397626573e3d2f940bea138/graphene_sqlalchemy/utils.py#L8
https://github.com/graphql-python/graphene-sqlalchemy/blob/eb9c663cc0e314987397626573e3d2f940bea138/graphene_sqlalchemy/utils.py#L23-L34 | closed | 2024-11-14T22:16:45Z | 2024-12-05T12:01:10Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/416 | [] | richin13 | 10 |
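A possible migration sketch (hedged, not the project's actual patch): the `pkg_resources` lookup can be replaced by `importlib.metadata`, which has been in the standard library since Python 3.8:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(dist_name, default="0"):
    """Return the installed version string of a distribution, or `default`."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return default
```

The returned version string can then be compared with `packaging.version.parse`, mirroring what `pkg_resources.parse_version` provided.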
NullArray/AutoSploit | automation | 514 | Unhandled Exception (0a5a42dd0) | Autosploit version: `3.0`
OS information: `Linux-4.19.0-kali1-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/root/Autosploit/Autosploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/root/Autosploit/Autosploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
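The traceback points at the real bug: `Except` is not a Python name, so the error handler in `lib/jsonize.py` raises a `NameError` instead of catching anything. The intended class was presumably `Exception` or something narrower. A hedged sketch of the corrected shape (the function body is illustrative, not AutoSploit's actual code):

```python
import json
import os

def load_exploits(path):
    """Load a JSON exploit list, tolerating a missing or corrupt file.

    Sketch of the corrected handler; the original raised NameError
    because `Except` is not a Python name.
    """
    if not os.path.isfile(path):
        return []
    try:
        with open(path) as fh:
            return json.load(fh)
    except (OSError, ValueError):  # ValueError also covers JSONDecodeError
        return []
```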
| closed | 2019-02-27T11:20:33Z | 2019-03-03T03:31:52Z | https://github.com/NullArray/AutoSploit/issues/514 | [] | AutosploitReporter | 0 |
httpie/cli | rest-api | 1,170 | Possible SQL injection vector through string-based query construction. | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
Possible SQL injection vector through string-based query construction.
Location: superset/tests/integration_tests/csv_upload_tests.py:373
372 .get_sqla_engine()
373 .execute(f"SELECT * from {EXCEL_UPLOAD_TABLE}")
374 .fetchall()
Solution:
Sanitize and control the input data when building such SQL statement strings.
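Since SQL identifiers such as table names cannot be bound as query parameters, the usual mitigation is to validate the name against an allow-list before interpolating it. A minimal sketch using sqlite3 (the table name and allow-list are illustrative, not Superset's actual code):

```python
import sqlite3

ALLOWED_TABLES = {"excel_upload"}  # hypothetical allow-list

def fetch_all_rows(conn, table_name):
    """SELECT * from a table whose name passed an allow-list check."""
    if table_name not in ALLOWED_TABLES:
        raise ValueError(f"unexpected table name: {table_name!r}")
    # safe to interpolate only after validation; quoting guards odd identifiers
    return conn.execute(f'SELECT * FROM "{table_name}"').fetchall()
```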
| closed | 2021-10-01T17:50:01Z | 2021-10-01T17:57:55Z | https://github.com/httpie/cli/issues/1170 | [
"invalid"
] | ByteHackr | 1 |
cle-b/httpdbg | rest-api | 174 | groupby is undefined | httpdbg v0.31.4
```
Uncaught (in promise) TypeError: groupby is undefined
refresh_resquests http://localhost:4909/static/render.js-+-0.31.4:27
enable_refresh http://localhost:4909/static/render.js-+-0.31.4:270
```
The bug happened when an HTTP request was made without using one of the supported HTTP libraries.
Reproduction step:
* `pyhttpdbg -m pip install hookdns -U`

| closed | 2025-01-25T08:12:00Z | 2025-01-26T13:30:31Z | https://github.com/cle-b/httpdbg/issues/174 | [] | cle-b | 1 |
saulpw/visidata | pandas | 1,507 | Date comparison gives wrong result from custom date | **Small description**
I have a sample CSV file (attached) with dates in `%d%m%Y` format (`31012022`).
I converted these to proper dates using `z@`.
Then I tried to find all the dates after July 2021, using the following:
`=date(2021, 7, 1) <= dates`
However, this gives me the _opposite_ boolean values to what I was expecting!
|dates |date\(2021, 7, 1\) <= dates|
|------------|---------------------------|
|2017\-09\-22|True |
|2021\-09\-28|False |
**Expected result**
I expected to get:
|dates |date\(2021, 7, 1\) <= dates|
|------------|---------------------------|
|2017\-09\-22|False |
|2021\-09\-28|True |
**Steps to reproduce with sample data and a .vd**
[test.zip](https://github.com/saulpw/visidata/files/9494126/test.zip)
**Additional context**
Please include the version of VisiData.
saul.pw/VisiData v2.9.1 | closed | 2022-09-06T07:07:32Z | 2022-09-18T07:25:11Z | https://github.com/saulpw/visidata/issues/1507 | [
"bug",
"fixed"
] | daviewales | 2 |
HumanSignal/labelImg | deep-learning | 352 | fixed | fixed
| closed | 2018-08-15T09:17:57Z | 2018-08-16T11:41:29Z | https://github.com/HumanSignal/labelImg/issues/352 | [] | caemor | 0 |
pandas-dev/pandas | data-science | 60,499 | BUG: Presence of `pd.NA` value in `pd.Series` prevents values from being rounded | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.Series([1.123, 2.123, pd.NA]).round(0)
```
### Issue Description
Presence of `pd.NA` value in `pd.Series` prevents values from being rounded.
```
>>> pd.Series([1.123, 2.123, pd.NA]).round(0)
0 1.123
1 2.123
2 <NA>
dtype: object
```
### Expected Behavior
I would expect the same behavior as with `np.nan` values:
```
>>> pd.Series([1.123, 2.123, np.nan]).round(0)
0 1.0
1 2.0
2 NaN
dtype: float64
```
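A workaround sketch until the object-dtype path is addressed: store the values in the nullable `Float64` dtype, whose masked implementation rounds through `pd.NA`:

```python
import pandas as pd

s = pd.Series([1.123, 2.123, pd.NA], dtype="Float64")
rounded = s.round(0)
# rounded is [1.0, 2.0, <NA>] and keeps the Float64 dtype
```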
### Installed Versions
<details>
/Users/<USER>/miniforge3/envs/nlp/lib/python3.9/site-packages/_distutils_hack/__init__.py:33: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
INSTALLED VERSIONS
------------------
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.9.16.final.0
python-bits : 64
OS : Darwin
OS-release : 24.1.0
Version : Darwin Kernel Version 24.1.0: Thu Oct 10 21:05:14 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8103
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 2.2.2
numpy : 1.25.2
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.2.1
Cython : None
pytest : 7.4.0
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.9.3
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.16.1
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.2
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2023.10.0
gcsfs : None
matplotlib : 3.8.0
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 14.0.1
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.11.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
| closed | 2024-12-05T18:29:58Z | 2024-12-05T21:22:53Z | https://github.com/pandas-dev/pandas/issues/60499 | [
"Bug",
"Missing-data",
"Constructors"
] | mijalapenos | 1 |
jschneier/django-storages | django | 1,127 | Support for Oracle Object Storage | Hi All,
Is there any support for Oracle Object Storage? We have a client who is insisting on hosting on Oracle. | closed | 2022-04-11T10:15:14Z | 2025-01-07T07:23:16Z | https://github.com/jschneier/django-storages/issues/1127 | [] | anhilash | 3 |
OthersideAI/self-operating-computer | automation | 123 | Demo test case for issue template | Found a bug? Please fill out the sections below. 👍
### Describe the bug
A clear and concise description of what the bug is.
### Steps to Reproduce
1. (for ex.) went to...
2. clicked on this point
3. not working
### Expected Behavior
A brief description of what you expected to happen.
### Actual Behavior:
what actually happened.
### Environment
- OS:
- Model Used (e.g., GPT-4v, Gemini Pro Vision):
- Framework Version (optional):
### Screenshots
If applicable, add screenshots to help explain your problem.
### Additional context
Add any other context about the problem here. | closed | 2024-01-04T05:31:14Z | 2024-01-04T05:32:15Z | https://github.com/OthersideAI/self-operating-computer/issues/123 | [
"bug"
] | Yash-1511 | 2 |
strawberry-graphql/strawberry | django | 3,506 | Subgraph: "'NoneType' object has no attribute '...'" | When I use subgraphs, I can start the GraphQL server and the docs are generated successfully, but when I execute a query I get the error "'NoneType' object has no attribute '...'".
```python
@strawberry.type
class TestType:
    name: str


@strawberry.type
class TestQuery:
    @strawberry.field()
    async def test() -> TestType:
        return TestType(name="test")


@strawberry.type
class Query:
    sub: TestQuery = strawberry.field(default_factory=TestQuery)


graphql_app = GraphQL(
    strawberry.Schema(
        query=Query
    )
)

# ... other fastapi graphql connection here

# query
query MyQuery {
  sub {
    test {
      name
    }
  }
}
```
## System Information
- Operating system: MacOS
- Strawberry version: 0.229.1 | closed | 2024-05-21T10:25:35Z | 2025-03-20T15:56:44Z | https://github.com/strawberry-graphql/strawberry/issues/3506 | [
"bug"
] | dgram | 4 |
Guovin/iptv-api | api | 207 | Question about mapping directories when running with Docker | The README describes copying the demo file into the Docker container.
Wouldn't using volume mappings be better?
docker run --name tv-requests --restart unless-stopped -d -p 8000:8000 \
-v /root/docker/guovin-tv/config.py:/app/config.py \
-v /root/docker/guovin-tv/demo.txt:/app/demo.txt \
-v /root/docker/guovin-tv/result.txt:/app/result.txt \
guovern/tv-requests
Would this cause any problems?
After it runs successfully, accessing /root/docker/guovin-tv/result.txt gives exactly the list you want, and the container can run on a schedule. Once config.py is set up on the host there is nothing left to manage: just check result.txt periodically, and you can even sync result.txt to a web directory on a schedule, making the whole thing fully automatic.
| closed | 2024-07-12T06:12:08Z | 2024-07-19T08:42:06Z | https://github.com/Guovin/iptv-api/issues/207 | [
"enhancement"
] | vbskycn | 4 |
piskvorky/gensim | nlp | 3,079 | getting disk quota exceeded error while using this ldamallet[doc_term_matrix] | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
What are you trying to achieve? What is the expected result? What are you seeing instead?
#### Steps/code/corpus to reproduce
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
If your problem is with a specific Gensim model (word2vec, lsimodel, doc2vec, fasttext, ldamodel etc), include the following:
```python
print(my_model.lifecycle_events)
```
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
| closed | 2021-03-16T18:10:03Z | 2021-03-16T19:04:12Z | https://github.com/piskvorky/gensim/issues/3079 | [] | BaljinderSmagh | 1 |
graphistry/pygraphistry | pandas | 538 | [BUG] feat and umap cache incomplete runs | **Describe the bug**
When a run is canceled (or raises an exception) during feat/umap, the partial result gets cached for subsequent runs. We expect it not to be cached, i.e., to be recomputed on future runs.
**To Reproduce**
**Expected behavior**
The caching flow should catch the interrupt/exn, ensure no caching, and rethrow.
**Actual behavior**
**Screenshots**
**Browser environment (please complete the following information):**
**Graphistry GPU server environment**
**PyGraphistry API client environment**
**Additional context**
| open | 2024-01-11T10:04:57Z | 2024-01-11T10:05:10Z | https://github.com/graphistry/pygraphistry/issues/538 | [
"bug",
"help wanted",
"good-first-issue"
] | lmeyerov | 0 |
amdegroot/ssd.pytorch | computer-vision | 441 | Precision is too low. Does anyone know why? Thanks. | open | 2019-11-27T03:57:04Z | 2022-02-09T20:34:01Z | https://github.com/amdegroot/ssd.pytorch/issues/441 | [] | jaychou790118 | 4 | |
tflearn/tflearn | tensorflow | 510 | 'module' object has no attribute 'scalar_summary' | I just installed the most recent version of tensorflow and tflearn.
TensorFlow runs fine with its bundled examples and notebooks.
When using TFLearn, however, I got the following error from summaries.py:
File "build/bdist.linux-x86_64/egg/tflearn/summaries.py", line 46, in get_summary
AttributeError: 'module' object has no attribute 'scalar_summary'
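For context (hedged): `tf.scalar_summary` was removed in TensorFlow 1.0 in favor of `tf.summary.scalar`, so a TFLearn build predating that rename fails exactly like this. A compatibility-shim sketch (illustrative, not TFLearn's actual fix):

```python
def get_scalar_summary_fn(tf_module):
    """Return whichever scalar-summary function the installed TF exposes.

    tf.scalar_summary was removed in TensorFlow 1.0 in favor of
    tf.summary.scalar; this shim picks the right one at runtime.
    """
    if hasattr(tf_module, "scalar_summary"):  # TensorFlow < 1.0
        return tf_module.scalar_summary
    return tf_module.summary.scalar           # TensorFlow >= 1.0
```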
The exact output is attached below:
```
~/Documents/tflearn/tutorials/intro$ python quickstart.py
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so.8.0 locally
Downloading Titanic dataset...
Succesfully downloaded titanic_dataset.csv 82865 bytes.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:909] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 745
major: 5 minor: 0 memoryClockRate (GHz) 1.0325
pciBusID 0000:01:00.0
Total memory: 3.95GiB
Free memory: 3.71GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 745, pci bus id: 0000:01:00.0)
Traceback (most recent call last):
File "quickstart.py", line 41, in <module>
model = tflearn.DNN(net)
File "build/bdist.linux-x86_64/egg/tflearn/models/dnn.py", line 63, in __init__
File "build/bdist.linux-x86_64/egg/tflearn/helpers/trainer.py", line 121, in __init__
File "build/bdist.linux-x86_64/egg/tflearn/helpers/trainer.py", line 651, in initialize_training_ops
File "build/bdist.linux-x86_64/egg/tflearn/summaries.py", line 243, in add_loss_summaries
File "build/bdist.linux-x86_64/egg/tflearn/summaries.py", line 46, in get_summary
AttributeError: 'module' object has no attribute 'scalar_summary'
```
| closed | 2016-12-09T05:08:32Z | 2016-12-15T00:08:32Z | https://github.com/tflearn/tflearn/issues/510 | [] | ZejiaZheng | 2 |
microsoft/nni | tensorflow | 5,175 | NNI v3.0 preview iteration plan | Release manager: @liuzhe-lz
Release start date: 10.31
Feature freeze date (at most 30 days): ~~2.10~~ 2.20
Code freeze date & first package: ~~2.17~~ ~~2.24~~ 2.28
Tutorial freeze: ~~2.24~~ 3.3
Release date (3 weeks since feature freeze): ~~3.3~~ 3.10
## Misc ##
* NAS Fine-tune
* HPO integrated to other tool
* Speed up support with Graph (Chunyang)
| closed | 2022-10-21T08:34:22Z | 2023-06-05T01:45:43Z | https://github.com/microsoft/nni/issues/5175 | [
"iteration-plan"
] | scarlett2018 | 13 |
lexiforest/curl_cffi | web-scraping | 23 | name 'CookieConflict' is not defined | Iterating cookies with `{name: value for name, value in session.cookies.items()}` raises `NameError: name 'CookieConflict' is not defined`. The offending library code:

```python
def get(  # type: ignore
    self,
    name: str,
    default: typing.Optional[str] = None,
    domain: typing.Optional[str] = None,
    path: typing.Optional[str] = None,
) -> typing.Optional[str]:
    """
    Get a cookie by name. May optionally include domain and path
    in order to specify exactly which cookie to retrieve.
    """
    value = None
    for cookie in self.jar:
        if cookie.name == name:
            if domain is None or cookie.domain == domain:
                if path is None or cookie.path == path:
                    if value is not None:
                        message = f"Multiple cookies exist with name={name}"
                        raise CookieConflict(message)  # CookieConflict is never defined or imported
                    value = cookie.value
    if value is None:
        return default
    return value
```
| closed | 2023-03-09T01:14:37Z | 2023-08-06T07:25:42Z | https://github.com/lexiforest/curl_cffi/issues/23 | [
"bug"
] | lxjmaster | 1 |
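For illustration (hedged: the names mirror the snippet pasted in the report, not the actual library layout), the failure mode is that an exception class referenced at raise time was never defined or imported; defining it makes the conflict path work:

```python
class CookieConflict(Exception):
    """Raised when more than one cookie matches a lookup."""

def get_cookie(jar, name):
    """Return the unique value for `name`, or None; raise on duplicates."""
    value = None
    for cookie_name, cookie_value in jar:
        if cookie_name == name:
            if value is not None:
                raise CookieConflict(f"Multiple cookies exist with name={name}")
            value = cookie_value
    return value

jar = [("a", "1"), ("b", "2"), ("a", "3")]
print(get_cookie(jar, "b"))  # 2
try:
    get_cookie(jar, "a")
except CookieConflict as e:
    print("conflict:", e)  # conflict: Multiple cookies exist with name=a
```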
huggingface/pytorch-image-models | pytorch | 2,084 | [FEATURE] Add ImageBind | Add Meta's ImageBind
"ImageBind: One Embedding Space To Bind Them All"
https://github.com/facebookresearch/ImageBind
We would implement the embeddings for the image modality. | closed | 2024-01-22T15:11:23Z | 2024-08-20T19:27:04Z | https://github.com/huggingface/pytorch-image-models/issues/2084 | [
"enhancement"
] | raulcarlomagno | 1 |
mirumee/ariadne-codegen | graphql | 41 | Add setting for sync/async client | Currently generated client is asynchronous, but we could have a setting to force client to be sync instead, for scenarios where async is not available, like in Django projects. | closed | 2022-12-02T13:47:21Z | 2022-12-12T14:50:30Z | https://github.com/mirumee/ariadne-codegen/issues/41 | [
"roadmap"
] | rafalp | 0 |
minivision-ai/photo2cartoon | computer-vision | 2 | About backend deployment | I'd like to ask: does your mini-program's backend deployment also use the torch model? Could you briefly share some deployment details? Thanks. | closed | 2020-04-21T07:49:56Z | 2020-04-24T05:06:25Z | https://github.com/minivision-ai/photo2cartoon/issues/2 | [] | xiuxiuxiaodi | 3 |
dgtlmoon/changedetection.io | web-scraping | 2,155 | [feature] ability to easily step backwards or forwards with each compare | **Version and OS**
v0.45.13 on linux/docker
**Is your feature request related to a problem? Please describe.**
While I can choose arbitrary snapshots to compare, typically I only want to compare two adjacent snapshots: for example, the most recent with the second most recent, the second most recent with the third most recent, and so forth.
Currently (unless this is a user error on my part) I have to manually select the snapshots from the dropdown, which works, but it would be much easier with "previous edit" / "next edit" buttons. Wikipedia is a great example of this: while you can compare any arbitrary snapshots, you can also very easily step through each edit:
<img width="888" alt="image" src="https://github.com/dgtlmoon/changedetection.io/assets/107503402/ae516c36-add6-4313-a932-67614b5974a4">
**Describe the solution you'd like**
Add a "previous edit" and "next edit" (where applicable).
**Describe the use-case and give concrete real-world examples**
See description above.
**Additional context**
Would be happy to donate as well if that helps but unfortunately paypal nor BTC are not viable options. | closed | 2024-02-02T19:07:50Z | 2024-06-22T08:01:06Z | https://github.com/dgtlmoon/changedetection.io/issues/2155 | [
"enhancement"
] | drewmmiranda | 8 |
ray-project/ray | deep-learning | 50,803 | [Core] Handle transient network error for pushing object chunks | ### What happened + What you expected to happen
```
void ObjectManager::HandleSendFinished(const ObjectID &object_id,
const NodeID &node_id,
uint64_t chunk_index,
double start_time,
double end_time,
ray::Status status) {
RAY_LOG(DEBUG).WithField(object_id)
<< "HandleSendFinished on " << self_node_id_ << " to " << node_id
<< " of object, chunk " << chunk_index << ", status: " << status.ToString();
if (!status.ok()) {
// TODO(rkn): What do we want to do if the send failed?
RAY_LOG(DEBUG).WithField(object_id).WithField(node_id)
<< "Failed to send a push request for an object to node. Chunk index: "
<< chunk_index;
}
}
```
We should fix this TODO using `retryable_grpc_client`
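A language-agnostic sketch of the retry pattern (hedged: the actual fix would go through Ray's `retryable_grpc_client` in C++; the names and the backoff schedule here are illustrative):

```python
import time

def push_chunk_with_retry(send, chunk, max_attempts=3, base_delay=0.1):
    """Retry a transiently failing send; re-raise after exhausting attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send(chunk)
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up: surface the error instead of swallowing it
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

calls = {"n": 0}
def flaky_send(chunk):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return f"sent:{chunk}"

print(push_chunk_with_retry(flaky_send, "chunk-0"))  # succeeds on the 3rd attempt
```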
### Versions / Dependencies
master
### Reproduction script
N/A
### Issue Severity
None | open | 2025-02-21T20:10:11Z | 2025-02-24T23:12:44Z | https://github.com/ray-project/ray/issues/50803 | [
"bug",
"P1",
"core",
"core-object-store"
] | jjyao | 0 |
widgetti/solara | fastapi | 767 | Split of Reactive variables and publish as standalone library | To have this noted in this repo, for tracking:
> Ideally, solara's reactive variables (which are signals) would be its own library... :)
_Originally posted by @maartenbreddels in https://github.com/projectmesa/mesa/issues/2176#issuecomment-2225465885_
The basic idea is that Solara's hybrid push/pull system has value on its own and could be published as a standalone library.
It doesn't necessarily need a new repo; it can just be published from this repo, and Solara can keep using the git version in this repo. The only difference is that you put it on PyPI as an additional, standalone package.
quokkaproject/quokka | flask | 442 | Quokka is dead, long live the Quokka! (it is being rewritten from scratch) | QuokkaCMS growed in to an amazing project and there are many users using the currrent version.
The problem is that too many feature requests lead in to too many issues to fix and implement.
The lack of contributors lead in to a slow pace development.
So I decided to kill this version! and start a brand new one[1] which is going to be more simple and many things will be moved in to external plugins.
So I'll need to maintain only the small core of the CMS and plugins would be maintained by community.
I am also removing the MongoDB dependency (it will be optional)
If you want to follow the Quokka Rewrite please take a look in to temporary repository [1] and once it is ready to release I'll archive this existing version in another branch and replace `master` branch of this repo with the new CMS.
The existing codebase will be kept in another branch but will not be actively maintained, only hotfixes and community PRs will be accepted, but the evolution of the project will be focused in `quokka new generation`
[1] https://github.com/rochacbruno/quokka_ng/blob/master/README.md
Suggestions please use the issues on the new repo.
Cheers!
Long live the Quokka!!! | closed | 2017-03-15T05:14:42Z | 2018-02-06T13:45:53Z | https://github.com/quokkaproject/quokka/issues/442 | [
"enhancement",
"pinned"
] | rochacbruno | 1 |
plotly/dash | data-science | 2,559 | [Feature Request] Enable running functionality in non-background callbacks | The running keyword in callbacks currently only works when `background=True` is set for callbacks. However, sometimes you have a normal callback, that takes a while to execute, that you don't want to turn in a background callback, but which you do want to have running functionality for.
Currently, I solve this by manually creating a callback loop, i.e. by having an entry-exit callback that disables/enables the button which then triggers a second callback that does the actual calculation. But that is a bit cumbersome to set up every time, since you have to be careful to make sure that the loop properly closes. For example, if you run the same calculation twice consecutively, the exit callback might not be triggered the second time because the calculation return data didn't change compared to the first run and therefore Dash won't update the component that triggers the loop exit condition. (The solution is to add extra data that will always update, e.g. a calculation counter)
Having this functionality enabled via the running keyword that already works for background callbacks would be amazing.
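The two-callback loop described above can be modeled without Dash (hedged: the state dict and function names are illustrative, not Dash APIs) to show why the always-updating counter guarantees the exit condition fires even when the result is unchanged:

```python
def make_loop():
    state = {"disabled": False, "counter": 0, "result": None}

    def entry_exit(trigger):
        # Mirrors a single Dash callback with two inputs: the button click
        # (disable) and the counter output of the calculation (re-enable).
        if trigger == "click":
            state["disabled"] = True
        elif trigger == "counter":
            state["disabled"] = False

    def long_calculation(data):
        state["result"] = sum(data)
        state["counter"] += 1   # always changes, so the exit always triggers
        entry_exit("counter")   # in Dash, the counter Output fires the callback

    entry_exit("click")
    assert state["disabled"]
    long_calculation([1, 2, 3])
    return state

print(make_loop())  # {'disabled': False, 'counter': 1, 'result': 6}
```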
| closed | 2023-06-06T08:22:32Z | 2024-02-28T21:03:49Z | https://github.com/plotly/dash/issues/2559 | [] | aGitForEveryone | 0 |
keras-team/keras | data-science | 20,363 | How can I import a CRF layer in the latest version of Keras 3? | closed | 2024-10-16T13:55:37Z | 2024-10-18T20:52:38Z | https://github.com/keras-team/keras/issues/20363 | [
"type:support",
"stat:awaiting response from contributor"
] | Und3r1ine | 3 | |
netbox-community/netbox | django | 18,652 | Run periodic GitHub actions on upstream repository only | ### NetBox version
v4.2.3
### Feature type
Change to existing functionality
### Proposed functionality
The official NetBox repository contains actions to perform certain tasks periodically, such as updating the translations. I suggest restricting these periodic actions to using the official repository only:
```yaml
if: github.repository == 'netbox-community/netbox'
```
An alternative might be to check if the required authentication tokens are available for those cronjob actions, so that forks can still use those actions.
### Use case
Actions should be enabled in forks to run test cases. While this is beneficial, periodic actions often send emails, e.g. about missing tokens. Restricting these actions to NetBox upstream would concentrate the actions where they are needed and avoid error messages in forks.
### Database changes
None
### External dependencies
None | open | 2025-02-16T19:20:39Z | 2025-02-17T15:22:42Z | https://github.com/netbox-community/netbox/issues/18652 | [
"status: accepted",
"type: housekeeping"
] | alehaa | 0 |
dpgaspar/Flask-AppBuilder | rest-api | 1,568 | Issues with ActiveDirectory as AUTH_LDAP | ### Environment
Flask-Appbuilder version: 3.1.1
pip freeze output:
```bash
aiohttp==3.7.3
alembic==1.5.4
apache-airflow==2.0.1
apache-airflow-providers-amazon==1.1.0
apache-airflow-providers-cncf-kubernetes==1.0.1
apache-airflow-providers-ftp==1.0.1
apache-airflow-providers-http==1.1.0
apache-airflow-providers-imap==1.0.1
apache-airflow-providers-postgres==1.0.1
apache-airflow-providers-sftp==1.1.0
apache-airflow-providers-slack==2.0.0
apache-airflow-providers-sqlite==1.0.1
apache-airflow-providers-ssh==1.1.0
apispec==3.3.2
appdirs==1.4.4
argcomplete==1.12.2
async-timeout==3.0.1
attrs==20.3.0
Babel==2.9.0
bcrypt==3.2.0
blinker==1.4
boto3==1.15.18
botocore==1.18.18
cached-property==1.5.2
cachetools==4.2.1
cattrs==1.0.0
certifi==2020.12.5
cffi==1.14.4
chardet==3.0.4
click==7.1.2
clickclick==20.10.2
colorama==0.4.4
colorlog==4.7.2
commonmark==0.9.1
connexion==2.7.0
croniter==0.3.37
cryptography==3.4.3
dataclasses==0.7
defusedxml==0.6.0
dill==0.3.3
distlib==0.3.1
dnspython==1.16.0
docutils==0.16
email-validator==1.1.2
eventlet==0.30.1
filelock==3.0.12
Flask==1.1.2
Flask-AppBuilder==3.1.1
Flask-Babel==1.0.0
Flask-Caching==1.9.0
Flask-JWT-Extended==3.25.0
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.4
Flask-WTF==0.14.3
gevent==21.1.2
google-auth==1.25.0
graphviz==0.16
greenlet==1.0.0
gunicorn==19.10.0
idna==2.10
idna-ssl==1.1.0
importlib-metadata==1.7.0
importlib-resources==1.5.0
inflection==0.5.1
iso8601==0.1.14
itsdangerous==1.1.0
Jinja2==2.11.3
jmespath==0.10.0
jsonschema==3.2.0
kubernetes==11.0.0
lazy-object-proxy==1.4.3
ldap3==2.9
lockfile==0.12.2
Mako==1.1.4
Markdown==3.3.3
MarkupSafe==1.1.1
marshmallow==3.10.0
marshmallow-enum==1.5.1
marshmallow-oneofschema==2.1.0
marshmallow-sqlalchemy==0.23.1
multidict==5.1.0
natsort==7.1.1
numpy==1.19.5
oauthlib==2.1.0
openapi-spec-validator==0.2.9
pandas==1.1.5
paramiko==2.7.2
pendulum==2.1.2
pep562==1.0
prison==0.1.3
psutil==5.8.0
psycopg2-binary==2.8.6
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
Pygments==2.7.4
PyJWT==1.7.1
PyNaCl==1.4.0
pyrsistent==0.17.3
pysftp==0.2.9
python-daemon==2.2.4
python-dateutil==2.8.1
python-editor==1.0.4
python-ldap==3.3.1
python-nvd3==0.15.0
python-slugify==4.0.1
python3-openid==3.2.0
pytz==2020.5
pytzdata==2020.1
PyYAML==5.4.1
requests==2.25.1
requests-oauthlib==1.1.0
rich==9.2.0
rsa==4.7
s3transfer==0.3.4
sentry-sdk==0.19.5
setproctitle==1.2.2
six==1.15.0
slack-sdk==3.3.2
slackclient==2.9.3
SQLAlchemy==1.3.23
SQLAlchemy-JSONField==1.0.0
SQLAlchemy-Utils==0.36.8
sshtunnel==0.1.5
statsd==3.3.0
swagger-ui-bundle==0.0.8
tabulate==0.8.7
tenacity==6.2.0
termcolor==1.1.0
text-unidecode==1.3
typing==3.7.4.3
typing-extensions==3.7.4.3
unicodecsv==0.14.1
urllib3==1.25.11
virtualenv==20.4.2
watchtower==0.7.3
websocket-client==0.57.0
Werkzeug==1.0.1
WTForms==2.3.3
yarl==1.6.3
zipp==3.4.0
zope.event==4.5.0
zope.interface==5.2.0
```
### Describe the expected results
I'm not able to make it work somehow.
For Airflow 2.0.1, I'm using the configuration below in webserver_config.py:
```python
import os
from flask_appbuilder.security.manager import AUTH_LDAP
from airflow import configuration as conf
SQLALCHEMY_DATABASE_URI = conf.get("core", "SQL_ALCHEMY_CONN")
AUTH_TYPE = AUTH_LDAP
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = 'Admin'
AUTH_LDAP_SERVER = 'ldap://192.168.0.20:389'
AUTH_LDAP_USE_TLS = False
AUTH_LDAP_BIND_USER = 'cn=apache.airflow,cn=Users,dc=ad,dc=example,dc=com'
AUTH_LDAP_BIND_PASSWORD = 'StrongOne'
AUTH_LDAP_SEARCH = 'CN=Users,DC=ad,DC=example,DC=com'
AUTH_LDAP_SEARCH_FILTER = '&(cn=*)'
AUTH_LDAP_APPEND_DOMAIN = 'ad.example.com'
AUTH_LDAP_UID_FIELD = 'userPrincipalName'
```
Whenever I try to log in with the above config, I get the error `[2021-02-11 10:18:40,251] {manager.py:969} ERROR - {'result': -7, 'desc': 'Bad search filter', 'ctrls': []}`.
I also used `sAMAccountName` for **AUTH_LDAP_UID_FIELD**, but no luck.
For the same LDAP, the config below worked with Airflow 1.10.x.
```bash
AIRFLOW__LDAP__BASEDN = "DC=ad,DC=example,DC=com"
AIRFLOW__LDAP__BIND_PASSWORD = "StrongOne"
AIRFLOW__LDAP__BIND_USER = "cn=apache.airflow,cn=Users,dc=ad,dc=example,dc=com"
AIRFLOW__LDAP__IGNORE_MALFORMED_SCHEMA = "True"
AIRFLOW__LDAP__SEARCH_SCOPE = "SUBTREE"
AIRFLOW__LDAP__URI = 'ldap://192.168.0.20:389'
AIRFLOW__LDAP__USER_FILTER = "&(cn=*)"
AIRFLOW__LDAP__USER_NAME_ATTR = "sAMAccountName"
AIRFLOW__WEBSERVER__AUTH_BACKEND = "airflow.contrib.auth.backends.ldap_auth"
```
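One plausible cause of the `Bad search filter` error above — an assumption on my part, not something confirmed in this thread — is that `&(cn=*)` is not a valid RFC 4515 filter: search filters must be fully parenthesized, e.g. `(&(cn=*))`, and Airflow 1.10's LDAP backend may simply have handled the bare form differently. A tiny helper illustrates the normalization:

```python
def normalize_ldap_filter(f: str) -> str:
    """Wrap a bare AND/OR/NOT filter in parentheses, as RFC 4515 requires."""
    f = f.strip()
    return f if f.startswith("(") else f"({f})"

print(normalize_ldap_filter("&(cn=*)"))    # (&(cn=*))
print(normalize_ldap_filter("(&(cn=*))"))  # unchanged: (&(cn=*))
```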
Any help would be appreciated.
Thanks.
-Ajit
| closed | 2021-02-16T15:08:48Z | 2021-06-29T00:56:38Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1568 | [
"question",
"stale"
] | a8j8i8t8 | 2 |
STVIR/pysot | computer-vision | 590 | Convert VidVRD JSON or XML to YOLO format | Hello, does anyone have a script for converting the ImageNet VID dataset annotations to YOLO format? | open | 2023-04-07T05:35:43Z | 2023-04-07T05:35:43Z | https://github.com/STVIR/pysot/issues/590 | [] | b3r-prog | 0 |
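Not an official script, but a minimal hedged sketch of the usual conversion: Pascal VOC-style XML (the annotation format ImageNet VID uses) to YOLO's normalized `class cx cy w h` label lines. The sample XML and class map are illustrative:

```python
import xml.etree.ElementTree as ET

XML = """<annotation><size><width>200</width><height>400</height></size>
<object><name>car</name><bndbox><xmin>50</xmin><ymin>100</ymin>
<xmax>150</xmax><ymax>300</ymax></bndbox></object></annotation>"""

def voc_to_yolo_lines(xml_text, class_ids):
    """Return YOLO label lines: class id plus normalized center/size."""
    root = ET.fromstring(xml_text)
    w = int(root.findtext("size/width"))
    h = int(root.findtext("size/height"))
    lines = []
    for obj in root.iter("object"):
        b = obj.find("bndbox")
        xmin, ymin = int(b.findtext("xmin")), int(b.findtext("ymin"))
        xmax, ymax = int(b.findtext("xmax")), int(b.findtext("ymax"))
        cx = (xmin + xmax) / 2 / w   # normalized box center x
        cy = (ymin + ymax) / 2 / h   # normalized box center y
        bw = (xmax - xmin) / w       # normalized box width
        bh = (ymax - ymin) / h       # normalized box height
        lines.append(f"{class_ids[obj.findtext('name')]} {cx} {cy} {bw} {bh}")
    return lines

print(voc_to_yolo_lines(XML, {"car": 0}))  # ['0 0.5 0.5 0.5 0.5']
```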
jupyter/nbviewer | jupyter | 941 | 400 Bad Request : Error Reading JSON Notebook |
The error occurred while submitting the URL for a Jupyter Notebook on IBM Watson Studio.
The URL is https://dataplatform.cloud.ibm.com/analytics/notebooks/v2/5326a91a-2053-492d-8654-49f52ed917b4?projectid=c2497cd9-ea82-4f48-8f7d-45f7bdaf324e&projectTitle=Course%209%20-%20Applied%20Data%20Science%20Capstone&context=cpdaas | closed | 2020-06-26T20:30:47Z | 2022-07-21T09:00:31Z | https://github.com/jupyter/nbviewer/issues/941 | [] | LLLichtenstein | 3 |
MaartenGr/BERTopic | nlp | 2,012 | Extending ".visualize_document_datamap" with "label_over_points" flag | Hey @MaartenGr! Great work on this library. I recently came across a piece of underlying DataMapPlot functionality that is not yet exposed in the BERTopic package. I assume this relates to an updated DataMapPlot version.
In detail: The DataMapPlot-lib allows me to assign the labels on top of the clusters instead of using "straight line pointers". This may be helpful to handle large visualizations with many clusters to ensure readability. I was not able to pass the "label_over_points" flag to the "visualize_document_datamap". The docs for the according DataMapPlot section can be found here: https://datamapplot.readthedocs.io/en/latest/label_over_points.html
My current workaround is to use the most recent version of DataMapPlot and pass in the UMAP-Embeddings, topic labels and desired DataMapPlot-flags directly. Therefore I avoid using the ".visualize_document_datamap"-Method on my topic_model.
```python
import datamapplot
import matplotlib.pyplot as plt  # needed for plt.show()

# umap_embeddings and labels come from the fitted topic model (see above)
fig, ax = datamapplot.create_plot(umap_embeddings, labels, label_over_points=True)
plt.show()
```
A manual update of the underlying DataMapPlot package via pip, however, seems to solve the issue for me. You might want to have a look into that. Thanks in advance!
| open | 2024-05-25T09:22:19Z | 2024-06-03T12:13:27Z | https://github.com/MaartenGr/BERTopic/issues/2012 | [] | PYaDo | 1 |
iterative/dvc | data-science | 10,402 | `dvc data status` key error | # Bug Report
Running `dvc data status` gives me an error on one machine but not on another:
<details>
<summary>Error</summary>
```
dvc data status -v 10:23:23
2024-04-24 10:23:26,182 DEBUG: v3.50.0 (pip), CPython 3.12.2 on Linux-6.5.0-28-generic-x86_64-with-glibc2.35
2024-04-24 10:23:26,182 DEBUG: command: /home/matt/workspace/virtual_environments/py-3.12/bin/dvc data status -v
2024-04-24 10:23:26,494 ERROR: unexpected error - b'2c6373811567f2b2023f065fb5a333fdeefd54bb'
Traceback (most recent call last):
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dvc/cli/__init__.py", line 211, in main
ret = cmd.do_run()
^^^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dvc/cli/command.py", line 27, in do_run
return self.run()
^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dvc/commands/data.py", line 110, in run
status = self.repo.data_status(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dvc/repo/data.py", line 234, in status
git_info = _git_info(repo.scm, untracked_files=untracked_files)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dvc/repo/data.py", line 141, in _git_info
staged, unstaged, untracked = scm.status(untracked_files=untracked_files)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/scmrepo/git/__init__.py", line 307, in _backend_func
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/scmrepo/git/backend/dulwich/__init__.py", line 880, in status
staged, unstaged, untracked = git_status(
^^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dulwich/porcelain.py", line 1318, in status
tracked_changes = get_tree_changes(r)
^^^^^^^^^^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dulwich/porcelain.py", line 1456, in get_tree_changes
for change in index.changes_from_tree(r.object_store, tree_id):
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dulwich/index.py", line 553, in changes_from_tree
yield from changes_from_tree(
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dulwich/index.py", line 657, in changes_from_tree
for name, mode, sha in iter_tree_contents(object_store, tree):
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dulwich/object_store.py", line 1745, in iter_tree_contents
tree = store[entry.sha]
~~~~~^^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dulwich/object_store.py", line 154, in __getitem__
type_num, uncomp = self.get_raw(sha1)
^^^^^^^^^^^^^^^^^^
File "/home/matt/workspace/virtual_environments/py-3.12/lib/python3.12/site-packages/dulwich/object_store.py", line 601, in get_raw
raise KeyError(hexsha)
KeyError: b'2c6373811567f2b2023f065fb5a333fdeefd54bb'
2024-04-24 10:23:26,517 DEBUG: link type reflink is not available ([Errno 95] no more link types left to try out)
2024-04-24 10:23:26,517 DEBUG: Removing '/home/matt/workspace/HA/OD-Stuff/.pQ5HNP36nqZbOfuABjeJDA.tmp'
2024-04-24 10:23:26,517 DEBUG: Removing '/home/matt/workspace/HA/OD-Stuff/.pQ5HNP36nqZbOfuABjeJDA.tmp'
2024-04-24 10:23:26,517 DEBUG: Removing '/home/matt/workspace/HA/OD-Stuff/.pQ5HNP36nqZbOfuABjeJDA.tmp'
2024-04-24 10:23:26,517 DEBUG: Removing '/home/matt/workspace/HA/OD-Stuff/ha_gym/.dvc/.cache/files/md5/.cOGbTVUVskCrA6vs0mD64Q.tmp'
2024-04-24 10:23:26,525 DEBUG: Version info for developers:
DVC version: 3.50.0 (pip)
-------------------------
Platform: Python 3.12.2 on Linux-6.5.0-28-generic-x86_64-with-glibc2.35
Subprojects:
dvc_data = 3.15.1
dvc_objects = 5.0.0
dvc_render = 1.0.1
dvc_task = 0.3.0
scmrepo = 3.1.0
Supports:
http (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
s3 (s3fs = 2024.2.0, boto3 = 1.34.34)
Config:
Global: /home/matt/.config/dvc
System: /etc/xdg/xdg-ubuntu/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/nvme1n1p3
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/nvme1n1p3
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/b11a8fb5114eb46d6400fbaefadf5890
Having any troubles? Hit us up at https://dvc.org/support, we are always happy to help!
2024-04-24 10:23:26,527 DEBUG: Analytics is enabled.
2024-04-24 10:23:26,550 DEBUG: Trying to spawn ['daemon', 'analytics', '/tmp/tmp3i_1azl0', '-v']
2024-04-24 10:23:26,557 DEBUG: Spawned ['daemon', 'analytics', '/tmp/tmp3i_1azl0', '-v'] with pid 141995
```
</details>
## Description
This seems to be related to the untracked changes I have in my working directory. However, the other machine, where this command works, also has many untracked changes. `dvc status` still works.
### Reproduce
I'm not sure how to reproduce this issue.
### Expected
On the other machine the same command outputs `No changes.`.
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.50.0 (pip)
-------------------------
Platform: Python 3.12.2 on Linux-6.5.0-28-generic-x86_64-with-glibc2.35
Subprojects:
dvc_data = 3.15.1
dvc_objects = 5.0.0
dvc_render = 1.0.1
dvc_task = 0.3.0
scmrepo = 3.1.0
Supports:
http (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
s3 (s3fs = 2024.2.0, boto3 = 1.34.34)
Config:
Global: /home/matt/.config/dvc
System: /etc/xdg/xdg-ubuntu/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/nvme1n1p3
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/nvme1n1p3
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/b11a8fb5114eb46d6400fbaefadf5890
```
| closed | 2024-04-24T09:29:55Z | 2024-04-24T11:55:26Z | https://github.com/iterative/dvc/issues/10402 | [] | mattangus | 1 |
matplotlib/matplotlib | data-visualization | 29,049 | [Bug]: matplotlib.axes.Axes.bar_label covered axis text. | ### Bug summary
When using `bar_label(..., label_type='edge', ...)` on a bar whose value is below 0, the label covers the y-axis.
### Code for reproduction
```Python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize = (10, 2))
barh = ax.barh(y = 'test1', width = 0.5, color = 'red')
ax.bar_label(barh, labels = [f"{0.5:.3f}"], label_type = 'edge', color = '#000000')
barh = ax.barh(y = 'test2', width = -0.5, color = 'blue')
ax.bar_label(barh, labels = [f"{-0.5:.3f}"], label_type = 'edge', color = '#000000')
```
### Actual outcome

### Expected outcome
The label should not overwrite the y-axis.
### Additional information
_No response_
### Operating system
WSL 2
### Matplotlib Version
3.9.2
### Matplotlib Backend
module://matplotlib_inline.backend_inline
### Python version
3.12.4
### Jupyter version
jupyter notebook version: 4.0.5
### Installation
pip | closed | 2024-10-31T10:27:44Z | 2024-10-31T19:42:41Z | https://github.com/matplotlib/matplotlib/issues/29049 | [
"status: duplicate"
] | JamesLee-Datadriven | 1 |
hankcs/HanLP | nlp | 1,003 | UnicodeEncodeError when CRF-segmenting text that contains emoji | ## Checklist
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
  - [Main README](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that an open-source community is a voluntary community of enthusiasts that assumes no responsibility or obligation. I will be polite and thank everyone who helps me.
* [ ] I put an x inside these brackets to confirm all of the above.
## Version
The current latest version is: hanlp-1.6.3.jar
The version I am using is: pyhanlp
The environment is Python 3.6.
## My question
text = '主动提出拿宝宝椅,👍。十块钱😄'
seg_list = CRFnewSegment.seg(text)
seg_w_list = [term.word for term in seg_list]
comment_seg = ' '.join(seg_w_list)
print (comment_seg)
This raises the following error: UnicodeEncodeError: 'utf-8' codec can't encode character '\ud83d' in position 39: surrogates not allowed
HanLP.segment(text) works fine, however, and after removing 👍 and 😄, CRF works fine as well.
## Reproducing the issue
### Triggering code
```
text = '主动提出拿宝宝椅,👍。十块钱😄'
seg_list = CRFnewSegment.seg(text)
seg_w_list = [term.word for term in seg_list]
comment_seg = ' '.join(seg_w_list)
print (comment_seg)
```
### Actual output
```
UnicodeEncodeError: 'utf-8' codec can't encode character '\ud83d' in position 39: surrogates not allowed
```
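A hedged illustration of the failure mode (my assumption, not a confirmed diagnosis): astral-plane characters such as 😄 are UTF-16 surrogate pairs on the Java side, and a surrogate that leaks into a Python str cannot be encoded as UTF-8 without the `surrogatepass` error handler:

```python
s = "\ud83d"  # lone high surrogate, as can leak across a Java/Python bridge
try:
    s.encode("utf-8")
except UnicodeEncodeError as e:
    print("fails:", e.reason)  # fails: surrogates not allowed

# Round-tripping a full surrogate pair with surrogatepass restores the emoji:
pair = "\ud83d\ude04"
fixed = pair.encode("utf-16", "surrogatepass").decode("utf-16")
print(fixed == "\U0001F604")  # True
```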
| closed | 2018-10-19T13:47:14Z | 2020-01-01T10:56:17Z | https://github.com/hankcs/HanLP/issues/1003 | [
"ignored"
] | MingleiLI | 2 |
FlareSolverr/FlareSolverr | api | 1,038 | Error solving challenge (Cloudflare verify checkbox not found on the page) | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.13
- Last working FlareSolverr version: 3.3.9
- Operating system: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
- Are you using Docker: yes
- FlareSolverr User-Agent (see log traces or / endpoint): Chrome / Chromium major version: 120
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- If using captcha solver, which one:
- URL to test this issue: https://yenikoymotors.sahibinden.com
```
### Description
It seems like FlareSolverr has not been detecting the checkbox for a while.
### Logged Error Messages
```text
2024-01-16 13:13:24 INFO ReqId 140017019905856 FlareSolverr 3.3.13
2024-01-16 13:13:24 DEBUG ReqId 140017019905856 Debug log enabled
2024-01-16 13:13:24 INFO ReqId 140017019905856 Testing web browser installation...
2024-01-16 13:13:24 INFO ReqId 140017019905856 Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
2024-01-16 13:13:24 INFO ReqId 140017019905856 Chrome / Chromium path: /usr/bin/chromium
2024-01-16 13:13:24 INFO ReqId 140017019905856 Chrome / Chromium major version: 120
2024-01-16 13:13:24 INFO ReqId 140017019905856 Launching web browser...
2024-01-16 13:13:24 DEBUG ReqId 140017019905856 Launching web browser...
version_main cannot be converted to an integer
2024-01-16 13:13:24 DEBUG ReqId 140017019905856 Started executable: `/app/chromedriver` in a child process with pid: 32
2024-01-16 13:13:26 INFO ReqId 140017019905856 FlareSolverr User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
2024-01-16 13:13:26 INFO ReqId 140017019905856 Test successful!
2024-01-16 13:13:26 INFO ReqId 140017019905856 Serving on http://0.0.0.0:8191
2024-01-16 13:14:14 INFO ReqId 140016987346688 Incoming request => POST /v1 body: {'cmd': 'sessions.create'}
2024-01-16 13:14:14 DEBUG ReqId 140016987346688 Creating new session...
2024-01-16 13:14:14 DEBUG ReqId 140016987346688 Launching web browser...
version_main cannot be converted to an integer
2024-01-16 13:14:14 DEBUG ReqId 140016987346688 Started executable: `/app/chromedriver` in a child process with pid: 162
2024-01-16 13:14:15 DEBUG ReqId 140016987346688 Response => POST /v1 body: {'status': 'ok', 'message': 'Session created successfully.', 'session': '2529fd52-b471-11ee-8cd8-02420a000009', 'startTimestamp': 1705410854286, 'endTimestamp': 1705410855283, 'version': '3.3.13'}
2024-01-16 13:14:15 INFO ReqId 140016987346688 Response in 0.997 s
2024-01-16 13:14:15 INFO ReqId 140016987346688 10.0.0.2 POST http://31.155.90.12:8191/v1 200 OK
2024-01-16 13:14:15 INFO ReqId 140016978953984 Incoming request => POST /v1 body: {'cmd': 'request.get', 'session': '2529fd52-b471-11ee-8cd8-02420a000009', 'url': 'https://yenikoymotors.sahibinden.com', 'maxTimeout': 80000}
2024-01-16 13:14:15 DEBUG ReqId 140016978953984 existing session is used to perform the request (session_id=2529fd52-b471-11ee-8cd8-02420a000009, lifetime=0:00:00.033919, ttl=None)
2024-01-16 13:14:15 DEBUG ReqId 140016953775872 Navigating to... https://yenikoymotors.sahibinden.com
2024-01-16 13:14:17 INFO ReqId 140016953775872 Challenge detected. Title found: Just a moment...
2024-01-16 13:14:17 DEBUG ReqId 140016953775872 Waiting for title (attempt 1): Just a moment...
2024-01-16 13:14:18 DEBUG ReqId 140016953775872 Timeout waiting for selector
2024-01-16 13:14:18 DEBUG ReqId 140016953775872 Try to find the Cloudflare verify checkbox...
2024-01-16 13:14:18 DEBUG ReqId 140016953775872 Cloudflare verify checkbox not found on the page.
2024-01-16 13:14:18 DEBUG ReqId 140016953775872 Try to find the Cloudflare 'Verify you are human' button...
2024-01-16 13:14:18 DEBUG ReqId 140016953775872 The Cloudflare 'Verify you are human' button not found on the page.
2024-01-16 13:14:20 DEBUG ReqId 140016953775872 Waiting for title (attempt 2): Just a moment...
2024-01-16 13:14:21 DEBUG ReqId 140016953775872 Timeout waiting for selector
2024-01-16 13:14:21 DEBUG ReqId 140016953775872 Try to find the Cloudflare verify checkbox...
2024-01-16 13:14:22 DEBUG ReqId 140016953775872 Cloudflare verify checkbox found and clicked!
2024-01-16 13:14:22 DEBUG ReqId 140016953775872 Try to find the Cloudflare 'Verify you are human' button...
2024-01-16 13:14:22 DEBUG ReqId 140016953775872 The Cloudflare 'Verify you are human' button not found on the page.
2024-01-16 13:14:24 DEBUG ReqId 140016953775872 Waiting for title (attempt 3): Just a moment...
2024-01-16 13:14:25 DEBUG ReqId 140016953775872 Timeout waiting for selector
2024-01-16 13:14:25 DEBUG ReqId 140016953775872 Try to find the Cloudflare verify checkbox...
2024-01-16 13:14:25 DEBUG ReqId 140016953775872 Cloudflare verify checkbox not found on the page.
2024-01-16 13:14:25 DEBUG ReqId 140016953775872 Try to find the Cloudflare 'Verify you are human' button...
2024-01-16 13:14:25 DEBUG ReqId 140016953775872 The Cloudflare 'Verify you are human' button not found on the page.
2024-01-16 13:14:27 DEBUG ReqId 140016953775872 Waiting for title (attempt 4): Just a moment...
2024-01-16 13:14:28 DEBUG ReqId 140016953775872 Timeout waiting for selector
2024-01-16 13:14:28 DEBUG ReqId 140016953775872 Try to find the Cloudflare verify checkbox...
2024-01-16 13:14:29 DEBUG ReqId 140016953775872 Cloudflare verify checkbox not found on the page.
2024-01-16 13:14:29 DEBUG ReqId 140016953775872 Try to find the Cloudflare 'Verify you are human' button...
2024-01-16 13:14:29 DEBUG ReqId 140016953775872 The Cloudflare 'Verify you are human' button not found on the page.
```
### Screenshots
_No response_ | closed | 2024-01-17T07:03:45Z | 2024-01-17T22:56:03Z | https://github.com/FlareSolverr/FlareSolverr/issues/1038 | [
"duplicate"
] | ahmetzrnz | 1 |
Buuntu/fastapi-react | sqlalchemy | 18 | Add Prettier and Black to Github Actions | Should be run either as pre-commit hooks or on Github actions as a check (preferably both). | closed | 2020-05-21T05:43:52Z | 2020-07-29T03:19:47Z | https://github.com/Buuntu/fastapi-react/issues/18 | [] | Buuntu | 0 |
sqlalchemy/alembic | sqlalchemy | 357 | alembic-script unquoted executable path causes failure to run on windows. | **Migrated issue, originally created by khazhyk ([@khazhyk](https://github.com/khazhyk))**
I'm on Windows 10.
I noticed that after installing alembic through pip (version 0.8.4 as of now), trying to run the 'alembic' command gave an error: 'failed to create process.'
^ for anyone googling.
In alembic-script.py the first line is as such when it is installed:
`#!c:\program files (x86)\python35-32\python.exe`
This results in trying to invoke "C:\program" as the python interpreter. When the path is quoted and the line is changed to
`#!"c:\program files (x86)\python35-32\python.exe"`
Alembic functions as expected.
| closed | 2016-02-10T03:36:38Z | 2016-02-10T14:48:38Z | https://github.com/sqlalchemy/alembic/issues/357 | [
"bug",
"installation"
] | sqlalchemy-bot | 3 |
ufoym/deepo | jupyter | 26 | How can I get image with cuda 8.0 and cudnn v6.0 | This Docker image is excellent work. Thank you very much for the work.
Can I get an older image with CUDA 8.0 and cuDNN v6.0? | closed | 2018-03-20T00:03:10Z | 2018-04-11T14:10:29Z | https://github.com/ufoym/deepo/issues/26 | [] | dfayzur | 1
deeppavlov/DeepPavlov | nlp | 1,205 | How to reproduce UD dependency parsing result? | I can't reproduce dependency parsing metrics (UAS=95.2, LAS=93.7) mentioned in the bottom of this page http://docs.deeppavlov.ai/en/master/features/models/syntaxparser.html
I use following script with Universal Dependencies 2.3 and conll18_ud_eval.py.
```
import io
import sys

from deeppavlov import build_model, configs

model = build_model(configs.syntax.syntax_ru_syntagrus_bert, download=True)

sentences = []
for line in io.open(sys.argv[1], 'r', encoding='utf-8').readlines():
    if line.startswith('# text = '):
        sentences.append(line[len('# text = '):])  # slice off the whole prefix (line[8:] kept a leading space)

for sent in sentences:
    for parse in model([sent]):
        parse = parse.replace('\t\'\'\t', '\t"\t').replace('\t``\t', '\t"\t')
        print(parse, end="\n")
```
Below is the result I get for ud-treebanks-v2.3/UD_Russian-SynTagRus/ru_syntagrus-ud-test.conllu input.
```
Metric | Precision | Recall | F1 Score | AligndAcc
-----------+-----------+-----------+-----------+-----------
UAS | 94.11 | 93.26 | 93.68 | 94.75
LAS | 92.78 | 91.95 | 92.36 | 93.41
```
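As a sanity check independent of `conll18_ud_eval.py`, the attachment scores can be computed by hand when gold and predicted tokenizations line up one-to-one (a simplifying assumption — the official script also aligns differing tokenizations, which is why it reports precision and recall):

```python
def uas_las(gold, pred):
    """Toy UAS/LAS on pre-aligned tokens.

    gold/pred: equal-length lists of (head, deprel) pairs, one per token.
    UAS counts tokens whose head matches; LAS additionally requires the
    dependency label to match.
    """
    assert len(gold) == len(pred) and gold, "tokenizations must align"
    uas_hits = sum(gh == ph for (gh, _), (ph, _) in zip(gold, pred))
    las_hits = sum(g == p for g, p in zip(gold, pred))
    n = len(gold)
    return 100.0 * uas_hits / n, 100.0 * las_hits / n


# One 3-token sentence: heads all correct, label wrong on token 3.
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "obl")]
uas, las = uas_las(gold, pred)
```

If hand-computed scores on a few sentences match the official script, the gap to the published numbers more likely comes from preprocessing (e.g. feeding the raw `# text = ...` lines instead of the gold tokenization) than from the metric itself.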
What am I doing wrong? | closed | 2020-05-08T14:16:32Z | 2020-05-14T07:54:19Z | https://github.com/deeppavlov/DeepPavlov/issues/1205 | [] | victorbocharov | 4 |
ipython/ipython | jupyter | 14,018 | IPython in jupyter only accept text input | Although this issue is well known and has a long history, I'm interested in understanding what exactly it is related to.
Specifically, this happens when running an IPython shell or similar nested in a Jupyter console, for example by using `IPython.embed` or entering ipdb with `%debug`. The nested console will then only accept plain text input, and features such as history (where arrow keys generate escape sequences) and auto-completion (where the tab key prints a space) do not function.
There are many related questions regarding this matter, including:
https://github.com/jupyterlab/jupyterlab/issues/2459
https://github.com/ipython/ipython/issues/682
https://github.com/jupyter/jupyter_console/issues/162 | open | 2023-04-14T18:44:23Z | 2023-04-18T09:45:59Z | https://github.com/ipython/ipython/issues/14018 | [] | fecet | 1 |
explosion/spaCy | deep-learning | 12,416 | Installation issue on old macOSes for new Korean tokenizer in v4.0 alpha | Hi, I noticed from #12328 that spaCy has switched to `pymecab-ko` for the Korean tokenizer in the upcoming `spaCy` 4.0, but there seems to be some installation/import issues of this package on macOSes (cf. [pymecab-ko/#5](https://github.com/NoUnique/pymecab-ko/issues/5)).
I've tried on OS X 10.11 that [python-mecab-ko](https://github.com/jonghwanhyeon/python-mecab-ko), the alternative mentioned in #12328, could be successfully compiled, installed, and imported. I'm wondering that whether it is possible to add this as another alternative for the Korean tokenizer in `spaCy` 4.0?
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: Windows 11 x64, OS X 10.11
* Python Version Used: 3.9.16
* spaCy Version Used: 4.0 alpha
| open | 2023-03-14T08:41:29Z | 2023-03-14T15:23:15Z | https://github.com/explosion/spaCy/issues/12416 | [
"lang / ko"
] | BLKSerene | 1 |
Lightning-AI/pytorch-lightning | pytorch | 19,858 | Dynamically link arguments in `LightningCLI`? | ### Description & Motivation
Is it possible to _dynamically_ link arguments in the `LightningCLI`, say, depending on the module or datamodule subclass that is specified in a config file or at the command line?
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
cc @borda @carmocca @mauvilsa | closed | 2024-05-09T17:17:19Z | 2024-05-14T20:11:52Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19858 | [
"feature",
"lightningcli"
] | EthanMarx | 2 |
fastapi/sqlmodel | pydantic | 465 | How can I specify the order in which the tables are created via `create_all()` | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
Not applicable
```
### Description
Hi there,
I make use of `SQLModel.metadata.create_all(engine)` to invoke the SQL creation scripts for my models. I have two simple models linked by an additional association table (3 models in total: `File`, `Product`, `ProductFileLink`).
The output of `create_all()` indicates that all three models are listed to be created:
```
2022-10-10 19:28:42,750 INFO sqlalchemy.engine.Engine select pg_catalog.version()
2022-10-10 19:28:42,751 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-10-10 19:28:42,775 INFO sqlalchemy.engine.Engine select current_schema()
2022-10-10 19:28:42,776 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-10-10 19:28:42,806 INFO sqlalchemy.engine.Engine show standard_conforming_strings
2022-10-10 19:28:42,806 INFO sqlalchemy.engine.Engine [raw sql] {}
2022-10-10 19:28:42,829 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2022-10-10 19:28:42,829 INFO sqlalchemy.engine.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
2022-10-10 19:28:42,830 INFO sqlalchemy.engine.Engine [generated in 0.00067s] {'name': 'productfilelink'}
2022-10-10 19:28:42,857 INFO sqlalchemy.engine.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
2022-10-10 19:28:42,858 INFO sqlalchemy.engine.Engine [cached since 0.02802s ago] {'name': 'file'}
2022-10-10 19:28:42,873 INFO sqlalchemy.engine.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
2022-10-10 19:28:42,873 INFO sqlalchemy.engine.Engine [cached since 0.04406s ago] {'name': 'product'}
```
During the creation of the tables, the table `ProductFileLink` *seems* to be created before the `File` table, so I get an error complaining about the missing foreign key `file.id`:
```
2022-10-10 19:28:42,886 INFO sqlalchemy.engine.Engine
CREATE TABLE product (
name VARCHAR NOT NULL,
id SERIAL,
PRIMARY KEY (id)
)
2022-10-10 19:28:42,888 INFO sqlalchemy.engine.Engine [no key 0.00134s] {}
2022-10-10 19:28:42,926 INFO sqlalchemy.engine.Engine CREATE UNIQUE INDEX ix_product_name ON product (name)
2022-10-10 19:28:42,927 INFO sqlalchemy.engine.Engine [no key 0.00038s] {}
2022-10-10 19:28:42,942 INFO sqlalchemy.engine.Engine
CREATE TABLE productfilelink (
f_product_id INTEGER,
f_file_id INTEGER,
PRIMARY KEY (f_product_id, f_file_id),
FOREIGN KEY(f_product_id) REFERENCES product (id),
FOREIGN KEY(f_file_id) REFERENCES file (id2)
)
2022-10-10 19:28:42,943 INFO sqlalchemy.engine.Engine [no key 0.00124s] {}
2022-10-10 19:28:42,963 INFO sqlalchemy.engine.Engine ROLLBACK
```
```
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column "id2" referenced in foreign key constraint does not exist
[SQL:
CREATE TABLE productfilelink (
f_product_id INTEGER,
f_file_id INTEGER,
PRIMARY KEY (f_product_id, f_file_id),
FOREIGN KEY(f_product_id) REFERENCES product (id),
FOREIGN KEY(f_file_id) REFERENCES file (id2)
)
]
```
Does anyone know how I can rearrange the order of the creation scripts so that `File` gets created before `ProductFileLink`?
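For context (a sketch, not SQLModel API): `SQLModel.metadata` is a plain SQLAlchemy `MetaData`, and `create_all()` already emits `CREATE TABLE` statements in foreign-key dependency order — conceptually the topological sort below, shown here with plain strings mirroring this issue's table names:

```python
def dependency_order(tables, fks):
    """Illustrative topological sort: emit each table only after every
    table it references -- the ordering guarantee create_all() aims for.

    tables: iterable of table names
    fks:    mapping table -> set of tables it holds foreign keys to
    """
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in fks.get(name, ()):
            visit(dep)           # referenced tables come first
        order.append(name)

    for name in tables:
        visit(name)
    return order


fks = {"productfilelink": {"product", "file"}}
order = dependency_order(["product", "productfilelink", "file"], fks)
```

If the ordering alone really were the problem, SQLAlchemy's `MetaData.create_all()` also accepts an explicit `tables=[...]` list — but note that the generated DDL above references `file (id2)`, so an unresolvable foreign-key target looks like the more likely cause than the sort order.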
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.8.1
### Additional Context
_No response_ | closed | 2022-10-10T17:37:40Z | 2022-11-11T14:59:41Z | https://github.com/fastapi/sqlmodel/issues/465 | [
"question"
] | christianholland | 7 |
lorien/grab | web-scraping | 296 | spider freezes if task generator does not yield any task | closed | 2018-02-20T20:05:48Z | 2018-04-18T20:39:05Z | https://github.com/lorien/grab/issues/296 | [
"bug"
] | lorien | 0 | |
flairNLP/fundus | web-scraping | 315 | [Feature Request]: Add one Lithuanian news source to Fundus | ### Problem statement
Add one Lithuanian news source to Fundus
### Solution
@lukasgarbas :)
Also check if the guidelines are more clear now.
### Additional Context
_No response_ | closed | 2023-09-01T09:39:03Z | 2024-04-06T10:51:32Z | https://github.com/flairNLP/fundus/issues/315 | [
"feature"
] | alanakbik | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,507 | How can I reset the range of Y axis on Visdom? | Please help me with customizing the range of the Y axis in Visdom, because I don't know where I can fix it. | open | 2022-11-14T07:53:49Z | 2022-11-14T07:53:49Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1507 | [] | may-phyu | 0
ets-labs/python-dependency-injector | flask | 115 | Fix API docs on RTD | closed | 2015-11-26T14:17:34Z | 2015-11-30T10:13:14Z | https://github.com/ets-labs/python-dependency-injector/issues/115 | [
"bug",
"enhancement"
] | rmk135 | 0 | |
alirezamika/autoscraper | automation | 25 | Pulling tables would be awesome | Perhaps I missed it somewhere, but it would be great to go here:
https://www.whoscored.com/Regions/252/Tournaments/2/Seasons/6829/Stages/15151/PlayerStatistics/England-Premier-League-2017-2018
And grab the entire table(s):
Premier League Player Statistics
Premier League Assist to Goal Scorer
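Until table support lands, here is a dependency-free sketch of what "pulling the whole table" could look like, using only the standard library (illustrative only — note that whoscored.com builds these tables with JavaScript, so the static HTML fetched by a plain HTTP request may not contain them; `pandas.read_html` is the usual shortcut when it does):

```python
from html.parser import HTMLParser


class TableGrabber(HTMLParser):
    """Collect every <table> in a page as a list of rows of cell text."""

    def __init__(self):
        super().__init__()
        self.tables = []        # one list of rows per <table>
        self._row, self._cell, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.tables.append([])
        elif tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell, self._cell = True, []

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._in_cell = False
            self._row.append("".join(self._cell).strip())
        elif tag == "tr" and self.tables:
            self.tables[-1].append(self._row)

    def handle_data(self, data):
        if self._in_cell:
            self._cell.append(data)


html = ("<table><tr><th>Player</th><th>Goals</th></tr>"
        "<tr><td>Kane</td><td>30</td></tr></table>")
p = TableGrabber()
p.feed(html)
```

After `feed()`, `p.tables` holds each table as nested lists, ready to dump to CSV or a DataFrame.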
| closed | 2020-09-14T12:35:10Z | 2022-11-12T12:41:43Z | https://github.com/alirezamika/autoscraper/issues/25 | [] | craine | 11 |
Farama-Foundation/PettingZoo | api | 1,176 | [Bug Report] TerminateIllegalWrapper breaks agent selection | ### Describe the bug
When running an env using TerminateIllegalWrapper and not using the action mask, the agent selection becomes corrupted when an illegal move is made.
Here is an example from tictactoe (code below). Notice in the first game that player 1 starts (as expected) and it alternates between players 1 and 2 (as expected) until player 1 makes an illegal move, which is caught by the wrapper.
However, in the second game, player 1 makes two moves in a row. That should not happen. Also note that the illegal move flagged is not actually illegal per the game rules.
This behaviour has been reported for other games.
```
New game
--------
calling reset(seed=42)
player_1 is making action: 1 current board:[0, 0, 0, 0, 0, 0, 0, 0, 0]
player_2 is making action: 5 current board:[0, 1, 0, 0, 0, 0, 0, 0, 0]
player_1 is making action: 7 current board:[0, 1, 0, 0, 0, 2, 0, 0, 0]
player_2 is making action: 4 current board:[0, 1, 0, 0, 0, 2, 0, 1, 0]
player_1 is making action: 1 current board:[0, 1, 0, 0, 2, 2, 0, 1, 0]
[WARNING]: Illegal move made, game terminating with current player losing.
obs['action_mask'] contains a mask of all legal moves that can be chosen.
New game
--------
calling reset(seed=42)
player_1 is making action: 5 current board:[0, 0, 0, 0, 0, 0, 0, 0, 0]
player_1 is making action: 0 current board:[0, 0, 0, 0, 0, 1, 0, 0, 0]
[WARNING]: Illegal move made, game terminating with current player losing.
obs['action_mask'] contains a mask of all legal moves that can be chosen.
```
### Code example
```python
from pettingzoo.classic import tictactoe_v3
env = tictactoe_v3.env()
def do_game(seed):
print("\nNew game")
print("--------")
print(f"calling reset(seed={seed})")
env.reset(seed)
for agent in env.agent_iter():
observation, reward, termination, truncation, info = env.last()
if termination or truncation:
env.step(None)
else:
mask = observation["action_mask"]
# this is where you would insert your policy
action = env.action_space(agent).sample() # **no action_mask applied**
print(f"{env.agent_selection} is making action: {action} current board:{env.board}")
env.step(action)
do_game(42)
do_game(42)
```
### System info
```
>>> import sys; sys.version
'3.9.12 (main, Apr 5 2022, 06:56:58) \n[GCC 7.5.0]'
>>> pettingzoo.__version__
'1.24.3'
```
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
| closed | 2024-02-05T20:55:56Z | 2024-06-21T01:40:44Z | https://github.com/Farama-Foundation/PettingZoo/issues/1176 | [
"bug"
] | dm-ackerman | 4 |
django-import-export/django-import-export | django | 1,823 | `resource_class` was deprecated and removed, but it still works | **Describe the bug**
`resource_class` was removed in v4 in favour of `resource_classes`. However, the following declaration still works:
```python
class BookAdmin(ImportExportModelAdmin):
    list_display = ("name", "author", "added")
    list_filter = ["categories", "author"]
    resource_class = BookResource
    change_list_template = "core/admin/change_list.html"
```
We probably need to re-add the deprecation and ensure it is removed in a future release.
**To Reproduce**
update `core/admin.py` with the above snippet and you will find you can import a file ok.
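A sketch of the warn-then-forward pattern the re-added deprecation could use (class and method names here are hypothetical — the real hook in django-import-export's admin mixin may be wired differently):

```python
import warnings


class ResourceClassesMixin:
    """Hypothetical stand-in for the admin mixin's resource lookup."""

    resource_class = None    # legacy v3 attribute, to be deprecated again
    resource_classes = []    # current v4 API

    def get_resource_classes(self):
        if self.resource_class is not None:
            warnings.warn(
                "resource_class is deprecated and will be removed; "
                "use resource_classes instead",
                DeprecationWarning,
                stacklevel=2,
            )
            return [self.resource_class]
        return list(self.resource_classes)


class LegacyAdmin(ResourceClassesMixin):
    resource_class = object  # simulates the BookResource assignment above


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    classes = LegacyAdmin().get_resource_classes()
```

This keeps the legacy attribute working (so existing projects don't break) while emitting a `DeprecationWarning` that makes the scheduled removal visible.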
| closed | 2024-05-12T15:03:03Z | 2024-05-16T08:18:45Z | https://github.com/django-import-export/django-import-export/issues/1823 | [
"chore"
] | matthewhegarty | 2 |
JohnSnowLabs/nlu | streamlit | 153 | Error while trying to load nlu.load('embed_sentence.bert') | I am trying to create a sentence similarity model using Spark NLP, but I am getting the errors below.
sent_small_bert_L2_128 download started this may take some time.
Approximate size to download 16.1 MB
[OK!]
---------------------------------------------------------------------------
IllegalArgumentException Traceback (most recent call last)
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\pipe\component_resolution.py:276, in get_trained_component_for_nlp_model_ref(lang, nlu_ref, nlp_ref, license_type, model_configs)
274 if component.get_pretrained_model:
275 component = component.set_metadata(
--> 276 component.get_pretrained_model(nlp_ref, lang, model_bucket),
277 nlu_ref, nlp_ref, lang, False, license_type)
278 else:
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\components\embeddings\sentence_bert\BertSentenceEmbedding.py:13, in BertSentence.get_pretrained_model(name, language, bucket)
11 @staticmethod
12 def get_pretrained_model(name, language, bucket=None):
---> 13 return BertSentenceEmbeddings.pretrained(name,language,bucket) \
14 .setInputCols('sentence') \
15 .setOutputCol("sentence_embeddings")
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\annotator\embeddings\bert_sentence_embeddings.py:231, in BertSentenceEmbeddings.pretrained(name, lang, remote_loc)
230 from sparknlp.pretrained import ResourceDownloader
--> 231 return ResourceDownloader.downloadModel(BertSentenceEmbeddings, name, lang, remote_loc)
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\pretrained\resource_downloader.py:40, in ResourceDownloader.downloadModel(reader, name, language, remote_loc, j_dwn)
39 try:
---> 40 j_obj = _internal._DownloadModel(reader.name, name, language, remote_loc, j_dwn).apply()
41 except Py4JJavaError as e:
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\internal\__init__.py:317, in _DownloadModel.__init__(self, reader, name, language, remote_loc, validator)
316 def __init__(self, reader, name, language, remote_loc, validator):
--> 317 super(_DownloadModel, self).__init__("com.johnsnowlabs.nlp.pretrained." + validator + ".downloadModel", reader,
318 name, language, remote_loc)
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\internal\extended_java_wrapper.py:26, in ExtendedJavaWrapper.__init__(self, java_obj, *args)
25 self.sc = SparkContext._active_spark_context
---> 26 self._java_obj = self.new_java_obj(java_obj, *args)
27 self.java_obj = self._java_obj
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\sparknlp\internal\extended_java_wrapper.py:36, in ExtendedJavaWrapper.new_java_obj(self, java_class, *args)
35 def new_java_obj(self, java_class, *args):
---> 36 return self._new_java_obj(java_class, *args)
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\pyspark\ml\wrapper.py:69, in JavaWrapper._new_java_obj(java_class, *args)
68 java_args = [_py2java(sc, arg) for arg in args]
---> 69 return java_obj(*java_args)
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\py4j\java_gateway.py:1304, in JavaMember.__call__(self, *args)
1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
1305 answer, self.gateway_client, self.target_id, self.name)
1307 for temp_arg in temp_args:
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\pyspark\sql\utils.py:134, in capture_sql_exception.<locals>.deco(*a, **kw)
131 if not isinstance(converted, UnknownException):
132 # Hide where the exception came from that shows a non-Pythonic
133 # JVM exception message.
--> 134 raise_from(converted)
135 else:
File <string>:3, in raise_from(e)
IllegalArgumentException: requirement failed: Was not found appropriate resource to download for request: ResourceRequest(sent_small_bert_L2_128,Some(en),public/models,4.0.2,3.3.0) with downloader: com.johnsnowlabs.nlp.pretrained.S3ResourceDownloader@c7c973f
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\__init__.py:234, in load(request, path, verbose, gpu, streamlit_caching, m1_chip)
233 continue
--> 234 nlu_component = nlu_ref_to_component(nlu_ref)
235 # if we get a list of components, then the NLU reference is a pipeline, we do not need to check order
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\pipe\component_resolution.py:160, in nlu_ref_to_component(nlu_ref, detect_lang, authenticated)
159 else:
--> 160 resolved_component = get_trained_component_for_nlp_model_ref(lang, nlu_ref, nlp_ref, license_type, model_params)
162 if resolved_component is None:
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\pipe\component_resolution.py:287, in get_trained_component_for_nlp_model_ref(lang, nlu_ref, nlp_ref, license_type, model_configs)
286 except Exception as e:
--> 287 raise ValueError(f'Failure making component, nlp_ref={nlp_ref}, nlu_ref={nlu_ref}, lang={lang}, \n err={e}')
289 return component
ValueError: Failure making component, nlp_ref=sent_small_bert_L2_128, nlu_ref=embed_sentence.bert, lang=en,
err=requirement failed: Was not found appropriate resource to download for request: ResourceRequest(sent_small_bert_L2_128,Some(en),public/models,4.0.2,3.3.0) with downloader: com.johnsnowlabs.nlp.pretrained.S3ResourceDownloader@c7c973f
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
Cell In [16], line 2
1 import nlu
----> 2 pipe = nlu.load('embed_sentence.bert')
3 print("pipe",pipe)
File c:\users\ramesar2\appdata\local\programs\python\python38\lib\site-packages\nlu\__init__.py:249, in load(request, path, verbose, gpu, streamlit_caching, m1_chip)
247 print(e[1])
248 print(err)
--> 249 raise Exception(
250 f"Something went wrong during creating the Spark NLP model_anno_obj for your request = {request} Did you use a NLU Spell?")
251 # Complete Spark NLP Pipeline, which is defined as a DAG given by the starting Annotators
252 try:
Exception: Something went wrong during creating the Spark NLP model_anno_obj for your request = embed_sentence.bert Did you use a NLU Spell? | open | 2022-10-14T12:43:57Z | 2022-10-14T12:50:50Z | https://github.com/JohnSnowLabs/nlu/issues/153 | [] | arvindacodes | 1 |
pydata/pandas-datareader | pandas | 424 | UnicodeDecodeError on scraping data from multiple sources | When I run the following code I get `UnicodeDecodeError: 'utf-8' codec can't decode byte [x] in position [y]: invalid continuation byte`, where `[x]` & `[y]` vary depending on the requested stock data or source:
```
from pandas_datareader import data, wb
import datetime
start = datetime.datetime(2017, 1, 1)
end = datetime.datetime(2017, 11, 1)
bac = data.DataReader('BAC', 'google', start, end)
```
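A minimal way to see this class of failure in isolation (the byte string below is only a stand-in for whatever the endpoint actually returned — such errors typically mean the server sent a non-UTF-8 payload, e.g. an error page, rather than the expected CSV):

```python
# Stand-in for a response body that is not valid UTF-8
# (0xFF can never start a UTF-8 sequence).
raw = b"\xff\xfeh\x00i\x00"

try:
    raw.decode("utf-8")
    failed = False
except UnicodeDecodeError as exc:
    failed = True
    message = str(exc)   # same error class as in the report

# Decoding leniently lets you inspect the payload instead of crashing:
# invalid bytes become U+FFFD replacement characters.
text = raw.decode("utf-8", errors="replace")
```

Inspecting the raw bytes this way shows whether the source really served the data you asked for or an error page in another encoding.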
I've tested on various sources (3 so far on 4 or 5 stocks) and I consistently get this error.
I'm running this in a conda environment, where the Python version is `3.6` and the `pandas_datareader` version is `0.5.0`.
Could someone point out what is the issue here? | closed | 2017-11-29T16:20:51Z | 2018-08-06T09:15:25Z | https://github.com/pydata/pandas-datareader/issues/424 | [
"yahoo-finance"
] | RobertLucian | 16 |
google-research/bert | tensorflow | 1,116 | Dealing with ellipses in BERT tokenization | I have a speech transcript dataset in which ellipses (...) have been used to indicate the speaker pause. I am using BERT embeddings for text classification. It is very important for me that the BERT model properly recognizes these ellipses (...). Currently, it tokenizes them as 3 separate full stops, so I am not sure if it is capturing the context of the essential "speaker pause" in the speech transcripts. What can I do in such cases? Should I replace the ellipses (...) with another symbol like a hash or dash? or let them remain as it is? | open | 2020-06-30T10:04:19Z | 2020-10-30T17:40:58Z | https://github.com/google-research/bert/issues/1116 | [] | fliptrail | 2 |
tfranzel/drf-spectacular | rest-api | 599 | django-filter BaseInFilter type customization | Hello,
I am using django-filter with a classic "CharInFilter" that is defined (as recommended) by:
```python
class CharInFilter(BaseInFilter, CharFilter):
    pass
```
The OpenAPI type that comes out of this is 'array[string]', but that's incorrect: since the input is actually comma-separated values, the right type should be 'string'.
I tried to customize this class in order to fix the issue via:
```python
from drf_spectacular.contrib.django_filters import DjangoFilterExtension


class CharInFilterExtension(DjangoFilterExtension):
    target_class = CharInFilter
    priority = 1
    match_subclasses = True

    def map_serializer_field(self, auto_schema, direction):
        return build_basic_type(OpenApiTypes.STR)
```
but this doesn't seem to have any effect.
I would appreciate a recommendation on how to customize this.
Thanks for your help,
Loic | closed | 2021-11-10T16:46:43Z | 2021-11-10T19:02:41Z | https://github.com/tfranzel/drf-spectacular/issues/599 | [] | quertenmont | 6 |
encode/httpx | asyncio | 1,939 | 0.21.0: pytest is failing | I'm trying to package your module as an RPM package, so I'm using the typical build, install, and test cycle used when building packages from a non-root account.
- "setup.py build"
- "setup.py install --root </install/prefix>"
- "pytest with PYTHONPATH pointing to sitearch and sitelib inside </install/prefix>"
May I ask for help, because it is failing while collecting test units:
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-httpx-0.21.0-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-httpx-0.21.0-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -p no:randomly
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
rootdir: /home/tkloczko/rpmbuild/BUILD/httpx-0.21.0, configfile: setup.cfg
plugins: rerunfailures-9.1.1, cov-2.12.1, forked-1.3.0, xdist-2.3.0, flake8-1.0.7, shutil-1.7.0, virtualenv-1.7.0, trio-0.7.0, mock-3.6.1, timeout-2.0.1, anyio-3.3.1
collected 351 items / 7 errors / 344 selected
================================================================================== ERRORS ==================================================================================
__________________________________________________________________ ERROR collecting tests/test_content.py __________________________________________________________________
tests/test_content.py:10: in <module>
@pytest.mark.asyncio
/usr/lib/python3.8/site-packages/_pytest/mark/structures.py:510: in __getattr__
warnings.warn(
E pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
_________________________________________________________________ ERROR collecting tests/test_decoders.py __________________________________________________________________
tests/test_decoders.py:124: in <module>
???
/usr/lib/python3.8/site-packages/_pytest/mark/structures.py:510: in __getattr__
warnings.warn(
E pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
___________________________________________________________________ ERROR collecting tests/test_utils.py ___________________________________________________________________
tests/test_utils.py:100: in <module>
???
/usr/lib/python3.8/site-packages/_pytest/mark/structures.py:510: in __getattr__
warnings.warn(
E pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
________________________________________________________________ ERROR collecting tests/client/test_auth.py ________________________________________________________________
tests/client/test_auth.py:151: in <module>
???
/usr/lib/python3.8/site-packages/_pytest/mark/structures.py:510: in __getattr__
warnings.warn(
E pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
______________________________________________________________ ERROR collecting tests/client/test_proxies.py _______________________________________________________________
tests/client/test_proxies.py:121: in <module>
???
/usr/lib/python3.8/site-packages/_pytest/mark/structures.py:510: in __getattr__
warnings.warn(
E pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
______________________________________________________________ ERROR collecting tests/models/test_requests.py ______________________________________________________________
tests/models/test_requests.py:89: in <module>
???
/usr/lib/python3.8/site-packages/_pytest/mark/structures.py:510: in __getattr__
warnings.warn(
E pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
_____________________________________________________________ ERROR collecting tests/models/test_responses.py ______________________________________________________________
tests/models/test_responses.py:335: in <module>
???
/usr/lib/python3.8/site-packages/_pytest/mark/structures.py:510: in __getattr__
warnings.warn(
E pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
========================================================================= short test summary info ==========================================================================
ERROR tests/test_content.py - pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for d...
ERROR tests/test_decoders.py - pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for ...
ERROR tests/test_utils.py - pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - for det...
ERROR tests/client/test_auth.py - pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning - f...
ERROR tests/client/test_proxies.py - pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning ...
ERROR tests/models/test_requests.py - pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning...
ERROR tests/models/test_responses.py - pytest.PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warnin...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 7 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================ 7 errors in 2.85s =============================================================================
```
| closed | 2021-11-15T20:35:42Z | 2021-11-16T09:30:31Z | https://github.com/encode/httpx/issues/1939 | [] | kloczek | 1 |
ultralytics/ultralytics | computer-vision | 18,829 | Yolo incompatible with Jetpack 6.2(Jetson Orin Nano Super) | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
I just installed Jetpack 6.2 and ran `sudo pip3 install ultralytics`.
It seems it uses libcudnn.so.8, not cuDNN 9.3.0.75.
The issue might be PyTorch, as I didn't see any correct version for L4T 36.4.3, which is for Super performance: https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048
Any ideas?
See below link for details:
https://forums.developer.nvidia.com/t/yolo-incompatible-with-jetpack-6-2-jetson-orin-nano-super/321078
### Environment
```
Software part of jetson-stats 4.3.1 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Jetson Orin Nano Developer Kit - Jetpack 6.2 [L4T 36.4.3]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
- P-Number: p3767-0005
- Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
- Distribution: Ubuntu 22.04 Jammy Jellyfish
- Release: 5.15.148-tegra
jtop:
- Version: 4.3.1
- Service: Active
Libraries:
- CUDA: 12.6.68
- cuDNN: 9.3.0.75
- TensorRT: 10.3.0.30
- VPI: 3.2.4
- Vulkan: 1.3.204
- OpenCV: 4.11.0 - with CUDA: YES
```
### Minimal Reproducible Example
```
sudo pip3 install ultralytics
```
### Additional
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-01-22T22:21:29Z | 2025-03-19T23:51:16Z | https://github.com/ultralytics/ultralytics/issues/18829 | [
"bug",
"dependencies",
"embedded"
] | lida2003 | 8 |
lanpa/tensorboardX | numpy | 360 | embedding not working with tensorboard 1.12 | pip list|grep tensor
tensorboard 1.12.2
tensorboardX 1.6
tensorflow 1.12.0
tensorflow-probability 0.5.0
I downloaded https://github.com/lanpa/tensorboardX/blob/3a4c848ca850015ef5b2c7184dbe65da4eb674f4/examples/demo_embedding.py and the embedding is not showing in TensorBoard. I see TensorBoard needs tf.saver to create embeddings; maybe this is related?
Thanks
| closed | 2019-02-27T00:17:47Z | 2019-07-04T18:51:35Z | https://github.com/lanpa/tensorboardX/issues/360 | [] | jluo-bgl | 5 |
modelscope/data-juicer | streamlit | 197 | [docs] need descriptions for batch op in DeveloperGuide | I find that it is hard to find the bug when I do not know that `_batched_op` must be `True` when developing a batch op. | closed | 2024-01-26T03:48:04Z | 2024-02-26T02:08:19Z | https://github.com/modelscope/data-juicer/issues/197 | [
"documentation"
] | BeachWang | 1 |
NVIDIA/pix2pixHD | computer-vision | 335 | does not work in Python 3.10 | CustomDatasetDataLoader
Traceback (most recent call last):
File "C:\Users\GAMER\Desktop\pix2pixHD-master\train.py", line 43, in <module>
data_loader = CreateDataLoader(opt)
File "C:\Users\GAMER\Desktop\pix2pixHD-master\data\data_loader.py", line 6, in CreateDataLoader
data_loader.initialize(opt)
File "C:\Users\GAMER\Desktop\pix2pixHD-master\data\custom_dataset_data_loader.py", line 20, in initialize
self.dataset = CreateDataset(opt)
File "C:\Users\GAMER\Desktop\pix2pixHD-master\data\custom_dataset_data_loader.py", line 7, in CreateDataset
from data.aligned_dataset import AlignedDataset
File "C:\Users\GAMER\Desktop\pix2pixHD-master\data\aligned_dataset.py", line 2, in <module>
from data.base_dataset import BaseDataset, get_params, get_transform, normalize
File "C:\Users\GAMER\Desktop\pix2pixHD-master\data\base_dataset.py", line 3, in <module>
import torchvision.transforms as transforms
File "C:\Users\GAMER\AppData\Local\Programs\Python\Python310\lib\site-packages\torchvision\__init__.py", line 6, in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils
File "C:\Users\GAMER\AppData\Local\Programs\Python\Python310\lib\site-packages\torchvision\_meta_registrations.py", line 163, in <module>
@torch._custom_ops.impl_abstract("torchvision::nms")
AttributeError: module 'torch._custom_ops' has no attribute 'impl_abstract' | open | 2024-03-08T16:47:30Z | 2024-03-08T16:47:30Z | https://github.com/NVIDIA/pix2pixHD/issues/335 | [] | kingstone101 | 0 |
mwaskom/seaborn | data-science | 3,486 | FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version | With pandas 2.1, generating a simple histplot raises the following warning:
```
FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
if pd.api.types.is_categorical_dtype(vector):
``` | closed | 2023-09-22T12:11:54Z | 2023-09-26T12:31:38Z | https://github.com/mwaskom/seaborn/issues/3486 | [] | rgoubet | 5 |
tensorflow/tensor2tensor | deep-learning | 1,042 | Error in running translate_ende_wmt32k by universal transformer | ### Description
My command is as below:
```
HOME_DIR=$HOME/workdir
PROBLEM=translate_ende_wmt32k
MODEL=universal_transformer
HPARAMS=universal_transformer_base
NUM_GPU=2
DATA_DIR=$HOME_DIR/t2t_data/$PROBLEM
TRAIN_DIR=$HOME_DIR/t2t_train/$PROBLEM/$MODEL-$HPARAMS
mkdir -p $TRAIN_DIR
t2t-trainer \
--data_dir=$DATA_DIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR \
--worker_gpu=$NUM_GPU
```
### Environment information
TF=1.8
T2T=1.9 (But I installed it from master in order to avoid a data generation bug of translate_ende_wmt32k)
OS: RHEL 7.0
```
$ pip freeze | grep tensor
-e git+https://github.com/tensorflow/tensor2tensor@54510673d1ded46575c68091d2aaf791f9007a34#egg=tensor2tensor
tensorboard==1.8.0
tensorflow==0.10.0
tensorflow-gpu==1.8.0
tensorflow-tensorboard==1.5.1
```
```
$ python -V
Python 2.7.12 :: Anaconda 4.2.0 (64-bit)
```
# Error logs:
```
WARNING:tensorflow:From /gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/utils/trainer_lib.py:198: __init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
INFO:tensorflow:schedule=continuous_train_and_eval
INFO:tensorflow:worker_gpu=2
INFO:tensorflow:sync=False
WARNING:tensorflow:Schedule=continuous_train_and_eval. Assuming that training is running on a single machine.
INFO:tensorflow:datashard_devices: ['gpu:0', 'gpu:1']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0', 'gpu:1']
INFO:tensorflow:Using config: {'_save_checkpoints_secs': None, '_keep_checkpoint_max': 20, '_task_type': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f0747cec310>, '_keep_checkpoint_every_n_hours': 10000, '_session_config': gpu_options {
per_process_gpu_memory_fraction: 0.95
}
allow_soft_placement: true
graph_options {
optimizer_options {
}
}
, 'use_tpu': False, '_tf_random_seed': None, '_num_worker_replicas': 0, '_task_id': 0, 't2t_device_info': {'num_async_replicas': 1}, '_evaluation_master': '', '_log_step_count_steps': 100, '_num_ps_replicas': 0, '_train_distribute': None, '_is_chief': True, '_tf_config': gpu_options {
per_process_gpu_memory_fraction: 1.0
}
, '_save_checkpoints_steps': 1000, '_environment': 'local', '_master': '', '_model_dir': '/home/sanjie.lp/workdir/t2t_train/translate_ende_wmt32k/universal_transformer-universal_transformer_base', 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x7f0747cec350>, '_save_summary_steps': 100}
WARNING:tensorflow:Estimator's model_fn (<function wrapping_model_fn at 0x7f074748caa0>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:ValidationMonitor only works with --schedule=train_and_evaluate
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after 600 secs (eval_spec.throttle_secs) or training is finished.
INFO:tensorflow:Reading data files from /home/sanjie.lp/workdir/t2t_data/translate_ende_wmt32k/translate_ende_wmt32k-train*
INFO:tensorflow:partition: 0 num_data_files: 100
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Setting T2TModel mode to 'train'
INFO:tensorflow:Using variable initializer: uniform_unit_scaling
INFO:tensorflow:Transforming feature 'inputs' with symbol_modality_33288_1024.bottom
INFO:tensorflow:Transforming 'targets' with symbol_modality_33288_1024.targets_bottom
INFO:tensorflow:Building model body
Traceback (most recent call last):
File "/home/sanjie.lp/workdir/softwares/anaconda2/bin/t2t-trainer", line 32, in <module>
tf.app.run()
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "/home/sanjie.lp/workdir/softwares/anaconda2/bin/t2t-trainer", line 28, in main
t2t_trainer.main(argv)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/bin/t2t_trainer.py", line 355, in main
execute_schedule(exp)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/bin/t2t_trainer.py", line 321, in execute_schedule
getattr(exp, FLAGS.schedule)()
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/utils/trainer_lib.py", line 331, in continuous_train_and_eval
self._eval_spec)
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/training.py", line 439, in train_and_evaluate
executor.run()
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/training.py", line 518, in run
self.run_local()
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/training.py", line 650, in run_local
hooks=train_hooks)
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 363, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 843, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 856, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 831, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 1225, in wrapping_model_fn
decode_hparams=decode_hparams)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 1277, in estimator_model_fn
logits, losses_dict = model(features) # pylint: disable=not-callable
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 717, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 213, in call
sharded_logits, losses = self.model_fn_sharded(sharded_features)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 268, in model_fn_sharded
sharded_logits, sharded_losses = dp(self.model_fn, datashard_to_features)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/utils/expert_utils.py", line 230, in __call__
outputs.append(fns[i](*my_args[i], **my_kwargs[i]))
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 304, in model_fn
body_out = self.body(transformed_features)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/models/research/universal_transformer.py", line 172, in body
inputs, target_space, hparams, features=features)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/models/research/universal_transformer.py", line 85, in encode
save_weights_to=self.attention_weights))
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/models/research/universal_transformer_util.py", line 125, in universal_transformer_encoder
x, hparams, ffn_unit, attention_unit, pad_remover=pad_remover)
File "/gruntdata/sanjie.lp/projects/tensor2tensor/tensor2tensor/models/research/universal_transformer_util.py", line 251, in universal_transformer_layer
ut_function, tf.range(hparams.num_rec_steps), initializer=initializer)
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/functional_ops.py", line 132, in foldl
swap_memory=swap_memory)
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3224, in while_loop
result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2956, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2915, in _BuildLoop
nest.assert_same_structure(list(packed_vars_for_body), list(body_result))
File "/home/sanjie.lp/workdir/softwares/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/nest.py", line 183, in assert_same_structure
_pywrap_tensorflow.AssertSameStructure(nest1, nest2, check_types)
ValueError: The two structures don't have the same nested structure.
First structure: type=list str=[<tf.Tensor 'universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/Identity:0' shape=() dtype=int32>, <tf.Tensor 'universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/Identity_1:0' shape=(3, ?, ?, 1024) dtype=float32>]
Second structure: type=list str=[<tf.Tensor 'universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/add_2:0' shape=() dtype=int32>, (<tf.Tensor 'universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/ffn/layer_postprocess/add:0' shape=(?, ?, 1024) dtype=float32>, <tf.Tensor 'universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/unstack:1' shape=(?, ?, 1024) dtype=float32>, <tf.Tensor 'universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/unstack:2' shape=(?, ?, 1024) dtype=float32>)]
More specifically: Substructure "type=tuple str=(<tf.Tensor 'universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/ffn/layer_postprocess/add:0' shape=(?, ?, 1024) dtype=float32>, <tf.Tensor 'universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/unstack:1' shape=(?, ?, 1024) dtype=float32>, <tf.Tensor 'universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/unstack:2' shape=(?, ?, 1024) dtype=float32>)" is a sequence, while substructure "type=Tensor str=Tensor("universal_transformer/parallel_0_5/universal_transformer/universal_transformer/body/encoder/universal_transformer_basic/foldl/while/Identity_1:0", shape=(3, ?, ?, 1024), dtype=float32, device=/device:GPU:0)" is not
```
| closed | 2018-09-05T08:59:11Z | 2018-09-07T05:41:57Z | https://github.com/tensorflow/tensor2tensor/issues/1042 | [] | lipond | 2 |
dmlc/gluon-cv | computer-vision | 1,770 | Update readme.md | Update the readme.md, since it is not in the recommended format, and a contribution.md should also be there. The readme.md provides information about the repo, so it should be complete and in the recommended format.
| closed | 2023-05-14T05:04:02Z | 2023-08-20T06:31:06Z | https://github.com/dmlc/gluon-cv/issues/1770 | [
"Stale"
] | harshsingh32 | 2 |
ultralytics/ultralytics | machine-learning | 19,162 | If I try to "fine-tune" a model on a dataset where the number of classes are not 80 (coco) then am I essentially training from scratch? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Whenever I try to fine-tune, the scores start from 0 when training.
I am confused as to what exactly the benefit of the pre-trained model is, given that I see this line: `Transferred 448/499 items from pretrained weights`.
When someone fine-tunes something like ResNet for classification, you change just the linear layer.
Thanks!
### Additional
_No response_ | closed | 2025-02-10T12:22:15Z | 2025-02-13T09:01:52Z | https://github.com/ultralytics/ultralytics/issues/19162 | [
"question",
"classify"
] | aymuos15 | 6 |
pydata/pandas-datareader | pandas | 238 | get_components_yahoo raising errors | When I run the following code
`from pandas_datareader import data`
`data.get_components_yahoo('^DJI')`
I always get an error 'AssertionError: 3 columns passed, passed data had 1 columns'. Could anyone tell me why?
| closed | 2016-09-13T08:50:19Z | 2018-01-18T22:29:35Z | https://github.com/pydata/pandas-datareader/issues/238 | [
"yahoo-finance"
] | jingdayan | 2 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 862 | [FEATURE]: <add a config about driver license> | ### Feature summary
During the application, when it asks if I have driver license, it selected 'no', is there a feature that I can select that I have driver license?
### Feature description
During the application, when it asks if I have driver license, it selected 'no', is there a feature that I can select that I have driver license?
### Motivation
some job need travel, so I want to let employer know I have driver license.
### Alternatives considered
_No response_
### Additional context
_No response_ | closed | 2024-11-15T19:27:19Z | 2025-01-31T23:40:33Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/862 | [
"enhancement"
] | Shuo-Wang-UCBerkeley | 2 |
albumentations-team/albumentations | deep-learning | 2,448 | [Feature request] Add apply_to_images to CoarseDropout | open | 2025-03-11T01:22:10Z | 2025-03-11T01:22:17Z | https://github.com/albumentations-team/albumentations/issues/2448 | [
"enhancement",
"good first issue"
] | ternaus | 0 | |
pydantic/pydantic-core | pydantic | 1,116 | not a supported wheel on this platform | Tried to install a Python package using the command:
`pip3 install ./pydantic_core[any of x86_64]`
but every time I get this error: `[filename] is not a supported wheel on this platform.`
my env:
```
❯❯❯❯ uname -a
Linux 3.10.0-1160.99.1.el7.x86_64 #1 SMP Thu Aug 10 10:46:21 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux
❯❯❯❯ python3 --version
Python 3.9.10
```
What package should I use to install in my env? I'm unable to build it from source.
"unconfirmed"
] | halturin | 3 |
miguelgrinberg/python-socketio | asyncio | 1,004 | AsyncClient crashes after reconnect to server with short reconnection_delay | I have consistent crashes with AsyncClient (v5.7.1; also checked the previous 5.6.0 and 5.5.0, same result) with the following slightly modified example from your documentation:
```
import asyncio
import socketio
sio = socketio.AsyncClient(reconnection_delay=0.1, handle_sigint=False, logger=True, engineio_logger=True)
@sio.event
async def connect():
print('connection established')
@sio.event
async def disconnect():
print('disconnected from server')
async def main():
await sio.connect('http://127.0.0.1:5000')
await sio.wait()
if __name__ == '__main__':
asyncio.run(main())
```
My socketio server is also written in Python and running on Ubuntu 22.04 machine as a daemon. So every time I execute:
```
sudo systemctl restart my-socketio-server.service
```
That client crashes. Increasing reconnection_delay to 3-5-10 seconds does help. Sometimes even 1s is enough. The crash occurs after successful reconnect, that's weird.
Here is an example of the output when it crashed:
On Windows:
```
Attempting polling connection to http://127.0.0.1:5000/socket.io/?transport=polling&EIO=4
Polling connection accepted with {'sid': 'IvMLd7wvWhvUp8p-AAAG', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to ws://127.0.0.1:5000/socket.io/?transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet NOOP data
Received packet MESSAGE data 0{"sid":"zTurflKPNrlpwslHAAAH"}
Namespace / is connected
connection established
Waiting for write loop task to end
Exiting write loop task
Engine.IO connection dropped
disconnected from server
Exiting read loop task
Connection failed, new attempt in 0.37 seconds
Attempting polling connection to http://127.0.0.1:5000/socket.io/?transport=polling&EIO=4
Polling connection accepted with {'sid': 'deH2fm-2n9AQjJpiAAAA', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to ws://127.0.0.1:5000/socket.io/?transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet NOOP data
Received packet MESSAGE data 0{"sid":"xnhGO0se25HYAJztAAAB"}
Namespace / is connected
connection established
Reconnection successful
packet queue is empty, aborting
Exiting write loop task
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x0000001C75DE2F50>
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x0000001C7532C430>
Traceback (most recent call last):
File "C:\Python\3.10-64\lib\asyncio\proactor_events.py", line 116, in __del__
File "C:\Python\3.10-64\lib\asyncio\proactor_events.py", line 108, in close
File "C:\Python\3.10-64\lib\asyncio\base_events.py", line 750, in call_soon
File "C:\Python\3.10-64\lib\asyncio\base_events.py", line 515, in _check_closed
RuntimeError: Event loop is closed
```
| closed | 2022-08-16T10:14:47Z | 2024-12-14T15:35:25Z | https://github.com/miguelgrinberg/python-socketio/issues/1004 | [
"bug"
] | bialix | 5 |
microsoft/nni | data-science | 4,786 | run quantization_speedup.py in /examples/tutorials get an error | when I run quantization_speedup.py in /examples/tutorials, get erros like this:
```
Traceback (most recent call last):
File "quantization_speedup.py", line 114, in <module>
engine.compress()
File "/opt/conda/lib/python3.7/site-packages/nni/compression/pytorch/quantization_speedup/integrated_tensorrt.py", line 291, in compress
model_path=self.onnx_path, input_names=self.input_names, output_names=self.output_names)
File "/opt/conda/lib/python3.7/site-packages/nni/compression/pytorch/quantization_speedup/frontend_to_onnx.py", line 144, in torch_to_onnx
model_onnx, onnx_config = unwrapper(model_onnx, index2name, config)
File "/opt/conda/lib/python3.7/site-packages/nni/compression/pytorch/quantization_speedup/frontend_to_onnx.py", line 82, in unwrapper
index = int(onnx.numpy_helper.to_array(const_nd.attribute[0].t))
IndexError: list index (0) out of range
```
| open | 2022-04-20T15:35:21Z | 2022-11-21T02:28:41Z | https://github.com/microsoft/nni/issues/4786 | [
"support",
"cannot reproduce",
"quantize",
"need more info"
] | DemonHan | 7 |
proplot-dev/proplot | matplotlib | 73 | Fix "super title" issue in popup backend | Popup backends seem to mess up the automatic offset of row/column labels and "super" titles. Thought I fixed this before but evidently not. Related to #62. See this example:
<img width="494" alt="Screen Shot 2019-11-29 at 10 15 40 PM" src="https://user-images.githubusercontent.com/19657652/69895922-e961a700-12f5-11ea-9936-8e08384bc0b2.png">
| closed | 2019-11-30T05:17:03Z | 2019-11-30T05:36:42Z | https://github.com/proplot-dev/proplot/issues/73 | [
"bug"
] | lukelbd | 1 |
tox-dev/tox | automation | 2,741 | Release tox 3.28.0 on PyPI | Hello. Could you please release https://github.com/tox-dev/tox/releases/tag/3.28.0 on PyPI?
Thanks. | closed | 2022-12-17T15:55:46Z | 2022-12-18T14:09:38Z | https://github.com/tox-dev/tox/issues/2741 | [
"enhancement"
] | hroncok | 3 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 162 | How does sqlalchemy connect to the cluster | The code for sqlalchemy to connect to a single machine is as follows:
conn_str = 'clickhouse://default:@localhost/default'
engine = create_engine(conn_str)
How does sqlalchemy connect to the cluster? Thanks~ | closed | 2022-01-05T05:47:09Z | 2022-02-20T15:30:22Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/162 | [] | phpsxg | 3 |
babysor/MockingBird | pytorch | 322 | When training on an Ubuntu system, the train.txt directory cannot be found | As shown in the image. It works perfectly fine on Windows, but it errors out on the server
(base) root@f7a701305fdd:~/data/MockingBird-main# python synthesizer_train.py 22 data/datasets_root/SV2TTS/synthesizer
Arguments:
run_id: 22
syn_dir: data/datasets_root/SV2TTS/synthesizer
models_dir: synthesizer/saved_models/
save_every: 1000
backup_every: 25000
log_every: 200
force_restart: False
hparams:
Checkpoint path: synthesizer/saved_models/22/22.pt
Loading training data from: data/datasets_root/SV2TTS/synthesizer/train.txt
Using model: Tacotron
Using device: cuda
Initialising Tacotron Model...
Trainable Parameters: 32.869M
Loading weights at synthesizer/saved_models/22/22.pt
Tacotron weights loaded from step 0
Using inputs from:
data/datasets_root/SV2TTS/synthesizer/train.txt
data/datasets_root/SV2TTS/synthesizer/mels
data/datasets_root/SV2TTS/synthesizer/embeds
Traceback (most recent call last):
File "synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "/root/data/MockingBird-main/synthesizer/train.py", line 121, in train
dataset = SynthesizerDataset(metadata_fpath, mel_dir, embed_dir, hparams)
File "/root/data/MockingBird-main/synthesizer/synthesizer_dataset.py", line 12, in __init__
with metadata_fpath.open("r", encoding="utf-8") as metadata_file:
File "/root/miniconda3/lib/python3.8/pathlib.py", line 1218, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/root/miniconda3/lib/python3.8/pathlib.py", line 1074, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'data/datasets_root/SV2TTS/synthesizer/train.txt'
vitalik/django-ninja | rest-api | 851 | Add schema field at runtime | Hi...
Is there any way to add a ModelField to an existing schema class at runtime?
Thank you very much.
| closed | 2023-09-12T08:55:26Z | 2023-09-14T14:38:42Z | https://github.com/vitalik/django-ninja/issues/851 | [] | aegeavaz | 12 |
explosion/spaCy | data-science | 13,709 | Unable to fine-tune previously trained transformer based spaCy NER. | ## How to reproduce the behaviour
Use spacy to fine-tune a base model with a transformer from hugging face:
python -m spacy train config.cfg --output ./output --paths.train ./train.spacy --paths.dev ./dev.spacy
Collect new tagged entries under new sets and set your model location to the output/model-last in a new config:
python -m spacy train fine_tune_config.cfg --output ./fine_tune_output --paths.train ./newtrain.spacy --paths.dev ./newdev.spacy
You will get an error about a missing config.json. Even after replacing this, you will then get an error about a missing tokenizer.
## Your Environment
- **Operating System:** Windows 11
- **spaCy version:** 3.7.2
- **Platform:** Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
- **Python version:** 3.10.13
| open | 2024-12-06T04:46:11Z | 2024-12-06T04:57:26Z | https://github.com/explosion/spaCy/issues/13709 | [] | jlustgarten | 1 |
tensorlayer/TensorLayer | tensorflow | 652 | Feature Request: expect tl layers support/derived from `CheckpointableBase` ? | So far, TensorLayer layers don't support eager execution, so we can't save a TensorLayer-based model with tfe.Checkpoint (object-based ckpt). | closed | 2018-05-28T09:01:47Z | 2019-05-13T15:27:43Z | https://github.com/tensorlayer/TensorLayer/issues/652 | [] | Windaway | 1 |
SYSTRAN/faster-whisper | deep-learning | 69 | Cannot produce the same result | Hello,
I tried to reproduce the results from the benchmark table.
I used the Whisper small model with cpu_threads=8 on 13 minutes of audio, on an Intel(R) Xeon(R) Gold 6242 CPU @ 2.8 GHz (16 cores),
and it took:
openai/whisper fp32: 6 min 19 s
faster-whisper fp32: 5 min 1 s
faster-whisper int8: 4 min 17 s
Could you please guide me?
Thanks. | closed | 2023-03-22T20:47:59Z | 2023-03-27T12:12:30Z | https://github.com/SYSTRAN/faster-whisper/issues/69 | [] | zara0m | 9 |
lexiforest/curl_cffi | web-scraping | 209 | Chrome120 not supported. readme says it is | curl_cffi.requests.errors.RequestsError: impersonate chrome120 is not supported
The README says it is supported, but the error says it isn't? | closed | 2024-01-05T17:02:48Z | 2024-01-05T17:09:19Z | https://github.com/lexiforest/curl_cffi/issues/209 | [
"bug"
] | rwqrbqb12 | 1 |
JaidedAI/EasyOCR | deep-learning | 489 | the link "https://www.jaided.ai/custom_model.md" is lost could you provide again? | closed | 2021-07-13T13:01:48Z | 2021-07-21T07:20:06Z | https://github.com/JaidedAI/EasyOCR/issues/489 | [] | neverstoplearn | 1 | |
databricks/koalas | pandas | 1,729 | Koalas doesn't work with spatial dataframes | Hi,
I work with spatial dataframes, which are created using the geopandas library. Since spatial dataframes contain geometries, they can get pretty large. It would be awesome if Koalas could support spatial DataFrames. Is there any plan to support geopandas, something like `ks.from_geopandas(spdf)`, on the future roadmap? | closed | 2020-08-26T20:17:42Z | 2020-08-27T03:41:25Z | https://github.com/databricks/koalas/issues/1729 | [
"discussions"
] | applecool | 2 |
dropbox/PyHive | sqlalchemy | 332 | Fails when insert a record includ an invalid XML character | I am trying to use pyhive to insert a record includ an invalid XML character to a hive table and it fails, but is ok when i use pyhive to query a record includ an invalid XML character
```
>>>from pyhive import hive
>>>cursor = hive.connect('localhost').cursor()
>>>cursor.execute("select * from demo where bar='\x1a'")
>>>cursor.execute("insert into demo values('bug', '\x1a')")
Traceback (most recent call last):
File "/Users/liyatao/github/PyHive/venv/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-6731f48476b7>", line 3, in <module>
cursor.execute("insert into demo values('bug', '\x1a')")
File "/Users/liyatao/github/PyHive/pyhive/hive.py", line 368, in execute
_check_status(response)
File "/Users/liyatao/github/PyHive/pyhive/hive.py", line 518, in _check_status
raise OperationalError(response)
pyhive.exc.OperationalError: TExecuteStatementResp(status=TStatus(statusCode=3, infoMessages=['*org.apache.hive.service.cli.HiveSQLException:Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. org.xml.sax.SAXParseException; systemId: file:/tmp/hadoop-root/mapred/staging/root698919480/.staging/job_local698919480_0012/job.xml; lineNumber: 1311; columnNumber: 86; Character reference "" is an invalid XML character.:17:16', 'org.apache.hive.service.cli.operation.Operation:toSQLException:Operation.java:380', 'org.apache.hive.service.cli.operation.SQLOperation:runQuery:SQLOperation.java:257', 'org.apache.hive.service.cli.operation.SQLOperation:runInternal:SQLOperation.java:293', 'org.apache.hive.service.cli.operation.Operation:run:Operation.java:320', 'org.apache.hive.service.cli.session.HiveSessionImpl:executeStatementInternal:HiveSessionImpl.java:530', 'org.apache.hive.service.cli.session.HiveSessionImpl:executeStatement:HiveSessionImpl.java:506', 'org.apache.hive.service.cli.CLIService:executeStatement:CLIService.java:280', 'org.apache.hive.service.cli.thrift.ThriftCLIService:ExecuteStatement:ThriftCLIService.java:531', 'org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1437', 'org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement:getResult:TCLIService.java:1422', 'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39', 'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39', 'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56', 'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:286', 'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1142', 'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:617', 'java.lang.Thread:run:Thread.java:748', 
'*java.lang.RuntimeException:org.xml.sax.SAXParseException; systemId: file:/tmp/hadoop-root/mapred/staging/root698919480/.staging/job_local698919480_0012/job.xml; lineNumber: 1311; columnNumber: 86; Character reference "" is an invalid XML character.:47:31', 'org.apache.hadoop.conf.Configuration:loadResource:Configuration.java:2647', 'org.apache.hadoop.conf.Configuration:loadResources:Configuration.java:2504', 'org.apache.hadoop.conf.Configuration:getProps:Configuration.java:2407', 'org.apache.hadoop.conf.Configuration:get:Configuration.java:981', 'org.apache.hadoop.mapred.JobConf:checkAndWarnDeprecation:JobConf.java:2007', 'org.apache.hadoop.mapred.JobConf:<init>:JobConf.java:479', 'org.apache.hadoop.mapred.LocalJobRunner$Job:<init>:LocalJobRunner.java:153', 'org.apache.hadoop.mapred.LocalJobRunner:submitJob:LocalJobRunner.java:731', 'org.apache.hadoop.mapreduce.JobSubmitter:submitJobInternal:JobSubmitter.java:240', 'org.apache.hadoop.mapreduce.Job$10:run:Job.java:1290', 'org.apache.hadoop.mapreduce.Job$10:run:Job.java:1287', 'java.security.AccessController:doPrivileged:AccessController.java:-2', 'javax.security.auth.Subject:doAs:Subject.java:422', 'org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1746', 'org.apache.hadoop.mapreduce.Job:submit:Job.java:1287', 'org.apache.hadoop.mapred.JobClient$1:run:JobClient.java:575', 'org.apache.hadoop.mapred.JobClient$1:run:JobClient.java:570', 'java.security.AccessController:doPrivileged:AccessController.java:-2', 'javax.security.auth.Subject:doAs:Subject.java:422', 'org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1746', 'org.apache.hadoop.mapred.JobClient:submitJobInternal:JobClient.java:570', 'org.apache.hadoop.mapred.JobClient:submitJob:JobClient.java:561', 'org.apache.hadoop.hive.ql.exec.mr.ExecDriver:execute:ExecDriver.java:411', 'org.apache.hadoop.hive.ql.exec.mr.MapRedTask:execute:MapRedTask.java:151', 
'org.apache.hadoop.hive.ql.exec.Task:executeTask:Task.java:199', 'org.apache.hadoop.hive.ql.exec.TaskRunner:runSequential:TaskRunner.java:100', 'org.apache.hadoop.hive.ql.Driver:launchTask:Driver.java:2183', 'org.apache.hadoop.hive.ql.Driver:execute:Driver.java:1839', 'org.apache.hadoop.hive.ql.Driver:runInternal:Driver.java:1526', 'org.apache.hadoop.hive.ql.Driver:run:Driver.java:1237', 'org.apache.hadoop.hive.ql.Driver:run:Driver.java:1232', 'org.apache.hive.service.cli.operation.SQLOperation:runQuery:SQLOperation.java:255', '*org.xml.sax.SAXParseException:Character reference "" is an invalid XML character.:51:4', 'org.apache.xerces.parsers.DOMParser:parse::-1', 'org.apache.xerces.jaxp.DocumentBuilderImpl:parse::-1', 'javax.xml.parsers.DocumentBuilder:parse:DocumentBuilder.java:150', 'org.apache.hadoop.conf.Configuration:parse:Configuration.java:2482', 'org.apache.hadoop.conf.Configuration:loadResource:Configuration.java:2551'], sqlState='08S01', errorCode=1, errorMessage='Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. org.xml.sax.SAXParseException; systemId: file:/tmp/hadoop-root/mapred/staging/root698919480/.staging/job_local698919480_0012/job.xml; lineNumber: 1311; columnNumber: 86; Character reference "" is an invalid XML character.'), operationHandle=None)
```
| open | 2020-05-19T08:22:00Z | 2020-05-19T08:22:00Z | https://github.com/dropbox/PyHive/issues/332 | [] | taogeYT | 0 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 447 | “FileNotFoundError: VOCdevkit dose not in path:'./'.” | I'm a beginner who found this project through Bilibili. After spending two days setting up the environment, running `python train_mobilenetv2.py` produces the following error:
```
Using cuda device training.
Traceback (most recent call last):
File "train_mobilenetv2.py", line 224, in <module>
main()
File "train_mobilenetv2.py", line 65, in main
raise FileNotFoundError("VOCdevkit dose not in path:'{}'.".format(VOC_root))
FileNotFoundError: VOCdevkit dose not in path:'./'.
```
I saw the note in the README: "When using the training scripts, make sure to set '--data-path' (VOC_root) to the root directory where your 'VOCdevkit' folder is located." My intuition tells me the error is related to this, but I don't quite understand that quoted sentence, so I'm not sure what to do...
Then I found the following line in `train_mobilenetv2.py`:
```python
VOC_root = "./" # VOCdevkit
```
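To check my understanding, here is a small stand-alone sketch of what that line seems to require (my own reconstruction based on the error message, not the project's actual code):

```python
import os

def check_voc_root(voc_root: str) -> str:
    """Mirror the script's check: a VOCdevkit folder must sit directly under voc_root."""
    voc_path = os.path.join(voc_root, "VOCdevkit")
    if not os.path.isdir(voc_path):
        # Same style of message as the reported error
        raise FileNotFoundError(f"VOCdevkit dose not in path:'{voc_root}'.")
    return voc_path
```

So with `VOC_root = "./"`, the script apparently expects a folder named `VOCdevkit` (typically the directory produced by extracting the PASCAL VOC dataset archive) in the current working directory.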
I'm wondering whether this error means the "VOCdevkit" root directory is set incorrectly. If I set the root to "./", does "VOCdevkit" then have to sit directly under "./"? I'm quite new to this, so what exactly is "VOCdevkit"? Or am I overthinking it, and "VOCdevkit" is just a folder I create myself? | closed | 2021-12-23T04:48:09Z | 2021-12-25T05:50:02Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/447 | [] | volmodaoist | 1 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 729 | Deprecate configs in version 3? | The [docs indicate](https://flask-sqlalchemy.palletsprojects.com/en/2.x/config/#configuration-keys) that a number of config settings will be deprecated in version 3. Is it too soon to work on removing those settings for version 3? | closed | 2019-05-07T13:02:48Z | 2020-12-05T19:58:32Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/729 | [] | lbeaufort | 11 |
piccolo-orm/piccolo | fastapi | 777 | Nested transactions | ### Discussed in https://github.com/piccolo-orm/piccolo/discussions/729
<div type='discussions-op-text'>
<sup>Originally posted by **powellnorma** December 18, 2022</sup>
Does it make sense to just "ignore" nested transactions instead of throwing a `TransactionError`? Or could this be dangerous?
https://github.com/piccolo-orm/piccolo/blob/d5123e94337fe4ee05816b890cfdbcc4a897009c/piccolo/engine/postgres.py#L142-L146</div> | closed | 2023-02-28T21:39:42Z | 2023-03-01T14:54:21Z | https://github.com/piccolo-orm/piccolo/issues/777 | [
"enhancement"
] | dantownsend | 0 |
netbox-community/netbox | django | 17,732 | Diagram in Upgrade Instructions is Not Readable | ### Change Type
Cleanup (formatting, typos, etc.)
### Area
Installation/upgrade
### Proposed Changes
In the documentation for [Upgrading Netbox](https://netboxlabs.com/docs/netbox/en/stable/installation/upgrading/), there is a diagram showing what version you need to be on to upgrade to the next major release.
The background color of the page is so dark that the information on the diagram is unreadable:

If you click on the image, it opens in a new tab where the background color is white and the information _is_ readable:

It would probably be best to make this image readable without having to click on it to open in a new tab, in my opinion. | closed | 2024-10-10T18:53:15Z | 2025-01-17T03:02:37Z | https://github.com/netbox-community/netbox/issues/17732 | [
"type: documentation",
"status: accepted"
] | MattDRogers | 1 |
pytorch/pytorch | deep-learning | 149,284 | No examples in documentation for masked_fill and masked_fill_ | ### 📚 The doc issue
No examples in documentation for masked_fill and masked_fill_
masked_fill - https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill.html
masked_fill_- https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill_.html
### Suggest a potential alternative/fix
Add usage examples for both methods.
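For example, something along these lines could be added to both pages (values chosen arbitrarily):

```python
import torch

x = torch.zeros(2, 3)
mask = torch.tensor([[True, False, True],
                     [False, True, False]])

# masked_fill: out-of-place, returns a new tensor; x is left unchanged
y = x.masked_fill(mask, 5.0)

# masked_fill_: in-place variant, modifies x itself
x.masked_fill_(mask, 5.0)
```

After this, both `y` and `x` hold `[[5., 0., 5.], [0., 5., 0.]]`; the only difference between the two methods is whether a new tensor is allocated.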
cc @svekars @sekyondaMeta @AlannaBurke | open | 2025-03-17T00:56:48Z | 2025-03-17T15:33:18Z | https://github.com/pytorch/pytorch/issues/149284 | [
"module: docs",
"triaged",
"topic: docs"
] | julurisaichandu | 0 |
dgtlmoon/changedetection.io | web-scraping | 2,761 | [feature] Purely visual monitoring mode | **Version and OS**
N/A
**Is your feature request related to a problem? Please describe.**
Some/many websites change elements regularly, so watching specific elements is not reliable.
**Describe the solution you'd like**
A mode that just compare the visual output (screenshot) of the website would be useful for this.
**Describe the use-case and give concrete real-world examples**
See above
**Additional context**
None
| open | 2024-11-02T11:30:20Z | 2024-11-02T11:30:20Z | https://github.com/dgtlmoon/changedetection.io/issues/2761 | [
"enhancement"
] | mariomadproductions | 0 |
modoboa/modoboa | django | 3,007 | pdfcredentials "strange" filename if "delete after first download" option set to yes | # Impacted versions
* OS Type: Debian
* OS Version: 11
* Database Type: MariaDB
* Database version: 10.5
* Modoboa: 2.1.2
* installer used: No
* Webserver: Nginx
# Steps to reproduce
I upgraded Modoboa yesterday from 2.0.5 to 2.1.2 and had some issues afterwards; this is one of them.
After the upgrade, Modoboa -> Parameters -> PDF credentials had the option "delete PDF after first download" set to yes.
The PDF file created/downloaded for new users after the upgrade was named like `_srv_modoboa_pdfcredentials_$email.pdf`.
After changing this particular option to no, the PDF was created and saved as expected: `$email.pdf`.
The same behavior occurs whenever that option is set to yes.
This is minor, just wanted to report it. | closed | 2023-05-17T12:53:43Z | 2023-06-30T14:36:09Z | https://github.com/modoboa/modoboa/issues/3007 | [
"bug"
] | xinomilo | 0 |