| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ijl/orjson | numpy | 2 | ImportError on CentOS 7 | Installed with:
```
pip3 install --user --upgrade orjson
```
Fails on import:
```
import orjson
ImportError: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /home/davidgaleano/.local/lib/python3.6/site-packages/orjson.cpython-36m-x86_64-linux-gnu.so)
```
The Python version is 3.6.3, and the CentOS version is 7.4.1708.
| closed | 2018-12-20T10:29:16Z | 2020-02-20T15:54:55Z | https://github.com/ijl/orjson/issues/2 | [] | davidgaleano | 9 |
postmanlabs/httpbin | api | 381 | flask_common imported, but not used for anything | Unless I'm missing something, this commit seems a bit bizarre: https://github.com/kennethreitz/httpbin/commit/a39de83be1b7330f6a99981bf54152c525847299 . Neither it nor any subsequent commit seems to *do* anything with common - it just pulls it in, does the initial setup, and...nothing else. Is it just a vestige of some plan that didn't go ahead? | closed | 2017-08-30T15:32:35Z | 2017-08-31T14:20:53Z | https://github.com/postmanlabs/httpbin/issues/381 | [] | AdamWill | 3 |
gunthercox/ChatterBot | machine-learning | 1,415 | Make Django and Sqlalchemy ext names consistent | They are currently `sqlalchemy_app` and `django_chatterbot`. Making these consistently named would be nice.
* Perhaps `django_integration` and `sqlalchemy_integration`? | closed | 2018-09-20T00:22:05Z | 2019-11-11T12:23:10Z | https://github.com/gunthercox/ChatterBot/issues/1415 | [] | gunthercox | 2 |
huggingface/datasets | numpy | 7,193 | Support of num_workers (multiprocessing) in map for IterableDataset | ### Feature request
Currently, `IterableDataset` doesn't support setting `num_workers` in `.map()`, which results in slow processing. Could we add support for it? Since `.map()` can be run in batch fashion (e.g., `batch_size` defaults to 1000 in `datasets`), it seems as doable for `IterableDataset` as for the regular `Dataset`.
### Motivation
Improving data processing efficiency
### Your contribution
Testing | open | 2024-10-02T18:34:04Z | 2024-10-03T09:54:15Z | https://github.com/huggingface/datasets/issues/7193 | [
"enhancement"
] | getao | 1 |
BeanieODM/beanie | asyncio | 166 | [Bug] Encrypted Binary fields resulting in Binary type with default subtype instead of user defined subtype. | Hi,
I have noticed in one of my projects that storing data as bson Binary (using encrypted fields, binary subtype = 6) results in the data getting stored as Binary with the default subtype (subtype = 0).
I checked and found that this is because the Encoder class treats the Binary type as an instance of bytes and converts it into Binary; since the default subtype is 0, the stored subtype ends up as 0.
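The mechanism is easy to reproduce with a minimal stand-in class (a hypothetical sketch, not Beanie's actual encoder): the real `bson.binary.Binary` also subclasses `bytes`, so an `isinstance(value, bytes)` branch matches it and re-wraps it with the default subtype.

```python
class Binary(bytes):
    """Minimal stand-in for bson.binary.Binary; the real class also subclasses bytes."""
    def __new__(cls, data, subtype=0):
        obj = super().__new__(cls, data)
        obj.subtype = subtype
        return obj

value = Binary(b"ciphertext", subtype=6)  # user-defined subtype

# An encoder branch like `if isinstance(value, bytes): value = Binary(value)`
# matches Binary too, because Binary IS a bytes instance...
assert isinstance(value, bytes)

# ...and the re-wrap silently falls back to the default subtype:
re_encoded = Binary(bytes(value))
assert re_encoded.subtype == 0  # the user-defined subtype 6 is lost
```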
Edit: Added PR https://github.com/roman-right/beanie/pull/167 | closed | 2021-12-14T10:37:11Z | 2021-12-14T18:10:10Z | https://github.com/BeanieODM/beanie/issues/166 | [] | UtkarshMish | 1 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,082 | UC getting detected on a website (protected by DataDome) whereas original browser is not | Hi, I'm trying to open a [website](https://casesearch.courts.state.md.us/casesearch/processDisclaimer.jis), but after accepting the T&C it shows a captcha, and after solving it I get blocked when using UC, while normal Google Chrome works fine. I've also tried using NordVPN, but no luck.
From what I know they are using datadome for security. Does anyone have a workaround to bypass this? | open | 2023-02-21T15:50:46Z | 2023-04-22T16:30:03Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1082 | [] | ignis-tech-solutions | 14 |
yt-dlp/yt-dlp | python | 12,164 | PBS An extractor error has occurred. (caused by KeyError('title')) | This just happened today. I have never had this problem before.
```
url=https://www.pbs.org/video/lost-tombs-of-notre-dame-ayjw0k/
yt-dlp -F $url
[pbs] Downloading JSON metadata
[pbs] Extracting URL: https://www.pbs.org/video/lost-tombs-of-notre-dame-ayjw0k/
[pbs] lost-tombs-of-notre-dame-ayjw0k: Downloading webpage
[pbs] Downloading widget/partnerplayer page
[pbs] Downloading portalplayer page
ERROR: An extractor error has occurred. (caused by KeyError('title')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
This happens with any url from PBS now.
`yt-dlp --force-generic-extractor -F $url`
Did not work.
| closed | 2025-01-22T16:30:20Z | 2025-01-22T16:37:00Z | https://github.com/yt-dlp/yt-dlp/issues/12164 | [
"duplicate",
"invalid",
"site-bug"
] | PacoH | 2 |
gradio-app/gradio | data-visualization | 10,006 | Prediction freezing when gradio is hosted on ECS and behind a proxy. | ### Describe the bug
My gradio app is having issues displaying a prediction. It just hangs and never finishes. It works the first time the task is booted up and then never again.
It's hosted on an AWS ECS Service
- Which is in a VPC
- Which is behind a Application Load Balancer
- Which is behind a Cloudfront distribution
```
import gradio as gr
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import os

def model_exists(model_dir):
    # Check for required model files
    required_files = ["pytorch_model.bin"]
    return all(os.path.exists(os.path.join(model_dir, file)) for file in required_files)

def main():
    model_id = "distilbert-base-uncased-finetuned-sst-2-english"
    model_dir = "/opt/program/model"

    # Check if model directory exists, if not download and save the model
    if not model_exists(model_dir):
        model = AutoModelForSequenceClassification.from_pretrained(model_id)
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model.save_pretrained(model_dir)
        tokenizer.save_pretrained(model_dir)

    clf = pipeline("sentiment-analysis", model=model_dir)

    def sentiment(payload):
        prediction = clf(payload, return_all_scores=True)
        # convert list to dict
        result = {}
        for pred in prediction[0]:
            result[pred["label"]] = pred["score"]
        return result

    demo = gr.Interface(
        fn=sentiment,
        inputs=gr.Textbox(placeholder="Enter a positive or negative sentence here..."),
        outputs="label",
        examples=[["I Love Serverless Machine Learning"], ["Running Gradio on AWS Lambda is amazing"]],
        allow_flagging="never",
        analytics_enabled=False,
        concurrency_limit=8
    )
    demo.launch(
        server_port=8501,
        server_name="0.0.0.0",
        strict_cors=False
    )

if __name__ == "__main__":
    main()
```
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
### Screenshot

### Logs
```shell
no stack traces in the logs
```
### System Info
```shell
gradio 5.4
```
### Severity
Blocking usage of gradio | closed | 2024-11-20T09:46:17Z | 2024-11-22T02:05:31Z | https://github.com/gradio-app/gradio/issues/10006 | [
"bug",
"cloud"
] | dahnny012 | 5 |
sammchardy/python-binance | api | 1,149 | FUTURE_ORDER_TYPE_LIMIT_MAKER orders not possible? | Any clue why we can't place LIMIT_MAKER orders, while any other order type works?
```python
resultb = client.futures_create_order(
    symbol='BTCUSDT',
    side=Client.SIDE_BUY,
    positionSide="LONG",
    type=Client.FUTURE_ORDER_TYPE_LIMIT_MAKER,
    quantity=0.01,
    price=25000,
    timeInForce='GTC')
print(resultb)
```
`binance.exceptions.BinanceAPIException: APIError(code=-1116): Invalid orderType.` | open | 2022-02-22T09:17:07Z | 2023-06-21T03:38:02Z | https://github.com/sammchardy/python-binance/issues/1149 | [] | 1-NoLimits | 4 |
microsoft/nni | machine-learning | 5,038 | Can I use NetAdapt with YOLOv5? | **Describe the issue**:
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | open | 2022-08-01T10:26:41Z | 2022-08-04T01:49:59Z | https://github.com/microsoft/nni/issues/5038 | [] | mumu1431 | 1 |
pywinauto/pywinauto | automation | 1,031 | How to automate Pycharm? |
## Short Example of Code to Demonstrate the Problem
```
app = Application(backend='uia').connect(process=11404)
win = app.window()
win.print_control_identifiers()
win.menu_select('Edit->Copy')
```
but it does not work.
| open | 2021-01-03T12:58:39Z | 2021-01-04T15:50:12Z | https://github.com/pywinauto/pywinauto/issues/1031 | [] | kyowill | 1 |
LAION-AI/Open-Assistant | machine-learning | 3,702 | Can't open new chat. | Tried opening a new chat. Nothing happens. Cleared cache and cookies, several different browsers, no change. | closed | 2023-09-27T01:38:19Z | 2023-10-01T14:40:16Z | https://github.com/LAION-AI/Open-Assistant/issues/3702 | [] | NK-UT | 1 |
chaoss/augur | data-visualization | 2,636 | Issue Resolution Duration metric API | The canonical definition is here: https://chaoss.community/?p=3630 | open | 2023-11-30T18:06:18Z | 2024-05-28T21:36:14Z | https://github.com/chaoss/augur/issues/2636 | [
"API",
"first-timers-only"
] | sgoggins | 1 |
python-restx/flask-restx | api | 610 | Swagger doc page wont render when deployed in kubernetes | **Ask a question**
I have written a Flask app using `flask-restx`. The docs render properly when I run locally, but not when deployed in k8s.
I run locally via a \_\_main\_\_ `if` in my main file:
```python
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8899, debug=True)
```
This runs fine, and the swagger docs are rendered properly at 127.0.0.1:8899/
In my k8s deployment, I use `gunicorn --bind 0.0.0.0:8899 ...` to run the app. I have a `Service` that maps 8899 to 8080 and an `Ingress` that exposes my `Service` on the path "/my-app(/|$)(.*)"
This all seems to work such that when I hit the app's endpoints in postman (i.e. my-host.com/my-app/endpointx), I get the responses I'm looking for. However, when I navigate to my-host.com/my-app/, the expected swagger docs do not render. Instead I get a blank page.
Investigating the HTML, I see that the `<head>` block is rendered properly (so, for instance, the page has `<title>My App</title>`, as defined in the code with
```python
api = Api(app, title="My App" ...)
```
But within the `<body>` block, the div which – locally – contains the main content: `<div id="swagger-ui">`, is completely empty in the k8s deployment.
The last difference I've been able to find is in `swagger.json`. The swagger json exists at my-host.com/my-app/swagger.json<sup>1</sup>. The big difference I notice is that at the top level of the json, locally I have `"basePath": "\/",` whereas in k8s I have `"basePath": "/",`. All other paths are similarly escaped locally but not in k8s. e.g. local:
```json
"paths": {
"\/path\/endpoint": {
```
k8s:
```json
"paths": {"/path/endpoint": {
```
Any ideas what is going on?
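The escaping difference, at least, can be ruled out with a quick check (Python stdlib `json`): JSON treats an escaped forward slash as identical to a bare one, so the `\/` vs `/` difference is purely cosmetic.

```python
import json

# JSON allows a forward slash to be escaped; both spellings parse identically,
# so the local/k8s swagger.json difference above is purely cosmetic.
local = json.loads('{"basePath": "\\/"}')   # the local file literally contains "\/"
k8s = json.loads('{"basePath": "/"}')
assert local == k8s == {"basePath": "/"}
```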
<sup>1: when I navigate here in my browser the file renders in a single wrapped line, whereas locally http://127.0.0.1:8899/swagger.json is pretty-printed. This isn't a big deal that I need to solve I don't think, but it is a curious difference between the local run and the k8s deployment</sup> | closed | 2024-07-26T16:16:13Z | 2024-07-29T15:49:39Z | https://github.com/python-restx/flask-restx/issues/610 | [
"question"
] | will-m-buchanan | 4 |
stanford-oval/storm | nlp | 223 | There is no API used in the website. | Some of the interfaces in the demo website are not present in the code, such as filling the history record, retrieving information about the next expert, and the identification of citations.
website:https://storm.genie.stanford.edu | closed | 2024-10-18T01:44:08Z | 2025-01-04T23:35:57Z | https://github.com/stanford-oval/storm/issues/223 | [] | huxinggg | 2 |
kennethreitz/responder | graphql | 358 | Recommended way to change request method before handler | What I'm trying to achieve is that a POST request with certain params will be treated as a DELETE request. (Doing this for users w/o JavaScript)
```html
<form method="POST" action="/posts/{{ post.uuid }}">
<input type="hidden" name="_method" value="DELETE">
<input type="hidden" name="_entry_id" value="{{ post.uuid }}">
<button type="submit" class="btn btn-outline-danger">Delete</button>
</form>
```
I thought I could just go with
```python
@api.route(before_request=True)
async def check_hidden_method(req, resp):
if req.method == "post":
data = await req.media()
try:
if data['_method'].lower() in ["delete", "put", "patch"]:
req.method = data['_method']
print("Changed request method to", data['_method'])
except KeyError:
pass
```
... but it says I "can't set attributes" when trying `req.method = data['_method']`.
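The `AttributeError` is consistent with `method` being exposed as a read-only property (a property with no setter). A minimal stand-in — a hypothetical sketch, not responder's actual implementation — fails the same way:

```python
class FakeRequest:
    """Hypothetical sketch of a request object exposing `method` read-only."""
    def __init__(self, method):
        self._method = method

    @property
    def method(self):  # no @method.setter defined -> assignment raises
        return self._method

req = FakeRequest("post")
try:
    req.method = "delete"
except AttributeError as exc:
    print(exc)  # e.g. "can't set attribute" (exact wording varies by Python version)
```

Given that, a workaround that avoids mutating the request object is to dispatch on `data['_method']` inside `on_post` instead.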
What is the recommended way to have that request processed by my on_delete()? | closed | 2019-05-09T10:22:50Z | 2019-05-18T21:19:06Z | https://github.com/kennethreitz/responder/issues/358 | [] | smashnet | 3 |
mlfoundations/open_clip | computer-vision | 707 | More Flexibility In Setting Learning Rates | With the current setup, the same learning rate is applied to non gain or bias params of the text and image encoders. It would be nice to have flexibility in setting these. For instance, the [SigLIP paper](https://arxiv.org/abs/2303.15343) gets peak performance with pretrained image encoders by disabling weight decay on the image encoder (though I'm not sure if that's the `trunk`, `head`, or both). Here's the figure from the paper for reference:
<img width="350" alt="CleanShot 2023-10-25 at 17 30 28@2x" src="https://github.com/mlfoundations/open_clip/assets/17111474/d8174a33-23fa-453f-a4f0-a82feab44405">
I'm not sure what the best mechanism to accomodate various use cases would be. One more useful fine-tuning setup I can imagine is setting differential learning rates for diff parts of the network. | open | 2023-10-25T21:31:24Z | 2023-10-25T21:31:24Z | https://github.com/mlfoundations/open_clip/issues/707 | [] | rsomani95 | 0 |
521xueweihan/HelloGitHub | python | 2,734 | [Project recommendation] Teammates: designed for educators to support online feedback and peer-review activities | ## Recommended project
<!-- This is the entry for recommending projects to the HelloGitHub monthly. Self-recommendations and recommendations of open-source projects are welcome; the only requirement is to introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content right away -->
<!-- Only open-source projects hosted on GitHub are accepted; please provide the GitHub project URL -->
- Project URL: https://github.com/TEAMMATES/teammates
<!-- Please choose from (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning) -->
- Category: Java, TypeScript
<!-- Please describe what it does in about 20 characters, like an article headline, so it's clear at a glance -->
- Project title: Teammates is an open-source online tool designed for educators to support online feedback and peer-evaluation activities. The project is maintained and developed by the School of Computing at the National University of Singapore (NUS). It lets instructors create evaluation sessions in which students can give each other anonymous or named feedback on their performance.
<!-- What is this project, what can it do, what features does it have or what pain points does it solve, what scenarios does it suit, and what can beginners learn from it. Length 32-256 characters -->
- Project description:
  Teammates' main features include:
  - Multiple evaluation types: team evaluations, individual feedback, self-evaluations, and more.
  - Flexible feedback collection: instructors can customize questions and feedback types, including multiple choice, ratings, and free-text answers.
  - Highly configurable: instructors can set detailed rules controlling when evaluations open and close and how participants interact.
  - Automation and management tools: automated email notifications, statistics, and data-analysis tools help instructors manage courses and evaluation results.
  - Privacy and anonymity: Teammates supports anonymous evaluations, helping students give candid feedback while protecting their privacy.
  As an educational tool, Teammates is well suited to universities and other higher-education institutions for promoting interaction and feedback among students and helping instructors better understand students' learning progress and group dynamics.
<!-- What makes it stand out? How does it differ from similar projects? -->
- Highlights: Independently open-sourced instructor/peer-evaluation systems are rare, and Teammates is self-hostable: educational institutions can run it on their own servers, which makes it a viable option for institutions with stricter data-control and security requirements.
- Example code: (optional)
- Screenshot: (optional) gif/png/jpg
- Follow-up update plan:
| open | 2024-04-19T21:19:23Z | 2024-04-24T11:57:08Z | https://github.com/521xueweihan/HelloGitHub/issues/2734 | [
"Java 项目"
] | RuimingShen | 0 |
opengeos/leafmap | streamlit | 292 | User not specified (leafmap.connect_postgis function) |
I tried this code to load a shapefile into my web app (Streamlit), and I got this error after running it:
> ValueError: user is not specified.
> Traceback:
> File "c:\users\pc\pycharmprojects\webgis_cad\venv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 556, in _run_script
> exec(code, module.__dict__)
> File "C:\Users\pc\PycharmProjects\WebGIS_CAD\main.py", line 56, in <module>
> database="IFE_WEBGIS", host="localhost", user="postgres", password="XXXXXXXXX", use_env_var=True
> File "c:\users\pc\pycharmprojects\webgis_cad\venv\lib\site-packages\leafmap\common.py", line 1791, in connect_postgis
> raise ValueError("user is not specified.")
The code used:
```python
con = leafmap.connect_postgis(
    database="IFE_WEBGIS", host="localhost", user="postgres", password="XXXXXXXXX", use_env_var=True
)
```
Thank you,
| closed | 2022-10-03T17:52:44Z | 2022-11-05T02:06:13Z | https://github.com/opengeos/leafmap/issues/292 | [] | REDADRISSI | 3 |
graphql-python/graphene-django | graphql | 617 | Update/Create Mutation question | Hey guys,
I'm struggling to find a good example or best practice to create an elegant way for update/create mutations for related models.
Let say you have a model Address with OneToOne relation with (City, State, Zipcode)
```
class ZipCode(models.Model):
    zip = models.IntegerField(_("Zip"), blank=False)

    def __str__(self):
        return self.zip

    class Meta:
        verbose_name_plural = _('Zip codes')


class State(models.Model):
    state = models.CharField(_("State"), max_length=12, blank=False)

    def __str__(self):
        return self.state

    class Meta:
        verbose_name_plural = _('States')


class City(models.Model):
    city = models.CharField(_("City"), max_length=12, blank=False)

    def __str__(self):
        return self.city

    class Meta:
        verbose_name_plural = _('Cities')


class Address(models.Model):
    line_1 = models.CharField(_("Address Line 1"), max_length=256, blank=True)
    line_2 = models.CharField(_("Address Line 2"), max_length=256, blank=True)
    state = models.OneToOneField(
        State,
        related_name=_("address_state"),
        verbose_name=_("State"),
        on_delete=models.CASCADE,
        null=True
    )
    city = models.OneToOneField(
        City,
        related_name=_("address_city"),
        verbose_name=_("City"),
        on_delete=models.CASCADE,
        null=True
    )
    zipcode = models.OneToOneField(
        ZipCode,
        related_name=_("address_zipcode"),
        verbose_name=_("ZipCode"),
        on_delete=models.CASCADE,
        null=True)

    def __str__(self):
        return self.id

    class Meta:
        abstract = True
        verbose_name_plural = _("Addresses")
```
How would you create a mutation in a elegant way that can accept query like:
```
mutation {
updateAddress(addressData: {line_1: "Test"}, zipData: { zip: "12345" }, cityData: {city: "New York"}, stateData: { state: "NY" }) { ... }
``` | closed | 2019-04-13T11:06:39Z | 2019-04-15T20:41:27Z | https://github.com/graphql-python/graphene-django/issues/617 | [] | MarkoKauzlaric | 0 |
marimo-team/marimo | data-visualization | 3,973 | improve column selection in data tables | ### Description
This is a pretty cool feature that's useful for data analysis and investigation. I use DataGrip so I miss this feature in marimo. It also exists in Excel/Google sheets but isn't as convenient (multiple clicks) to use.
**DataGrip**
<img width="450" alt="Image" src="https://github.com/user-attachments/assets/16eb1af8-6de7-481c-b47b-f92bc81561e3" />
https://www.jetbrains.com/help/datagrip/tables-filter.html#use_the_local_filter
**Google sheets**
<img width="200" alt="Image" src="https://github.com/user-attachments/assets/91b14791-e774-44ee-9fcb-2ad93b6ad7ad" />
### Suggested solution
Likely need to add more backend functions to perform this search and call them from the frontend. Also need to keep in mind multi column selection.
<img width="450" alt="Image" src="https://github.com/user-attachments/assets/dc3b0c3a-20cd-44c2-98c7-843d89b779fe" />
I think we could simplify the filtering, allow search & selection here.
### Alternative
_No response_
### Additional context
may need to set a limit for large tables / prevent long computation. | open | 2025-03-04T15:38:12Z | 2025-03-18T21:55:04Z | https://github.com/marimo-team/marimo/issues/3973 | [
"enhancement"
] | Light2Dark | 1 |
kaliiiiiiiiii/Selenium-Driverless | web-scraping | 186 | Does `execute_cdp_cmd` not support `Network.enable`? | cdp_socket.exceptions.CDPError: {'code': -32601, 'message': "'Network.enable' wasn't found"} | closed | 2024-03-11T11:19:40Z | 2024-03-12T00:43:23Z | https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/186 | [] | langhuihui | 2 |
Miserlou/Zappa | flask | 1,605 | All responses are base64 encoded | It seems that Zappa always responds with a base64 encoded string for the response body and "isBase64Encoded" set to true.
The relevant file is the "handler.py" file provided/generated by Zappa.
On line 513, we have this if statement:
```python
if not response.mimetype.startswith("text/") \
or response.mimetype != "application/json":
```
The condition's negation is:
```python
response.mimetype.startswith("text/") and response.mimetype == "application/json"
```
Clearly this is a contradiction, so the condition itself is a tautology. The true branch is taken and Zappa will always base64 encode responses.
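A quick brute-force check confirms the tautology for representative mimetypes:

```python
def condition(mimetype):
    # the check from handler.py line 513, as quoted above
    return (not mimetype.startswith("text/")) or mimetype != "application/json"

# No string can both start with "text/" and equal "application/json",
# so at least one disjunct is always true and every response is base64-encoded.
for mt in ["text/html", "text/plain", "application/json", "image/png"]:
    assert condition(mt)
```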
It seems that the intention of the condition is to NOT base64 encode responses that are text (but to take a conservative approach to determine what is "text").
It seems that what we want is to NOT base64 encode when this condition is true:
```python
response.mimetype.startswith('text/') or response.mimetype == 'application/json'
```
If this is the case, the condition starting on line 513 should instead be:
```python
not response.mimetype.startswith('text/') and response.mimetype != 'application/json'
```
I noticed this while using Zappa 0.46.2
| open | 2018-09-11T06:24:10Z | 2020-10-06T02:28:06Z | https://github.com/Miserlou/Zappa/issues/1605 | [] | joeldentici | 6 |
sinaptik-ai/pandas-ai | data-science | 982 | "polars" is required in new version | ### System Info
pandas 2.0.2, windows
### 🐛 Describe the bug
I have error:
```
ModuleNotFoundError: No module named 'polars'
Traceback:
File "\.env\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "\main.py", line 13, in <module>
from pandasai.llm.openai import OpenAI
File "\.env\Lib\site-packages\pandasai\__init__.py", line 7, in <module>
from pandasai.smart_dataframe import SmartDataframe
File "\.env\Lib\site-packages\pandasai\smart_dataframe\__init__.py", line 26, in <module>
from pandasai.agent.base import Agent
File "\.env\Lib\site-packages\pandasai\agent\__init__.py", line 1, in <module>
from .base import Agent
File "\.env\Lib\site-packages\pandasai\agent\base.py", line 8, in <module>
from pandasai.pipelines.chat.chat_pipeline_input import (
File "\.env\Lib\site-packages\pandasai\pipelines\__init__.py", line 3, in <module>
from .pipeline import Pipeline
File "\.env\Lib\site-packages\pandasai\pipelines\pipeline.py", line 8, in <module>
from pandasai.helpers.query_exec_tracker import QueryExecTracker
File "\.env\Lib\site-packages\pandasai\helpers\query_exec_tracker.py", line 10, in <module>
from pandasai.connectors import BaseConnector
File "\.env\Lib\site-packages\pandasai\connectors\__init__.py", line 10, in <module>
from .polars import PolarsConnector
File "\.env\Lib\site-packages\pandasai\connectors\polars.py", line 9, in <module>
import polars as pl
``` | closed | 2024-03-03T12:28:12Z | 2024-03-16T16:19:58Z | https://github.com/sinaptik-ai/pandas-ai/issues/982 | [] | PavelAgurov | 5 |
deepspeedai/DeepSpeed | machine-learning | 5,663 | AssertionError: Unable to pre-compile ops without torch installed. Please install torch before attempting to pre-compile ops. | Environment: Windows 11
```
(venv) C:\sd\HunyuanDiT>pip install deepspeed==0.6.3
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting deepspeed==0.6.3
Downloading deepspeed-0.6.3.tar.gz (554 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 554.6/554.6 kB 11.6 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
[WARNING] Unable to import torch, pre-compiling ops will be disabled. Please visit https://pytorch.org/ to see how to properly install torch on your system.
[WARNING] unable to import torch, please install it if you want to pre-compile any deepspeed ops.
DS_BUILD_OPS=1
Traceback (most recent call last):
File "C:\sd\HunyuanDiT\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\sd\HunyuanDiT\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\sd\HunyuanDiT\venv\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\nitin\AppData\Local\Temp\pip-build-env-9vra48g9\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\nitin\AppData\Local\Temp\pip-build-env-9vra48g9\overlay\Lib\site-packages\setuptools\build_meta.py", line 295, in _get_build_requires
self.run_setup()
File "C:\Users\nitin\AppData\Local\Temp\pip-build-env-9vra48g9\overlay\Lib\site-packages\setuptools\build_meta.py", line 487, in run_setup
super().run_setup(setup_script=setup_script)
File "C:\Users\nitin\AppData\Local\Temp\pip-build-env-9vra48g9\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 118, in <module>
AssertionError: Unable to pre-compile ops without torch installed. Please install torch before attempting to pre-compile ops.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
```
(venv) C:\sd\HunyuanDiT>python -c "import torch; print('torch:', torch.__version__, torch)"
torch: 2.0.1+cu117 <module 'torch' from 'C:\\sd\\HunyuanDiT\\venv\\lib\\site-packages\\torch\\__init__.py'>
(venv) C:\sd\HunyuanDiT>python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
CUDA available: True
```
PIP LIST
```
(venv) C:\sd\HunyuanDiT>pip list
Package Version
------------------------- ------------
accelerate 0.29.3
aiofiles 23.2.1
altair 5.3.0
annotated-types 0.7.0
anyio 4.4.0
attrs 23.2.0
certifi 2022.12.7
charset-normalizer 2.1.1
click 8.1.7
colorama 0.4.6
coloredlogs 15.0.1
contourpy 1.2.1
cuda-python 11.7.1
cycler 0.12.1
Cython 3.0.10
diffusers 0.21.2
dnspython 2.6.1
einops 0.7.0
email_validator 2.1.1
exceptiongroup 1.2.1
fastapi 0.111.0
fastapi-cli 0.0.4
ffmpy 0.3.2
filelock 3.13.1
flatbuffers 24.3.25
fonttools 4.53.0
fsspec 2024.6.0
gradio 3.50.2
gradio_client 0.6.1
h11 0.14.0
httpcore 1.0.5
httptools 0.6.1
httpx 0.27.0
huggingface-hub 0.23.4
humanfriendly 10.0
idna 3.4
importlib_metadata 7.1.0
importlib_resources 6.4.0
Jinja2 3.1.3
jsonschema 4.22.0
jsonschema-specifications 2023.12.1
kiwisolver 1.4.5
loguru 0.7.2
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.0
mdurl 0.1.2
mpmath 1.3.0
networkx 3.2.1
numpy 1.26.3
onnx 1.12.0
onnx-graphsurgeon 0.3.27
onnxruntime 1.12.1
orjson 3.10.5
packaging 24.1
pandas 2.0.3
peft 0.10.0
pillow 10.2.0
pip 24.0
polygraphy 0.47.1
protobuf 3.19.0
psutil 5.9.8
pydantic 2.7.4
pydantic_core 2.18.4
pydub 0.25.1
Pygments 2.18.0
pyparsing 3.1.2
pyreadline3 3.4.1
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.9
pytz 2024.1
PyYAML 6.0.1
referencing 0.35.1
regex 2024.5.15
requests 2.28.1
rich 13.7.1
rpds-py 0.18.1
safetensors 0.4.3
semantic-version 2.10.0
sentencepiece 0.1.99
setuptools 63.2.0
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
starlette 0.37.2
sympy 1.12
timm 0.9.5
tokenizers 0.15.2
toolz 0.12.1
torch 2.0.1+cu117
torchaudio 2.0.2+cu117
torchvision 0.15.2+cu117
tqdm 4.66.4
transformers 4.39.1
typer 0.12.3
typing_extensions 4.9.0
tzdata 2024.1
ujson 5.10.0
urllib3 1.26.13
uvicorn 0.30.1
watchfiles 0.22.0
websockets 11.0.3
win32-setctime 1.1.0
zipp 3.19.2
```
```
(venv) C:\sd\HunyuanDiT>python --version
Python 3.10.6
```
Please let me know if any other information is required | closed | 2024-06-14T19:21:21Z | 2024-08-23T00:44:02Z | https://github.com/deepspeedai/DeepSpeed/issues/5663 | [] | nitinmukesh | 12 |
huggingface/text-generation-inference | nlp | 2,417 | torch.cuda.OutOfMemoryError: CUDA out of memory. Why isn't it handle by the queue system ? | ### System Info
text-generation-inference v2.2.0
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Exception ignored in: <function Server.__del__ at 0xXXXXXXXXX>
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/grpc/aio/_server.py", line 194, in __del__
    cygrpc.schedule_coro_threadsafe(
  File "src/python/grpcio/grpc/_cython/_cygrpc/aio/common.pyx.pxi", line 120, in grpc._cython.cygrpc.schedule_coro_threadsafe
  File "src/python/grpcio/grpc/_cython/_cygrpc/aio/common.pyx.pxi", line 112, in grpc._cython.cygrpc.schedule_coro_threadsafe
  File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 436, in create_task
    self._check_closed()
  File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 515, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
sys:1: RuntimeWarning: coroutine 'AioServer.shutdown' was never awaited
Task exception was never retrieved
Error: ShardFailed
future: <Task finished name='HandleExceptions[/generate.v2.TextGenerationService/Prefill]' coro=<<coroutine without __name__>()> exception=SystemExit(1)>
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/text_generation_server/interceptor.py", line 21, in intercept
    return await response
  File "/opt/conda/lib/python3.10/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 82, in _unary_interceptor
    raise error
  File "/opt/conda/lib/python3.10/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 73, in _unary_interceptor
    return await behavior(request_or_iterator, context)
  File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 142, in Prefill
    generations, next_batch, timings = self.model.generate_token(batch)
  File "/opt/conda/lib/python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/flash_causal_lm.py", line 1141, in generate_token
    prefill_logprobs_tensor = torch.log_softmax(out, -1)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.50 GiB. GPU has a total capacity of 79.15 GiB of which 6.94 GiB is free. Process 63385 has 72.21 GiB memory in use. Of the allocated memory 69.81 GiB is allocated by PyTorch, and 309.29 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
### Expected behavior
Actually, the service reboot so all the requests on the queue and the ones running goes:
openai.InternalServerError: upstream
connect error or disconnect/reset before headers. reset reason: connection failure, transport failure reason: delayed connect error: 111
To avoid this, it would be really good to check the future memory in place before adding too big info. It could be added as a critera to decide if it needs to go on the queue or not before treating request that we know will blow up the system.
As a stopgap, I'm doing a manual pre-check on the length of all the payloads of all the requests I send to avoid this. | open | 2024-08-14T13:26:48Z | 2024-08-14T13:28:01Z | https://github.com/huggingface/text-generation-inference/issues/2417 | [] | JustAnotherVeryNormalDeveloper | 0 |
microsoft/nni | pytorch | 5,349 | How to get the sensitivity pruner results? | **Describe the issue**:
How do I get the final sensitivity pruner results? There is no mask to pass to ModelSpeedup.
**Environment**: Linux
- NNI version:2.10
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:3.8
- PyTorch/TensorFlow version:pytorch1.8.1
- Is conda/virtualenv/venv used?:yes
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | closed | 2023-02-14T09:01:34Z | 2023-03-09T02:02:39Z | https://github.com/microsoft/nni/issues/5349 | [] | OberstWB | 2 |
twopirllc/pandas-ta | pandas | 880 | ImportError: cannot import name 'NaN' from 'numpy' (/Users/anees.a/.venv/lib/python3.12/site-packages/numpy/__init__.py). Did you mean: 'nan'? | **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```bash
pip3 install pandas_ta
Collecting pandas_ta
Downloading pandas_ta-0.3.14b.tar.gz (115 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: pandas in /Users/anees.a/.venv/lib/python3.12/site-packages (from pandas_ta) (2.2.3)
Requirement already satisfied: numpy>=1.26.0 in /Users/anees.a/.venv/lib/python3.12/site-packages (from pandas->pandas_ta) (2.1.3)
Requirement already satisfied: python-dateutil>=2.8.2 in /Users/anees.a/.venv/lib/python3.12/site-packages (from pandas->pandas_ta) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in /Users/anees.a/.venv/lib/python3.12/site-packages (from pandas->pandas_ta) (2024.1)
Requirement already satisfied: tzdata>=2022.7 in /Users/anees.a/.venv/lib/python3.12/site-packages (from pandas->pandas_ta) (2024.1)
Requirement already satisfied: six>=1.5 in /Users/anees.a/.venv/lib/python3.12/site-packages (from python-dateutil>=2.8.2->pandas->pandas_ta) (1.16.0)
Building wheels for collected packages: pandas_ta
Building wheel for pandas_ta (pyproject.toml) ... done
Created wheel for pandas_ta: filename=pandas_ta-0.3.14b0-py3-none-any.whl size=218986 sha256=6d59621e9c405876539bffef379a5edc455d8910818bc1ca8260f0bbde756b5a
Stored in directory: /Users/anees.a/Library/Caches/pip/wheels/fd/ed/18/2a12fd1b7906c63efca6accb351929f2c7f6bbc674e1c0ba5d
Successfully built pandas_ta
Installing collected packages: pandas_ta
Successfully installed pandas_ta-0.3.14b0
```
**Do you have _TA Lib_ also installed in your environment?**
Yes
```bash
brew install ta-lib
```
**Have you tried the _development_ version? Did it resolve the issue?**
No
**Describe the bug**
```bash
python3
Python 3.12.4 (main, Jun 6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
>>> import pandas_ta as ta
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/anees.a/.venv/lib/python3.12/site-packages/pandas_ta/__init__.py", line 116, in <module>
from pandas_ta.core import *
File "/Users/anees.a/.venv/lib/python3.12/site-packages/pandas_ta/core.py", line 18, in <module>
from pandas_ta.momentum import *
File "/Users/anees.a/.venv/lib/python3.12/site-packages/pandas_ta/momentum/__init__.py", line 34, in <module>
from .squeeze_pro import squeeze_pro
File "/Users/anees.a/.venv/lib/python3.12/site-packages/pandas_ta/momentum/squeeze_pro.py", line 2, in <module>
from numpy import NaN as npNaN
ImportError: cannot import name 'NaN' from 'numpy' (/Users/anees.a/.venv/lib/python3.12/site-packages/numpy/__init__.py). Did you mean: 'nan'?
>>> exit()
```
**To Reproduce**
Do the above
**Expected behavior**
Successful import
**Additional context**
This suggestion from ChatGPT worked for me.

| closed | 2025-01-27T11:57:00Z | 2025-01-27T20:57:06Z | https://github.com/twopirllc/pandas-ta/issues/880 | [
"bug",
"duplicate"
] | aneeskA | 0 |
HumanSignal/labelImg | deep-learning | 592 | Deleted Annotations | I am trying to use labelImg to identify 2 classes. I am working on Windows 10.
All goes fine for a while, but I am labelling several thousand images, so at some point I stop and my computer turns off. When I log back in, restart labelImg, open the directory, and label the next image, it has swapped every single one of my annotations to the opposite class.
I thought: OK, surely an easy fix. I went to the first picture, clicked "Edit Label", and swapped it back, and it deleted every single one of the boxes I had drawn.
The generated .txt files are gone, nowhere to be found: immediately deleted with no apparent way to get them back. This has happened twice now, losing me days' worth of work, so I really hope there is a fix for this.
It seems it is safer to just not try and close labelimg and do all labelling in one go to avoid this error. | open | 2020-05-07T10:11:53Z | 2020-05-07T10:11:53Z | https://github.com/HumanSignal/labelImg/issues/592 | [] | Smellegg | 0 |
mwaskom/seaborn | data-science | 3032 | Seaborn Objects Bar plots with log y scale | When specifying a log y scale for a Bar plot, the bars are not drawn correctly.
Example code:
```
df = pd.DataFrame.from_dict(
{
'x_values': [1,2,3,20],
'frequency': [10000,100,10,1]
}
)
plot = (
so.Plot(df, x="x_values", y = "frequency")
.add(so.Bars(edgecolor="C0", edgewidth=2))
.scale(y="log")
)
plot.show()
```
result:

| closed | 2022-09-17T16:08:07Z | 2024-06-03T19:46:59Z | https://github.com/mwaskom/seaborn/issues/3032 | [] | antunderwood | 4 |
modin-project/modin | pandas | 7,475 | FEAT: Choose the correct __init__ method from extensions | open | 2025-03-20T17:05:03Z | 2025-03-20T17:05:03Z | https://github.com/modin-project/modin/issues/7475 | [] | sfc-gh-mvashishtha | 0 | |
pydantic/logfire | pydantic | 218 | self-host for enterprise | ### Description
I currently work at MediaTek, and we are dealing with some machine learning/GenAI projects.
As the PM of two of these projects, I am wondering whether it is possible to self-host with an enterprise plan.
Logfire is a pretty good logging system for all kinds of jobs, but we will consider buying it only if Logfire can be self-hosted and can make sure our data won't get leaked, since security is always our first priority. | closed | 2024-05-28T09:45:45Z | 2024-05-28T10:21:47Z | https://github.com/pydantic/logfire/issues/218 | [
"Feature Request"
] | Mai0313 | 1 |
huggingface/transformers | python | 36,611 | Not installable on arm64 due to jaxlib upper bound | `jaxlib` does not ship source distributions on PyPI, i.e. is only available as a wheel.
Newer versions of jaxlib do provide `aarch64` wheels, but `transformers` constrains `jaxlib` to `<=0.4.13`
Is it possible to relax this upper bound constraint? Or is there some very specific API-level breakage that was being guarded against? (I tried to infer by trawling through the `git blame` but nothing obvious jumped out at me -- looked like general defensiveness more than anything specific) | open | 2025-03-07T18:56:53Z | 2025-03-10T14:14:45Z | https://github.com/huggingface/transformers/issues/36611 | [
"Flax"
] | paveldikov | 0 |
sloria/TextBlob | nlp | 222 | Training the NaiveBayes Classifier using large JSON files | Hello!
I'm trying to use the Naive Bayes Classifier included in TextBlob. While initialising the object, I'm passing a file handle to a json file, and have also specified the json format. This code can reproduce my problem:
```python
from textblob.classifiers import NaiveBayesClassifier as nbc

with open('data.json', 'r') as file_handle:
    _ = nbc(file_handle, format="json")
```
Now, `data.json` is a file of size 7.8 GB. After a couple of seconds, the program errors out:
```
Traceback (most recent call last):
File "load_test.py", line 4, in <module>
_ = nbc(file_handle, format="json")
File "/usr/local/lib/python3.6/site-packages/textblob/classifiers.py", line 205, in __init__
super(NLTKClassifier, self).__init__(train_set, feature_extractor, format, **kwargs)
File "/usr/local/lib/python3.6/site-packages/textblob/classifiers.py", line 136, in __init__
self.train_set = self._read_data(train_set, format)
File "/usr/local/lib/python3.6/site-packages/textblob/classifiers.py", line 157, in _read_data
return format_class(dataset, **self.format_kwargs).to_iterable()
File "/usr/local/lib/python3.6/site-packages/textblob/formats.py", line 115, in __init__
self.dict = json.load(fp)
File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/__init__.py", line 296, in load
return loads(fp.read(),
OSError: [Errno 22] Invalid argument
```
After some brief googling, it looks like this is happening because of the large size of the json file. I could not find any clear-cut solutions to the problem though.
Considering that Naive Bayes Classifiers often have large data-sets they are trained upon, this would be a necessity for many users. Could this issue please be rectified?
(If the `data.json` file is required to reproduce this issue, please inform me and I will upload it to a host. AFAIK, any file slightly on the bigger size (1GB?) should help in reproducing the above issue.) | open | 2018-08-19T11:04:57Z | 2018-08-19T11:04:57Z | https://github.com/sloria/TextBlob/issues/222 | [] | double-fault | 0 |
ContextLab/hypertools | data-visualization | 265 | Plotting animations in Jupyter does not seem to be compatible with current Numpy (version 2 or greater) | This is a follow-up to [Plotting animations do not seem to be compatible with Jupyter Notebook 7](https://github.com/ContextLab/hypertools/issues/261); however, since this new animation issue is attributable specifically to the NumPy version, I think it deserves its own issue.
Essentially, if you do as it says in [Plotting animations do not seem to be compatible with Jupyter Notebook 7](https://github.com/ContextLab/hypertools/issues/261) **and specify Numpy to not be above version `1.26.4`**, the animation will work.
If you leave things unspecified so it gets current Numpy, which is v2.1.1, then you get the following when you run the code:
```python
AttributeError: `np.string_` was removed in the NumPy 2.0 release. Use `np.bytes_` instead.
```
You can easily test this going [here](https://discourse.jupyter.org/t/error-displaying-animation/28149/4?u=fomightez) and following the steps outlined there. If you skip the step specifying the numpy version, you'll see the error.
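Until hypertools is updated for NumPy 2, the removed alias can be restored before importing it. A stopgap sketch (the hypertools import is left commented out; restoring `np.unicode_` as well is an assumption, since hypertools-era code often uses both removed aliases):

```python
import numpy as np

# NumPy 2.0 removed these aliases; hypertools' reduce.py still references
# np.string_. Restoring them before `import hypertools` avoids the
# AttributeError without touching the installed package.
for removed, replacement in (("string_", np.bytes_), ("unicode_", np.str_)):
    if not hasattr(np, removed):
        setattr(np, removed, replacement)

# import hypertools as hyp  # should now import cleanly
```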
<details>
<summary>Full traceback seen when initiating animation when using Numpy 2.1.1</summary>
Even when ipympl is installed, working, and specified and you use the version of Hypertools obtained with `pip install git+https://github.com/ContextLab/hypertools.git`, you'll see the following whether using Python 3.11 or 3.10:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 6
4 import numpy as np
5 arr = np.array( [[math.sin(3*i/100), math.cos (3*i/100), (1/100)**2, (i/100)**3, 1/(1+i/100)] for i in range(0,300)])
----> 6 hyp.plot(arr, animate=True)
File /srv/conda/envs/notebook/lib/python3.11/site-packages/hypertools/plot/backend.py:1070, in manage_backend.<locals>.plot_wrapper(*args, **kwargs)
1067 if BACKEND_WARNING is not None:
1068 warnings.warn(BACKEND_WARNING)
-> 1070 return plot_func(*args, **kwargs)
1072 finally:
1073 # restore rcParams prior to plot
1074 with warnings.catch_warnings():
1075 # if the matplotlibrc was cached from <=v3.3.0, a TON of
1076 # (harmless as of v3.2.0) MatplotlibDeprecationWarnings
1077 # about `axes.Axes3D`-related rcParams fields are issued
File /srv/conda/envs/notebook/lib/python3.11/site-packages/hypertools/plot/plot.py:313, in plot(x, fmt, marker, markers, linestyle, linestyles, color, colors, palette, group, hue, labels, legend, title, size, elev, azim, ndims, model, model_params, reduce, cluster, align, normalize, n_clusters, save_path, animate, duration, tail_duration, rotations, zoom, chemtrails, precog, bullettime, frame_rate, interactive, explore, mpl_backend, show, transform, vectorizer, semantic, corpus, ax, frame_kwargs)
311 if transform is None:
312 raw = format_data(x, **text_args)
--> 313 xform = analyze(
314 raw,
315 ndims=ndims,
316 normalize=normalize,
317 reduce=reduce,
318 align=align,
319 internal=True,
320 )
321 else:
322 xform = transform
File /srv/conda/envs/notebook/lib/python3.11/site-packages/hypertools/tools/analyze.py:53, in analyze(data, normalize, reduce, ndims, align, internal)
9 """
10 Wrapper function for normalize -> reduce -> align transformations.
11
(...)
49
50 """
52 # return processed data
---> 53 return aligner(reducer(normalizer(data, normalize=normalize, internal=internal),
54 reduce=reduce, ndims=ndims, internal=internal), align=align)
File /srv/conda/envs/notebook/lib/python3.11/site-packages/hypertools/_shared/helpers.py:159, in memoize.<locals>.memoizer(*args, **kwargs)
157 key = str(args) + str(kwargs)
158 if key not in cache:
--> 159 cache[key] = obj(*args, **kwargs)
160 return cache[key]
File /srv/conda/envs/notebook/lib/python3.11/site-packages/hypertools/tools/reduce.py:95, in reduce(x, reduce, ndims, normalize, align, model, model_params, internal, format_data)
92 if reduce is None:
93 return x
---> 95 elif isinstance(reduce, (str, np.string_)):
96 model_name = reduce
97 model_params = {
98 'n_components': ndims
99 }
File /srv/conda/envs/notebook/lib/python3.11/site-packages/numpy/__init__.py:397, in __getattr__(attr)
394 raise AttributeError(__former_attrs__[attr])
396 if attr in __expired_attributes__:
--> 397 raise AttributeError(
398 f"`np.{attr}` was removed in the NumPy 2.0 release. "
399 f"{__expired_attributes__[attr]}"
400 )
402 if attr == "chararray":
403 warnings.warn(
404 "`np.chararray` is deprecated and will be removed from "
405 "the main namespace in the future. Use an array with a string "
406 "or bytes dtype instead.", DeprecationWarning, stacklevel=2)
AttributeError: `np.string_` was removed in the NumPy 2.0 release. Use `np.bytes_` instead.
```
</details> | open | 2024-09-06T16:19:45Z | 2024-09-06T16:36:39Z | https://github.com/ContextLab/hypertools/issues/265 | [] | fomightez | 0 |
hankcs/HanLP | nlp | 687 | How do I use HanLP's bigram dictionary? | <!--
Note: the checklist and the version number are required; issues without them will not get a reply. If you want a quick reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the following documents and did not find an answer:
  - [Homepage documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that the open-source community is a free community brought together by shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [ ] I put an x inside these brackets to confirm the items above.
## Version
<!-- For release versions, state the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch. -->
The current latest version is: 1.5.2
The version I am using is: 1.5.2
<!-- The items above are required; below you are free to elaborate. -->
## My question
<!-- Please describe the question in detail; the more detail, the more likely it gets resolved. -->
## Reproducing the problem
<!-- What did you do to trigger the problem? For example, did you modify the code? The dictionaries or models? -->
In the file data\dictionary\CoreNatureDictionary.ngram.txt I found the entry "琐碎@事情 2"; "琐碎" ("trivial") followed by "事情" ("matter") is a reasonable bigram. I don't know what effect this entry has when the program runs.
### Steps
1. First……
2. Then……
3. Next……
### Triggering code
```
public void testIssue1234() throws Exception
{
    String rawText = "琐碎的事情";
    List<Term> termList = StandardTokenizer.segment(rawText);
}
```
### Expected output
<!-- What correct result do you expect? -->
[琐碎事情]
### Actual output
<!-- What did HanLP actually output? What effect did it have? Where is it wrong? -->
[琐碎, 的, 事情]
## Other information
I don't know whether my idea is mistaken, or whether I did something wrong in my setup.
<!-- Any potentially useful information, including screenshots, logs, configuration files, related issues, and so on. -->
| closed | 2017-11-22T09:42:49Z | 2020-01-01T10:51:46Z | https://github.com/hankcs/HanLP/issues/687 | [
"ignored"
] | mijiaxiaojiu | 2 |
tfranzel/drf-spectacular | rest-api | 664 | AttributeError: 'ImplTestView1' object has no attribute 'model' | I managed to integrate spectacular in most of my views.
Where I still have problems is with views that subclass a common BaseView and rely on attributes declared on that parent view. Can I bypass these errors with some annotations, without changing the code (which is not possible for some reasons)?
So in the base class there is a
model: ModelTest
but when generating the schema for all views that subclass that base class, I get
object has no attribute 'model'
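A guess at the root cause (an assumption, not confirmed by the report): a bare class-level annotation like `model: ModelTest` only records a type hint and creates no actual attribute, so schema generation fails the moment it touches `self.model`. Plain Python shows the difference:

```python
class BareAnnotation:
    model: str            # annotation only: no class attribute is created

class AnnotationWithDefault:
    model: str = "Test"   # annotation plus default: the attribute exists

assert not hasattr(BareAnnotation, "model")
assert hasattr(AnnotationWithDefault, "model")
assert "model" in BareAnnotation.__annotations__
```

If that is the cause, giving the base-class annotation a default (even `None`) may let schema generation proceed without touching the subclasses.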
| closed | 2022-02-21T18:37:09Z | 2022-02-24T20:13:42Z | https://github.com/tfranzel/drf-spectacular/issues/664 | [] | danilocubrovic | 3 |
nschloe/tikzplotlib | matplotlib | 613 | tikzplotlib should have a legend for seaborn | ```python
import matplotlib.pyplot as plt
import seaborn as sns
import tikzplotlib
import numpy as np
np.random.seed(42)
data = np.random.rand(4, 2)
fig, ax = plt.subplots()
sns.scatterplot({str(i): d for i, d in enumerate(data)}, ax=ax)
print(tikzplotlib.get_tikz_code(fig))
plt.close(fig)
fig, ax = plt.subplots()
for i, d in enumerate(data):
x = range(len(d))
plt.scatter(x, d, label=str(i))
plt.legend()
# plt.show()
print(tikzplotlib.get_tikz_code(fig))
plt.close(fig)
```
gives
```latex
% This file was created with tikzplotlib v0.10.1.
\begin{tikzpicture}
\definecolor{darkgray176}{RGB}{176,176,176}
\definecolor{lightgray204}{RGB}{204,204,204}
\begin{axis}[
legend cell align={left},
legend style={
fill opacity=0.8,
draw opacity=1,
text opacity=1,
at={(0.03,0.97)},
anchor=north west,
draw=lightgray204
},
tick align=outside,
tick pos=left,
x grid style={darkgray176},
xmin=-0.05, xmax=1.05,
xtick style={color=black},
y grid style={darkgray176},
ymin=0.0134520774561136, ymax=0.995345841122002,
ytick style={color=black}
]
\addplot [
draw=white,
forget plot,
mark=*,
only marks,
scatter,
scatter/@post marker code/.code={%
\endscope
},
scatter/@pre marker code/.code={%
\expanded{%
\noexpand\definecolor{thispointfillcolor}{RGB}{\fillcolor}%
}%
\scope[fill=thispointfillcolor]%
},
visualization depends on={value \thisrow{fill} \as \fillcolor}
]
table{%
x y fill
0 0.374540118847362 31,119,180
1 0.950714306409916 31,119,180
0 0.731993941811405 255,127,14
1 0.598658484197037 255,127,14
0 0.156018640442437 44,160,44
1 0.155994520336203 44,160,44
0 0.0580836121681995 214,39,40
1 0.866176145774935 214,39,40
};
\end{axis}
\end{tikzpicture}
```
```latex
% This file was created with tikzplotlib v0.10.1.
\begin{tikzpicture}
\definecolor{crimson2143940}{RGB}{214,39,40}
\definecolor{darkgray176}{RGB}{176,176,176}
\definecolor{darkorange25512714}{RGB}{255,127,14}
\definecolor{forestgreen4416044}{RGB}{44,160,44}
\definecolor{lightgray204}{RGB}{204,204,204}
\definecolor{steelblue31119180}{RGB}{31,119,180}
\begin{axis}[
legend cell align={left},
legend style={
fill opacity=0.8,
draw opacity=1,
text opacity=1,
at={(0.03,0.97)},
anchor=north west,
draw=lightgray204
},
tick align=outside,
tick pos=left,
x grid style={darkgray176},
xmin=-0.05, xmax=1.05,
xtick style={color=black},
y grid style={darkgray176},
ymin=0.0134520774561136, ymax=0.995345841122002,
ytick style={color=black}
]
\addplot [draw=steelblue31119180, fill=steelblue31119180, mark=*, only marks]
table{%
x y
0 0.374540118847362
1 0.950714306409916
};
\addlegendentry{0}
\addplot [draw=darkorange25512714, fill=darkorange25512714, mark=*, only marks]
table{%
x y
0 0.731993941811405
1 0.598658484197037
};
\addlegendentry{1}
\addplot [draw=forestgreen4416044, fill=forestgreen4416044, mark=*, only marks]
table{%
x y
0 0.156018640442437
1 0.155994520336203
};
\addlegendentry{2}
\addplot [draw=crimson2143940, fill=crimson2143940, mark=*, only marks]
table{%
x y
0 0.0580836121681995
1 0.866176145774935
};
\addlegendentry{3}
\end{axis}
\end{tikzpicture}
```
Note that the second one has a legend but the first does not. This is because
https://github.com/nschloe/tikzplotlib/blob/450712b4014799ec5f151f234df84335c90f4b9d/src/tikzplotlib/_util.py#L20
fails, because the single line collection is labeled `_child0` instead of having a series label | open | 2024-04-30T04:36:32Z | 2024-04-30T04:36:32Z | https://github.com/nschloe/tikzplotlib/issues/613 | [] | JasonGross | 0 |
ets-labs/python-dependency-injector | flask | 792 | cannot import name 'providers' from 'dependency_injector' | I am using a package that imports dependency_injector, and when I try to import that package I get the error message
"ImportError: cannot import name 'providers' from 'dependency_injector' (c:\ProgramData\Anaconda3\envs\darpa-env\lib\site-packages\dependency_injector\__init__.py)"
This issue is similar to issue #386. | open | 2024-03-15T01:42:11Z | 2024-10-03T10:35:37Z | https://github.com/ets-labs/python-dependency-injector/issues/792 | [] | evlachos93 | 2 |
Avaiga/taipy | automation | 1,710 | table shows delete and add when not editable | <|{data}|table|> Should not show delete and add row buttons. | closed | 2024-08-27T08:49:33Z | 2024-08-27T10:07:20Z | https://github.com/Avaiga/taipy/issues/1710 | [
"🖰 GUI",
"💥Malfunction",
"🟧 Priority: High",
"GUI: Front-End"
] | FredLL-Avaiga | 0 |
Lightning-AI/pytorch-lightning | deep-learning | 20,328 | RuntimeError when running basic GAN model (from tutorial at lightning.ai) with DDP | ### Bug description
I am trying to train a GAN model on multiple GPUs using DDP. I followed the tutorial at https://lightning.ai/docs/pytorch/stable/notebooks/lightning_examples/basic-gan.html, changing the arguments to Trainer to
```
trainer = pl.Trainer(
accelerator="auto",
devices=[0, 1, 2, 3],
strategy="ddp",
max_epochs=5,
)
```
Running the script raise Runtime error as follows:
```
[rank0]: RuntimeError: It looks like your LightningModule has parameters that were not used in producing the loss returned by training_step. If this is intentional, you must enable the detection of unused parameters in DDP, either by setting the string value `strategy='ddp_find_unused_parameters_true'` or by setting the flag in the strategy with `strategy=DDPStrategy(find_unused_parameters=True)`.
```
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
import os
import numpy as np
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
PATH_DATASETS = os.environ.get("PATH_DATASETS", ".")
BATCH_SIZE = 256 if torch.cuda.is_available() else 64
NUM_WORKERS = int(os.cpu_count() / 2)
class MNISTDataModule(pl.LightningDataModule):
def __init__(
self,
data_dir: str = PATH_DATASETS,
batch_size: int = BATCH_SIZE,
num_workers: int = NUM_WORKERS,
):
super().__init__()
self.data_dir = data_dir
self.batch_size = batch_size
self.num_workers = num_workers
self.transform = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)),
]
)
self.dims = (1, 28, 28)
self.num_classes = 10
def prepare_data(self):
# download
MNIST(self.data_dir, train=True, download=True)
MNIST(self.data_dir, train=False, download=True)
def setup(self, stage=None):
# Assign train/val datasets for use in dataloaders
if stage == "fit" or stage is None:
mnist_full = MNIST(self.data_dir, train=True, transform=self.transform)
self.mnist_train, self.mnist_val = random_split(mnist_full, [55000, 5000])
# Assign test dataset for use in dataloader(s)
if stage == "test" or stage is None:
self.mnist_test = MNIST(self.data_dir, train=False, transform=self.transform)
def train_dataloader(self):
return DataLoader(
self.mnist_train,
batch_size=self.batch_size,
num_workers=self.num_workers,
)
def val_dataloader(self):
return DataLoader(self.mnist_val, batch_size=self.batch_size, num_workers=self.num_workers)
def test_dataloader(self):
return DataLoader(self.mnist_test, batch_size=self.batch_size, num_workers=self.num_workers)
class Generator(nn.Module):
def __init__(self, latent_dim, img_shape):
super().__init__()
self.img_shape = img_shape
def block(in_feat, out_feat, normalize=True):
layers = [nn.Linear(in_feat, out_feat)]
if normalize:
layers.append(nn.BatchNorm1d(out_feat, 0.8))
layers.append(nn.LeakyReLU(0.01, inplace=True))
return layers
self.model = nn.Sequential(
*block(latent_dim, 128, normalize=False),
*block(128, 256),
*block(256, 512),
*block(512, 1024),
nn.Linear(1024, int(np.prod(img_shape))),
nn.Tanh(),
)
def forward(self, z):
img = self.model(z)
img = img.view(img.size(0), *self.img_shape)
return img
class Discriminator(nn.Module):
def __init__(self, img_shape):
super().__init__()
self.model = nn.Sequential(
nn.Linear(int(np.prod(img_shape)), 512),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(512, 256),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(256, 1),
nn.Sigmoid(),
)
def forward(self, img):
img_flat = img.view(img.size(0), -1)
validity = self.model(img_flat)
return validity
class GAN(pl.LightningModule):
def __init__(
self,
channels,
width,
height,
latent_dim: int = 100,
lr: float = 0.0002,
b1: float = 0.5,
b2: float = 0.999,
batch_size: int = BATCH_SIZE,
**kwargs,
):
super().__init__()
self.save_hyperparameters()
self.automatic_optimization = False
# networks
data_shape = (channels, width, height)
self.generator = Generator(latent_dim=self.hparams.latent_dim, img_shape=data_shape)
self.discriminator = Discriminator(img_shape=data_shape)
self.validation_z = torch.randn(8, self.hparams.latent_dim)
self.example_input_array = torch.zeros(2, self.hparams.latent_dim)
def forward(self, z):
return self.generator(z)
def adversarial_loss(self, y_hat, y):
return F.binary_cross_entropy(y_hat, y)
def training_step(self, batch):
imgs, _ = batch
optimizer_g, optimizer_d = self.optimizers()
# sample noise
z = torch.randn(imgs.shape[0], self.hparams.latent_dim)
z = z.type_as(imgs)
# train generator
# generate images
self.toggle_optimizer(optimizer_g)
self.generated_imgs = self(z)
# log sampled images
sample_imgs = self.generated_imgs[:6]
grid = torchvision.utils.make_grid(sample_imgs)
# self.logger.experiment.add_image("train/generated_images", grid, self.current_epoch)
# ground truth result (ie: all fake)
# put on GPU because we created this tensor inside training_loop
valid = torch.ones(imgs.size(0), 1)
valid = valid.type_as(imgs)
# adversarial loss is binary cross-entropy
g_loss = self.adversarial_loss(self.discriminator(self.generated_imgs), valid)
self.log("g_loss", g_loss, prog_bar=True)
self.manual_backward(g_loss)
optimizer_g.step()
optimizer_g.zero_grad()
self.untoggle_optimizer(optimizer_g)
# train discriminator
# Measure discriminator's ability to classify real from generated samples
self.toggle_optimizer(optimizer_d)
# how well can it label as real?
valid = torch.ones(imgs.size(0), 1)
valid = valid.type_as(imgs)
real_loss = self.adversarial_loss(self.discriminator(imgs), valid)
# how well can it label as fake?
fake = torch.zeros(imgs.size(0), 1)
fake = fake.type_as(imgs)
fake_loss = self.adversarial_loss(self.discriminator(self.generated_imgs.detach()), fake)
# discriminator loss is the average of these
d_loss = (real_loss + fake_loss) / 2
self.log("d_loss", d_loss, prog_bar=True)
self.manual_backward(d_loss)
optimizer_d.step()
optimizer_d.zero_grad()
self.untoggle_optimizer(optimizer_d)
def validation_step(self, batch, batch_idx):
pass
def configure_optimizers(self):
lr = self.hparams.lr
b1 = self.hparams.b1
b2 = self.hparams.b2
opt_g = torch.optim.Adam(self.generator.parameters(), lr=lr, betas=(b1, b2))
opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=lr, betas=(b1, b2))
return [opt_g, opt_d], []
def on_validation_epoch_end(self):
z = self.validation_z.type_as(self.generator.model[0].weight)
# log sampled images
sample_imgs = self(z)
grid = torchvision.utils.make_grid(sample_imgs)
# self.logger.experiment.add_image("validation/generated_images", grid, self.current_epoch)
if __name__ == '__main__':
dm = MNISTDataModule()
model = GAN(*dm.dims)
trainer = pl.Trainer(
accelerator="auto",
devices=[0, 1, 2, 3],
strategy="ddp",
max_epochs=5,
)
trainer.fit(model, dm)
```
### Error messages and logs
```
[rank0]: File "/home/ubuntu/pranav.rao/qxr_training/test_gan_lightning_ddp.py", line 196, in training_step
[rank0]: self.manual_backward(d_loss)
[rank0]: File "/home/ubuntu/miniconda3/envs/lightningTest/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1082, in manual_backward
[rank0]: self.trainer.strategy.backward(loss, None, *args, **kwargs)
[rank0]: File "/home/ubuntu/miniconda3/envs/lightningTest/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 208, in backward
[rank0]: self.pre_backward(closure_loss)
[rank0]: File "/home/ubuntu/miniconda3/envs/lightningTest/lib/python3.10/site-packages/pytorch_lightning/strategies/ddp.py", line 317, in pre_backward
[rank0]: prepare_for_backward(self.model, closure_loss)
[rank0]: File "/home/ubuntu/miniconda3/envs/lightningTest/lib/python3.10/site-packages/pytorch_lightning/overrides/distributed.py", line 55, in prepare_for_backward
[rank0]: reducer._rebuild_buckets() # avoids "INTERNAL ASSERT FAILED" with `find_unused_parameters=False`
[rank0]: RuntimeError: It looks like your LightningModule has parameters that were not used in producing the loss returned by training_step. If this is intentional, you must enable the detection of unused parameters in DDP, either by setting the string value `strategy='ddp_find_unused_parameters_true'` or by setting the flag in the strategy with `strategy=DDPStrategy(find_unused_parameters=True)`.
```
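A sketch of the fix the error message itself suggests, shown as a Trainer configuration fragment only (not executed here). The likely cause: with manual optimization, `toggle_optimizer` freezes the other network's parameters during each `manual_backward`, so DDP's reducer sees parameters that received no gradient. Note that `find_unused_parameters=True` adds communication overhead and can mask a genuinely unused submodule:

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy

trainer = pl.Trainer(
    accelerator="auto",
    devices=[0, 1, 2, 3],
    # equivalently: strategy="ddp_find_unused_parameters_true"
    strategy=DDPStrategy(find_unused_parameters=True),
    max_epochs=5,
)
```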
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA L40S
- NVIDIA L40S
- NVIDIA L40S
- NVIDIA L40S
- available: True
- version: 12.1
* Lightning:
- lightning: 2.4.0
- lightning-utilities: 0.11.7
- pytorch-lightning: 2.4.0
- torch: 2.4.1
- torchmetrics: 1.4.2
- torchvision: 0.19.1
* Packages:
- aiohappyeyeballs: 2.4.3
- aiohttp: 3.10.9
- aiosignal: 1.3.1
- async-timeout: 4.0.3
- attrs: 24.2.0
- autocommand: 2.2.2
- backports.tarfile: 1.2.0
- cxr-training: 0.1.0
- filelock: 3.16.1
- frozenlist: 1.4.1
- fsspec: 2024.9.0
- idna: 3.10
- importlib-metadata: 8.0.0
- importlib-resources: 6.4.0
- inflect: 7.3.1
- jaraco.collections: 5.1.0
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.1
- jaraco.text: 3.12.1
- jinja2: 3.1.4
- lightning: 2.4.0
- lightning-utilities: 0.11.7
- markupsafe: 3.0.1
- more-itertools: 10.3.0
- mpmath: 1.3.0
- multidict: 6.1.0
- networkx: 3.3
- numpy: 2.1.2
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 9.1.0.70
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.20.5
- nvidia-nvjitlink-cu12: 12.6.77
- nvidia-nvtx-cu12: 12.1.105
- packaging: 24.1
- pillow: 10.4.0
- pip: 24.2
- platformdirs: 4.2.2
- propcache: 0.2.0
- pytorch-lightning: 2.4.0
- pyyaml: 6.0.2
- setuptools: 75.1.0
- sympy: 1.13.3
- tomli: 2.0.1
- torch: 2.4.1
- torchmetrics: 1.4.2
- torchvision: 0.19.1
- tqdm: 4.66.5
- triton: 3.0.0
- typeguard: 4.3.0
- typing-extensions: 4.12.2
- wheel: 0.44.0
- yarl: 1.14.0
- zipp: 3.19.2
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.0
- release: 5.15.0-1063-nvidia
- version: #64-Ubuntu SMP Fri Aug 9 17:13:45 UTC 2024
</details>
### More info
_No response_ | open | 2024-10-09T10:56:35Z | 2024-10-10T07:14:39Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20328 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | pranavrao-qure | 3 |
TencentARC/GFPGAN | deep-learning | 41 | update broke the program | I'm getting:
```
Traceback (most recent call last):
  File "E:\location to GFPGAN\inference_gfpgan.py", line 98, in <module>
    main()
  File "E:\location to GFPGAN\inference_gfpgan.py", line 52, in main
    restorer = GFPGANer(
  File "E:\location to GFPGAN\gfpgan\utils.py", line 50, in __init__
    self.face_helper = FaceRestoreHelper(
TypeError: __init__() got an unexpected keyword argument 'device'
```
That is without BASICSR_JIT=TRUE.
With BASICSR_JIT=TRUE it just hangs; resources are not used at all and it just waits and waits. I have already waited 30 minutes.
Old version works though.
Is it a BasicSR issue?
Also, the `upscale_factor` option is now just `upscale`.
| closed | 2021-08-14T09:49:43Z | 2021-08-18T16:39:00Z | https://github.com/TencentARC/GFPGAN/issues/41 | [] | NoUserNameForYou | 2 |
zappa/Zappa | flask | 571 | [Migrated] How can I use an existing domain name and an existing api-gateway to deploy with a certificate? | Originally from: https://github.com/Miserlou/Zappa/issues/1496 by [tianyuchen](https://github.com/tianyuchen)
| closed | 2021-02-20T12:22:55Z | 2024-04-13T17:09:31Z | https://github.com/zappa/Zappa/issues/571 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
piskvorky/gensim | nlp | 3,145 | The wiki page should be presented more prominently | @mpenkov Thanks. The wiki page should (in my opinion) be presented more prominently.
I totally missed this benchmark page although I have been using gensim for a while.
_Originally posted by @jonaschn in https://github.com/RaRe-Technologies/gensim/issues/3141#issuecomment-841700119_ | open | 2021-05-15T20:40:33Z | 2021-05-17T09:26:36Z | https://github.com/piskvorky/gensim/issues/3145 | [
"documentation"
] | mpenkov | 4 |
mwaskom/seaborn | data-visualization | 3,411 | seaborn.objects so.Line() linestyle='none' ignored | so.Line() linestyle='none' keeps drawing a line between points. We know so.Dot() is meant for scatter plots, but it would also be convenient to handle `linestyle='none'` for plotting lines and scatters in one command. Expected behavior would be for the 'g2' group to show no line between points, which should also be reflected in the legend.
```
import pandas as pd
import seaborn.objects as so
dataset = pd.DataFrame(dict(
x=[1, 2, 3, 4],
y=[1, 2, 3, 4],
group=['g1', 'g1', 'g2', 'g2'],
))
p = (
so.Plot(dataset,
x='x',
y='y',
marker='group',
linestyle='group',
)
.add(so.Line())
.scale(
linestyle=so.Nominal({'g1': 'dashed', 'g2': 'none'}),
)
)
p.show()
```

| closed | 2023-06-29T16:29:05Z | 2023-08-28T11:42:21Z | https://github.com/mwaskom/seaborn/issues/3411 | [] | subsurfaceiodev | 2 |
streamlit/streamlit | streamlit | 10,852 | Add "Download as PDF" Feature with Proper Wide Mode Support | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Currently, Streamlit does not provide a built-in way to download the entire page as a PDF. Users need to rely on external tools like browser print options or third-party libraries, which do not always work properly. Additionally, when Wide Mode is enabled, manually generated PDFs often have formatting issues.
A built-in "Download as PDF" button would make it easier for users to save and share reports, dashboards, and analytics results directly from Streamlit.
### Why?
I'm always frustrated when:
- There is no direct way to download the Streamlit page as a PDF without using browser-based workarounds.
- Using print-to-PDF or third-party tools often breaks layouts, cuts off content, or distorts figures in Wide Mode.
- Users want a one-click solution to generate PDFs that retain formatting, images, and charts correctly.
A native PDF export feature would allow users to easily generate sharable and well-formatted reports from their Streamlit apps.
### How?
Introduce a new Streamlit function to download the page as a PDF, similar to:
```python
st.download_pdf("Download as PDF", file_name="streamlit_page.pdf", preserve_wide_mode=True)
```
This function should:
✅ Convert the entire rendered Streamlit page to a PDF.
✅ Preserve text, tables, charts, and images.
✅ Ensure Wide Mode layouts are correctly applied in the exported PDF.
✅ Support interactive elements (e.g., dropdowns, buttons) by rendering them as static content in the PDF.
✅ Work without requiring external dependencies like Puppeteer or Selenium.
### Additional Context
Many users create data-driven reports, dashboards, and analytics pages in Streamlit. A built-in "Download as PDF" feature would make it easier to export and share insights. | closed | 2025-03-19T16:56:56Z | 2025-03-21T04:40:28Z | https://github.com/streamlit/streamlit/issues/10852 | [
"type:enhancement",
"area:printing"
] | JayeshSChauhan | 5 |
streamlit/streamlit | machine-learning | 10,079 | Improper behavior of st.form_submit_button | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
The form_submit_button shows a preview of the new page rather than switching to it.

### Reproducible Code Example
```Python
import streamlit as st
from hmac import compare_digest
import secrets
# TASK: move this to database
def verify(username, password):
credentials = {"Darshan": "Metro", "Jain": "Express"}
if username not in credentials:
return False
return compare_digest(password, credentials[username])
def verify_token():
token = st.session_state.get("token", "")
if token and len(token) == 32:
return True
return False
def login_page():
for key, default_value in [("username_input", ""), ("password", ""), ("token", ""), ("status", False)]:
if key not in st.session_state:
st.session_state[key] = default_value
col1, col2, col3 = st.columns([1,2,1])
with col2:
st.title("Metro Express")
with st.form(key="login_form", clear_on_submit=True, enter_to_submit = False):
username = st.text_input(label="Username: ", placeholder="Enter your username", key="username_input")
password = st.text_input(label="Password: ", placeholder="Enter your password", key="password", type="password")
st.write("")
if st.form_submit_button("Login", type="primary"):
if verify(username, password):
st.session_state["status"] = True
st.session_state["token"] = secrets.token_hex(16)
st.session_state["username"] = username
st.switch_page("sections/home.py")
else:
st.error("Incorrect Username or Password")
login_page()
```
### Steps To Reproduce
This happens only once, when I start the app using `streamlit run main.py`, after which the issue disappears (in subsequent runs, i.e. after reloading the page).
The issue can also be resolved by clicking twice on the Submit button.
### Expected Behavior
The ideal behavior is to open the home page in a new tab, with no instance of the login page visible.
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.0
- Python version: 3.12.7
- Operating System: Windows
- Browser: Microsoft Edge
### Additional Information
_No response_ | closed | 2024-12-25T03:51:36Z | 2025-01-13T18:53:24Z | https://github.com/streamlit/streamlit/issues/10079 | [
"type:bug",
"status:cannot-reproduce",
"feature:st.form_submit_button"
] | DarshanJain-07 | 6 |
MaartenGr/BERTopic | nlp | 1,846 | Error when setting chain representation models as main | When chain models are named anything else, it works fine. But if I want to use a chain model as the main representation, it will produce an error.
```python
representation_model = {
"Main": [KeyBERT, MMR],
# "ChatGPT": aspect_model1
"KeyBERT": KeyBERT,
"BERT MMR": [KeyBERT, MMR],
"POS": aspect_model2,
"POS MMR": [aspect_model2, MMR]
}
```
The error produced:
```python
File ~/.local/lib/python3.10/site-packages/bertopic/_bertopic.py:433, in BERTopic.fit_transform(self, documents, embeddings, images, y)
430 self._save_representative_docs(custom_documents)
431 else:
432 # Extract topics by calculating c-TF-IDF
--> 433 self._extract_topics(documents, embeddings=embeddings, verbose=self.verbose)
435 # Reduce topics
436 if self.nr_topics:
File ~/.local/lib/python3.10/site-packages/bertopic/_bertopic.py:3637, in BERTopic._extract_topics(self, documents, embeddings, mappings, verbose)
3635 documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
3636 self.c_tf_idf_, words = self._c_tf_idf(documents_per_topic)
-> 3637 self.topic_representations_ = self._extract_words_per_topic(words, documents)
3638 self._create_topic_vectors(documents=documents, embeddings=embeddings, mappings=mappings)
3639 self.topic_labels_ = {key: f"{key}_" + "_".join([word[0] for word in values[:4]])
3640 for key, values in
3641 self.topic_representations_.items()}
File ~/.local/lib/python3.10/site-packages/bertopic/_bertopic.py:3925, in BERTopic._extract_words_per_topic(self, words, documents, c_tf_idf, calculate_aspects)
3923 elif isinstance(self.representation_model, dict):
3924 if self.representation_model.get("Main"):
-> 3925 topics = self.representation_model["Main"].extract_topics(self, documents, c_tf_idf, topics)
3926 topics = {label: values[:self.top_n_words] for label, values in topics.items()}
3928 # Extract additional topic aspects
AttributeError: 'list' object has no attribute 'extract_topics'
``` | open | 2024-02-28T11:06:44Z | 2024-03-03T11:08:45Z | https://github.com/MaartenGr/BERTopic/issues/1846 | [] | coryzhangia | 3 |
ipython/ipython | data-science | 14,397 | Cannot access local variable in list comprehension while in function | I was debugging some code using the IPython `embed` function, but I got an error that I believe is a bug.
```
from IPython import embed
def foo():
l = list(range(100))
k = [l[i] for i in range(len(l))]
embed()
foo()
```
While in the interactive shell I can access the `l` variable,
```
In [1]: l #works fine
```
but I cannot access it in the list comprehension `[l[i] for i in range(len(l))]`:
```
In [2]: [l[i] for i in l] #throws an error
```
Outside a function it works fine.
Is this error on my side?
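For context, a similar behaviour can be reproduced with plain `exec()` and a separate locals mapping. This is a sketch of a plausible cause, not taken from the issue, and it is Python-version dependent (newer CPython inlines comprehensions, which changes the lookup):

```python
# On older CPython a comprehension compiles to its own nested scope, and a
# name that is free in that scope is looked up in globals, not in an
# exec-style locals mapping.
def demo():
    l = list(range(100))
    ns_globals, ns_locals = {}, {"l": l}
    # Top-level name lookup consults the locals mapping, so this works:
    exec("assert l == list(range(100))", ns_globals, ns_locals)
    try:
        # On CPython 3.11 and earlier the comprehension body cannot see `l`:
        exec("[l[i] for i in range(len(l))]", ns_globals, ns_locals)
    except NameError as e:
        return str(e)
    return "no error"

print(demo())
```

IPython's `embed()` executes your input against the enclosing frame's locals in a comparable way, which would explain why `l` is visible at the prompt but not inside the comprehension.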
| closed | 2024-04-11T16:28:25Z | 2024-04-12T13:37:54Z | https://github.com/ipython/ipython/issues/14397 | [] | NejsemTonda | 1 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,122 | Problem with import yellowbrick.model_selection | **Describe the bug**
Problem with importing libraries
**To Reproduce**
```python
# Steps to reproduce the behavior (code snippet):
from yellowbrick.model_selection import LearningCurve
```
**Dataset**
Did you use a specific dataset to produce the bug? Where can we access it?
**Expected behavior**
A clear and concise description of what you expected to happen.
**Traceback**
```
<ipython-input-35-82e8f0d75030> in <module>
     10 from sklearn.tree import DecisionTreeClassifier
     11 from sklearn.model_selection import learning_curve
---> 12 from yellowbrick.model_selection import LearningCurve

~\Anaconda3\lib\site-packages\yellowbrick\__init__.py in <module>
     37 from .anscombe import anscombe
     38 from .datasaurus import datasaurus
---> 39 from .classifier import ROCAUC, ClassBalance, ClassificationScoreVisualizer
     40
     41 # from .classifier import crplot, rocplot

~\Anaconda3\lib\site-packages\yellowbrick\classifier\__init__.py in <module>
     24 from ..base import ScoreVisualizer
     25 from .base import ClassificationScoreVisualizer
---> 26 from .class_prediction_error import ClassPredictionError, class_prediction_error
     27 from .classification_report import ClassificationReport, classification_report
     28 from .confusion_matrix import ConfusionMatrix, confusion_matrix

~\Anaconda3\lib\site-packages\yellowbrick\classifier\class_prediction_error.py in <module>
     22
     23 from sklearn.utils.multiclass import unique_labels
---> 24 from sklearn.metrics._classification import _check_targets
     25
     26 from yellowbrick.draw import bar_stack

ModuleNotFoundError: No module named 'sklearn.metrics._classification'
```
**Desktop (please complete the following information):**
- OS: [e.g. macOS]
- Python Version [e.g. 2.7, 3.6, miniconda]
- Yellowbrick Version [e.g. 0.7]
**Additional context**
Add any other context about the problem here.
| closed | 2020-10-17T04:13:28Z | 2020-10-22T11:11:47Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1122 | [
"type: bug",
"type: technical debt"
] | djyerabati | 3 |
plotly/dash | plotly | 3,001 | race condition when updating dcc.Store | Hello!
```
dash 2.18.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: macOS Sonoma
- Browser: Tested in Firefox and Chrome
- FF Version: 129.0.2
- Chrome Version: 128.0.6613.121
**Describe the bug**
When 2 callbacks perform partial updates to a dcc.Store at the same time (or nearly the same time), only 1 of those updates is reflected in the store. I tested and found the same behaviour in dash versions 2.17.0 and 2.18.0, and this happens for all storage types (memory, session, and local).
A minimal example is below. Most of this example is setting up preconditions to cause the race condition, but it roughly matches our real-world use-case and can reliably exhibit the behaviour.
The example app works like this:
We have multiple components on the page that need to load, and each has 2 elements to manage: the content Div and the Loading indicator. We also have a dispatcher (Interval component + `loading_dispatcher` callback) that kicks off the loading of these components in chunks. For each component, the dispatcher first turns on the Loading indicator, which then triggers the content Div to load (`load_component` function), which then triggers the Loading indicator to stop (`stop_spinner` function). We also have a cleanup function (`mark_loaded`) that waits for the components to finish loading, then pushes data to the store about which components have loaded. Finally, the `set_status` function checks the store, and if all of the components have loaded it updates the status Div at the bottom to indicate everything is fully loaded.
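The failure mode described above, two callbacks patching the same store at nearly the same time, can be illustrated outside Dash with plain dictionaries. This is a conceptual sketch only; the two-snapshot read/modify/write model is an assumption about the mechanism, not Dash internals:

```python
# Two "callbacks" each read the same store snapshot, patch their stale copy,
# and write the whole copy back; the second write clobbers the first patch.
store = {}

snapshot_a = dict(store)          # callback A reads the store
snapshot_b = dict(store)          # callback B reads it before A writes back

snapshot_a[0] = "loaded"          # A's patch
snapshot_b[1] = "loaded"          # B's patch

store = snapshot_a                # A writes back
store = snapshot_b                # B writes back, losing A's update
assert store == {1: "loaded"}     # component 0 was dropped

# Applying each patch against the live store instead keeps both updates:
store = {}
store.update({0: "loaded"})       # A's patch applied in place
store.update({1: "loaded"})       # B's patch applied in place
assert store == {0: "loaded", 1: "loaded"}
```

This appears to match the logs below, where a patch computed by one `mark_loaded` call is missing from the store after a concurrent call completes.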
**Minimal Example**
```
from dash import Dash, html,callback,Output,Input,State,no_update,dcc, MATCH, ALL, Patch, callback_context, clientside_callback
from dash.exceptions import PreventUpdate
import time
import random
app = Dash(__name__)
NUM_COMPONENTS = 21
STORAGE_TYPE = 'local'
slow_components = [
html.Div([
html.Div(children='loading...', id={'type': 'slow-component', 'index': i}),
dcc.Loading(id={'type': 'slow-component-animation', 'index': i}, display='hide')
])
for i in range(NUM_COMPONENTS)]
app.layout = html.Div(
slow_components +
[
html.Hr(),
html.Div(id='status', children='not all loaded'),
dcc.Interval(id='timer', interval=2000, max_intervals=10),
dcc.Store(id='loading-data', data={}, storage_type=STORAGE_TYPE, clear_data=True),
]
)
@callback(Output({'type': 'slow-component-animation', 'index':ALL}, 'display'),
Input('timer', 'n_intervals'), prevent_initial_call=True)
def loading_dispatcher(n):
# Kicks off loading for 3 components at a time
if n is None or n > NUM_COMPONENTS/3:
raise PreventUpdate()
output_list = [no_update] * NUM_COMPONENTS
current_chunk_start = list(range(0,NUM_COMPONENTS, 3))[n-1]
output_list[current_chunk_start:current_chunk_start+3] = ['show']*3
return output_list
@callback(
Output({'type': 'slow-component', 'index': MATCH}, 'children'),
Input({'type': 'slow-component-animation', 'index': MATCH}, 'display'),
State({'type': 'slow-component-animation', 'index': MATCH}, 'id'),
State({'type': 'slow-component', 'index': MATCH}, 'children'),
prevent_initial_call=True
)
def load_component(display, id_, current_state):
# "Loads" data for 1 second, updates loading text
if current_state == 'loaded':
raise PreventUpdate()
    print(f'loading {id_["index"]}, {current_state}')
    time.sleep(1)
    print(f'loaded {id_["index"]}')
return 'loaded'
@callback(
Output({'type': 'slow-component-animation', 'index':MATCH}, 'display', allow_duplicate=True),
Input({'type': 'slow-component', 'index': MATCH}, 'children'),
prevent_initial_call=True
)
def stop_spinner(loading_text):
# After loading, removes spinner
if loading_text == 'loaded':
return 'hide'
return no_update
@callback(
Output('loading-data', 'data', allow_duplicate=True),
Input({'type': 'slow-component-animation', 'index': ALL}, 'display'),
prevent_initial_call=True
)
def mark_loaded(components):
# When a component is fully loaded, mark it as such in the data store
print('checking if components are loaded')
update_dict = {}
for component in callback_context.triggered:
if component['value'] == 'hide':
component_id = callback_context.triggered_prop_ids[component['prop_id']]['index']
print(f'component {component_id} loaded')
update_dict[component_id] = 'loaded'
patch = Patch()
patch.update(update_dict)
print(f'adding to data store: {update_dict}')
return patch # <- This is where the race condition happens. If 2 callbacks patch the store at the same time, only 1 of those patches is applied
@callback(
Output('status', 'children'),
Output('loading-data', 'data', allow_duplicate=True),
Input('loading-data', 'data'),
prevent_initial_call=True
)
def set_status(loading_data):
# Once all components are loaded, update the status bar to show we are fully loaded
print(f'{loading_data=}')
if loading_data is None:
return no_update, no_update
if len(loading_data) == NUM_COMPONENTS:
print('FULLY LOADED')
return 'FULLY LOADED', {}
return no_update, no_update
if __name__ == '__main__':
app.run(debug=True)
```
**Expected behavior**
The app should load each component, and once they are finished the bottom text would update to say "FULLY LOADED".
The logs would also show that after each item is added to the store, the next time "loading_data=" is printed it would contain all of the component indices that have been added to the store. At the end of the logs we would see every number from 0-20 as a key in the `loading_data` dictionary.
Example (abbreviated):
```
loading 0, loading...
loading 1, loading...
loading 2, loading...
loaded 2
loaded 1
loaded 0
checking if components are loaded
component 2 loaded
component 1 loaded
adding to data store: {2: 'loaded', 1: 'loaded'}
checking if components are loaded
component 0 loaded
adding to data store: {0: 'loaded'}
loading_data={'0': 'loaded', '1': 'loaded', '2': 'loaded'}
loading 5, loading...
loading 4, loading...
checking if components are loaded
adding to data store: {}
loading 3, loading...
loading_data={'0': 'loaded', '1': 'loaded', '2': 'loaded'}
loaded 5
loaded 4
loaded 3
checking if components are loaded
component 5 loaded
adding to data store: {5: 'loaded'}
checking if components are loaded
component 4 loaded
adding to data store: {4: 'loaded'}
checking if components are loaded
component 3 loaded
adding to data store: {3: 'loaded'}
loading_data={'0': 'loaded', '1': 'loaded', '2': 'loaded', '3': 'loaded', '4': 'loaded', '5': 'loaded'}
...
loading_data={'0': 'loaded', '1': 'loaded', '2': 'loaded', '3': 'loaded', '4': 'loaded', '5': 'loaded', ... '20': 'loaded'}
FULLY LOADED
```
**Exhibited Behaviour**
After all components are loaded, the bottom text does not update to say "FULLY LOADED" and we see that the "loading_data" dictionary has not received all of the updates that were sent to it, as it does not include every index from 0 to 20.
```
loading_data=None
loading 2, loading...
loading 1, loading...
checking if components are loaded
adding to data store: {}
loading 0, loading...
loading_data={}
loaded 1
loaded 0
loaded 2
checking if components are loaded
component 0 loaded
adding to data store: {0: 'loaded'}
checking if components are loaded
component 1 loaded
adding to data store: {1: 'loaded'}
checking if components are loaded
component 2 loaded
adding to data store: {2: 'loaded'}
loading_data={'2': 'loaded'}
loading 5, loading...
loading 4, loading...
checking if components are loaded
adding to data store: {}
loading 3, loading...
loading_data={'2': 'loaded'}
loaded 5
loaded 4
loaded 3
checking if components are loaded
component 5 loaded
adding to data store: {5: 'loaded'}
checking if components are loaded
component 4 loaded
component 3 loaded
adding to data store: {4: 'loaded', 3: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded'}
loading 8, loading...
loading 7, loading...
checking if components are loaded
adding to data store: {}
loading 6, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded'}
loaded 8
loaded 6
loaded 7
checking if components are loaded
component 8 loaded
adding to data store: {8: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '8': 'loaded'}
checking if components are loaded
component 7 loaded
component 6 loaded
adding to data store: {7: 'loaded', 6: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded'}
loading 11, loading...
loading 10, loading...
checking if components are loaded
adding to data store: {}
loading 9, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded'}
loaded 11
loaded 9
loaded 10
checking if components are loaded
component 11 loaded
adding to data store: {11: 'loaded'}
checking if components are loaded
component 9 loaded
component 10 loaded
adding to data store: {9: 'loaded', 10: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded'}
loading 14, loading...
loading 13, loading...
checking if components are loaded
adding to data store: {}
loading 12, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded'}
loaded 14
loaded 12
loaded 13
checking if components are loaded
component 14 loaded
adding to data store: {14: 'loaded'}
checking if components are loaded
component 13 loaded
component 12 loaded
adding to data store: {13: 'loaded', 12: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded'}
loading 17, loading...
loading 16, loading...
checking if components are loaded
adding to data store: {}
loading 15, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded'}
loaded 17
loaded 16
loaded 15
checking if components are loaded
component 17 loaded
adding to data store: {17: 'loaded'}
checking if components are loaded
component 16 loaded
component 15 loaded
adding to data store: {16: 'loaded', 15: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded', '15': 'loaded', '16': 'loaded'}
loading 20, loading...
loading 19, loading...
checking if components are loaded
adding to data store: {}
loading 18, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded', '15': 'loaded', '16': 'loaded'}
loaded 20
loaded 19
loaded 18
checking if components are loaded
component 18 loaded
adding to data store: {18: 'loaded'}
checking if components are loaded
component 19 loaded
component 20 loaded
adding to data store: {19: 'loaded', 20: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded', '15': 'loaded', '16': 'loaded', '19': 'loaded', '20': 'loaded'}
```
| open | 2024-09-12T16:54:42Z | 2024-09-12T18:13:47Z | https://github.com/plotly/dash/issues/3001 | [
"bug",
"P3"
] | logankopas | 0 |
marcomusy/vedo | numpy | 984 | Cut a circle hole interactively | I am trying to interactively cut a circular hole in a mesh, depending on the camera angle, like this example:
[https://github.com/marcomusy/vedo/blob/master/examples/basic/cut_freehand.py](https://github.com/marcomusy/vedo/blob/master/examples/basic/cut_freehand.py)
except that I want to click only once (not draw a spline) and create a circle around the cursor to cut the mesh.
I tried to create a cylinder, but had a problem: the cylinder's axis doesn't align well with the camera angle.
Would you please give me an idea of how to achieve this in Vedo?
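Not an authoritative answer, but one way to get a cutting axis that follows the camera is to use the normalized vector from the camera position to its focal point. The pure-math part is below; the vedo/VTK calls in the trailing comment (`plotter.camera`, `GetPosition()`, `GetFocalPoint()`, a `cut_with_cylinder`-style method) are assumptions to check against your vedo version:

```python
# Compute the view direction to use as the axis of a camera-aligned cylinder.
def view_axis(cam_position, focal_point):
    v = [f - c for c, f in zip(cam_position, focal_point)]
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

# e.g. a camera at (0, 0, 10) looking at the origin views along -z:
axis = view_axis((0, 0, 10), (0, 0, 0))
print(axis)  # [0.0, 0.0, -1.0]

# Hypothetical usage inside a click callback:
#   cam = plotter.camera
#   axis = view_axis(cam.GetPosition(), cam.GetFocalPoint())
#   mesh.cut_with_cylinder(click_point, axis=axis, r=radius)
```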
| closed | 2023-11-24T10:44:04Z | 2024-01-08T17:06:03Z | https://github.com/marcomusy/vedo/issues/984 | [] | Thanatossan | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,487 | Problem of pix2pix on two different devices that shows 'nan' at the begining | When I use 2 different devices to run the pix2pix training part , one can smoothly finish the training part but another leads to 'nan' in loss function since the begining as the figure shows. The environments and dataset(facades) are quite the same.

| open | 2022-09-26T04:56:27Z | 2022-09-27T20:24:54Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1487 | [] | YujieXiang | 1 |
matterport/Mask_RCNN | tensorflow | 2,595 | tensorflow:Model failed to serialize as JSON. Ignoring... can't pickle _thread.RLock objects | I am training the Mask RCNN model on google collaboratory. I have been using the TensorFlow 2.x compatible version by [https://github.com/ahmedfgad](url) and facing this issue:
`WARNING:tensorflow:Model failed to serialize as JSON. Ignoring... can't pickle _thread.RLock objects`
The error caused is affecting the saving of trained weights. Model training continues but there is no file saved. | open | 2021-06-12T10:30:51Z | 2023-02-24T06:00:30Z | https://github.com/matterport/Mask_RCNN/issues/2595 | [] | jamalihuzaifa9 | 1 |
streamlit/streamlit | machine-learning | 9,892 | `st.audio_input` throws an error for audio > 10s for an app deployed on EC2 | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Using `st.audio_input` works correctly locally, but after deploying my app to an EC2 instance (inside a container), it only works if the recorded audio is shorter than 10 seconds. For longer audio, I get this error:
<img width="817" alt="Screenshot 2024-11-21 at 5 20 33 PM" src="https://github.com/user-attachments/assets/143b04f5-97ab-487c-8b09-e28620e4d338">
### Reproducible Code Example
```Python
import streamlit as st
audio_value = st.audio_input("Record a voice message by pressing on the mic icon")
```
### Steps To Reproduce
1. Start recording (on an app deployed on an EC2 instance within a container)
2. Record for more than 10 seconds
3. Stop recording
### Expected Behavior
Getting the correct audio data irrespective of the length of the audio and the deployment setup.
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.40.1
- Python version: 3.9.19
- Operating System: Debian GNU/Linux 12
- Browser: Chrome
### Additional Information
_No response_ | closed | 2024-11-21T11:54:43Z | 2024-11-22T23:34:57Z | https://github.com/streamlit/streamlit/issues/9892 | [
"type:bug",
"status:awaiting-user-response",
"feature:st.audio_input"
] | dalmia | 4 |
lepture/authlib | django | 268 | HTTPX OAuth1 implementation does not set Content-Length header | **Describe the bug**
The OAuth1 implementation for HTTPX does not set the content-length header, which can cause problems when using body signatures and talking to servers who block body POSTs where the length header is not specified.
**Expected behavior**
Content-Length header is set.
**Environment:**
- OS: linux
- Python Version: 3.8
- Authlib Version: 0.14.3
Incoming PR fixes this.
| closed | 2020-09-10T05:03:25Z | 2020-09-17T07:22:14Z | https://github.com/lepture/authlib/issues/268 | [
"bug"
] | dustydecapod | 0 |
pyjanitor-devs/pyjanitor | pandas | 754 | [DOC] Quick Fix to Time Series Docs | # Brief Description of Fix
- Quick update to the `Time Series` section of the docs
- #746 (my PR) incorrectly left out a `timeseries.rst` file within `docs/reference`
I would like to propose a change so that the docs correctly represent the `Time Series` section. This will include adding a `timeseries.rst` file within `docs/reference`.
| closed | 2020-09-22T13:07:56Z | 2020-10-09T01:17:53Z | https://github.com/pyjanitor-devs/pyjanitor/issues/754 | [
"docfix",
"being worked on",
"hacktoberfest"
] | loganthomas | 3 |
voila-dashboards/voila | jupyter | 778 | Voila frontend does not support input requests | Hello,
I've been trying to build a dashboard using Jupyter and Voila that requires input from the user (such as a password using getpass()), but I got the following error:
`StdinNotImplementedError: getpass was called, but this frontend does not support input requests.`
Is there an alternative, or is it for now just impossible to use Voila when requiring input from the user?
Thanks a lot for your help !
| open | 2020-12-10T10:57:37Z | 2021-04-16T08:39:50Z | https://github.com/voila-dashboards/voila/issues/778 | [] | aqwvinh | 6 |
PeterL1n/RobustVideoMatting | computer-vision | 207 | is it possible to add a .gif or .mp4 video instead of an image/green/white background image? | I want to add a .gif or .mp4 video as my background, instead of a static image
@PeterL1n | closed | 2022-11-10T13:19:22Z | 2024-01-11T19:01:00Z | https://github.com/PeterL1n/RobustVideoMatting/issues/207 | [] | akashAD98 | 23 |
Nemo2011/bilibili-api | api | 424 | [Question] Unknown API returns -401 anti-crawler | **Python version:** 3.9.2
**Module version:** 15.5.3
**Runtime environment:** Linux
<!-- Be sure to provide the module version and make sure it is the latest -->

I am using bilibili-api-python in the 真寻bot Bilibili subscription module. The bilireq library I used before did not return -401: bilireq automatically fetches the cookies from the Bilibili homepage and then calls the API, so requests work fine. Could anyone advise how to handle this with bilibili-api-python? | closed | 2023-08-13T18:11:46Z | 2023-08-16T03:33:54Z | https://github.com/Nemo2011/bilibili-api/issues/424 | [
"need debug info"
] | saltyplum | 8 |
encode/apistar | api | 53 | CSRF tokens for single-page applications | Similar to Django it would be interesting to have a CSRF mechanism for single-page applications running on client-side, eg. via a `/csrf-token` endpoint. | closed | 2017-04-17T07:41:27Z | 2017-04-17T13:32:58Z | https://github.com/encode/apistar/issues/53 | [] | maziarz | 1 |
tox-dev/tox | automation | 2,783 | ModuleNotFoundError on GitHub tox test run | ## Issue
When running tests on GitHub, I am getting the error message:
```
Run python -m tox -- --junit-xml pytest.xml
py38: install_deps> python -I -m pip install pytest pytest-benchmark pytest-xdist
tox: py38
py38: commands[0]> pytest tests/ --ignore=tests/lab_extension --junit-xml pytest.xml
ImportError while loading conftest '/home/runner/work/quibbler/quibbler/tests/conftest.py'.
tests/conftest.py:6: in <module>
from matplotlib import pyplot as plt
E ModuleNotFoundError: No module named 'matplotlib'
py38: exit 4 (0.87 seconds) /home/runner/work/quibbler/quibbler> pytest tests/ --ignore=tests/lab_extension --junit-xml pytest.xml pid=1849
py38: FAIL code 4 (4.01=setup[3.14]+cmd[0.87] seconds)
evaluation failed :( (4.10 seconds)
Error: Process completed with exit code 4.
```
Notably, this error occurs only with tox ver 4 (everything runs ok when using `pip install tox==3.28.0`)
## Environment
This happens on GitHub.
See here:
https://github.com/Technion-Kishony-lab/quibbler/actions/runs/3792710874/jobs/6449140682#step:5:19
| closed | 2022-12-28T18:17:38Z | 2023-01-07T15:21:47Z | https://github.com/tox-dev/tox/issues/2783 | [
"needs:more-info"
] | rkishony | 3 |
python-gino/gino | sqlalchemy | 739 | demo transport | * GINO version:
* Python version:
* asyncpg version:
* aiocontextvars version:

* PostgreSQL version:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```

| closed | 2020-11-30T12:41:19Z | 2021-06-20T13:52:08Z | https://github.com/python-gino/gino/issues/739 | [] | VarWolf | 2 |
matterport/Mask_RCNN | tensorflow | 2,449 | Convergence Quality check - Run Validation after every Epoch | I want to assess the quality of convergence after each epoch, or after every 5 epochs. That lets me correct/debug the architecture immediately if convergence quality is not optimal; I don't want to wait 5 days only to find out validation is unsatisfactory, which wastes time.
How should I run validation after every 5 epochs? Could you suggest an approach? | open | 2020-12-24T06:11:39Z | 2020-12-24T06:14:03Z | https://github.com/matterport/Mask_RCNN/issues/2449 | [] | suresh-s | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,243 | Can I use unet-256 and have a crop_size of 512 for the pix2pix model? | I wanted to check how using different architectures for pix2pix makes a difference. For example, how does resnet_9blocks compare with a unet? | open | 2021-02-25T19:44:38Z | 2021-02-25T19:44:38Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1243 | [] | yajain | 0 |
psf/requests | python | 5,884 | DNS round robin not working | I want to set up a PyPI Cloud service with two servers in different data centers and leverage DNS round robin as a failover mechanism (see [here](https://github.com/pypa/pip/issues/10185)). To do that, I have configured a multi-value DNS record:
```
$ dig +short pypi-dev.company.de
172.31.33.222
172.31.19.77
```
In the ticket linked above, @uranusjr points out that the requests library should take care of DNS round robin (see [here](https://github.com/psf/requests/issues/2537)) and that I might have configured my setup incorrectly. To reproduce that, I shut down the application server on 172.31.33.222 and kept the reverse proxy alive. On 172.31.19.77 both application server and reverse proxy are operational.
Furthermore, I created this script to retrieve the list of available python packages on my PyPI Cloud service:
```python
import requests
import logging
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter


def get_session(retries=3):
    s = requests.Session()
    retry = Retry(
        total=retries,
        read=retries,
        connect=retries,
        status_forcelist=[502],
    )
    adapter = HTTPAdapter(max_retries=retry)
    s.mount('http://', adapter)
    s.mount('https://', adapter)
    return s


if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    s = get_session()
    r = s.get('https://pypi-dev.company.de/simple/', stream=True, timeout=3)
    ip = r.raw._connection.sock.getpeername()
    logging.info(f'status code is {r.status_code}')
    logging.info(f'ip is {ip}')
```
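As a side note for anyone reproducing this: you can confirm that the OS resolver really hands Python both addresses using only the standard library (a minimal sketch; the hostname from this report is illustrative, and `'localhost'` serves as an offline smoke test):

```python
import socket

def resolve_all(host, port=443):
    """Return the unique IP addresses the resolver offers for host."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# e.g. resolve_all('pypi-dev.company.de') should list both
# 172.31.19.77 and 172.31.33.222 if the multi-value record works.
print(resolve_all('localhost'))
```

requests/urllib3 ultimately connects via `socket.create_connection`, which tries the addresses from this list in order, which is why the same address tends to be reused between runs rather than rotated.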
## Expected Result
```
$ python req.py
INFO:root:status code is 200
INFO:root:ip is ('172.31.19.77', 443)
```
## Actual Result
```
$ python req.py
WARNING:urllib3.connectionpool:Retrying (Retry(total=0, connect=1, read=0, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi-dev.company.de', port=443): Read timed out. (read timeout=3)")': /simple/
Traceback (most recent call last):
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 445, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 440, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1349, in getresponse
response.begin()
File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 277, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/socket.py", line 704, in readinto
return self._sock.recv_into(b)
File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 447, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 336, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='pypi-dev.company.de', port=443): Read timed out. (read timeout=3)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 783, in urlopen
return self.urlopen(
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/urllib3/util/retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='pypi-dev.company.de', port=443): Max retries exceeded with url: /simple/ (Caused by ReadTimeoutError("HTTPSConnectionPool(host='pypi-dev.company.de', port=443): Read timed out. (read timeout=3)"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/oschlueter/git/requests-dns-round-robin/req.py", line 25, in <module>
r = s.get('https://pypi-dev.company.de/simple/', stream=True, timeout=3)
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/requests/sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/Users/oschlueter/git/requests-dns-round-robin/venv/lib/python3.9/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='pypi-dev.company.de', port=443): Max retries exceeded with url: /simple/ (Caused by ReadTimeoutError("HTTPSConnectionPool(host='pypi-dev.company.de', port=443): Read timed out. (read timeout=3)"))
```
Please note that everything is fine if I restart the application server on 172.31.33.222 and that requests only seems to use that one IP address to connect to my PyPI Cloud server:
```
for ((n=0;n<5;n++)); do python req.py; done
INFO:root:status code is 200
INFO:root:ip is ('172.31.33.222', 443)
INFO:root:status code is 200
INFO:root:ip is ('172.31.33.222', 443)
INFO:root:status code is 200
INFO:root:ip is ('172.31.33.222', 443)
INFO:root:status code is 200
INFO:root:ip is ('172.31.33.222', 443)
INFO:root:status code is 200
INFO:root:ip is ('172.31.33.222', 443)
```
## Reproduction Steps
* Setup a multi-value DNS record and two servers with reverse proxies and upstream services.
* Invoke the script above with your domain name instead of `pypi-dev.company.de` to identify which server requests connects to.
* Shut down the upstream service of that server but keep the reverse proxy up and running.
* Invoke the script again.
## System Information
```
$ python -m requests.help
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "2.0.3"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.2"
},
"implementation": {
"name": "CPython",
"version": "3.9.6"
},
"platform": {
"release": "20.6.0",
"system": "Darwin"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.26.0"
},
"system_ssl": {
"version": "101010bf"
},
"urllib3": {
"version": "1.26.6"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
| closed | 2021-07-23T14:53:42Z | 2021-10-21T22:00:19Z | https://github.com/psf/requests/issues/5884 | [] | oschlueter | 1 |
jofpin/trape | flask | 216 | No module named flask_socketio | 
| open | 2020-02-14T12:48:33Z | 2020-06-05T10:39:31Z | https://github.com/jofpin/trape/issues/216 | [] | MrDmitry1107 | 1 |
huggingface/datasets | nlp | 7,363 | ImportError: To support decoding images, please install 'Pillow'. | ### Describe the bug
Following this tutorial locally using a MacBook and VSCode: https://huggingface.co/docs/diffusers/en/tutorials/basic_training
This line of code: `for i, image in enumerate(dataset[:4]["image"]):`
throws: `ImportError: To support decoding images, please install 'Pillow'.`
Pillow is installed.
### Steps to reproduce the bug
Run the tutorial
### Expected behavior
Images should be rendered
### Environment info
MacBook, VSCode | open | 2025-01-08T02:22:57Z | 2025-02-07T07:30:33Z | https://github.com/huggingface/datasets/issues/7363 | [] | jamessdixon | 3 |
microsoft/qlib | deep-learning | 1,268 | How Qlib handles dividend by factor data? | ## ❓ Questions and Help
I've been seeking a way to update the Qlib database so that it reflects the real stock market in China, and found that the 'factor' is supposed to be the bridge connecting Qlib data to the real world. But this Qlib factor appears to reflect the real dividend date, yet somehow not the value itself. Example for instrument SH600016:

```
datetime    instrument  $close     $factor    $close/$factor  Real_close  Real_foreAdjustFactor
2019-07-04  SH600016    10.784764  1.6746529  6.4399996       6.4399996   0.850308
2019-07-04  SH600016    10.775917  1.7694445  6.09            6.09        0.850308
```

When the stock executed a dividend plan and its price changed accordingly, Qlib seems to take the non-dividend (un-adjusted) values as its price data and put them into storage. In my test above this shows as $close/$factor = Real_close (the un-adjusted price).
I calculated the ratio of these two Qlib $factors and found: 1.6746529 / 1.7694445 = 0.946429 != 0.850308, which leads to my question: when a day comes on which I have the real stock price together with a new real adjustment factor, how should I decide a proper $factor for Qlib so that all Qlib price data updates correctly?
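To make the numbers concrete, here is the arithmetic behind that observation, using the values from the two rows above (plain Python, nothing Qlib-specific):

```python
# Values copied from the two SH600016 rows above
close_1, factor_1 = 10.784764, 1.6746529
close_2, factor_2 = 10.775917, 1.7694445
real_fore_adjust = 0.850308

# $close / $factor recovers the real (un-adjusted) close
print(round(close_1 / factor_1, 4))  # ~6.44
print(round(close_2 / factor_2, 4))  # ~6.09

# but the ratio of the two Qlib factors does not match the
# real forward-adjust factor:
print(round(factor_1 / factor_2, 6))  # ~0.946429, not 0.850308
```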
| closed | 2022-08-27T09:10:18Z | 2023-01-31T15:01:50Z | https://github.com/microsoft/qlib/issues/1268 | [
"question",
"stale"
] | hensejiang | 4 |
mirumee/ariadne | graphql | 12 | Documentation | We'll need to setup sphinx docs and start documenting our features as well as providing guides for common use cases. | closed | 2018-08-07T08:32:16Z | 2018-11-07T19:59:19Z | https://github.com/mirumee/ariadne/issues/12 | [
"roadmap"
] | rafalp | 1 |
biolab/orange3 | pandas | 6,323 | add RangeSlider gui component | <!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
<!-- In other words, what's your pain point? -->
Orange3 does not currently support RangeSlider gui components [see here](https://doc.qt.io/qt-6/qml-qtquick-controls2-rangeslider.html)
<!-- Is your request related to a problem, or perhaps a frustration? -->
Not a problem. More a frustration.
<!-- Tell us the story that led you to write this request. -->
I am building an Orange addon for text analysis that calculates scores for words based on certain metrics. I want to be able to filter words which have scores within a certain range. I am aware that there are other ways to specify numerical ranges using two [QSpinBox](https://www.tutorialspoint.com/pyqt/pyqt_qspinbox_widget.htm) components for example for the min and max values. However, I think it is more user-friendly and intuitive to use a range slider because then the user can more quickly and easily slide through different ranges to dynamically update the filtered results. Rather than manually typing arbitrary numbers for the min and max values for the QSpinBox components.
**What's your proposed solution?**
<!-- Be specific, clear, and concise. -->
Port Range Slider widget to PyQt like in [this thread](https://stackoverflow.com/questions/47342158/porting-range-slider-widget-to-pyqt5).
**Are there any alternative solutions?**
I don't know.
| closed | 2023-01-31T11:01:26Z | 2023-02-17T07:51:32Z | https://github.com/biolab/orange3/issues/6323 | [] | kodymoodley | 3 |
sqlalchemy/alembic | sqlalchemy | 336 | upgrading to implicit head that's already applied emits error message when it should likely pass silently, as is the case for normal heads already applied | **Migrated issue, originally created by Cedric Shock**
alembic 0.8.0 fails upgrades to a head that another branch `depends_on`.
We are going to create the following revision graph.
```
base --- core 1 (core@head)
                \_____
                      \
base --- branch 1 --- branch 2 (branch@head)
```
Upgrading to `core@head` is successful until we upgrade to `branch@head`. After upgrading to `branch@head`, subsequent upgrades to `core@head` fail with the message `Destination core@head is not a valid upgrade target from current head(s)`.
The following commands set up the desired revision graph.
```
alembic init alembic
alembic revision -m "core 1" --head=base --branch-label=core
alembic revision -m "branch 1" --head=base --branch-label=branch
alembic revision -m "branch 2" --head=branch@head --depends-on=core@head
```
Upgrading to `core@head` is successful multiple times in a row.
```
$ alembic upgrade core@head
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 50e7acdb8305, core 1
$ alembic upgrade core@head
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
```
We can upgrade to `branch@head`.
```
$ alembic upgrade branch@head
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> f65508c78e3, branch 1
INFO [alembic.runtime.migration] Running upgrade f65508c78e3, 50e7acdb8305 -> 1f50b22f2131, branch 2
```
But if we now try to upgrade to `core@head` again we get an error.
```
$ alembic upgrade core@head
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
ERROR [alembic.util.messaging] Destination core@head is not a valid upgrade target from current head(s)
FAILED: Destination core@head is not a valid upgrade target from current head(s)
```
The error is a result of `alembic.script.revision.RevisionMap.iterate_revisions` raising `RangeNotAncestorError`.
| closed | 2015-10-29T21:18:36Z | 2016-07-19T17:35:56Z | https://github.com/sqlalchemy/alembic/issues/336 | [
"bug",
"versioning model"
] | sqlalchemy-bot | 6 |
man-group/arctic | pandas | 840 | check updates for TickStore for real-time minute bar | #### Arctic Version
1.79.3
#### Arctic Store
TickStore
#### Platform and version
Windows 10
PyCharm
#### Description of problem and/or code sample that reproduces the issue
What I'm working on is streaming real-time minute-bar data into an Arctic database for 500 symbols using TickStore. (The reason I chose TickStore is that I read about its real-time capabilities.) My question is how to check for database updates continuously so that the trading algorithm can act upon the latest bar.
Currently, I am using the very ad-hoc way of looking for updates shown below (query_single_ticker.py). The intention is to keep reading the database and only print when the last row's datetime minute index differs from the one previously recorded. While this works fine for a single ticker, running it for 20+ symbols (via a bash script) seems to kill performance. I am seeking your suggestion on how one could read real-time data whenever there's an update.
```python
if __name__ == "__main__":
    ticker = sys.argv[1]
    previous_last_idx = None
    while True:
        # Get a library
        tickstore_lib = store['tick_store_example']
        # current_time_est = dt.now().astimezone(pytz.timezone('US/Eastern'))
        current_time_est = dt.now().astimezone(mktz('US/Eastern'))
        last_hour_date_time = current_time_est - timedelta(minutes=2)
        data = tickstore_lib.read(ticker, date_range=DateRange(last_hour_date_time, current_time_est), columns=None)
        # ideally this should only be called if there's an update
        if data.index[-1] != previous_last_idx:
            print("UPDATE:: TICKER:" + ticker + " CURRENT EST TIME = " + str(current_time_est) + " " +
                  " LAST ROW STOCK TIME = " + str(data.index[-1].tz_convert(mktz('US/Eastern'))) +
                  " N UNITS =" + str(len(data.index)))
            previous_last_idx = data.index[-1]
```
Here's a code snippet of how the data gets stored into Arctic. Note that the write to Arctic is called once per minute, whenever the minute bar is ready.
```python
for listener in self._listeners:
    # listener.process_live_bar(interval_data)
    (tzaware_date_time, ret_dict, ticker) = self.iqfeed_msg_to_artic_line(fields)
    data = pd.DataFrame(ret_dict, index=[tzaware_date_time])
    # store the data into arctic here
    self.tickstore_lib.write(ticker, data)
```
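For what it's worth, one way to keep the polling script manageable is to separate the "detect a change" logic from the Arctic read itself. The sketch below is generic Python with no Arctic dependency; `fetch_last_index` would wrap the `tickstore_lib.read(...).index[-1]` call from the script above, and all names are illustrative:

```python
import time

def watch_for_updates(fetch_last_index, on_update, poll_seconds=1.0, max_polls=None):
    """Call fetch_last_index() repeatedly; fire on_update(idx) only on change.

    fetch_last_index: zero-arg callable returning the latest bar's index
                      (or None if there is no data yet).
    on_update:        callback invoked with each new index.
    max_polls:        optional cap on iterations, mainly useful for testing.
    """
    previous = None
    polls = 0
    while max_polls is None or polls < max_polls:
        idx = fetch_last_index()
        if idx is not None and idx != previous:
            on_update(idx)
            previous = idx
        polls += 1
        time.sleep(poll_seconds)

# Demo with a fake feed: only changed values trigger the callback.
seen = []
fake_feed = iter([1, 1, 2, 2, 3])
watch_for_updates(lambda: next(fake_feed), seen.append, poll_seconds=0, max_polls=5)
print(seen)  # [1, 2, 3]
```

For 500 symbols, one bash-spawned process per ticker will still be read-heavy; batching many symbols into a single polling process is usually the first thing to try.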
| open | 2020-02-03T21:31:37Z | 2021-06-08T14:36:28Z | https://github.com/man-group/arctic/issues/840 | [] | darkknight9394 | 1 |
tqdm/tqdm | jupyter | 1,202 | Running pandas df.progress_apply in notebook run under vscode generates 'dataframe object has no attribute _is_builtin_func' | Code Sample, a copy-pastable example
```python
df['col'].progress_apply(lambda x: x+1)
```

#### Problem description

Generates an error message when run in a notebook under VSCode (but works fine when run in a normal Jupyter notebook):

```
dataframe object has no attribute _is_builtin_func
```

#### Expected Output

Should show a progress bar and perform the apply lambda.

#### Output of `pd.show_versions()`

1.3.0
| closed | 2021-07-05T23:03:16Z | 2021-07-06T06:38:49Z | https://github.com/tqdm/tqdm/issues/1202 | [
"duplicate 🗐",
"submodule ⊂"
] | dickreuter | 1 |
writer/writer-framework | data-visualization | 432 | How to Package a StreamSync App into an Executable Using PyInstaller? | Hello, I have developed a Python application using StreamSync and I am trying to package it into an executable (.exe) using PyInstaller.
Here are the steps I have taken so far, but I am facing issues getting it to work correctly.
------------------------------------------------------------------------------------------------------------------------
1. Created `run_main.py`:

2. Created `hook_streamsync.py`:

3. Created a PyInstaller spec file using:

   ```shell
   pyinstaller --onefile --additional-hooks-dir=./hooks run_main.py --clean
   ```
4. Edited the generated `run_app.spec` file to include the necessary static files from StreamSync:

   ```python
   datas=[
       ('<path_to_streamsync_static>', 'streamsync/static')
   ]
   ```
5. Built the executable using:

   ```shell
   pyinstaller run_main.spec --clean
   ```
6. Copied `ui.json` and `main.py` to the `dist` folder

------------------------------------------------------------------------------------------------------------------------
The executable is generated successfully in the `dist` folder, but when I try to run it, I encounter issues: the StreamSync logo is printed repeatedly in the command line and the screen doesn't progress.

Could someone please guide me on how to correctly package a StreamSync application into an executable using PyInstaller?
Any help or suggestions would be greatly appreciated.
Thank you! | open | 2024-05-17T06:50:10Z | 2024-08-26T08:29:14Z | https://github.com/writer/writer-framework/issues/432 | [] | Masa0208 | 0 |
apache/airflow | data-science | 48,024 | Getting 403 forbidden while creating namespaced pod | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
ERROR - Task failed with exception source="task" error_detail=[{"exc_type":"ApiException","exc_value":"(403)\nReason: Forbidden\nHTTP response headers: HTTPHeaderDict({'Audit-Id': 'a20f9afa-d32d-44c6-93a7-5dd213f2ea29', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '7ea62e44-19c5-48a2-a560-ba8ca979b02c', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'de91312c-33da-49cb-8500-93b259a7b0a1', 'Date': 'Thu, 20 Mar 2025 16:21:27 GMT', 'Content-Length': '346'})\nHTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"pods \\\"cowsay-statc-79n0g2sj\\\" is forbidden: exceeded quota: primitive-aurora-5047-default, requested: cpu=250m,memory=512Mi, used: cpu=1,memory=2Gi, limited: cpu=1,memory=2Gi\",\"reason\":\"Forbidden\",\"details\":{\"name\":\"cowsay-statc-79n0g2sj\",\"kind\":\"pods\"},\"code\":403}\n\n","exc_notes":[],"syntax_error":null,"is_cause":false,"frames":[{"filename":"/usr/local/lib/python3.12/site-packages/airflow/sdk/execution_time/task_runner.py","lineno":582,"name":"run"},{"filename":"/usr/local/lib/python3.12/site-packages/airflow/sdk/execution_time/task_runner.py","lineno":718,"name":"_execute_task"},{"filename":"/usr/local/lib/python3.12/site-packages/airflow/sdk/definitions/baseoperator.py","lineno":373,"name":"wrapper"},{"filename":"/home/astro/.local/lib/python3.12/site-packages/airflow/providers/cncf/kubernetes/operators/pod.py","lineno":583,"name":"execute"},{"filename":"/home/astro/.local/lib/python3.12/site-packages/airflow/providers/cncf/kubernetes/operators/pod.py","lineno":593,"name":"execute_sync"},{"filename":"/home/astro/.local/lib/python3.12/site-packages/airflow/providers/cncf/kubernetes/operators/pod.py","lineno":555,"name":"get_or_create_pod"},{"filename":"/usr/local/lib/python3.12/site-packages/tenacity/__init__.py","lineno":336,"name":"wrapped_f"},{"filename":"/usr/local/lib/python3.12/site-packages/tenacity/__init__.py","line
no":475,"name":"__call__"},{"filename":"/usr/local/lib/python3.12/site-packages/tenacity/__init__.py","lineno":376,"name":"iter"},{"filename":"/usr/local/lib/python3.12/site-packages/tenacity/__init__.py","lineno":398,"name":"<lambda>"},{"filename":"/usr/local/lib/python3.12/concurrent/futures/_base.py","lineno":449,"name":"result"},{"filename":"/usr/local/lib/python3.12/concurrent/futures/_base.py","lineno":401,"name":"__get_result"},{"filename":"/usr/local/lib/python3.12/site-packages/tenacity/__init__.py","lineno":478,"name":"__call__"},{"filename":"/home/astro/.local/lib/python3.12/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py","lineno":373,"name":"create_pod"},{"filename":"/home/astro/.local/lib/python3.12/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py","lineno":351,"name":"run_pod_async"},{"filename":"/home/astro/.local/lib/python3.12/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py","lineno":343,"name":"run_pod_async"},{"filename":"/usr/local/lib/python3.12/site-packages/kubernetes/client/api/core_v1_api.py","lineno":7356,"name":"create_namespaced_pod"},{"filename":"/usr/local/lib/python3.12/site-packages/kubernetes/client/api/core_v1_api.py","lineno":7455,"name":"create_namespaced_pod_with_http_info"},{"filename":"/usr/local/lib/python3.12/site-packages/kubernetes/client/api_client.py","lineno":348,"name":"call_api"},{"filename":"/usr/local/lib/python3.12/site-packages/kubernetes/client/api_client.py","lineno":180,"name":"__call_api"},{"filename":"/usr/local/lib/python3.12/site-packages/kubernetes/client/api_client.py","lineno":391,"name":"request"},{"filename":"/usr/local/lib/python3.12/site-packages/kubernetes/client/rest.py","lineno":279,"name":"POST"},{"filename":"/usr/local/lib/python3.12/site-packages/kubernetes/client/rest.py","lineno":238,"name":"request"}]}]
### What you think should happen instead?
_No response_
### How to reproduce
Run the below Dag in k8s executor:
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import (
    KubernetesPodOperator,
)
from airflow.configuration import conf

namespace = conf.get("kubernetes_executor", "NAMESPACE")

with DAG(
    dag_id="kpo_mapped",
    start_date=datetime(1970, 1, 1),
    schedule=None,
    tags=["taskmap"]
    # render_template_as_native_obj=True,
) as dag:
    KubernetesPodOperator(
        task_id="cowsay_static",
        name="cowsay_statc",
        namespace=namespace,
        image="docker.io/rancher/cowsay",
        cmds=["cowsay"],
        arguments=["moo"],
        log_events_on_failure=True,
    )

    KubernetesPodOperator.partial(
        task_id="cowsay_mapped",
        name="cowsay_mapped",
        namespace=namespace,
        image="docker.io/rancher/cowsay",
        cmds=["cowsay"],
        log_events_on_failure=True,
    ).expand(arguments=[["mooooove"], ["cow"], ["get out the way"]])
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-20T16:45:19Z | 2025-03-20T16:57:31Z | https://github.com/apache/airflow/issues/48024 | [
"kind:bug",
"area:core",
"provider:cncf-kubernetes",
"needs-triage"
] | atul-astronomer | 1 |
opengeos/leafmap | jupyter | 212 | Add support for visualizing LiDAR data | References:
- https://github.com/laspy/laspy
- https://github.com/isl-org/Open3D
- https://medium.com/spatial-data-science/an-easy-way-to-work-and-visualize-lidar-data-in-python-eed0e028996c | closed | 2022-03-02T13:48:59Z | 2022-03-04T22:12:10Z | https://github.com/opengeos/leafmap/issues/212 | [
"Feature Request"
] | giswqs | 11 |
tartiflette/tartiflette | graphql | 80 | Unused fragment doesn't raise any error | A GraphQL request containing an unused fragment doesn't raise any error (cf. [GraphQL spec](https://facebook.github.io/graphql/June2018/#sec-Fragments-Must-Be-Used)):
```sdlang
type User {
name: String
}
type Query {
viewer: User
}
```
```graphql
fragment UserFields on User {
name
}
query {
viewer {
name
}
}
``` | closed | 2019-01-15T08:18:53Z | 2019-01-15T13:58:18Z | https://github.com/tartiflette/tartiflette/issues/80 | [
"bug"
] | Maximilien-R | 0 |
roboflow/supervision | computer-vision | 1,338 | Add Gpu support to time in zone solutions ! | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
It does not support executing the code on the GPU/CUDA.
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-07-10T04:52:50Z | 2024-07-10T08:47:07Z | https://github.com/roboflow/supervision/issues/1338 | [
"enhancement"
] | Rasantis | 1 |
gunthercox/ChatterBot | machine-learning | 1,936 | problem when calling custom adapter | Hi, good day!
I am somewhat new to ChatterBot. I am currently following this documentation: [Docs](https://chatterbot.readthedocs.io/en/stable/logic/create-a-logic-adapter.html) in order to create a logic adapter that rejects rude input; so far I have not had any luck.
This is the error I get:
```
Traceback (most recent call last):
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\app.py", line 2463, in __call__
    return self.wsgi_app(environ, start_response)
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\app.py", line 2449, in wsgi_app
    response = self.handle_exception(e)
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\app.py", line 1866, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\_compat.py", line 39, in reraise
    raise value
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\_compat.py", line 39, in reraise
    raise value
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\flask\app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "C:\Users\MTI\Documents\MTI\chatbot\chatbot-covid19\app.py", line 16, in getResponse
    respuesta_chat = getResultBot(pregunta_cliente, entrenar_act)
  File "C:\Users\MTI\Documents\MTI\chatbot\chatbot-covid19\main_core.py", line 40, in getResultBot
    respuesta = chatbot.get_response(pregunta)
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\chatterbot\chatterbot.py", line 139, in get_response
    response = self.generate_response(input_statement, additional_response_selection_parameters)
  File "C:\Users\MTI\Documents\MTI\chatbot\covid19\lib\site-packages\chatterbot\chatterbot.py", line 175, in generate_response
    output = adapter.process(input_statement, additional_response_selection_parameters)
TypeError: process() takes 2 positional arguments but 3 were given
```
this is my code:
- Config. Chatbot
```python
chatbot = ChatBot(
    "Covid-19",
    storage_adapter="chatterbot.storage.SQLStorageAdapter",
    database="./database.sqlite5",
    trainer="chatterbot.trainers.ListTrainer",
    logic_adapters=[
        {
            "import_path": "chatterbot.logic.BestMatch",
            "statement_comparison_function": "chatterbot.comparisons.levenshtein_distance",
            "response_selection_method": "chatterbot.response_selection.get_most_frequent_response"
        },
        {
            "import_path": "my_adapter.MyLogicAdapter1"
        }
    ],
    read_only=entrenamiento
)
```
- Logic Adapter
```python
class MyLogicAdapter1(LogicAdapter):
    def __init__(self, chatbot, **kwargs):
        super().__init__(chatbot, **kwargs)

    def can_process(self, statement):
        word_list = ['stupid', 'idiot']
        for word in word_list:
            if word in statement.text:
                print('valido groseria')
                return True
        return False

    def process(self, statement, **kwargs):
        response = Statement(text='¡Heeeey!')
        confidence = 1
        return confidence, response
```
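The error happens because ChatterBot 1.x calls `adapter.process(input_statement, additional_response_selection_parameters)` with two positional arguments, while the custom adapter's `process(self, statement, **kwargs)` only accepts one (`**kwargs` does not absorb positional arguments). Below is a sketch of the signature fix; the `LogicAdapter` base and the `Statement` response are stubbed with plain Python so the snippet stands alone:

```python
class LogicAdapter:  # stand-in for chatterbot.logic.LogicAdapter
    def __init__(self, chatbot, **kwargs):
        self.chatbot = chatbot


class MyLogicAdapter1(LogicAdapter):
    def can_process(self, statement):
        return any(word in statement for word in ("stupid", "idiot"))

    # Accept the second positional argument ChatterBot passes in;
    # **kwargs alone does not catch positional arguments.
    def process(self, statement, additional_response_selection_parameters=None):
        confidence = 1
        return confidence, "¡Heeeey!"  # a plain string stands in for Statement


adapter = MyLogicAdapter1(chatbot=None)
# This mirrors ChatterBot's call site: two positional arguments.
confidence, response = adapter.process("you idiot", {})
```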
Thanks in advance for your help :/ | open | 2020-03-26T14:44:17Z | 2020-04-05T16:00:48Z | https://github.com/gunthercox/ChatterBot/issues/1936 | [] | mcallejas95 | 2 |
ageitgey/face_recognition | machine-learning | 1,083 | ValueError for face_recognition_svm.py example | * face_recognition version: 1.3.0
* Python version: 3.8
* Operating System: Win 10
* scikit-learn: 0.22.2.post1
### Description
I am attempting to use the example face_recognition_svm.py to train multiple images for only one person.
### What I Did
I passed 5 image encodings into the encodings array and only one string in the names array. However, the line clf.fit(encodings, names) raised ValueError: Found input variables with inconsistent number of samples: [5, 1]
Below is the code i used:
```
import os
from pathlib import Path

import face_recognition
from sklearn import svm

encodings = []
names = ["xin"]
input_path = "C:\\Users\\Desktop\\positive_images"
train_dir = Path(input_path).glob("**/*.jpg")
for pic_path in train_dir:
    xin_face = face_recognition.load_image_file(str(pic_path))
    face_boxes = face_recognition.face_locations(xin_face)
    rel_path = os.path.relpath(str(pic_path), input_path)  # computed up front so the else branch can use it
    if len(face_boxes) == 1:
        xin_encoding = face_recognition.face_encodings(xin_face)[0]
        encodings.append(xin_encoding)
        print(rel_path + " added")
    else:
        print(rel_path + " was skipped, can't be used for training")

clf = svm.SVC(gamma="scale")
clf.fit(encodings, names)
```
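The `ValueError` comes from scikit-learn's check that `X` and `y` have the same number of samples: five encodings need five labels, not a one-element `names` list. A sketch of the corrected bookkeeping, with the face-recognition calls stubbed out so it runs standalone:

```python
def fake_face_encodings(path):
    """Stand-in for face_recognition.face_encodings(); one 128-d encoding."""
    return [[0.0] * 128]


image_paths = [f"img_{i}.jpg" for i in range(5)]  # hypothetical file names

encodings, names = [], []
for path in image_paths:
    faces = fake_face_encodings(path)
    if len(faces) == 1:
        encodings.append(faces[0])
        names.append("xin")  # one label appended per accepted encoding

# len(encodings) == len(names) now holds, which is what
# clf.fit(encodings, names) requires.
```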
https://stackoverflow.com/questions/44181664/sklearn-valueerror-found-input-variables-with-inconsistent-numbers-of-samples
Talks about having to reshape the array but I do not understand what is going on. Any help is appreciated :) | closed | 2020-03-11T01:48:12Z | 2021-08-11T12:17:19Z | https://github.com/ageitgey/face_recognition/issues/1083 | [] | VVinter-melon | 3 |
suitenumerique/docs | django | 153 | ✨Link doc | ## What we have now
For the moment we can share the doc link with our members; if the doc is public we can share it with anyone, but they will only have the `reader` role.
The doc can be:
- public (with reader role)
- private
## What do we want?
We would like to give more "power" to those who have the link, so this effectively just adds another public state to the doc:
- public-reader
- public-editor
- private
⚠️ This is what Google Docs does.


## Thought
- We should probably stop displaying public docs to everybody in the Grid list, unless you are a member of the public doc.

## More challenging
Have multiple links per doc; each link will have:
- Role associated
- Duration
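A rough sketch of how the "multiple links" variant could be modelled. The class shape, field names and role values here are assumptions for illustration, not the project's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional


class LinkRole(Enum):
    READER = "reader"
    EDITOR = "editor"


@dataclass
class ShareLink:
    token: str
    role: LinkRole
    expires_at: Optional[datetime] = None  # None = no duration limit

    def is_valid(self, now: datetime) -> bool:
        return self.expires_at is None or now < self.expires_at


now = datetime(2024, 8, 2, 12, 0)
link = ShareLink("abc123", LinkRole.EDITOR, expires_at=now + timedelta(days=7))
```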
| closed | 2024-08-02T14:45:57Z | 2024-10-23T09:20:34Z | https://github.com/suitenumerique/docs/issues/153 | [
"frontend",
"feature",
"backend"
] | AntoLC | 0 |
ultralytics/ultralytics | python | 19,730 | How to get loss value from a middle module of my model? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I've designed a module to process the features, and now I need to calculate a loss value in this module. Is there a way to add this loss to the final loss value calculated in my customized loss class, which triggers the backward pass?
Here is part of my model.yaml
```
.....
- [[19,21], 1, MyModule, [module_args]] # 22
......
- [[28,30,32], 1, Detect, [det_nc]] # 34 Detect(P3,P4,P5)
```
For example, a loss will be calculated in "MyModule" and the output of "Detect" will be used to get another loss value. How could I fuse the two?
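One framework-agnostic way to fuse the two (a sketch of the general pattern, not Ultralytics' actual loss API) is to have the middle module stash its own loss as an attribute during the forward pass, and let the criterion add it to the detection loss with a weighting factor:

```python
class MyModule:
    """Stand-in for the custom middle module; stashes its auxiliary loss."""

    def __init__(self, weight=0.1):
        self.weight = weight
        self.aux_loss = 0.0

    def forward(self, features):
        # ... process the features ...
        self.aux_loss = sum(f * f for f in features)  # placeholder loss term
        return features


module = MyModule(weight=0.1)
module.forward([0.5, -0.5])

det_loss = 1.0  # stand-in for the loss computed from the Detect output
total_loss = det_loss + module.weight * module.aux_loss
# With real tensors, calling backward() on total_loss would propagate
# gradients through both terms.
```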
### Additional
_No response_ | closed | 2025-03-16T16:21:09Z | 2025-03-20T18:27:53Z | https://github.com/ultralytics/ultralytics/issues/19730 | [
"question"
] | xiyuxx | 4 |
vaexio/vaex | data-science | 2,024 | [FEATURE-REQUEST] Convert (or promote) arrow `time64` to numpy `timedelta64` (and vice-versa?) | **Description**
It appears vaex is not able to convert arrow `time64` to numpy `timedelta64`.
Supporting this conversion would be great!
```python
import pandas as pd
import vaex
from fastparquet import write
# Setup.
parquet = '/home/yoh/Documents/code/data/vaex/td_test'
# Creating a `timedelta64` and recording it in a parquet file.
td = pd.Timestamp('2021/01/01 08:00') - pd.Timestamp('2021/01/01 07:59')
pdf = pd.DataFrame({'td':[td]})
write(parquet, pdf, file_scheme='hive')
# Reading it back
vdf = vaex.open(parquet)
```
Checking types.
```python
pdf['td']
Out[24]:
0 0 days 00:01:00
Name: td, dtype: timedelta64[ns]
vdf['td']
Out[25]:
Expression = td
Length: 1 dtype: time64[us] (column)
------------------------------------
0 00:01:00
```
Converting arrow `time64` to numpy `timedelta64`: problem
```python
vdf['td'].to_numpy()
Traceback (most recent call last):
File ~/anaconda3/lib/python3.9/site-packages/vaex/array_types.py:327 in numpy_dtype_from_arrow_type
return map_arrow_to_numpy[arrow_type]
KeyError: DataType(time64[us])
[...]
raise NotImplementedError(f'Cannot convert {arrow_type}')
NotImplementedError: Cannot convert time64[us]
```
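The requested mapping amounts to interpreting a wall-clock time as an offset from midnight. A stdlib sketch of that semantics (vaex itself is not involved here):

```python
from datetime import time, timedelta


def time_to_timedelta(t: time) -> timedelta:
    """Map a time-of-day to the offset-from-midnight it represents."""
    return timedelta(hours=t.hour, minutes=t.minute,
                     seconds=t.second, microseconds=t.microsecond)


# The 00:01:00 value from the example above:
td = time_to_timedelta(time(0, 1, 0))
```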
Vaex version
{'vaex-core': '4.9.1',
'vaex-viz': '0.5.1',
'vaex-hdf5': '0.12.1',
'vaex-server': '0.8.1',
'vaex-astro': '0.9.1',
'vaex-jupyter': '0.7.0',
'vaex-ml': '0.17.0'} | open | 2022-04-19T19:02:52Z | 2022-04-29T16:36:30Z | https://github.com/vaexio/vaex/issues/2024 | [] | yohplala | 2 |
mitmproxy/pdoc | api | 254 | pdoc fails with Jinja2>=3.0 | #### Problem Description
Although I am able to build docs locally on my Windows computer, it doesn't work on the [GitHub Actions machines](https://github.com/Piphi5/globe-observer-utils/runs/2561612426?check_suite_focus=true) or locally on WSL (tried on regular Python files and packages such as pandas).
I'm sure this is most likely a problem on my end, but I haven't been able to find much documentation on what the issue is and how to resolve it.
All of these systems fail with the following error from Jinja:
`AssertionError: Tried to resolve a name to a reference that was unknown to the frame ('m')`
A Full Stack Trace can be found [here](https://github.com/Piphi5/globe-observer-utils/runs/2561612426?check_suite_focus=true).
#### Steps to reproduce the behavior:
1. On WSL 1, install pdoc with `python3 -m pip install pdoc`
2. Try to generate docs for a python file or package (e.g. `python3 -m pdoc pandas`)
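Until the Jinja2 3.x incompatibility named in the title is fixed upstream, pinning Jinja2 below 3.0 is the usual stop-gap. This pin is an assumption derived from the issue title, not a confirmed fix:

```
# requirements.txt fragment (assumed workaround)
pdoc==6.6.0
Jinja2<3.0
```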
#### System Information
Paste the output of "pdoc --version" here.
```
pdoc: 6.6.0
Python: 3.9.2
Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.31
``` | closed | 2021-05-12T02:00:01Z | 2021-05-12T12:27:49Z | https://github.com/mitmproxy/pdoc/issues/254 | [
"bug"
] | Piphi5 | 5 |
pywinauto/pywinauto | automation | 1,339 | Different wrapper behavior between Application.window() vs. .windows() | Perhaps I am misinterpreting the documentation, but I expect the object returned by Application.window() to be of the same type as the equivalent list item returned by Application.windows(). However, as shown in the example below, window() returns a WindowSpecification while the equivalent element of the list returned by windows() is a DialogWrapper.
From the 0.6.8 documentation:
> **window(\*\*kwargs)**
> Return a window of the application
>
> You can specify the same parameters as findwindows.find_windows. It will add the process parameter to ensure that the window is from the current process.
>
> See pywinauto.findwindows.find_elements() for the full parameters description.
>
> **windows(\*\*kwargs)**
> Return a list of wrapped top level windows of the application
This behavior exists in v0.6.8. I can't test in atspi due to Issue #1338
## Expected Behavior
```
<class 'pywinauto.application.WindowSpecification'>
<class 'pywinauto.application.WindowSpecification'>
```
## Actual Behavior
```
<class 'pywinauto.application.WindowSpecification'>
<class 'pywinauto.controls.hwndwrapper.DialogWrapper'>
```
## Steps to Reproduce the Problem
Run the example code in the specified environment.
## Short Example of Code to Demonstrate the Problem
```python
import pywinauto
app = pywinauto.Application(backend="win32").start(cmd_line=r"C:\WINDOWS\system32\notepad.exe")
print(type(app.window(title_re=".*Notepad.*")))
print(type(app.windows(title_re=".*Notepad.*")[0]))
```
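This looks intentional rather than a bug: `window()` returns a lazy `WindowSpecification` (criteria that resolve to a concrete wrapper only when used, e.g. via `.wrapper_object()`), while `windows()` resolves immediately. A minimal stdlib analogy of that lazy/eager split, not pywinauto's implementation:

```python
class Wrapper:
    """Stands in for DialogWrapper: a concrete, resolved handle."""

    def __init__(self, title):
        self.title = title


class Spec:
    """Stands in for WindowSpecification: stored criteria, resolved on demand."""

    def __init__(self, registry, title):
        self._registry = registry
        self._title = title

    def wrapper_object(self):
        # Resolution happens here, not when the Spec is constructed.
        return next(w for w in self._registry if w.title == self._title)


registry = [Wrapper("Untitled - Notepad")]
spec = Spec(registry, "Untitled - Notepad")             # like app.window(...)
eager = [w for w in registry if w.title == "Untitled - Notepad"]  # like app.windows(...)
```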
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: Python 3.8.2, 32-bit
- Platform and OS:
Windows info:
64-bit version
Edition: Windows 10 Enterprise
Version: 22H2
OS build: 19045.3324
Experience: Windows Feature Experience Pack 1000.19041.1000.0
| open | 2023-09-20T20:19:24Z | 2023-09-21T03:36:58Z | https://github.com/pywinauto/pywinauto/issues/1339 | [] | logkirkland | 1 |
seleniumbase/SeleniumBase | pytest | 3,261 | Add an example test for CDP Mode to demonstrate XHR requests being collected and displayed | ### Add an example test for CDP Mode to demonstrate XHR requests being collected and displayed
----
The test should show how the XHR data gets collected. Then the test should display the request/response headers. | closed | 2024-11-14T18:39:06Z | 2024-11-14T21:44:37Z | https://github.com/seleniumbase/SeleniumBase/issues/3261 | [
"tests",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
awesto/django-shop | django | 422 | django-shop breaks my project's testsuite | Traceback below.
The problem seems to be that we are calling `connection.cursor()` in `shop/models/fields.py` at module level, and that uses the connection to the default database, not the test database. I will investigate this ASAP.
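A common fix for this class of bug is to defer the probe until first use instead of running it at import time. A sketch of the pattern with the database call stubbed out (this is not the actual django-shop code):

```python
from functools import lru_cache

probe_calls = []  # instrumentation: records when the probe actually runs


def _probe_database():
    """Placeholder for the real `connection.cursor()` capability query."""
    probe_calls.append(1)
    return True


@lru_cache(maxsize=None)
def backend_supports_json():
    # Deferred: nothing touches the database at import time, so a test
    # runner can swap in its test database before the first field needs this.
    return _probe_database()
```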
```
Traceback (most recent call last):
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/backends/postgresql/base.py", line 175, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: FATAL: database "acceed-shop" does not exist
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/py/test.py", line 4, in <module>
sys.exit(pytest.main())
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/config.py", line 47, in main
config = _prepareconfig(args, plugins)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/config.py", line 132, in _prepareconfig
pluginmanager=pluginmanager, args=args)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 249, in _wrapped_call
wrap_controller.send(call_outcome)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/helpconfig.py", line 32, in pytest_cmdline_parse
config = outcome.get_result()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
self.result = func()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
res = hook_impl.function(*args)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/config.py", line 882, in pytest_cmdline_parse
self.parse(args)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/config.py", line 1032, in parse
self._preparse(args, addopts=addopts)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/config.py", line 1003, in _preparse
args=args, parser=self._parser)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 595, in execute
return _wrapped_call(hook_impl.function(*args), self.execute)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 253, in _wrapped_call
return call_outcome.get_result()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 278, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 264, in __init__
self.result = func()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
res = hook_impl.function(*args)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/pytest_django/plugin.py", line 245, in pytest_load_initial_conftests
_setup_django()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/pytest_django/plugin.py", line 148, in _setup_django
django.setup()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/apps/config.py", line 202, in import_models
self.models_module = import_module(models_module_name)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 662, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/home/rene/acceed2/acceed/models.py", line 15, in <module>
from shop.models.defaults.address import BillingAddress, ShippingAddress # noqa
File "/home/rene/django-shop/shop/models/__init__.py", line 2, in <module>
from .notification import Notification, NotificationAttachment
File "/home/rene/django-shop/shop/models/notification.py", line 18, in <module>
from .customer import CustomerModel
File "/home/rene/django-shop/shop/models/customer.py", line 18, in <module>
from shop.models.fields import JSONField
File "/home/rene/django-shop/shop/models/fields.py", line 13, in <module>
with connection.cursor() as cursor:
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/backends/base/base.py", line 231, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/backends/base/base.py", line 204, in _cursor
self.ensure_connection()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/django/db/backends/postgresql/base.py", line 175, in get_new_connection
connection = Database.connect(**conn_params)
File "/home/rene/.virtualenvs/acceed-shop/lib/python3.5/site-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: FATAL: database "acceed-shop" does not exist
```
| open | 2016-09-21T18:40:40Z | 2016-11-20T14:06:13Z | https://github.com/awesto/django-shop/issues/422 | [
"bug"
] | rfleschenberg | 8 |
Yorko/mlcourse.ai | scikit-learn | 151 | Undefined names 'dprev_h' and 'dprev_c' | flake8 testing of https://github.com/Yorko/mlcourse_open
$ __flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics__
```
./class_cs231n/assignment3/cs231n/rnn_layers.py:264:16: F821 undefined name 'dprev_h'
return dx, dprev_h, dprev_c, dWx, dWh, db
^
./class_cs231n/assignment3/cs231n/rnn_layers.py:264:25: F821 undefined name 'dprev_c'
return dx, dprev_h, dprev_c, dWx, dWh, db
^
``` | closed | 2018-01-30T13:52:37Z | 2018-08-04T16:07:50Z | https://github.com/Yorko/mlcourse.ai/issues/151 | [
"invalid"
] | cclauss | 1 |
mwaskom/seaborn | data-visualization | 2,791 | UserWarning: ``square=True`` ignored in clustermap | Hi,
Whenever I run
```
sb.clustermap(
master_table_top_10_pearson.transpose(),
square=True
)
```
for my DataFrame which looks like a perfectly normal DataFrame

I get ``UserWarning: `square=True` ignored in clustermap`` (emitted via `warnings.warn(msg)`). However, I need the squares in my plot. I cannot see a reason why the parameter gets ignored.
Thank you very much! | closed | 2022-05-09T10:38:50Z | 2022-05-09T10:45:44Z | https://github.com/mwaskom/seaborn/issues/2791 | [] | Zethson | 1 |
streamlit/streamlit | python | 10,239 | Dynamic popover height | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Currently, the height of a popover in Streamlit remains static, even when it contains components with expandable elements, such as a selectbox. When such elements are opened, the popover should dynamically adjust its height to fit the expanded content and revert to its original height once the element is closed.
### Why?
Popovers in Streamlit do not dynamically adjust their height when containing expandable components like selectbox, multiselect, etc. This can cause parts of the expanded content to overflow outside the container.
As motivation, Streamlit is widely used for building dynamic and interactive applications. Ensuring that popovers (and other containers) dynamically adjust their size to fit expandable elements improves the overall responsiveness and usability of the app.
The use case is obvious, the dynamic height ensures that when a user interacts, for example, with a selectbox inside a popover, the popover container adjusts to fully display the expanded options, preventing them from overflowing.
### How?
Add a parameter to st.popover called **auto_height** (default: True): st.popover(auto_height=True)
When **auto_height=True**, the popover container recalculates its height whenever the state of an expandable component inside it changes (e.g., options are expanded or collapsed).
### Additional Context
_No response_ | open | 2025-01-23T16:02:39Z | 2025-01-23T16:53:45Z | https://github.com/streamlit/streamlit/issues/10239 | [
"type:enhancement",
"feature:st.popover"
] | taugustinov-ness | 2 |
alpacahq/alpaca-trade-api-python | rest-api | 459 | Missing Key 'n' in entity mapping | Hi!
It looks like the web API was updated recently, with two extra fields: 'n', 'vw'
`{'t': '2021-06-28T04:00:00Z', 'o': 10.15, 'h': 10.3, 'l': 10.04, 'c': 10.04, 'v': 20356, 'n': 111, 'vw': 10.113978}`
So now conversion to a pandas DataFrame using the `.df` method fails with an exception:
```
Traceback (most recent call last):
File "\divscan.py", line 548, in <module>
File "\divscan.py", line 293, in prepareData
data = fetchDelta(ticker, TimeFrame.Day, minBars, past, now)
File "\divscan.py", line 279, in fetchDelta
df = fetchData(ticker, tf, start, end)
File "\divscan.py", line 258, in fetchData
return data.df
File "\lib\site-packages\alpaca_trade_api\entity_v2.py", line 61, in df
df.columns = [self.mapping[c] for c in df.columns]
File "\lib\site-packages\alpaca_trade_api\entity_v2.py", line 61, in <listcomp>
df.columns = [self.mapping[c] for c in df.columns]
KeyError: 'n'
``` | closed | 2021-06-30T13:25:44Z | 2021-07-02T09:15:05Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/459 | [] | haron4igg | 6 |
sngyai/Sequoia | pandas | 11 | HDFStore requires PyTables, "DLL load failed | 硕世生物('688399') generated an exception: HDFStore requires PyTables, "DLL load failed: The specified module could not be found." problem importing | closed | 2020-01-28T05:03:20Z | 2022-12-30T06:35:07Z | https://github.com/sngyai/Sequoia/issues/11 | [] | ferris1993 | 3 |
psf/black | python | 3,786 | Documentation of current code-style formatting of comment omits mention of hashbangs | ## **Describe the problem**
The documentation of the current code-style fails to list hashbang comments (`#!`) among the types of comments that are excepted from the spacing rules, even though they are indeed excepted from the spacing rules (see [src/black/comments.py](https://github.com/psf/black/blob/main/src/black/comments.py#L26)).
From the [comments subsection of docs/the_black_code_style/current_style.md](https://github.com/psf/black/blob/main/docs/the_black_code_style/current_style.md#comments):
> Some types of comments that require specific spacing rules are respected: doc comments (#: comment), section comments with long runs of hashes, and Spyder cells.
## **Describe the solution you'd like**
I would like this behaviour to be documented. I will create a Pull Request to rectify this if/when given the go-ahead from whoever has authority over maintenance of the documentation.
## **Describe alternatives you've considered**
I did (briefly) consider allowing this to go undocumented. I then immediately stopped considering that, as allowing this omission to continue existing would be completely and utterly silly. Regardless of how minor the omission is, it's my opinion there's no good reason to leave this (or indeed, any) behaviour undocumented. This applies doubly in this case specifically, because of how little effort it would require to fix the documentation (only one sentence needs to be modified).
| closed | 2023-07-11T18:17:19Z | 2023-07-11T19:16:52Z | https://github.com/psf/black/issues/3786 | [
"T: documentation"
] | mqyhlkahu | 1 |
django-import-export/django-import-export | django | 1,712 | IllegalCharacterError is not handled | **Describe the bug**
When there are ASCII control characters > 32 in the values for the export, the export to `.xlsx` will fail, because `openpyxl` can't handle it.
**To Reproduce**
Steps to reproduce the behavior:
1. In a CharField add '\x0b' to the value.
2. Click on 'export' in the ModelAdmin Changelist Browser window.
3. Choose 'xlsx' and submit.
4. See error
**Versions (please complete the following information):**
- Django Import Export: [3.3.4]
- Python [3.11]
- Django [4.2.7]
**Expected behavior**
Just have these characters escaped or replaced and continue with the export. Ideally, show a message to the user about the characters that were replaced.
**Screenshots**

**Additional context**
In my case the problem occurred with the "vertical tab", which is in `string.whitespace`.
```python
>>> '\v'
'\x0b'
>>> import string
>>> string.whitespace
' \t\n\r\x0b\x0c'
```
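A sanitising step before values reach openpyxl would avoid the crash. In this sketch the regex encodes the rule that xlsx only allows `\t`, `\n` and `\r` among ASCII control characters; where exactly to hook it into django-import-export (e.g. a resource's field export) is left open:

```python
import re

# ASCII control characters except \t (\x09), \n (\x0a) and \r (\x0d):
ILLEGAL_XLSX_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")


def sanitize(value):
    """Strip characters openpyxl cannot serialise; leave non-strings alone."""
    if isinstance(value, str):
        return ILLEGAL_XLSX_CHARS.sub("", value)
    return value


cleaned = sanitize("before\x0bafter")  # contains the vertical tab from the report
```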
But I assume there are more characters as described in: http://www.havnemark.dk/?p=185 | closed | 2023-12-12T17:15:43Z | 2024-01-11T19:37:40Z | https://github.com/django-import-export/django-import-export/issues/1712 | [
"bug"
] | ralfzen | 1 |
stanfordnlp/stanza | nlp | 529 | Use specific exception class if language isn't supported | **Describe the bug**
Currently if a language isn't supported a generic error with the class `Exception` and message `No processor to load. Please check if your language or package is correctly set.` is raised.
If a user is catching exceptions, they will need to do something like:
```
except Exception as e:
    if str(e) == 'No processor to load. Please check if your language or package is correctly set.':
        <handle error>
```
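The proposal, sketched with a hypothetical `UnsupportedLanguageError` and a stubbed loader (stanza's real loading code is not reproduced here):

```python
class UnsupportedLanguageError(Exception):
    """Hypothetical dedicated exception type, as proposed in this issue."""

    def __init__(self, lang):
        self.lang = lang
        super().__init__(f"Language {lang!r} is not supported")


def load_pipeline(lang, supported=("en", "de")):
    # Stand-in for the processor-loading step that currently raises
    # a bare Exception with a fixed message.
    if lang not in supported:
        raise UnsupportedLanguageError(lang)
    return f"pipeline[{lang}]"


try:
    load_pipeline("xx")
except UnsupportedLanguageError as err:
    caught_lang = err.lang  # no string comparison on the message needed
```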
An exception class like `UnsupportedLanguage` should be used for easier exception catching. | closed | 2020-11-20T18:22:26Z | 2021-01-27T17:21:34Z | https://github.com/stanfordnlp/stanza/issues/529 | [
"bug",
"fixed on dev"
] | brauliobo | 1 |
zappa/Zappa | flask | 801 | [Migrated] ReadTimeoutError on Zappa Deploy | Originally from: https://github.com/Miserlou/Zappa/issues/1956 by [Faaab](https://github.com/Faaab)
After deploying my Flask app successfully several times, `zappa deploy` stopped working. When I try to deploy, I get the following error:
```
Read timeout on endpoint URL: "{not disclosing my URL for security reasons}"
Error: Unable to upload to S3. Quitting.
```
## Expected Behavior
When deploying, Zappa should create (and has created using this machine in the past) all necessary resources and upload my zipped project to S3.
## Actual Behavior
The app package is created and zipped, then the IAM policy and S3 bucket are created successfully. Then I see a progress bar (which does not update regularly, but only a handful of times over the course of several minutes). After about 5 minutes, the progress bar disappears. Then, after another ~5 minutes, I get the error message above.
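As a generic mitigation for transient upload timeouts like this, a retry-with-backoff wrapper around the upload call is a common pattern. This is a sketch only; it is not Zappa's code and does not change boto3's 60-second read timeout:

```python
import time


def retry_with_backoff(operation, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry `operation` on timeouts, doubling the wait between attempts."""
    for attempt in range(attempts):
        try:
            return operation()
        except TimeoutError:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))


# Demo with a flaky stand-in for the S3 upload:
state = {"calls": 0}


def flaky_upload():
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("read timed out")
    return "uploaded"


result = retry_with_backoff(flaky_upload, sleep=lambda s: None)
```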
I have replaced the statement that `print`s this exception with a call to `logging.exception`, which gave the following stacktrace:
```
13:19 $ zappa deploy
Calling deploy for stage dev..
Downloading and installing dependencies..
- psycopg2-binary==2.8.4: Using locally cached manylinux wheel
- markupsafe==1.1.1: Using locally cached manylinux wheel
- sqlite==python3: Using precompiled lambda package
'python3.7'
Packaging project as zip.
Uploading fabian-test-zappa-poging-12006-dev-1573215672.zip (23.4MiB)..
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 24.5M/24.5M [05:40<00:00, 20.4KB/s]ERROR:Read timeout on endpoint URL: "https://fabian-test-zappa-12006.s3.eu-west-1.amazonaws.com/fabian-test-zappa-poging-12006-dev-1573215672.zip?uploadId=u6kP9nuJRwK0wFBFOVa9kIJECBYftFW3p1a0o__Ne9SxKT4i7jmvY_0rwJ4uy7Zapo0JzAFRgrn9nwJN4RothWiO9_7G_MXhQUAistjbJ9QVmWBX0ZnZUEGB572T2jjH&partNumber=1"
Traceback (most recent call last):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 421, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 416, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.7/http/client.py", line 1321, in getresponse
response.begin()
File "/usr/lib/python3.7/http/client.py", line 296, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.7/http/client.py", line 257, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.7/ssl.py", line 1052, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.7/ssl.py", line 911, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/httpsession.py", line 263, in send
chunked=self._chunked(request.headers),
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/util/retry.py", line 376, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/packages/six.py", line 735, in reraise
raise value
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 423, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 331, in _raise_timeout
self, url, "Read timed out. (read timeout=%s)" % timeout_value
urllib3.exceptions.ReadTimeoutError: AWSHTTPSConnectionPool(host='fabian-test-zappa-12006.s3.eu-west-1.amazonaws.com', port=443): Read timed out. (read timeout=60)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/zappa/core.py", line 962, in upload_to_s3
Callback=progress.update
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/boto3/s3/inject.py", line 131, in upload_file
extra_args=ExtraArgs, callback=Callback)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/boto3/s3/transfer.py", line 279, in upload_file
future.result()
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/futures.py", line 106, in result
return self._coordinator.result()
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/futures.py", line 265, in result
raise self._exception
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/tasks.py", line 126, in __call__
return self._execute_main(kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/tasks.py", line 150, in _execute_main
return_value = self._main(**kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/upload.py", line 722, in _main
Body=body, **extra_args)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 648, in _make_api_call
operation_model, request_dict, request_context)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 667, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 137, in _send_request
success_response, exception):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 231, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 244, in _send
return self.http_session.send(request)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/httpsession.py", line 289, in send
raise ReadTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: "https://fabian-test-zappa-12006.s3.eu-west-1.amazonaws.com/fabian-test-zappa-poging-12006-dev-1573215672.zip?uploadId=W5Ishdue_X_UFTLhPVDQ.TCR600JN1GxNHEGE9RkRaY.fWWElxjvSDQ2IOiwH.A4eg7fCYBUNjBdWikVY9Mz3nPtIBSgK3MkShShiMRtpcKfC2uW_jniCCblTJsMkKaE&partNumber=2"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/zappa/core.py", line 965, in upload_to_s3
self.s3_client.upload_file(source_path, bucket_name, dest_path)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/boto3/s3/inject.py", line 131, in upload_file
extra_args=ExtraArgs, callback=Callback)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/boto3/s3/transfer.py", line 279, in upload_file
future.result()
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/futures.py", line 106, in result
return self._coordinator.result()
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/futures.py", line 265, in result
raise self._exception
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/tasks.py", line 126, in __call__
return self._execute_main(kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/tasks.py", line 150, in _execute_main
return_value = self._main(**kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/s3transfer/upload.py", line 722, in _main
Body=body, **extra_args)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 648, in _make_api_call
operation_model, request_dict, request_context)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/client.py", line 667, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 137, in _send_request
success_response, exception):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 231, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/endpoint.py", line 244, in _send
return self.http_session.send(request)
File "/c/Users/Fabian.vanDijk/Documents/zelfstudie/zappa_test2/venv/lib/python3.7/site-packages/botocore/httpsession.py", line 289, in send
raise ReadTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ReadTimeoutError: Read timeout on endpoint URL: "{not disclosing the URL for obvious reasons}"
Error: Unable to upload to S3. Quitting.
```
## Possible Fix
I know of no fix myself, but I have tested the following:

* My teammates can `zappa deploy` with the exact code/configuration that I use, from their machines.
* They can also use `zappa deploy` with an AWS profile holding an access key to my account, which means this is probably not a permissions issue.
* I have deployed to a different AZ, which gives the same error.
* I have thrown away my repo and virtualenv and made new ones. The error persists.
* I have successfully uploaded a file to this S3 bucket with boto3.
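Since the underlying failure is a botocore `ReadTimeoutError` during a multipart upload, a generic mitigation — independent of zappa, which (as far as I know) does not expose upload timeouts in its settings — is to retry the upload with exponential backoff. A minimal stdlib sketch; the `upload` callable, attempt count, and delays are illustrative:

```python
import time


def retry_with_backoff(upload, attempts=4, base_delay=1.0):
    """Retry a flaky callable (e.g. an S3 upload) with exponential backoff."""
    for attempt in range(attempts):
        try:
            return upload()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the last timeout
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

With boto3 itself, the analogous knobs are `botocore.config.Config(connect_timeout=..., read_timeout=..., retries={"max_attempts": ...})` passed to the client, though zappa does not appear to let you set them from `zappa_settings`.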
## Your Environment
* Zappa version used: `0.48.2`
* Operating System and Python version: Ubuntu 18.04, Python 3.7.5
* The output of `pip freeze`:
```
apispec==1.3.3
argcomplete==1.9.3
attrs==19.3.0
Babel==2.7.0
boto3==1.10.12
botocore==1.13.12
certifi==2019.9.11
cfn-flip==1.2.2
chardet==3.0.4
Click==7.0
colorama==0.4.1
defusedxml==0.6.0
docutils==0.15.2
durationpy==0.5
Flask==1.1.1
Flask-AppBuilder==2.2.0
Flask-Babel==0.12.2
Flask-JWT-Extended==3.24.1
Flask-Login==0.4.1
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.1
Flask-WTF==0.14.2
future==0.16.0
hjson==3.0.1
idna==2.8
importlib-metadata==0.23
itsdangerous==1.1.0
Jinja2==2.10.3
jmespath==0.9.3
jsonschema==3.1.1
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.1
marshmallow==2.19.5
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.19.0
more-itertools==7.2.0
pipdeptree==0.13.2
placebo==0.9.0
prison==0.1.2
psycopg2-binary==2.8.4
PyJWT==1.7.1
pyrsistent==0.15.5
python-dateutil==2.6.1
python-slugify==1.2.4
python3-openid==3.1.0
pytz==2019.3
PyYAML==5.1.2
requests==2.22.0
s3transfer==0.2.1
six==1.13.0
SQLAlchemy==1.3.10
SQLAlchemy-Utils==0.35.0
toml==0.10.0
tqdm==4.19.1
troposphere==2.5.2
Unidecode==1.1.1
urllib3==1.25.6
Werkzeug==0.16.0
wsgi-request-logger==0.4.6
WTForms==2.2.1
zappa==0.48.2
zipp==0.6.0
```
* Link to your project (optional):
* Your `zappa_settings.py`:
```json
{
"dev": {
"app_function": "app.app_init.app",
"aws_region": "eu-west-1",
"profile_name": "default",
"project_name": "fabian-test-zappa-poging-12006",
"runtime": "python3.7",
"s3_bucket": "fabian-test-zappa-12006"
}
}
```
| closed | 2021-02-20T12:42:39Z | 2022-07-16T06:13:24Z | https://github.com/zappa/Zappa/issues/801 | [] | jneves | 1 |
pyjanitor-devs/pyjanitor | pandas | 541 | [BUG] Error message on "new column names" is incorrect in deconcatenate_columns | # Brief Description
If I pass in the wrong number of column names in deconcatenate_columns, the error message simply reflects back the number of columns that I provided, rather than the number of columns I need to provide.
Specifically, these two lines need to be changed:
```python
-> 1033 f"you need to provide {len(new_column_names)} names"
1034 "to new_column_names"
```
to:
```python
-> 1033         f"you need to provide {deconcat.shape[1]} names "
1034 "to new_column_names"
```
(Extra space needed at the end of the first line as well.) | closed | 2019-08-20T15:31:54Z | 2019-10-10T00:21:29Z | https://github.com/pyjanitor-devs/pyjanitor/issues/541 | [
"bug",
"good first issue",
"being worked on"
] | ericmjl | 1 |
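To make the fix proposed above concrete, here is a stand-alone sketch of the corrected check — the function name and the use of `ValueError` are illustrative, not pyjanitor's actual internals:

```python
def check_new_column_names(new_column_names, resulting_columns):
    """Validate the supplied names against the deconcatenated width.

    The message reports how many names are *required*
    (resulting_columns), not how many were passed in.
    """
    if len(new_column_names) != resulting_columns:
        raise ValueError(
            f"you need to provide {resulting_columns} names "
            "to new_column_names"
        )


try:
    check_new_column_names(["a", "b"], 3)
except ValueError as err:
    print(err)  # you need to provide 3 names to new_column_names
```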
eriklindernoren/ML-From-Scratch | machine-learning | 34 | Are there any special instructions to install it on Windows? | I have been trying to install it on Windows 10 because it looks very cool as extra learning material, but
I have not been able to install it so far. I tried to install VS Build Tools as the requirements said, but without luck. Any ideas or suggestions? | open | 2018-03-01T05:51:33Z | 2020-10-15T10:53:48Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/34 | [] | jaircastruita | 4
keras-team/keras | deep-learning | 20,722 | Is it possible to use tf.data with tf operations while utilizing jax or torch as the backend? | Apart from tensorflow as the backend, what is the proper approach to using basic operations (e.g. tf.concat) inside tf.data API pipelines? The following code works with the tensorflow backend, but not with torch or jax.
```python
import os
os.environ["KERAS_BACKEND"] = "jax" # tensorflow, torch, jax
import keras
from keras import layers
import tensorflow as tf
aug_model = keras.Sequential([
keras.Input(shape=(224, 224, 3)),
layers.RandomFlip("horizontal_and_vertical")
])
def augment_data_tf(x, y):
combined = tf.concat([x, y], axis=-1)
z = aug_model(combined)
x = z[..., :3]
y = z[..., 3:]
return x, y
a = np.ones((4, 224, 224, 3)).astype(np.float32)
b = np.ones((4, 224, 224, 2)).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices((a, b))
dataset = dataset.batch(3, drop_remainder=True)
dataset = dataset.map(
augment_data_tf, num_parallel_calls=tf.data.AUTOTUNE
)
```
```bash
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[<ipython-input-7-2d25b0c0bbad>](https://localhost:8080/#) in <cell line: 3>()
1 dataset = tf.data.Dataset.from_tensor_slices((a, b))
2 dataset = dataset.batch(3, drop_remainder=True)
----> 3 dataset = dataset.map(
4 augment_data_tf, num_parallel_calls=tf.data.AUTOTUNE
5 )
25 frames
[/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py](https://localhost:8080/#) in _convert_to_array_if_dtype_fails(x)
4102 dtypes.dtype(x)
4103 except TypeError:
-> 4104 return np.asarray(x)
4105 else:
4106 return x
NotImplementedError: in user code:
File "<ipython-input-5-ca4b074b58a5>", line 6, in augment_data_tf *
z = aug_model(combined)
File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler **
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.10/dist-packages/optree/ops.py", line 752, in tree_map
return treespec.unflatten(map(func, *flat_args))
File "/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py", line 4252, in asarray
return array(a, dtype=dtype, copy=bool(copy), order=order, device=device)
File "/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py", line 4058, in array
leaves = [_convert_to_array_if_dtype_fails(leaf) for leaf in leaves]
File "/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py", line 4058, in <listcomp>
leaves = [_convert_to_array_if_dtype_fails(leaf) for leaf in leaves]
File "/usr/local/lib/python3.10/dist-packages/jax/_src/numpy/lax_numpy.py", line 4104, in _convert_to_array_if_dtype_fails
return np.asarray(x)
NotImplementedError: Cannot convert a symbolic tf.Tensor (concat:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.
``` | closed | 2025-01-03T19:44:45Z | 2025-01-04T22:44:43Z | https://github.com/keras-team/keras/issues/20722 | [] | innat | 1 |
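For the failing example above, one common workaround is to keep the joint augmentation in backend-neutral operations instead of calling a Keras model inside `map` — e.g. `tf.image.random_flip_left_right` applied to the concatenated tensor, or pre-processing before building the pipeline. A minimal NumPy sketch of the concat → flip → split idea; the 50% flip probability and the shapes are illustrative:

```python
import numpy as np


def random_flip_pair(image, mask, rng):
    """Flip image and mask together so their pixels stay aligned."""
    channels = image.shape[-1]
    combined = np.concatenate([image, mask], axis=-1)
    if rng.random() < 0.5:
        combined = combined[:, ::-1, :]  # horizontal flip along the width axis
    return combined[..., :channels], combined[..., channels:]


rng = np.random.default_rng(0)
x, y = random_flip_pair(
    np.ones((224, 224, 3), np.float32),
    np.zeros((224, 224, 2), np.float32),
    rng,
)
```

The same structure translates to pure `tf` ops inside `dataset.map`, which sidesteps the symbolic-tensor-to-NumPy conversion that the JAX backend attempts when a Keras model is called on a `tf.data` tensor.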