| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
nerfstudio-project/nerfstudio | computer-vision | 2,589 | Unable to add new nerf methods to latest image | **Describe the bug**
I have built a Docker image based on the Dockerfile in the current repo. When I try to install the default new-method template (https://github.com/tauzn-clock/plane-nerf), I am greeted with an error.
**To Reproduce**
Steps to reproduce the behavior:
1. Build a Docker image from Dockerfile
2. Go into the Docker container
3. Clone https://github.com/nerfstudio-project/nerfstudio-method-template
4. `cd nerfstudio-method-template`
5. `sudo pip install -e .`
6. Error

**Expected behavior**
What I should expect is that the new method gets installed correctly and becomes integrated with nerfstudio. This is the behaviour I observe when testing on `dromni/nerfstudio:0.3.4`
**Additional context**
Here is a more detailed report of the errors I encountered; please see https://github.com/nerfstudio-project/nerfstudio-method-template/issues/3
| open | 2023-11-06T21:03:55Z | 2023-11-06T21:03:55Z | https://github.com/nerfstudio-project/nerfstudio/issues/2589 | [] | tauzn-clock | 0 |
huggingface/datasets | deep-learning | 6,610 | cast_column to Sequence(subfeatures_dict) has err | ### Describe the bug
I am working with the following demo code:
```python
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features

ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]

def add_class(example):
    example["my_labeled_bbox"] = {"bbox": [100, 100, 200, 200], "label": "cat"}
    return example

ais_dataset = ais_dataset.map(add_class, batched=False, num_proc=32)
ais_dataset = ais_dataset.cast_column(
    "my_labeled_bbox",
    Sequence(
        {
            "bbox": Sequence(Value(dtype="int64")),
            "label": ClassLabel(names=["cat", "dog"]),
        }
    ),
)
print(ais_dataset[0])
```
However, executing this code results in an error:
```
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
int64
to
Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)
```
Upon examining the source code in datasets/table.py at line 2035:
```python
if isinstance(feature, Sequence) and isinstance(feature.feature, dict):
    feature = {
        name: Sequence(subfeature, length=feature.length)
        for name, subfeature in feature.feature.items()
    }
```
I noticed that if subfeature is of type Sequence, the code results in Sequence(Sequence(...), ...) and Sequence(ClassLabel(...), ...), which appears to be the source of the error.
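The double wrapping can be reproduced with lightweight stand-ins for the feature classes (purely illustrative — these are not the real `datasets` classes):

```python
from dataclasses import dataclass

@dataclass
class Value:
    dtype: str

@dataclass
class Sequence:
    feature: object
    length: int = -1

# Mirror of the cast logic quoted above: every subfeature gets wrapped in
# Sequence(...), even a subfeature that is already a Sequence.
feature = Sequence({"bbox": Sequence(Value("int64")), "label": Value("string")})
wrapped = {
    name: Sequence(subfeature, length=feature.length)
    for name, subfeature in feature.feature.items()
}

# "bbox" is now doubly nested -- the shape that triggers the cast error.
print(isinstance(wrapped["bbox"].feature, Sequence))  # True
```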
### Steps to reproduce the bug
run my demo code
### Expected behavior
no exception
### Environment info
python 3.9
datasets: 2.16.1 | closed | 2024-01-23T09:32:32Z | 2024-01-25T02:15:23Z | https://github.com/huggingface/datasets/issues/6610 | [] | neiblegy | 2 |
FlareSolverr/FlareSolverr | api | 465 | [btschool] (updating) The cookies provided by FlareSolverr are not valid | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-08-13T02:32:56Z | 2022-08-13T22:43:39Z | https://github.com/FlareSolverr/FlareSolverr/issues/465 | [
"invalid"
] | vsin123 | 1 |
Farama-Foundation/PettingZoo | api | 911 | [Bug Report] GymV26Environment-v0 already in registry | ### Describe the bug
Not sure if this is an error with pettingzoo or just the env creation tutorial, but I think I have seen it previously in other testing. Putting an issue up in case it is a bug in the code.
```
/Users/elliottower/anaconda3/envs/PettingZoo/lib/python3.8/site-packages/gymnasium/envs/registration.py:498: UserWarning: WARN: Overriding environment GymV26Environment-v0 already in registry.
logger.warn(f"Overriding environment {new_spec.id} already in registry.")
```
### Code example
```python
from pettingzoo.test import parallel_api_test  # noqa: E402
from pettingzoo.butterfly import pistonball_v6

if __name__ == "__main__":
    parallel_api_test(pistonball_v6.parallel_env(), num_cycles=1_000_000)
```
### System info
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
| closed | 2023-03-21T23:16:48Z | 2023-03-21T23:26:51Z | https://github.com/Farama-Foundation/PettingZoo/issues/911 | [
"bug"
] | elliottower | 2 |
mwaskom/seaborn | pandas | 3,192 | TypeError: ufunc 'isfinite' not supported with numpy 1.24.0 | This is the code that I ran
```python
import matplotlib.pyplot as plt
import seaborn as sns
fmri = sns.load_dataset("fmri")
fmri.info()
sns.set(style="darkgrid")
sns.lineplot(data=fmri, x="timepoint", y="signal", hue="region", style="event")
plt.show()
```
This is the error I got
```python
❯ python3 example.py
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1064 entries, 0 to 1063
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 subject 1064 non-null object
1 timepoint 1064 non-null int64
2 event 1064 non-null object
3 region 1064 non-null object
4 signal 1064 non-null float64
dtypes: float64(1), int64(1), object(3)
memory usage: 41.7+ KB
Traceback (most recent call last):
File "/home/rizwan/Downloads/Seaborn-Issue/example.py", line 8, in <module>
sns.lineplot(
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/seaborn/relational.py", line 645, in lineplot
p.plot(ax, kwargs)
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/seaborn/relational.py", line 489, in plot
func(
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/matplotlib/__init__.py", line 1423, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/matplotlib/axes/_axes.py", line 5367, in fill_between
return self._fill_between_x_or_y(
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/matplotlib/axes/_axes.py", line 5272, in _fill_between_x_or_y
ind, dep1, dep2 = map(
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/numpy/ma/core.py", line 2360, in masked_invalid
return masked_where(~(np.isfinite(getdata(a))), a, copy=copy)
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
Environment
- Python `3.9.15`
- Seaborn `0.12.1`
- Matplotlib `3.6.2`
- Pandas `1.5.2`
- Numpy `1.24.0`
This error only occurs when I use numpy `1.24.0`, version `1.23.5` or lower works as usual. | closed | 2022-12-19T13:57:57Z | 2024-02-28T20:42:27Z | https://github.com/mwaskom/seaborn/issues/3192 | [
"upstream"
] | Rizwan-Hasan | 3 |
ARM-DOE/pyart | data-visualization | 1,387 | ENH: Move to ruff for linting | We should move to [ruff](https://github.com/charliermarsh/ruff) for Python linting - we did this with xradar and the speed improvement is quite dramatic. | closed | 2023-02-14T14:52:43Z | 2023-02-16T21:21:27Z | https://github.com/ARM-DOE/pyart/issues/1387 | [
"Enhancement"
] | mgrover1 | 0 |
Johnserf-Seed/TikTokDownload | api | 369 | [BUG] Starting Server.py, XB js execution fails with execjs._exceptions.ProgramError: Error: Cannot find module 'md5' | **Describe the bug**
Starting Server.py, the XB js execution fails with execjs._exceptions.ProgramError: Error: Cannot find module 'md5'
**To Reproduce**
Steps to reproduce the behavior:
1. Start Server.py
2. Start TikTokDownload.py
3. Enter a Douyin video URL
**Screenshots**
If applicable, add screenshots to help explain the problem.
**Desktop (please complete the following information):**
- OS: Mac
- VPN/proxy: none
- Version: 13070
**Additional context**
I wrote a small Python sample that executes JS code through execjs; require("md5") failed there as well, while replacing the require with an inline implementation of the md5 function works.
How are dependencies required inside a JS file supposed to be handled when executing it with execjs?
| closed | 2023-03-26T09:54:56Z | 2023-04-03T07:14:20Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/369 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | sharkmao622 | 3 |
slackapi/bolt-python | fastapi | 464 | Not_authed after installing bot to workspace | #### The `slack_bolt` version
slack-bolt==1.6.0
slack-sdk==3.5.1
#### Python runtime version
Python 3.9.5
#### OS info
Kubuntu 21.04
#### Steps to reproduce:
I installed my bot to the workspace using `https://{my_domain}/slack/install`. After redirecting the new folder in the bot directory was created (fig. #1). But when I'm trying to interact with the bot, I receive errors. I have no idea how to fix it.
```
slack_sdk.oauth.installation_store.file:Failed to find an installation data for enterprise: none, team: T02EE4YKJSD: [Errno 2] No such file or directory: './data/none-T02EE4YKJSD/installer-U02DYHJN5B7-latest'
The server responded with: {'ok': False, 'error': 'not_authed'})
```
**Fig #1**

**ENV Vairables**
```
SLACK_SIGNING_SECRET=XXXX
SLACK_CLIENT_SECRET=YYYY
```
**Code**
```python
SLACK_CLIENT_ID = _here's_client_id_
oauth_settings = OAuthSettings(
    client_id=SLACK_CLIENT_ID,
    client_secret=os.environ['SLACK_CLIENT_SECRET'],
    scopes=['channels:history', 'channels:read', 'chat:write', 'commands,groups:read',
            'im:read', 'im:write', 'users.profile:read', 'users:read', 'users:read.email'],
    user_scopes=['users:read'],
    state_store=FileOAuthStateStore(expiration_seconds=600, base_dir="./data")
)

app = App(signing_secret=os.environ['SLACK_SIGNING_SECRET'],
          installation_store=FileInstallationStore(base_dir="./data"),
          oauth_settings=oauth_settings)
handler = SlackRequestHandler(app)

# Flask adapter
flask_adapter = Flask(__name__)

@app.event('member_joined_channel')
def member_joined_channel(body):
    channel_id = body.get('event')['channel']
    channel_members = app.client.conversations_members(channel=channel_id).data.get('members')
    print(channel_members)

@flask_adapter.route("/slack/install", methods=["GET"])
def install():
    return handler.handle(request)

@flask_adapter.route("/slack/oauth_redirect", methods=["GET"])
def oauth_redirect():
    return handler.handle(request)
```
### Expected result:
Receive a correct response from the server
### Actual result:
Server responded with: `{'ok': False, 'error': 'not_authed'})`
| closed | 2021-09-12T11:34:20Z | 2021-09-23T22:06:48Z | https://github.com/slackapi/bolt-python/issues/464 | [
"question",
"area:async",
"area:sync"
] | moffire | 2 |
gradio-app/gradio | data-visualization | 10,767 | Jittery animiation when streaming to chat_interface.chatbot_value | ### Describe the bug

### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
async def chat_fn_hidden_input(history: list[dict] = [], input_graph_state: dict = {}, uuid: Optional[UUID] = None, prompt: str = "", goal = "", user_input: str = ""):
    async for item in chat_fn(user_input, history, input_graph_state, uuid, prompt, goal):
        message, state, state2 = item
        yield (history + [{"role": "assistant", "content": message}]), state, state2
```
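Reduced to a stand-alone sketch (no Gradio required — `stream_tokens` stands in for `chat_fn`, and the names are mine), the wrapper's yield pattern is just re-emitting the full history plus the growing assistant message on every chunk:

```python
import asyncio

async def stream_tokens():
    # stand-in for the streaming backend: each yield is the message so far
    for partial in ["Hel", "Hello", "Hello!"]:
        yield partial

async def chat_fn_sketch(history):
    # mimic the wrapper: append the growing assistant message to history
    async for message in stream_tokens():
        yield history + [{"role": "assistant", "content": message}]

async def main():
    return [frame async for frame in chat_fn_sketch([{"role": "user", "content": "hi"}])]

frames = asyncio.run(main())
print(frames[-1][-1]["content"])  # Hello!
```

Each yielded frame is a complete chat history, so the frontend re-renders the whole conversation per chunk — which is one plausible source of the jitter reported above.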
```python
start_button.click(show_chatbot, inputs=[], outputs=[chatbot], js=SHOW_CHATINTERFACE).then(
    fn=chat_fn_hidden_input,
    inputs=[
        chatbot,
        current_langgraph_state,
        current_uuid_state,
        prompt_textbox,
        goal_textbox,
    ],
    outputs=[
        chat_interface.chatbot_value,
        current_langgraph_state,
        end_of_assistant_response_state,
    ],
).then(show_button, outputs=[build_prompt_button]).then(
    hide_button, outputs=[start_button]
)
```
A more complete example is available upon request
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.16.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts: 0.2.1
fastapi: 0.115.8
ffmpy: 0.5.0
gradio-client==1.7.0 is not installed.
httpx: 0.28.1
huggingface-hub: 0.28.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.3
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.28.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
I can work around it | closed | 2025-03-09T03:09:17Z | 2025-03-12T11:54:24Z | https://github.com/gradio-app/gradio/issues/10767 | [
"bug",
"needs repro"
] | brycepg | 3 |
pallets-eco/flask-sqlalchemy | flask | 649 | expected update syntax can be too aggressive | Not a bug report, more of a question.
Because the docs don't provide examples of UPDATE queries, I experimented with what seemed to make the most sense from the SQLA docs. I found that performing an .update() on (what I thought was) a single record turned out to update every row in the database. See the code sample.
Is there a preferred way of performing an update on a single record where you change multiple attributes at once? Or better yet, use a dict to specify all the attributes to change, and the new values?
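One common answer (a sketch, not an official Flask-SQLAlchemy recipe — `apply_updates` is a name I made up) is to load the single row and assign attributes from a dict, committing once. This avoids `Model.query.update()` entirely, so there is no chance of matching every row:

```python
def apply_updates(obj, updates):
    """Assign several attributes on one already-loaded instance."""
    for key, value in updates.items():
        setattr(obj, key, value)
    return obj

# With the models from the demo below, usage would be:
#   user = User.query.get(4)
#   apply_updates(user, {'username': 'rstarr2', 'email': 'ringo@beatles.com'})
#   db.session.commit()
```

Alternatively, `User.query.filter_by(id=4).update({...})` issues a filtered bulk UPDATE; the danger in the demo comes from calling `.update()` on the unfiltered `query`.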
```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///e:\\foo.db'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80))
    email = db.Column(db.String(120))

    def __repr__(self):
        return '<User %r>' % self.username

db.create_all()

john = User(username='jlennon', email='john@example.com')
paul = User(username='pmac', email='paul@example.com')
george = User(username='ghar', email='george@example.com')
ringo = User(username='rstarr', email='ringo@example.com')
db.session.add_all([john, paul, george, ringo])
db.session.commit()

user_four = User.query.get(4)
# UNSAFE!! Will change EVERY row!!
user_four.query.update({'email': 'ringo@beatles.com'})
db.session.commit()

# SAFE. But how to change multiple attributes? or load from a dict?
user_three = User.query.get(3)
user_three.username = 'georgeh'
db.session.commit()
``` | closed | 2018-10-25T23:47:40Z | 2021-04-06T00:14:01Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/649 | [] | 2x2xplz | 12 |
keras-rl/keras-rl | tensorflow | 309 | Use of mutable default arguments throughout project | The use of mutable values for default arguments is [well documented](https://docs.python-guide.org/writing/gotchas/#mutable-default-arguments) to cause problems.
Anything following this format
```python
# Function with mutable argument as default
def func(self, var=[]):
```
Should be adapted to the following.
```python
# Should become
def func(self, var=None):
    if var is None:
        var = []
```
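The difference is easy to demonstrate in isolation (illustrative helper names, not from keras-rl):

```python
def append_bad(item, bucket=[]):      # default list is created once, then shared
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):   # fresh list on every call
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

append_bad(1)
print(append_bad(2))   # [1, 2] -- state leaked from the first call
print(append_good(2))  # [2]
```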
Other examples of default mutable arguments throughout the project are empty dicts. | closed | 2019-04-15T22:26:19Z | 2019-07-21T23:10:26Z | https://github.com/keras-rl/keras-rl/issues/309 | [
"wontfix"
] | Qinusty | 1 |
robinhood/faust | asyncio | 736 | Faust Application processes messages but does not commit offset after connection to kafka broker is reset | I see this issue when the connection to the kafka broker is reset.
In such a case, the Faust application continues to process messages, but it is unable to commit offsets, failing with the following error:
`Could not send <class 'aiokafka.protocol.transaction.AddOffsetsToTxnRequest_v0'>: StaleMetadata('Broker id 8 not in current metadata')`
This continues until the Faust application restarts.
Once the application restarts, it re-processes messages and starts committing offsets
As a result, a number of duplicate messages are processed. | open | 2021-08-24T18:07:28Z | 2021-09-04T15:24:44Z | https://github.com/robinhood/faust/issues/736 | [] | vinaygopalkrishnan | 1 |
huggingface/transformers | pytorch | 36,501 | TypeError: object of type 'IterableDataset' has no len() | ### System Info
When I tried training an LLM using run_clm.py from the examples, I hit the following error:
```
File "/workspace/train.py", line 611, in main
data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
TypeError: object of type 'IterableDataset' has no len()
```
`len(train_dataset)` is not compatible with `IterableDataset`.
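One way the script could guard that call (a sketch of a possible fix, not the actual patch — the helper name is mine) is to fall back when `__len__` is unavailable:

```python
def resolve_max_train_samples(max_train_samples, train_dataset):
    """Prefer the explicit cap; only call len() when the dataset supports it."""
    if max_train_samples is not None:
        return max_train_samples
    try:
        return len(train_dataset)
    except TypeError:
        return None  # streaming IterableDataset: length is unknown

print(resolve_max_train_samples(None, [1, 2, 3]))        # 3
print(resolve_max_train_samples(None, iter(range(10))))  # None
```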
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Using the command in the [README](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/README.md).
### Expected behavior
fix this error | open | 2025-03-03T01:44:46Z | 2025-03-06T11:21:57Z | https://github.com/huggingface/transformers/issues/36501 | [
"bug"
] | mxjmtxrm | 6 |
datadvance/DjangoChannelsGraphqlWs | graphql | 2 | Could you please provide a working example with Django and Channels 2 | I tried to use the one provided in the README, but I could not get it to work. I am wondering if you can provide a working example in this project with Django and Channels 2.
Thanks a lot. | closed | 2018-06-28T22:09:38Z | 2019-04-20T01:31:46Z | https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/2 | [
"enhancement"
] | junchiz | 5 |
cvat-ai/cvat | tensorflow | 9,057 | (Self Hosted) Annotation Download Fails Over HTTPS (but over http it works fine) | **Story**
When attempting to download a dataset in my CVAT deployment, a new tab briefly opens and closes without downloading anything. The issue seems to be related to mixed content, where CVAT is running over HTTPS, but dataset download URLs are served over HTTP. Modern browsers block such downloads for security reasons.
```
"File not downloaded: Potential security risk.
The download is offered over HTTP even though the current document was
delivered over a secure HTTPS connection. If you proceed, the download may be
corrupted or tampered with during the download process.
You can search for an alternate download source or try again later."
In Chrome I don't even have an option to download it; it fails silently.
```
**Expected Behavior**
The dataset should download successfully over HTTPS without any browser blocking issues.
**What I've Tried**
Modified `nginx.conf` to add:
`proxy_set_header X-Forwarded-Proto $scheme;`
but the issue persists. | closed | 2025-02-05T12:44:04Z | 2025-02-06T07:23:43Z | https://github.com/cvat-ai/cvat/issues/9057 | [
"need info"
] | osman-goni-cse | 5 |
mljar/mercury | jupyter | 114 | add preheated kernel for execution to speedup the computation | It should be possible to add preheated kernels for executing notebook. Please check the following discussions:
- https://github.com/jupyter/nbconvert/pull/852/
- https://github.com/jupyter/nbconvert/issues/1802
- https://github.com/jupyter/jupyter_client/issues/811
- https://github.com/jupyter/nbclient/issues/238 | closed | 2022-06-28T10:22:54Z | 2023-02-13T14:42:54Z | https://github.com/mljar/mercury/issues/114 | [
"enhancement"
] | pplonski | 2 |
seleniumbase/SeleniumBase | pytest | 2,350 | UC Mode Ineffective on Streamlit Cloud | I have deployed a web automation script on Streamlit Cloud, utilizing `uc` mode to bypass web-scraping detection. During local testing, the script consistently circumvents the detection system. However, when executing on Streamlit Cloud, `uc` mode appears to be ineffective, and the script is detected. Could someone please provide guidance on how to address this issue?
I have reviewed #2063. Is there a method available to make the `uc` mode effective without using a proxy?
| closed | 2023-12-08T11:01:59Z | 2023-12-08T13:39:22Z | https://github.com/seleniumbase/SeleniumBase/issues/2350 | [
"external",
"UC Mode / CDP Mode"
] | LiPingYen | 1 |
rafsaf/minimal-fastapi-postgres-template | sqlalchemy | 2 | Question: Add the fastapi-users library for authentication? | I was wondering whether it would be a good idea to use this library:
https://github.com/fastapi-users/fastapi-users
Since it supports a lot of features out of the box, including oauth ...
What do you think ? | closed | 2021-11-24T16:45:41Z | 2022-01-30T14:20:38Z | https://github.com/rafsaf/minimal-fastapi-postgres-template/issues/2 | [] | sorasful | 3 |
drivendataorg/cookiecutter-data-science | data-science | 107 | importing scripts from other folders | It would be great if the structure came along with default importing commands or files (__init__ files). | closed | 2018-04-13T19:24:48Z | 2018-04-13T19:33:39Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/107 | [] | gulfemd | 1 |
schemathesis/schemathesis | graphql | 2,371 | [FEATURE] Filling missing examples for basic data types | ## Description of Problem
`--contrib-openapi-fill-missing-examples` not working as expected along with `--hypothesis-phases=explicit`
Let us say I have a field `name`, and I use `--hypothesis-phases=explicit` together with `--contrib-openapi-fill-missing-examples`.
Schemathesis generates `name: ''`, which is not valid JSON.
The type of `name` is `string`.
Also, I have noticed that it will generate the empty string for the `name` field only if the field is in the `required` list; otherwise it skips the field altogether.
The issue occurs with both Swagger 2.0 and OpenAPI 3.0.
## Steps to Reproduce
1. take attached schema
[tree-openapi30.txt](https://github.com/user-attachments/files/16366400/tree-openapi30.txt)
2. change extension from txt to yaml (github not allowing yaml upload)
3. run `st run -v --hypothesis-phases=explicit --contrib-openapi-fill-missing-examples --validate-schema=true --hypothesis-verbosity=debug tree-openapi30.yaml --dry-run`
After testing, I notice the missing examples are filled only when a pattern is specified.
For the year, I see `0000` if the pattern given is `^\d{4}`.
For basic schema types like https://swagger.io/docs/specification/data-models/data-types/,
I expect examples to be filled, for example:
`string`: abcd ( or consider random from [a-zA-Z0-9]+)
`integer`: 1 ( or consider random from range 1-100)
`number`: 1.0 ( or consider random with single decimal)
`boolean`: random between true or false
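A fallback generator along those lines could look like this (purely illustrative — this is not Schemathesis code, and the function name is mine):

```python
import random
import string

def fill_missing_example(schema_type):
    """Return a placeholder example for a basic OpenAPI data type."""
    if schema_type == "string":
        return "".join(random.choices(string.ascii_letters + string.digits, k=8))
    if schema_type == "integer":
        return random.randint(1, 100)
    if schema_type == "number":
        return round(random.uniform(1.0, 100.0), 1)
    if schema_type == "boolean":
        return random.choice([True, False])
    return None

print(fill_missing_example("string"))
```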
## Workaround
A workaround would be to specify patterns everywhere.
If this behavior was intended, it is missing from the documentation!
## Version
st, version 3.33.1
## Expectation
I expect generated values to be filled in for the basic data types whenever explicit examples are missing, irrespective of the `required` list.
For example, for a field `name` of `type: string`, `name: 'qwrgte3q5w'` is generated irrespective of whether `name` is `required`. | open | 2024-07-24T19:07:25Z | 2024-07-28T09:08:55Z | https://github.com/schemathesis/schemathesis/issues/2371 | [
"Type: Feature"
] | ravy | 1 |
thomaxxl/safrs | rest-api | 66 | Type coercion does not always work for JSON columns | The JSON type in `sqlalchemy` seems to be hard-coded to return `dict` as its `python_type`:
```python
@property
def python_type(self):
    return dict
```
There are many ways valid JSON can be something other than a `dict`. In safrs 2.8.0, these columns fall through to the base case of being coerced to dictionaries [see here](https://github.com/thomaxxl/safrs/blob/master/safrs/base.py#L373):
```python
else:
    attr_val = attr.type.python_type(attr_val)
return attr_val
```
This doesn't work if the column is e.g., `None` or a `list`.
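Both failures can be shown without safrs at all, and the proposed special case is only a few lines (a sketch — `coerce_json_column` is a hypothetical name, not safrs API):

```python
import json

# The generic fall-through reduces to dict(value), which rejects these inputs:
for value in (None, [1, 2, 3]):
    try:
        dict(value)  # what attr.type.python_type(attr_val) effectively does here
    except TypeError:
        print(type(value).__name__, "cannot be coerced to dict")

def coerce_json_column(value):
    """Possible special case: parse strings, pass everything else through."""
    if isinstance(value, str):
        return json.loads(value)
    return value  # None, list, dict, numbers all pass through unchanged

print(coerce_json_column('{"a": 1}'))  # {'a': 1}
print(coerce_json_column([1, 2, 3]))   # [1, 2, 3]
```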
**Possible solutions**
* Skip type coercion on JSON columns
* Special-case JSON columns to use `json.loads` if a string and pass through if `None`, a `list`, or `dict` | closed | 2020-05-30T19:45:31Z | 2020-05-31T07:13:13Z | https://github.com/thomaxxl/safrs/issues/66 | [] | polyatail | 2 |
apache/airflow | python | 47,887 | Missing Dropdown in Grid View to Increase DAG Run Count | ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
The Grid View does not seem to have a dropdown to increase the DAG run count when there are many runs. This makes it difficult to navigate and view a large number of DAG runs efficiently.

### What you think should happen instead?
_No response_
### How to reproduce
1. Execute multiple runs for a DAG
2. Observe that there is no dropdown to increase the DAG run count in the Grid view, as there was in Airflow 2
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-18T05:10:00Z | 2025-03-18T10:11:27Z | https://github.com/apache/airflow/issues/47887 | [
"kind:bug",
"kind:feature",
"priority:high",
"area:core",
"area:UI",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 3 |
httpie/cli | python | 1,530 | werkzeug compatibility | There no longer appears to be an issue with werkzeug 2.1.0 as no test failures are produced using 2.2.3.
Allowing up to the latest release (2.3.7) of werkzeug would be preferable but that produces test failures.
I do not know in which project the test failures should be addressed.
[Test results](https://gist.github.com/loqs/aea684d99623d2298ba630ec5333f1d9) for httpie/cli 2.2.3 with
werkzeug 2.3.7
https://github.com/psf/httpbin 0.10.1
| open | 2023-09-10T19:05:41Z | 2024-05-18T05:31:40Z | https://github.com/httpie/cli/issues/1530 | [
"testing"
] | loqs | 0 |
geex-arts/django-jet | django | 424 | ValueError: Empty module name, using cookie cutter |
```
$ python manage.py migrate jet
Traceback (most recent call last):
  File "manage.py", line 30, in <module>
    execute_from_command_line(sys.argv)
  File "/home/kapil/Documents/PycharmProjects/COOKIECUTTER/project_1_test/venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/home/kapil/Documents/PycharmProjects/COOKIECUTTER/project_1_test/venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 357, in execute
    django.setup()
  File "/home/kapil/Documents/PycharmProjects/COOKIECUTTER/project_1_test/venv/lib/python3.6/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/home/kapil/Documents/PycharmProjects/COOKIECUTTER/project_1_test/venv/lib/python3.6/site-packages/django/apps/registry.py", line 91, in populate
    app_config = AppConfig.create(entry)
  File "/home/kapil/Documents/PycharmProjects/COOKIECUTTER/project_1_test/venv/lib/python3.6/site-packages/django/apps/config.py", line 90, in create
    module = import_module(entry)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 991, in _gcd_import
  File "<frozen importlib._bootstrap>", line 930, in _sanity_check
ValueError: Empty module name
```
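The bottom frame is `import_module(entry)` with an empty `entry`, which is what happens when `INSTALLED_APPS` contains an empty string — for example a cookiecutter template variable that rendered blank (a guess; the settings file is not shown):

```python
import importlib

# The bottom frame of the traceback, reproduced directly:
try:
    importlib.import_module("")
except ValueError as exc:
    print(exc)  # Empty module name

# So a (hypothetical) settings fragment like this would crash the same way:
INSTALLED_APPS = [
    "django.contrib.admin",
    "jet",
    "",  # <- stray empty entry, e.g. from a template variable that rendered blank
]
```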
| open | 2020-01-18T14:10:52Z | 2020-01-18T14:10:52Z | https://github.com/geex-arts/django-jet/issues/424 | [] | kp96-info | 0 |
nteract/papermill | jupyter | 431 | Cleanup phase before exiting when papermill is interrupted? | **Is there any common way to handle cases where one would need papermill to execute some cleanup cell(s) before papermill exits?**
This should happen regardless of why it's being terminated – due to an error, or because the papermill process was killed.
So it would be similar to the `finally` block of a `try-finally` clause in Python or Java.
----
My sparkmagic use case:
I have notebooks that use sparkmagic to launch livy sessions. Each livy session ends up as a running YARN application, reserving some resources from a cluster. The last cell of my notebook tells sparkmagic to terminate the session(s).
If I run such notebook successfully with papermill it works as expected: the livy session is terminated before papermill exits. But if I interrupt papermill while it's running, the livy session / YARN application is left running (until it finishes and/or livy session times out). | closed | 2019-09-13T23:41:34Z | 2019-09-17T20:51:20Z | https://github.com/nteract/papermill/issues/431 | [] | juhoautio | 2 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 247 | [BUG] Scraper proxy is configured in the Docker environment but does not work | Deployed in Docker, serving externally in host network mode. After configuring the proxy, Douyin videos cannot be parsed, and I found no error logs either; it feels like the proxy never takes effect.
Also, does this proxy setting support the socks5 proxy type, and does it support user authentication?
```
[Scraper] # scraper.py
# 是否使用代理(如果部署在IP受限国家需要开启默认为False关闭,请自行收集代理,下面代理仅作为示例不保证可用性)
# Whether to use proxy (if deployed in a country with IP restrictions, it needs to be turned on by default, False is closed. Please collect proxies yourself. The following proxies are only for reference and do not guarantee availability)
Proxy_switch = True
# 是否根据不同协议(http/https)使用不同代理,设置为True时修改Http_proxy/Https_proxy这两个变量的值
# Whether to use different proxies for different protocols (http/https). When set to True, modify the values of the two variables Http_proxy/Https_proxy
Use_different_protocols = True
# http/https协议都使用以下代理(Use_different_protocols为False时生效)
# Both http/https protocols use the following proxy (effective when Use_different_protocols is False)
All = 200.101.51.87:9992
# http协议使用以下代理(Use_different_protocols为True时生效)
# The http protocol uses the following proxy (effective when Use_different_protocols is True)
Http_proxy = http://200.101.51.87:9992
# https协议使用以下代理(Use_different_protocols为True时生效)
# The https protocol uses the following proxy (effective when Use_different_protocols is True)
Https_proxy = http://200.101.51.87:9992
``` | closed | 2023-08-20T08:45:27Z | 2023-08-24T08:01:59Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/247 | [
"BUG"
] | lifei6671 | 1 |
explosion/spaCy | nlp | 13,482 | Macbook Pro M3 Max Install Issue: Cythonizing spacy/kb.pyx | I am posting this issue here after encountering it while trying to install a different program using it. The link to that other issue is below and the recommendation was that I file a report here. I will also reproduce the response from the other dev below my paste of my issue:
https://github.com/erew123/alltalk_tts/issues/205
This is a 4 day old MBP with a fresh install of Python, with all other relevant updates applied.
Describe the bug
text-generation-webui installed without issue. Followed steps on Alltalk page up to this one:
```
pip install -r system/requirements/requirements_textgen.txt
```
Consistently errors out (see below)
To Reproduce
Run "pip install -r system/requirements/requirements_textgen.txt"
Text/logs
```
[ 2/41] Cythonizing spacy/kb.pyx
Traceback (most recent call last):
  File "/Users/brianragle/Documents/TextGen/text-generation-webui/installer_files/env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
    main()
  File "/Users/brianragle/Documents/TextGen/text-generation-webui/installer_files/env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
    json_out['return_val'] = hook(**hook_input['kwargs'])
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/brianragle/Documents/TextGen/text-generation-webui/installer_files/env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
    return hook(config_settings)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/vs/r77xwd5136q3gy3m_nncg_1h0000gn/T/pip-build-env-nop3b6d2/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel
    return self._get_build_requires(config_settings, requirements=['wheel'])
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/private/var/folders/vs/r77xwd5136q3gy3m_nncg_1h0000gn/T/pip-build-env-nop3b6d2/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires
    self.run_setup()
  File "/private/var/folders/vs/r77xwd5136q3gy3m_nncg_1h0000gn/T/pip-build-env-nop3b6d2/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 311, in run_setup
    exec(code, locals())
  File "<string>", line 224, in <module>
  File "<string>", line 211, in setup_package
  File "/private/var/folders/vs/r77xwd5136q3gy3m_nncg_1h0000gn/T/pip-build-env-nop3b6d2/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
    cythonize_one(*args)
  File "/private/var/folders/vs/r77xwd5136q3gy3m_nncg_1h0000gn/T/pip-build-env-nop3b6d2/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
    raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: spacy/kb.pyx
[end of output]
```
```
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
Desktop (please complete the following information):
AllTalk was updated: NA
Custom Python environment: No
Text-generation-webUI was updated: 2024May06
## AllTalk dev response:
Investigation/Research
This is a difficult one for me to help/advise upon as I don't have a Mac, let alone an ARM-based one. However, I have done what research I can on this and found that:
Cython doesn't make pre-built ARM wheels for Mac, so I'm not sure what version of Cython will have been installed on your Mac. https://github.com/cython/cython/issues/5904 (re pre-built wheels). As I understand that thread, Cython should compile from source (maybe) when it's requested, as someone suggests ARM Macs are fast enough to do this (but I don't know the truth of that and have no way to test).
Because there is no wheel file for ARM-based Macs, you can't put it in the requirements file for pip to compare against the correct version to install. Pip is designed to look through the requirements file and figure out what versions of everything you request will happily work together; it's an installer with a compatibility checker.
Cython and Spacy are prerequisites of the Coqui TTS engine:
[image.png (view on web)](https://github.com/erew123/alltalk_tts/assets/35898566/f3833e53-e1fc-41d8-ab5e-636ddf9b59f4)
As far as I am aware, people have installed the Coqui TTS engine on both x86-based Macs and, I believe, ARM-based Macs, e.g. https://github.com/coqui-ai/TTS/discussions/2177
The error you have seems to occur specifically when the Coqui TTS engine is being installed: it calls on Cython to compile Spacy (as per the prerequisites of Coqui TTS).
I've been through the Pull Request, Code updates, Releases and Issues for Spacy and see no listing of problems with Mac installation or updates for Mac installation https://github.com/explosion/spaCy
As such, I can only conclude that this is either an older version of Cython on your Mac (though I have no idea how it got installed into your Python environment, given that it needs compiling) OR something in a recent release of Cython that causes an incompatibility with compiling Spacy. Either way, the issue seems to sit between these two packages. (Neither of which I have control over, of course, and I cannot give you a wheel file to suggest installing for Cython, because no wheel file exists.)
Possible things to try
My best thought at this point in time would be to open your terminal on your mac and go into the text-gen-webui folder and start the Python environment with cmd_macos.sh
From there you can type `pip show cython` to confirm what version of Cython is installed. I can't say exactly what versions do or don't work. What I can say is that I started AllTalk when Cython 3.0.6 was the current build, and Cython 3.0.10 is the current build as of March 2024. So I would assume that versions in that region would work, but that is an assumption.
If possible, I would try to get your text-gen-webui Python environment on Cython 3.0.10.
If it's on Cython 3.0.10 (or at least 3.0.6), I would try installing Spacy manually; again that means: go to the text-gen-webui folder, start the Python environment with cmd_macos.sh, then run `pip install spacy` and see if that installs/compiles.
If Spacy installs/compiles, then re-try the AllTalk setup by installing the requirements. If it doesn't, then it's probably something for Spacy to look into, though they would need to know what version of Cython you have installed on your machine. They would have to figure out why Spacy isn't compiling on the version of Cython you have installed and guide you as to how to resolve it.
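As a quick way to run the version checks above from Python itself, here is a small standard-library sketch (nothing AllTalk- or spaCy-specific about it):

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

for pkg in ("cython", "spacy"):
    print(pkg, "->", installed_version(pkg) or "not installed")
```

If `cython` reports something older than roughly 3.0.6, upgrading it before retrying the spaCy build would be the first thing to try.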
If I had an ARM-based Mac, I would happily go to them, but I would be unable to test anything or give them further information based on anything they ask/suggest.
Updated requirements for Text-gen-webui
The only other thing I have done in all of this is a fresh install of text-generation-webui on my machine (in Windows, I admit). However, I did notice that when installing the TTS engine, pip was hunting around trying to find a version of Spacy to install. I think it went through 10+ attempts at finding a version, and I gave up at that point as it was taking maybe 5 minutes per attempt.
What I've done is to add Spacy to the requirements file and nailed it to versions greater than 3.7.2 (which is over 6 months old) which will at least stop it hunting around for multiple versions. This commit is here https://github.com/erew123/alltalk_tts/commit/6808439ca103cb076851cbfb45e28801587a1bd8
I would suggest you git pull AllTalk down before you attempt to reinstall its requirements.
I'll close the ticket for now, but you are welcome to come back to me if you wish.
Thanks | open | 2024-05-08T18:03:56Z | 2024-11-01T15:10:41Z | https://github.com/explosion/spaCy/issues/13482 | [
"install"
] | brianragle | 3 |
mljar/mercury | data-visualization | 67 | allow embedding | Please add a parameter in the URL for embedding. The embedded version should hide the navbar and show a small logo at the bottom: "Powered by Mercury". | closed | 2022-03-15T07:54:14Z | 2022-03-15T09:03:31Z | https://github.com/mljar/mercury/issues/67 | [
"enhancement"
] | pplonski | 1 |
albumentations-team/albumentations | machine-learning | 1,946 | albucore 0.0.17 breaks albumentations | The following line imports `preserve_channel_dim`, which has been deleted from `albucore/utils.py` in version 0.0.17:
https://github.com/albumentations-team/albumentations/blob/b358a8826de3bbd853d55a672f0b4d20c59bf5ff/albumentations/augmentations/blur/functional.py#L9C68-L9C88
https://github.com/albumentations-team/albucore/compare/0.0.16...0.0.17 | closed | 2024-09-21T08:14:38Z | 2024-09-22T20:44:25Z | https://github.com/albumentations-team/albumentations/issues/1946 | [] | gui-miotto | 2 |
Netflix/metaflow | data-science | 1,815 | R tests on Github fail with macos-latest runner | #1814 swapped the R test runner to Ubuntu, because the tests have been failing on macos-latest runners since the macOS 14 update.
Suggest looking into eventually fixing the R tests so they can run on macos-latest, as this should be a more common platform among users requiring R support. | open | 2024-04-26T23:37:52Z | 2024-12-04T16:12:13Z | https://github.com/Netflix/metaflow/issues/1815 | [
"bug"
] | saikonen | 1 |
wagtail/wagtail | django | 12,618 | Page choosers no longer working on `main` | ### Issue Summary
The page chooser modal will not respond to any clicks when choosing a page, and an error is thrown in the console:
`Uncaught TypeError: Cannot read properties of undefined (reading 'toLowerCase')`

### Steps to Reproduce
1. Spin up bakerydemo
2. Edit the "Welcome to the Wagtail bakery!" page
3. On the "Hero CTA link", click on the ... button > Choose another page
4. Try to choose any other page
5. Observe that the modal stays open and the error is thrown in the console
Expected: the modal should close and the newly chosen page should replace the previously chosen one.
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes on bakerydemo
### Working on this
This is a confirmed regression in #12380, specifically the changes in `ajaxifyForm` where the anonymous function is replaced with an arrow function, which results in a different object being bound to `this`.
There are other similar changes in the PR, so there might be other choosers/modals broken as a result. | closed | 2024-11-22T11:13:19Z | 2024-11-22T16:22:30Z | https://github.com/wagtail/wagtail/issues/12618 | [
"type:Bug"
] | laymonage | 0 |
sinaptik-ai/pandas-ai | pandas | 1,330 | Expose pandas API on SmartDataframe | ### 🚀 The feature
Expose commonly used pandas API on SmartDataframe so that a smart dataframe can be used as if it is a normal pandas dataframe.
### Motivation, pitch
Continuous modification of a pandas dataframe is often needed. Is there support for modifying an existing SmartDataframe after it has been constructed? If not, one way of enabling this is to expose the pandas API on SmartDataframes.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-08-19T17:29:23Z | 2024-11-25T16:07:45Z | https://github.com/sinaptik-ai/pandas-ai/issues/1330 | [
"enhancement"
] | c3-yiminliu | 1 |
litestar-org/litestar | api | 3,781 | Unexpected `ContextVar` handling for lifespan context managers | ### Description
A `ContextVar` set within a `lifespan` context manager is not available inside request handlers. By contrast, doing the same thing via an application-level dependency does appear to be set in the request handler.
### MCVE
```python
import asyncio
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from contextvars import ContextVar
from litestar import Litestar
from litestar import get
from litestar.testing import AsyncTestClient
VAR = ContextVar[int]("VAR", default=0)


@asynccontextmanager
async def set_var() -> AsyncIterator[None]:
    token = VAR.set(1)
    try:
        yield
    finally:
        VAR.reset(token)


@get("/example")
async def example() -> int:
    return VAR.get()


app = Litestar([example], lifespan=[set_var()])


async def test() -> None:
    async with AsyncTestClient(app) as client:
        response = await client.get("/example")
        assert (value := response.json()) == 1, f"Expected 1, got {value}"


if __name__ == "__main__":
    asyncio.run(test())
```
### Steps to reproduce
Run the example above. Either via `python example.py` or `uvicorn example:app` and check the `/example` route.
We should expect the response to be `1` since that's what's set during the `lifespan` context manager, but it's always 0.
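One plausible explanation, offered as an assumption rather than something confirmed from Litestar internals: ASGI servers typically run the lifespan scope in a different asyncio task than the ones handling requests, and each task runs in its own copy of the context, so a `ContextVar` set in the lifespan task never reaches a handler task. The effect reproduces without Litestar at all:

```python
import asyncio
from contextvars import ContextVar

DEMO = ContextVar("DEMO", default=0)

async def lifespan_like():
    DEMO.set(1)  # mutates only this task's own copy of the context

async def handler_like():
    return DEMO.get()

async def main():
    await asyncio.create_task(lifespan_like())        # "lifespan" task
    return await asyncio.create_task(handler_like())  # "request" task

result = asyncio.run(main())
print(result)  # 0: the handler task never sees the value set in the sibling task
```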
If you instead swap the `lifespan` context manager for a dependency it works as expected:
```python
import asyncio
from collections.abc import AsyncIterator
from contextvars import ContextVar
from litestar import Litestar
from litestar import get
from litestar.testing import AsyncTestClient
VAR = ContextVar[int]("VAR", default=0)


async def set_var() -> AsyncIterator[None]:
    token = VAR.set(1)
    try:
        yield
    finally:
        VAR.reset(token)


@get("/example")
async def example(set_var: None) -> int:
    return VAR.get()


app = Litestar([example], dependencies={"set_var": set_var})


async def test() -> None:
    async with AsyncTestClient(app) as client:
        response = await client.get("/example")
        assert (value := response.json()) == 1, f"Expected 1, got {value}"


if __name__ == "__main__":
    asyncio.run(test())
```
### Litestar Version
2.12.1
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-10-09T02:07:27Z | 2025-03-20T15:54:57Z | https://github.com/litestar-org/litestar/issues/3781 | [
"Bug :bug:"
] | rmorshea | 8 |
zappa/Zappa | flask | 1,283 | Delayed asynchronous invocation using SFN | # Summary
I'd like to open a discussion on the feature of delayed asynchronous task invocation as in the following example:
```python
@task(delay_seconds=1800)
def make_pie():
    """This task is invoked asynchronously 30 minutes after it is initially run."""
```
# History
I initially created a PR on the old repo with this functionality using SQS as a task queue. See:
- https://github.com/Miserlou/Zappa/pull/1648
- https://github.com/Miserlou/Zappa/issues/1647
- https://github.com/zappa/Zappa/issues/649
- https://github.com/zappa/Zappa/issues/648
Since then we've had the code from the original PR running smoothly in a production environment. We are happy with the solution, but delaying tasks too far ahead in the future (> 1 hour), although technically possible, has a couple of drawbacks:
- Costs increase linearly with the time a task is delayed, as for each delayed invocation, an additional lambda invocation is performed **every 15 minutes**.
- With sufficient concurrent delayed tasks, this results in lots of concurrent lambda invocations that have no purpose other than rescheduling the task for another 15 minutes.
- When the lambda function experiences downtime, or is being throttled, tasks can accumulate in the queue, resulting in a burst of invocations when the lambda function is back online. Resulting in increased stress on the system, possibly bringing the system back down.
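To put a rough number on the first drawback, assuming the scheme re-enqueues the message once per 15-minute SQS delay window as described above, the overhead grows linearly with the delay (a back-of-the-envelope model, not actual Zappa code):

```python
import math

SQS_MAX_DELAY = 900  # seconds; SQS's per-message delay cap

def extra_invocations(delay_seconds):
    """Re-scheduling Lambda invocations spent before the real task runs."""
    return max(0, math.ceil(delay_seconds / SQS_MAX_DELAY) - 1)

for hours in (0.25, 1, 6, 24):
    print(f"{hours:>5} h delay -> {extra_invocations(int(hours * 3600))} extra invocations")
```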
# Proposal
Because of these drawbacks I've looked into an alternative to the original task delaying with SQS and found one in [AWS step functions](https://aws.amazon.com/step-functions/). This service includes the ability to delay execution using a [Wait state](https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-wait-state.html) for up to 1 year with no cost or performance drawbacks. The only drawback is that the *fixed* $ cost/task is lower with SQS than with SFN. This means that tasks delayed for < 15 minutes are $ cheaper using SQS than using SFN, but for all tasks delayed > 15 minutes, SFN outperforms SQS on all fronts.
I've implemented the basic functionality outside of Zappa for a client organization of mine to test and evaluate the solution, and it performs admirably, without any notable drawbacks. I'm willing to perform the work to integrate it into Zappa if there is a need for this feature and if there is support from the maintainers to get it merged into the master branch.
However there are some decisions to be made before the feature can be implemented and I'd like some input from the community on this.
## Should `async_source` be more customizable?
Currently, Zappa allows setting a known `async_source` in the settings. This is by default set to `lambda` to use direct invocation, but can also be set to `sns` to use that service as an intermediary. However, in most cases the to be introduced `sfn` `async_source` is not a good default for all async invocations, only for delayed invocations. The ideal source would be *smart* where it chooses to invoke either lambda or sfn based on the delay. But would we then introduce a `sfn_and_lambda` `async_source`? You can see where this will end.
My proposal is to allow setting the `async_source` setting to an import path, which allows us to add *smart* implementations. This has the added benefit of users being able to bring their own implementation. The original `lambda` and `sns` would be deprecated, but of course still supported for backwards compatibility reasons.
```
{
    "dev": {
        ..
        "async_source": "zappa.async.LambdaAsync",
        ..
    }
}
```
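As a sketch of what a *smart* implementation behind such an import path could do, dispatching on the requested delay (SQS inside its native 15-minute cap, SFN beyond it; the names and threshold are illustrative, not existing Zappa API):

```python
SQS_MAX_DELAY_SECONDS = 15 * 60  # SQS's native per-message delay cap

def choose_async_source(delay_seconds):
    """Pick the cheapest backend that can honor the delay in a single hop."""
    if delay_seconds <= SQS_MAX_DELAY_SECONDS:
        return "sqs"  # lower fixed $ cost per task
    return "sfn"      # Wait state handles up to a year without re-queueing

print(choose_async_source(60), choose_async_source(7200))  # sqs sfn
```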
## Should `async_source` also manage the infrastructure?
All the infrastructure that is managed by Zappa for the `async_source` is currently managed in the Zappa cli [schedule](https://github.com/zappa/Zappa/blob/35af3cd7d1e76af902b9e22b9357d0c4b50e4655/zappa/cli.py#L1297) and [unschedule](https://github.com/zappa/Zappa/blob/35af3cd7d1e76af902b9e22b9357d0c4b50e4655/zappa/cli.py#L1355) functions. I'd like to move the bulk of this functionality to the same class that is pointed to by `async_source`. Again with the added benefit that it is then user customizable.
```python
class LambdaAsync:
    # ...
    def schedule_infrastructure(self):
        """Schedule or update the infrastructure."""

    def unschedule_infrastructure(self):
        """Unschedule the infrastructure."""
```
# Conclusion
I think delayed asynchronous invocation is a welcome feature and I am willing to put in the work to create a PR if there is support from the community and maintainers.
I propose to change the current implementation and work with a more customizable setup, pointing to an implementation class in the settings and allowing that class to manage the entire lifecycle of the feature: scheduling/unscheduling the infra and being smarter about the use of async services.
So please comment and answer:
- Should we have this feature in Zappa?
- Should `async_source` be more customizable?
- Should `async_source` also manage the infrastructure?
# Tags
*Participants from the previous PR*
@paulclarkaranz @jeffreybrowning @jneves @lu911 @M0dM @ironslob @vmiguellima | closed | 2023-11-06T10:18:35Z | 2024-04-13T20:36:57Z | https://github.com/zappa/Zappa/issues/1283 | [
"no-activity",
"auto-closed"
] | oliviersels | 5 |
microsoft/nni | tensorflow | 4,918 | Retiarii - How To Get Final Model | I am following your tutorial at https://nni.readthedocs.io/en/latest/tutorials/hello_nas.html and it runs fine, but I don't understand how to get the final results unless I am watching the experiment while it is running. I checked the logs at /content/nni/ec90fnpg/log/nnimanager.log but they don't seem to show any accuracy info or what the best model is. I just want to know what the accuracy score and the final model were, without having to watch it run.
I can export the top model as you describe at https://nni.readthedocs.io/en/latest/tutorials/hello_nas.html#export-top-models but it seems like that would only show while the experiment is active, and either way I am not sure how to then use that info to train the model.
An alternative would be for me to use "nnictl view" to directly grab and parse the info for the experiment while it is running but RetiariiExperiment can't be viewed with nnictl view (see https://github.com/microsoft/nni/issues/4743).
I simply want to run Retiarii to get an accuracy score and a trained model (or an export of the model parameters which I can then train). I would think that is what almost all users of an NAS program would want, but after hours of trying I can't figure out how to do that.
"user raised",
"NAS 2.0"
] | impulsecorp | 6 |
huggingface/datasets | pandas | 6,483 | Iterable Dataset: rename column clashes with remove column | ### Describe the bug
Suppose I have two iterable datasets, one with the features:
* `{"audio", "text", "column_a"}`
And the other with the features:
* `{"audio", "sentence", "column_b"}`
I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typically do this by:
1. Renaming the common columns to the same name (e.g. `"text"` -> `"sentence"`)
2. Removing the unwanted columns (e.g. `"column_a"`, `"column_b"`)
However, the process of renaming and removing columns in an iterable dataset doesn't work, since we need to preserve the original text column, meaning we can't combine the datasets.
### Steps to reproduce the bug
```python
from datasets import load_dataset
# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
# check original features
dataset_features = dataset.features.keys()
print("Original features: ", dataset_features)
# rename "text" -> "sentence"
dataset = dataset.rename_column("text", "sentence")
# remove unwanted columns
COLUMNS_TO_KEEP = {"audio", "sentence"}
dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
# stream first sample, should return "audio" and "sentence" columns
print(next(iter(dataset)))
```
Traceback:
```python
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[5], line 17
     14 COLUMNS_TO_KEEP = {"audio", "sentence"}
     15 dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
---> 17 print(next(iter(dataset)))

File ~/datasets/src/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)
   1350         yield formatter.format_row(pa_table)
   1351     return
-> 1353 for key, example in ex_iterable:
   1354     if self.features:
   1355         # `IterableDataset` automatically fills missing columns with None.
   1356         # This is done with `_apply_feature_types_on_example`.
   1357         example = _apply_feature_types_on_example(
   1358             example, self.features, token_per_repo_id=self._token_per_repo_id
   1359         )

File ~/datasets/src/datasets/iterable_dataset.py:652, in MappedExamplesIterable.__iter__(self)
    650     yield from ArrowExamplesIterable(self._iter_arrow, {})
    651 else:
--> 652     yield from self._iter()

File ~/datasets/src/datasets/iterable_dataset.py:729, in MappedExamplesIterable._iter(self)
    727 if self.remove_columns:
    728     for c in self.remove_columns:
--> 729         del transformed_example[c]
    730 yield key, transformed_example
    731 current_idx += 1

KeyError: 'text'
```
=> we see that `datasets` is looking for the column "text", even though we've renamed it to "sentence" and then removed the unwanted "text" column from our dataset.
### Expected behavior
Should be able to rename and remove columns from iterable dataset.
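In the meantime, a workaround consistent with the traceback is to drop the unwanted columns first and rename second, so the recorded remove step never references a stale name. The ordering problem can be shown with a toy stand-in for the lazy pipeline (illustrative only, not the `datasets` internals):

```python
def apply_ops(example, ops):
    """Apply recorded (rename/remove) steps to one example, in order."""
    for kind, arg in ops:
        if kind == "rename":
            old, new = arg
            example[new] = example.pop(old)
        elif kind == "remove":
            for col in arg:
                del example[col]  # KeyError if `col` was already renamed away
    return example

sample = {"audio": "...", "text": "hi", "speaker_id": 7}
failing = [("rename", ("text", "sentence")), ("remove", {"text", "speaker_id"})]
fixed = [("remove", {"speaker_id"}), ("rename", ("text", "sentence"))]

print(apply_ops(dict(sample), fixed))   # {'audio': '...', 'sentence': 'hi'}
try:
    apply_ops(dict(sample), failing)
except KeyError as err:
    print("failing order raises KeyError:", err)
```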
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: macOS-13.5.1-arm64-arm-64bit
- Python version: 3.11.6
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2 | closed | 2023-12-08T16:11:30Z | 2023-12-08T16:27:16Z | https://github.com/huggingface/datasets/issues/6483 | [
"streaming"
] | sanchit-gandhi | 4 |
streamlit/streamlit | data-science | 10,521 | Slow download of csv file when using the inbuild download as csv function for tables displayed as dataframes in Edge Browser | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Issue is only in MS Edge Browser:
When pressing "download as csv" on a table, the download is really slow, and I run it on a ThinkPad P16 Gen 1.
15 columns x 20 k rows takes 9-11 sec
15 columns x 50 k rows takes 19-22 sec
When I do it with my own function using `to_csv` from the pandas library, I can do it in less than 1 sec for both 20 k and 50 k.
**Issue only occur in Edge browser.**
Brave and Firefox work just fine with the built-in one.
### Reproducible Code Example
```Python
import streamlit as st
import pandas as pd
import numpy as np  # needed for np.random.choice below
# tested in both 1.38 and 1.42.2
# Name: streamlit
# Version: 1.39.0 / 1.42.2
# Define number of rows and columns
num_rows = 20000 # 20 k rows takes 9-11 sec to download via inbuild download as csv
# num_rows = 50000 # 50 k rows takes 19-22 sec to download via inbuild download as csv
num_cols = 15
# Generate random data
data = {
    f"Col_{i+1}": np.random.choice(['A', 'B', 'C', 'D', 1, 2, 3, 4, 5, 10.5, 20.8, 30.1], num_rows)
    for i in range(num_cols)
}
data = pd.DataFrame(data)
st.write(data) # the same issue when using st.dataframe(data)
# the method below takes less than a second for both 20 k and 50 k rows
# to_csv() is from the pandas library, which is also used by the streamlit package.
csv = data.to_csv(index=False).encode('utf-8')
# Download button
st.download_button(
    label="Download as CSV OWN",
    data=csv,
    file_name='data.csv',
    mime='text/csv',
)
```
### Steps To Reproduce
Hover over the table, click "download as csv", and watch your download folder: it loads at only 50-100 kB/s.
Then try using the custom-made button "Download as CSV OWN"; it downloads instantly.
### Expected Behavior
I would expect the built-in "download as csv" function to be as fast as the `pandas.to_csv()` function.
I tried it on a Thinkpad T14 gen 3, P16 gen 1 and on a linux server, all have the same issue

### Current Behavior
No error message; it is just super slow.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.39.0 and 1.42.2
- Python version: 3.12.1
- Operating System: Windows 11 / windows 10, Linux server
- Browser: Edge for business: Version 133.0.3065.82 (Official build) (64-bit)
### Additional Information
_No response_ | open | 2025-02-26T08:43:56Z | 2025-03-03T11:49:27Z | https://github.com/streamlit/streamlit/issues/10521 | [
"type:bug",
"feature:st.dataframe",
"status:confirmed",
"priority:P3",
"feature:st.download_button",
"feature:st.data_editor"
] | LazerLars | 2 |
microsoft/unilm | nlp | 1,637 | Reproducing Differential Transformer. | I compared the training losses of the Transformer and Differential Transformer under my own setting but failed to reproduce the experimental results. I wonder whether the operation of "taking difference" and applying softmax should be reversed in official code.
[official code](https://github.com/microsoft/unilm/blob/095574c8bf5999526df8a069b36b9d96e1fd337e/Diff-Transformer/multihead_diffattn.py#L101)
@YTianZHU
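For context on why the order matters: subtracting two attention maps after softmax is not the same operation as applying softmax to a difference of logits; the two orders genuinely disagree. A toy numeric check (pure Python, unrelated to the actual model):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

a, b = [1.0, 2.0], [0.5, 0.25]
diff_of_softmax = [p - q for p, q in zip(softmax(a), softmax(b))]  # sums to 0
softmax_of_diff = softmax([p - q for p, q in zip(a, b)])           # sums to 1
print(diff_of_softmax)
print(softmax_of_diff)
```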
| closed | 2024-10-14T05:21:16Z | 2024-10-14T14:37:57Z | https://github.com/microsoft/unilm/issues/1637 | [] | Zcchill | 4 |
TracecatHQ/tracecat | automation | 872 | Editing secrets wipes out previous keys | **Describe the bug**
When editing a secret, the previously set keys are not loaded as options to update; adding a new key to the existing secret wipes out the previous keys associated with the secret.
**To Reproduce**
- Create secret with multiple keys
- Edit the same secret to add or update existing key
- Existing keys aren't listed to update and adding a new key removes all previously associated keys. (see screenshot walkthrough)
**Expected behavior**
I would expect to be able to update existing keys and add new keys without wiping out previously configured keys in the secret. The scenario of just needing to update a rotated password shouldn't wipe out the username or other settings stored in secrets.
**Screenshots**
New secret with multiple keys


Editing secret, existing keys not listed to update if needed

Add new key to existing secret

Previous keys removed and only new key listed

| closed | 2025-02-18T15:58:09Z | 2025-03-15T14:27:24Z | https://github.com/TracecatHQ/tracecat/issues/872 | [
"good first issue",
"frontend"
] | landonc | 4 |
axnsan12/drf-yasg | rest-api | 784 | get/put/patch id key type error | # Bug Report
## Description
The `id` field is the primary key, and in the URL conf we set it to `int`, like below:
```
path('api/v2/device/<int:id>', app_views.DeviceViewSet.as_view({
    'get': 'retrieve',
    'put': 'update',
    'patch': 'partial_update',
})),
```
But in the swagger JSON, this field changes to `string`, and there is no description like `A unique integer value identifying this`.
```
"parameters": [
    {
        "name": "id",
        "in": "path",
        "required": true,
        "type": "string"
    }
]
```
And if the viewset has only a list or get method, the `id` field type is correct in the swagger.
By the way, I can add manual parameters to override the default `id` parameter:
```
manual_parameters=[
    openapi.Parameter('id', openapi.IN_PATH,
                      description="A unique integer value identifying this xxx.",
                      type=openapi.TYPE_INTEGER)
],
```
## Your Environment
```
Django==3.2.7
djangorestframework==3.12.4
drf-yasg==1.20.0
```
| closed | 2022-03-21T02:44:58Z | 2022-03-21T04:02:21Z | https://github.com/axnsan12/drf-yasg/issues/784 | [] | cataglyphis | 1 |
scikit-learn/scikit-learn | python | 30,037 | Implement the two-parameter Box-Cox transform variant | ### Describe the workflow you want to enable
Currently, only the single-parameter Box-Cox is implemented in sklearn.preprocessing.power_transform.
The two-parameter variant is defined as

where both the parameters are to be fit from data via MLE
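A direct scalar implementation of the formula, as an illustrative sketch (the function name is made up rather than a proposed sklearn API, and the MLE fitting of both lambdas is omitted):

```python
import math

def boxcox_two_param(y, lmbda1, lmbda2):
    """Two-parameter Box-Cox transform of a scalar; requires y + lmbda2 > 0."""
    shifted = y + lmbda2
    if shifted <= 0:
        raise ValueError("y + lambda2 must be strictly positive")
    if lmbda1 == 0:
        return math.log(shifted)
    return (shifted ** lmbda1 - 1.0) / lmbda1

print(boxcox_two_param(-0.5, 0.0, 1.5))  # negative input becomes valid after the shift; log(1.0) == 0.0
```

An `inverse_transform` and MLE estimation of both parameters would be needed for parity with the existing `power_transform` methods.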
### Describe your proposed solution
Add the two-parameter variant as a new method to sklearn.preprocessing.power_transform
### Describe alternatives you've considered, if relevant
Of course, the default yeo-johnson transform can be used for negative data, but that is mathematically different
### Additional context
wikipedia page: https://en.wikipedia.org/wiki/Power_transform | open | 2024-10-09T12:35:03Z | 2024-10-09T14:59:44Z | https://github.com/scikit-learn/scikit-learn/issues/30037 | [
"New Feature"
] | jachymb | 1 |
skypilot-org/skypilot | data-science | 4,990 | [R2] Treat Cloudflare as a cloud | Every cloud except Cloudflare is implemented as a subclass of clouds.Cloud. Cloudflare is an exception to the rule because, unlike other clouds, Cloudflare does not offer compute and did not align well with the compute-centric view of SkyPilot. Once https://github.com/skypilot-org/skypilot/issues/4957 is resolved, Cloudflare should be able to get its full implementation as a cloud that supports storage and not compute. | open | 2025-03-19T05:11:55Z | 2025-03-19T17:15:28Z | https://github.com/skypilot-org/skypilot/issues/4990 | [] | SeungjinYang | 0 |
huggingface/peft | pytorch | 2,255 | Is this the right way to check whether a model has been trained as expected? | I'd like to check whether my PEFT model has been trained as intended, i.e. whether the PEFT weights have changed, but not the base weights. The following code works, but I'm sure a PEFT specialist will suggest a better way.
```python
import tempfile
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer
# Get the base model
model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5"
model = AutoModelForCausalLM.from_pretrained(model_id)
# Get the base model parameter names
base_param_names = [f"base_model.model.{n}" for n, _ in model.named_parameters()]
# Turn the model into a peft model
lora_config = LoraConfig()
model = get_peft_model(model, lora_config)
# Get the dataset
dataset = load_dataset("trl-internal-testing/zen", "standard_language_modeling", split="train")
with tempfile.TemporaryDirectory() as tmp_dir:
    # Initialize the trainer
    training_args = SFTConfig(output_dir=tmp_dir, report_to="none")
    trainer = SFTTrainer(args=training_args, model=model, train_dataset=dataset)

    # Save the initial parameters to compare them later
    previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()}

    trainer.train()

    # Check the peft params have changed and the base model params have not changed
    for n, param in previous_trainable_params.items():
        new_param = trainer.model.get_parameter(n)
        if n in base_param_names:  # We expect the base model parameters to be the same
            if not torch.allclose(param, new_param):
                print(f"Parameter {n} has changed, but it should not have changed")
        elif "base_layer" not in n:  # We expect the peft parameters to be different (except for the base layer)
            if torch.allclose(param, new_param):
                print(f"Parameter {n} has not changed, but it should have changed")
``` | closed | 2024-12-03T17:36:00Z | 2024-12-04T12:01:37Z | https://github.com/huggingface/peft/issues/2255 | [] | qgallouedec | 5 |
xinntao/Real-ESRGAN | pytorch | 785 | inference_realesrgan.py how work on multi-gpu? | inference_realesrgan.py has the argument `parser.add_argument('-g', '--gpu-id', type=int, default=None, help='gpu device to use (default=None) can be 0,1,2 for multi-gpu')`, but passing a multi-GPU value causes an exception: `error: argument -g/--gpu-id: invalid int value: '0,1'` | open | 2024-04-18T07:38:18Z | 2024-08-22T09:43:22Z | https://github.com/xinntao/Real-ESRGAN/issues/785 | [] | siestina | 2
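The help string promises `0,1,2 for multi-gpu`, but `type=int` can only parse a single integer, which is exactly what the error says. One way such a flag could accept a comma-separated list is a custom `type` callable (a sketch; `gpu_list` is my name, not something in the Real-ESRGAN script):

```python
import argparse

def gpu_list(value):
    # accept "0", "0,1", "0,1,2", ... and return a list of ints
    return [int(v) for v in value.split(",")]

parser = argparse.ArgumentParser()
parser.add_argument("-g", "--gpu-id", type=gpu_list, default=None,
                    help="gpu device(s) to use, e.g. 0 or 0,1,2 for multi-gpu")

args = parser.parse_args(["-g", "0,1"])
print(args.gpu_id)  # [0, 1]
```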
jonaswinkler/paperless-ng | django | 1,474 | [Other] Rerun OCR on uploaded documents | Hi there!
Thanks for the wonderful project!
Just uploaded my first documents to paperless and found out that it seems OCR uses the `eng` language, which is set as the default one. I've changed the default language to `rus`, and I don't see an option to rerun OCR on the uploaded files. Is there a way to do this? | closed | 2021-12-10T11:40:05Z | 2022-01-16T07:50:16Z | https://github.com/jonaswinkler/paperless-ng/issues/1474 | [] | sprnza | 5
encode/databases | asyncio | 141 | Interval type column raises TypeError on record access | Hi
I found an issue when selecting from a table with an [`Interval`](https://docs.sqlalchemy.org/en/13/core/type_basics.html#sqlalchemy.types.Interval) column, which should parse to a `timedelta` object.
When used with databases, sqlalchemy raises a `TypeError` when trying to access the column in the result:
```python
~/projects/dummy/scripts/interval.py in <module>
33
34 loop = asyncio.get_event_loop()
---> 35 result = loop.run_until_complete(do())
/usr/lib/python3.7/asyncio/base_events.py in run_until_complete(self, future)
577 raise RuntimeError('Event loop stopped before Future completed.')
578
--> 579 return future.result()
580
581 def stop(self):
~/projects/dummy/scripts/interval.py in do()
28
29 row = await db.fetch_one(thing.select())
---> 30 result = row["duration"] # This fails
31 return result
32
~/projects/dummy/.venv/lib/python3.7/site-packages/databases/backends/postgres.py in __getitem__(self, key)
112
113 if processor is not None:
--> 114 return processor(raw)
115 return raw
116
~/projects/dummy/.venv/lib/python3.7/site-packages/sqlalchemy/sql/sqltypes.py in process(value)
1927 if value is None:
1928 return None
-> 1929 return value - epoch
1930
1931 return process
TypeError: unsupported operand type(s) for -: 'datetime.timedelta' and 'datetime.datetime'
```
It works fine when using the same table with sqlalchemy engine directly.
I suspect it's because the result processor ( https://github.com/zzzeek/sqlalchemy/blob/master/lib/sqlalchemy/sql/sqltypes.py#L1914 ) executes when used with databases, but not with sqlalchemy. The question is why.
My understanding is that it should not be executed for postgres because it has a native interval type.
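That reading is at least consistent with the TypeError itself: asyncpg already hands back a `timedelta` for a native interval column, and applying SQLAlchemy's emulation processor (which subtracts an epoch `datetime`) to it reproduces the failure with stdlib types alone. A sketch mirroring `sqltypes.Interval.result_processor` (the epoch value is my assumption, based on the `utcfromtimestamp(0)` in the linked source):

```python
from datetime import datetime, timedelta

epoch = datetime(1970, 1, 1)  # assumed to match Interval's utcfromtimestamp(0)

def emulated_processor(value):
    # mirrors the non-native branch of Interval.result_processor
    return None if value is None else value - epoch

# A non-native backend returns a datetime, and the subtraction works:
assert emulated_processor(epoch + timedelta(days=7)) == timedelta(days=7)

# asyncpg's native interval already yields a timedelta, and the same
# processor then raises exactly the error from the traceback:
err = None
try:
    emulated_processor(timedelta(days=7))
except TypeError as e:
    err = e
print(err)  # unsupported operand type(s) for -: 'datetime.timedelta' and 'datetime.datetime'
```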
Here is a script to reproduce the problem:
```python
import asyncio
from datetime import timedelta
from sqlalchemy import Column, create_engine, MetaData, Table, Interval
from databases import Database
DB_URL = "postgresql://dev:dev@localhost:6808/dev"
metadata = MetaData()
thing = Table(
"thing",
metadata,
Column("duration", Interval, nullable=False),
extend_existing=True,
)
async def do():
db = Database(DB_URL)
if not db.is_connected:
await db.connect()
engine = create_engine(DB_URL)
metadata.create_all(engine)
await db.execute(thing.insert().values(duration=timedelta(days=7)))
row = await db.fetch_one(thing.select())
result = row["duration"] # This fails
# But works with sqlalchemy engine, e.g.
# return next(engine.execute(thing.select()))["duration"]
return result
loop = asyncio.get_event_loop()
result = loop.run_until_complete(do())
``` | open | 2019-09-16T15:45:34Z | 2020-07-25T17:25:19Z | https://github.com/encode/databases/issues/141 | [] | steinitzu | 4 |
databricks/koalas | pandas | 1,492 | Pyspark functions vs Koalas methods | Hi, this is more of a question, but I was wondering why so many Koalas methods show up as `pythonUDF`s in the `explain()` output?
I guess my questions are:
- Is this intended behavior? If so, could you explain why?
- What is the "optimal" pick here? Does this change when data is larger, smaller, etc?
Examples
**String methods**
*Koalas*
```python
df = ks.DataFrame(["example"], columns=["column"])
df["column"] = df["column"].str.upper().str.strip()
df.explain()
== Physical Plan ==
*(1) Project [__index_level_0__#720L, pythonUDF0#746 AS column#738]
+- ArrowEvalPython [clean_fun(clean_fun(column#721))], [__index_level_0__#720L, column#721, pythonUDF0#746]
+- Scan ExistingRDD[__index_level_0__#720L,column#721]
```
*Pyspark*
```python
df = ks.DataFrame(["example"], columns=["column"])
df = df.to_spark().withColumn("column", F.trim(F.upper(F.col("column")))).to_koalas()
df.explain()
== Physical Plan ==
*(3) Project [(_we0#709 - 1) AS __index_level_0__#707, column#704]
+- Window [row_number() windowspecdefinition(_w0#708L ASC NULLS FIRST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS _we0#709], [_w0#708L ASC NULLS FIRST]
+- *(2) Sort [_w0#708L ASC NULLS FIRST], false, 0
+- Exchange SinglePartition
+- *(1) Project [trim(upper(column#694), None) AS column#704, monotonically_increasing_id() AS _w0#708L]
+- Scan ExistingRDD[__index_level_0__#693L,column#694]
```
**Datetime methods**
*Koalas*
```python
df = ks.DataFrame(["2020-01-01"], columns=["date"])
df["date"] = ks.to_datetime(df["date"])
df.explain()
== Physical Plan ==
*(1) Project [__index_level_0__#756L, pythonUDF0#776 AS date#768]
+- ArrowEvalPython [clean_fun(date#757)], [__index_level_0__#756L, date#757, pythonUDF0#776]
+- Scan ExistingRDD[__index_level_0__#756L,date#757]
```
*Pyspark*
```python
df = ks.DataFrame(["2020-01-01"], columns=["date"])
df = df.to_spark().withColumn("date", F.to_date(F.col("date"))).to_koalas()
df.explain()
== Physical Plan ==
*(3) Project [(_we0#806 - 1) AS __index_level_0__#804, date#801]
+- Window [row_number() windowspecdefinition(_w0#805L ASC NULLS FIRST, specifiedwindowframe(RowFrame, unboundedpreceding$(), currentrow$())) AS _we0#806], [_w0#805L ASC NULLS FIRST]
+- *(2) Sort [_w0#805L ASC NULLS FIRST], false, 0
+- Exchange SinglePartition
+- *(1) Project [cast(date#791 as date) AS date#801, monotonically_increasing_id() AS _w0#805L]
+- Scan ExistingRDD[__index_level_0__#790L,date#791]
```
| closed | 2020-05-13T09:11:50Z | 2020-06-16T12:08:24Z | https://github.com/databricks/koalas/issues/1492 | [
"question"
] | sebastianvermaas | 3 |
deezer/spleeter | deep-learning | 555 | spleeter-cpu.yaml file missing | In the Spleeter ZIP file (Windows version), in the "conda" folder, the file "spleeter-cpu.yaml" is missing.
It is therefore impossible to create a new environment in Anaconda. | closed | 2021-01-10T11:09:45Z | 2021-01-11T11:50:30Z | https://github.com/deezer/spleeter/issues/555 | [
"bug",
"invalid",
"conda"
] | dextermaniac | 1 |
hankcs/HanLP | nlp | 632 | Would you consider releasing a Python version, or providing a Python interface? | Opening it up to Python should be a good direction. | closed | 2017-09-20T05:34:51Z | 2017-09-25T03:33:33Z | https://github.com/hankcs/HanLP/issues/632 | [
"duplicated"
] | suparek | 6 |
microsoft/MMdnn | tensorflow | 879 | pytorch2keras: ModuleNotFoundError: No module named 'models' | Platform (like ubuntu 16.04/win10): ubuntu 18.04
Python version: 3.6.8
Source framework with version (like Tensorflow 1.4.1 with GPU): pytorch 1.15.1
Destination framework with version (like CNTK 2.3 with GPU): keras 2.2.4
Pre-trained model path (webpath or webdisk path):
Running scripts: mmconvert -sf pytorch -in pcifar.pth -df keras -om cifar.h5
~~~
Traceback (most recent call last):
File "/home/fanzh/.conda/envs/dba/lib/python3.6/site-packages/mmdnn/conversion/pytorch/pytorch_parser.py", line 81, in __init__
model = torch.load(model_file_name)
File "/home/fanzh/.conda/envs/dba/lib/python3.6/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/fanzh/.conda/envs/dba/lib/python3.6/site-packages/torch/serialization.py", line 773, in _legacy_load
result = unpickler.load()
ModuleNotFoundError: No module named 'models'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/fanzh/.conda/envs/dba/bin/mmtoir", line 8, in <module>
sys.exit(_main())
File "/home/fanzh/.conda/envs/dba/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 197, in _main
ret = _convert(args)
File "/home/fanzh/.conda/envs/dba/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 97, in _convert
parser = PytorchParser151(model, inputshape[0])
File "/home/fanzh/.conda/envs/dba/lib/python3.6/site-packages/mmdnn/conversion/pytorch/pytorch_parser.py", line 521, in __init__
super(PytorchParser151, self).__init__(model_file_name, input_shape)
File "/home/fanzh/.conda/envs/dba/lib/python3.6/site-packages/mmdnn/conversion/pytorch/pytorch_parser.py", line 83, in __init__
model = torch.load(model_file_name, map_location='cpu')
File "/home/fanzh/.conda/envs/dba/lib/python3.6/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/fanzh/.conda/envs/dba/lib/python3.6/site-packages/torch/serialization.py", line 773, in _legacy_load
result = unpickler.load()
ModuleNotFoundError: No module named 'models'
~~~ | closed | 2020-08-02T15:19:36Z | 2020-12-15T07:31:04Z | https://github.com/microsoft/MMdnn/issues/879 | [] | zhang-f | 3
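For context (not part of the original report): `ModuleNotFoundError: No module named 'models'` during `torch.load` is the classic symptom of a checkpoint saved with `torch.save(model)` rather than `torch.save(model.state_dict())`: pickle records only the class's import path, so the training repo's `models` package must be importable at load time. The failure mode reproduces with plain stdlib pickle (a sketch with a throwaway module standing in for `models`):

```python
import pickle, sys, types

# A throwaway module with one class, mimicking the training repo's `models`
mod = types.ModuleType("models")
class Net:
    pass
Net.__module__ = "models"   # pretend Net was defined in models.py
mod.Net = Net
sys.modules["models"] = mod

blob = pickle.dumps(Net())  # stores the *path* "models.Net", not the code

del sys.modules["models"]   # simulate a machine without the training repo

err = None
try:
    pickle.loads(blob)
except ModuleNotFoundError as e:
    err = e
print(err)  # No module named 'models'
```

A `state_dict` checkpoint avoids this pattern, since only tensors and their keys are serialized rather than the model class's import path.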
deepfakes/faceswap | deep-learning | 1,407 | Future Development Plans for FaceSwap | Just wondering, are there any new features or updates or new releases coming soon, or is the project wrapping up and going into the archives soon?
| closed | 2024-11-07T16:14:37Z | 2024-11-09T14:56:15Z | https://github.com/deepfakes/faceswap/issues/1407 | [] | ruidazeng | 1 |
miguelgrinberg/python-socketio | asyncio | 1,143 | namespace in conjunction with room does not work | Greetings to all. I noticed the following problem: if, out of context, I use
`await sio.emit('new_message_response', data={'test':'tester'}, namespace='/chat', room='c50c277a-b3a8-49e8-84ae-ed2497ed6655')`
then nothing happens. But if I remove the room, the message is sent. The room exists and there are people in it. | closed | 2023-02-26T08:08:38Z | 2023-02-26T16:01:01Z | https://github.com/miguelgrinberg/python-socketio/issues/1143 | [] | d1ksim | 0
sinaptik-ai/pandas-ai | data-science | 995 | AttributeError: 'GoogleGemini' object has no attribute 'google' | ### System Info
OS version: windows11
Python version: 3.11
The current version of pandasai being used: 2.0.3
### 🐛 Describe the bug
```
import pandas as pd
from pandasai.llm import GooglePalm, GoogleGemini
from pandasai import Agent
df = pd.DataFrame({
"country": [
"United States",
"United Kingdom",
"France",
"Germany",
"Italy",
"Spain",
"Canada",
"Australia",
"Japan",
"China",
],
"gdp": [
19294482071552,
2891615567872,
2411255037952,
3435817336832,
1745433788416,
1181205135360,
1607402389504,
1490967855104,
4380756541440,
14631844184064,
],
"happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12],
})
llm = GoogleGemini(api_key="API-KEY")
dfa = Agent([df], config={"llm": llm}, memory_size=10)
response = dfa.chat("Return the top 5 countries by GDP")
print(response)
response = dfa.chat("Plot the histogram of countries showing for each the gpd")
print(response)
```
--------------------------------------------------
Just a simple try with Gemini, but it errors: the first chat got a successful response, but the second (plotting a chart) failed. See the trace below:
'''
country gdp happiness_index
0 United States 19294482071552 6.94
9 China 14631844184064 5.12
8 Japan 4380756541440 5.87
3 Germany 3435817336832 7.07
1 United Kingdom 2891615567872 7.16
Traceback (most recent call last):
File "C:\Users\Danzel\PycharmProjects\Tools\venv\Lib\site-packages\pandasai\pipelines\chat\generate_chat_pipeline.py", line 155, in run
output = self.pipeline.run(input)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Danzel\PycharmProjects\Tools\venv\Lib\site-packages\pandasai\pipelines\pipeline.py", line 137, in run
raise e
File "C:\Users\Danzel\PycharmProjects\Tools\venv\Lib\site-packages\pandasai\pipelines\pipeline.py", line 101, in run
step_output = logic.execute(
^^^^^^^^^^^^^^
File "C:\Users\Danzel\PycharmProjects\Tools\venv\Lib\site-packages\pandasai\pipelines\chat\code_generator.py", line 33, in execute
code = pipeline_context.config.llm.generate_code(input, pipeline_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Danzel\PycharmProjects\Tools\venv\Lib\site-packages\pandasai\llm\base.py", line 196, in generate_code
response = self.call(instruction, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Danzel\PycharmProjects\Tools\venv\Lib\site-packages\pandasai\llm\base.py", line 483, in call
return self._generate_text(self.last_prompt, memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Danzel\PycharmProjects\Tools\venv\Lib\site-packages\pandasai\llm\google_gemini.py", line 90, in _generate_text
completion = self.google.GoogleGemini.generate_content(
^^^^^^^^^^^
AttributeError: 'GoogleGemini' object has no attribute 'google'
Unfortunately, I was not able to answer your question, because of the following error:
'GoogleGemini' object has no attribute 'google'
'''
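For what it's worth, the final `AttributeError` is independent of Gemini itself: any method that reads `self.google` on an instance that never assigned that attribute fails the same way. A minimal sketch of the failure shape (not pandasai's actual code):

```python
class GoogleGemini:
    def __init__(self, api_key):
        self.api_key = api_key
        # note: nothing here ever assigns self.google

    def _generate_text(self, prompt):
        # same call shape as the failing line in the traceback
        return self.google.GoogleGemini.generate_content(prompt)

llm = GoogleGemini(api_key="fake-key")
err = None
try:
    llm._generate_text("Plot the histogram")
except AttributeError as e:
    err = e
print(err)  # 'GoogleGemini' object has no attribute 'google'
```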
| closed | 2024-03-05T05:20:50Z | 2024-03-15T16:19:25Z | https://github.com/sinaptik-ai/pandas-ai/issues/995 | [] | gDanzel | 2 |
inducer/pudb | pytest | 80 | Double stack entries with post_mortem | I can't figure out what is going on here. If you try [jedi](https://github.com/davidhalter/jedi), and use `./sith.py random code --pudb`, where you replace `code` with some random Python code (like try PuDB's own code base), the stack items appear twice (note, jedi is highly recursive, so the stack traces will tend to be big and repetitive).
You'll need to use my [pudb](https://github.com/davidhalter/jedi/pull/273) branch, which fixes pudb integration.
I've attached two screenshots showing what I mean. Notice how there are two `>>` in the stack trace.


| open | 2013-07-26T00:25:24Z | 2013-08-09T04:27:25Z | https://github.com/inducer/pudb/issues/80 | [] | asmeurer | 1 |
miguelgrinberg/Flask-Migrate | flask | 312 | upgrade hangs forever with multiple database binds, probably deadlocked | Running `flask db upgrade` hangs forever, until I kill the process or stop the postgres server. I have 3 database binds including default. It's getting stuck when updating the version in `alembic_version` table. The migration change isn't related, even running an empty `pass` migration still hangs.
Probably, the first `default` database transaction wasn't closed before starting the second `heartbeats` migration, causing a deadlock.
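The suspected mechanism, an uncommitted write transaction in one connection blocking another connection's update of the same table, is easy to demonstrate with stdlib sqlite3 (a sketch: sqlite raises `database is locked` after its busy timeout instead of hanging, but the blocking pattern is the same one suspected for the postgres binds):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0.2)
b = sqlite3.connect(path, timeout=0.2)

a.execute("CREATE TABLE alembic_version (version_num TEXT)")
a.execute("INSERT INTO alembic_version VALUES ('5c45f1292ef8')")
a.commit()

# Connection A runs the first migration and never commits its transaction:
a.execute("UPDATE alembic_version SET version_num = '52915d0437db'")

# Connection B (the second bind) now tries to update the same row:
err = None
try:
    b.execute("UPDATE alembic_version SET version_num = '52915d0437db'")
except sqlite3.OperationalError as e:
    err = e
print(err)  # database is locked
```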
Full traceback:
```
$ flask db upgrade
INFO [alembic.env] Migrating database <default>
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 5c45f1292ef8 -> 52915d0437db, create integration tables
INFO [alembic.env] Migrating database heartbeats
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 5c45f1292ef8 -> 52915d0437db, create integration tables
Traceback (most recent call last):
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1246, in _execute_context
cursor, statement, parameters, context
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 581, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.AdminShutdown: terminating connection due to administrator command
CONTEXT: while updating tuple (0,1) in relation "alembic_version"
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "migrations/env.py", line 215, in run_migrations_online
context.run_migrations(engine_name=name)
File "<string>", line 8, in run_migrations
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/alembic/runtime/environment.py", line 846, in run_migrations
self.get_context().run_migrations(**kw)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/alembic/runtime/migration.py", line 525, in run_migrations
head_maintainer.update_to_step(step)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/alembic/runtime/migration.py", line 751, in update_to_step
self._update_version(from_, to_)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/alembic/runtime/migration.py", line 697, in _update_version
== literal_column("'%s'" % from_)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/alembic/ddl/impl.py", line 134, in _exec
return conn.execute(construct, *multiparams, **params)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 982, in execute
return meth(self, multiparams, params)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 287, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1101, in _execute_clauseelement
distilled_params,
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1250, in _execute_context
e, statement, parameters, cursor, context
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1476, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 152, in reraise
raise value.with_traceback(tb)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1246, in _execute_context
cursor, statement, parameters, context
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 581, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (psycopg2.errors.AdminShutdown) terminating connection due to administrator command
CONTEXT: while updating tuple (0,1) in relation "alembic_version"
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[SQL: UPDATE alembic_version SET version_num='52915d0437db' WHERE alembic_version.version_num = '5c45f1292ef8']
(Background on this error at: http://sqlalche.me/e/e3q8)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 733, in _rollback_impl
self.engine.dialect.do_rollback(self.connection)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 531, in do_rollback
dbapi_connection.rollback()
psycopg2.errors.AdminShutdown: terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/alanhamlett/git/app/venv/bin/flask", line 8, in <module>
sys.exit(main())
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/flask/cli.py", line 966, in main
cli.main(prog_name="python -m flask" if as_module else None)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/flask/cli.py", line 586, in main
return super(FlaskGroup, self).main(*args, **kwargs)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/flask/cli.py", line 426, in decorator
return __ctx.invoke(f, *args, **kwargs)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/flask_migrate/cli.py", line 134, in upgrade
_upgrade(directory, revision, sql, tag, x_arg)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/flask_migrate/__init__.py", line 95, in wrapped
f(*args, **kwargs)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/flask_migrate/__init__.py", line 280, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/alembic/command.py", line 298, in upgrade
script.run_env()
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/alembic/script/base.py", line 489, in run_env
util.load_python_file(self.dir, "env.py")
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/alembic/util/pyfiles.py", line 98, in load_python_file
module = load_module_py(module_id, path)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/alembic/util/compat.py", line 173, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "migrations/env.py", line 235, in <module>
run_migrations_online()
File "migrations/env.py", line 225, in run_migrations_online
rec['transaction'].rollback()
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1716, in rollback
self._do_rollback()
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1754, in _do_rollback
self.connection._rollback_impl()
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 735, in _rollback_impl
self._handle_dbapi_exception(e, None, None, None, None)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1476, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 152, in reraise
raise value.with_traceback(tb)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 733, in _rollback_impl
self.engine.dialect.do_rollback(self.connection)
File "/Users/alanhamlett/git/app/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 531, in do_rollback
dbapi_connection.rollback()
sqlalchemy.exc.OperationalError: (psycopg2.errors.AdminShutdown) terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
(Background on this error at: http://sqlalche.me/e/e3q8)
```
I can generate migrations and do things like `flask db stamp head` but can't upgrade:
```
$ flask db stamp head
INFO [alembic.env] Migrating database <default>
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.env] Migrating database heartbeats
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.env] Migrating database integrations
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
$ flask db upgrade
INFO [alembic.env] Migrating database <default>
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 5c45f1292ef8 -> 52915d0437db, create integration tables
INFO [alembic.env] Migrating database heartbeats
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 5c45f1292ef8 -> 52915d0437db, create integration tables
``` | closed | 2020-01-19T07:40:03Z | 2020-01-19T12:49:08Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/312 | [] | alanhamlett | 2 |
ansible/ansible | python | 84,142 | ansible.builtin.unarchive fails to extract single file from .tar.gz archive with just this file | ### Summary
Running, e.g., the following task
```yaml
- name: Extract archive
ansible.builtin.unarchive:
remote_src: yes
src: 'https://github.com/extrawurst/gitui/releases/download/v0.26.3/gitui-linux-x86_64.tar.gz'
dest: '{{ download_dir }}'
include:
- gitui
```
fails. Note that this archive contains only a single file called `gitui`. However, the following two tasks work as expected:
```yaml
- name: Extract archive
ansible.builtin.unarchive:
remote_src: true
src: 'https://github.com/jesseduffield/lazygit/releases/download/v0.44.1/lazygit_0.44.1_Linux_x86_64.tar.gz'
dest: '{{ download_dir }}'
include:
- lazygit
- name: Extract archive
ansible.builtin.unarchive:
remote_src: yes
src: 'https://github.com/extrawurst/gitui/releases/download/v0.26.3/gitui-linux-x86_64.tar.gz'
dest: '{{ download_dir }}'
```
In the first case, the archive contains three files (LICENSE, README.md and lazygit). The second case is the exact same task as the failing one, however not limited to the single `gitui` file it includes.
The failure is:
```
TASK [Extract archive] **********************************************************************************************************************************************************************
fatal: [sandbox.bulme.at]: FAILED! => {"changed": false, "msg": "Failed to find handler for \"/root/.ansible/tmp/ansible-tmp-1729266248.625658-212277-25726237746138/gitui-linux-x86_643ik238tk.tar.gz\". Make sure the required command to extract the file is installed.\nCommand \"/usr/bin/unzip\" could not handle archive: End-of-central-directory signature not found. Either this file is not\n a zipfile, or it constitutes one disk of a multi-part archive. In the\n latter case the central directory and zipfile comment will be found on\n the last disk(s) of this archive.\nunzip: cannot find zipfile directory in one of /root/.ansible/tmp/ansible-tmp-1729266248.625658-212277-25726237746138/gitui-linux-x86_643ik238tk.tar.gz or\n /root/.ansible/tmp/ansible-tmp-1729266248.625658-212277-25726237746138/gitui-linux-x86_643ik238tk.tar.gz.zip, and cannot find /root/.ansible/tmp/ansible-tmp-1729266248.625658-212277-25726237746138/gitui-linux-x86_643ik238tk.tar.gz.ZIP, period.\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: xz: (stdin): File format not recognized\n/usr/bin/tar: Child returned status 1\n/usr/bin/tar: Error is not recoverable: exiting now\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: gitui: Not found in archive\n/usr/bin/tar: Exiting with failure status due to previous errors\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: bzip2: (stdin) is not a bzip2 file.\n/usr/bin/tar: Child returned status 2\n/usr/bin/tar: Error is not recoverable: exiting now\n"}
```
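A datapoint worth adding: selecting a single member from a single-file tar.gz is unremarkable for Python's stdlib `tarfile`, as the sketch below shows (it builds an equivalent archive in memory instead of downloading the release). Whether the real release stores the member as `gitui` or `./gitui`, which would explain tar's `gitui: Not found in archive` line above, is worth checking with `tar -tzf`:

```python
import io, tarfile

# Build a single-file tar.gz in memory, mimicking gitui-linux-x86_64.tar.gz
buf = io.BytesIO()
payload = b"#!/bin/sh\necho gitui\n"
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="gitui")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# Listing and selecting the lone member works fine:
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    names = tar.getnames()
    member = tar.getmember("gitui")
print(names)  # ['gitui']
```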
### Issue Type
Bug Report
### Component Name
ansible.builtin.unarchive
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.3]
config file = /home/gerald/repos/bulme/infra/ansible/ansible.cfg
configured module search path = ['/home/gerald/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/gerald/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.12.7 (main, Oct 3 2024, 15:15:22) [GCC 14.2.0] (/usr/bin/python3)
jinja version = 3.1.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /home/gerald/repos/bulme/infra/ansible/ansible.cfg
DEFAULT_HOST_LIST(/home/gerald/repos/bulme/infra/ansible/ansible.cfg) = ['/home/gerald/repos/bulme/infra/ansible/inventory.yml']
DEFAULT_MANAGED_STR(/home/gerald/repos/bulme/infra/ansible/ansible.cfg) = "Ansible managed: generated on %Y-%m-%d %H:%M:%S by {uid} on {host}"
EDITOR(env: EDITOR) = vim
```
### OS / Environment
controller: kubuntu 24.10
host: ubuntu jammy
### Steps to Reproduce
```yaml
- name: Extract archive
ansible.builtin.unarchive:
remote_src: yes
src: 'https://github.com/extrawurst/gitui/releases/download/v0.26.3/gitui-linux-x86_64.tar.gz'
dest: '{{ download_dir }}'
include:
- gitui
```
run the above task as part of any playbook but make sure to define {{ download_dir }}
### Expected Results
A single file `gitui` is extracted after the task finishes.
### Actual Results
```console
TASK [Extract archive] **********************************************************************************************************************************************************************
fatal: [sandbox.bulme.at]: FAILED! => {"changed": false, "msg": "Failed to find handler for \"/root/.ansible/tmp/ansible-tmp-1729266248.625658-212277-25726237746138/gitui-linux-x86_643ik238tk.tar.gz\". Make sure the required command to extract the file is installed.\nCommand \"/usr/bin/unzip\" could not handle archive: End-of-central-directory signature not found. Either this file is not\n a zipfile, or it constitutes one disk of a multi-part archive. In the\n latter case the central directory and zipfile comment will be found on\n the last disk(s) of this archive.\nunzip: cannot find zipfile directory in one of /root/.ansible/tmp/ansible-tmp-1729266248.625658-212277-25726237746138/gitui-linux-x86_643ik238tk.tar.gz or\n /root/.ansible/tmp/ansible-tmp-1729266248.625658-212277-25726237746138/gitui-linux-x86_643ik238tk.tar.gz.zip, and cannot find /root/.ansible/tmp/ansible-tmp-1729266248.625658-212277-25726237746138/gitui-linux-x86_643ik238tk.tar.gz.ZIP, period.\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: xz: (stdin): File format not recognized\n/usr/bin/tar: Child returned status 1\n/usr/bin/tar: Error is not recoverable: exiting now\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: /usr/bin/tar: gitui: Not found in archive\n/usr/bin/tar: Exiting with failure status due to previous errors\n\nCommand \"/usr/bin/tar\" could not handle archive: Unable to list files in the archive: bzip2: (stdin) is not a bzip2 file.\n/usr/bin/tar: Child returned status 2\n/usr/bin/tar: Error is not recoverable: exiting now\n"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-10-18T15:57:25Z | 2024-11-03T14:00:01Z | https://github.com/ansible/ansible/issues/84142 | [
"module",
"bug",
"affects_2.16"
] | senarclens | 8 |
davidteather/TikTok-Api | api | 358 | Exception: Invalid Response | It worked well before, but now it reports this exception Exception: Invalid Response. Is my machine blocked? | closed | 2020-11-16T06:46:19Z | 2020-12-03T02:17:39Z | https://github.com/davidteather/TikTok-Api/issues/358 | [
"bug"
] | septfish | 10 |
ymcui/Chinese-BERT-wwm | tensorflow | 231 | Help: a link is broken | Hello Mr. Cui! When answering another user's question earlier, you provided the following link:
https://github.com/ymcui/CMRC2018-DRCD-BERT
However, that link is now broken. Could you tell me what the current corresponding link is? | closed | 2023-04-01T06:14:05Z | 2023-04-06T03:48:59Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/231 | [
"stale"
] | Alternate-D | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,045 | improve inference time | Does anyone have a clue on how to speed up the inference time? I know other vocoders have been tried but they were not satisfactory ... right?
| open | 2022-03-29T15:38:37Z | 2022-03-29T15:38:37Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1045 | [] | ireneb612 | 0 |
Kanaries/pygwalker | pandas | 510 | Would it be possible to walk it with Datatable.frame type in the future? | https://github.com/h2oai/datatable
It has higher performance reading and manipulating big csv data than pandas/polars/modin. But I can't walk this datatable.frame type. | open | 2024-04-01T17:59:46Z | 2024-04-02T06:55:41Z | https://github.com/Kanaries/pygwalker/issues/510 | [
"Vote if you want it",
"proposal"
] | plusmid | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 234 | dice_loss - -ve, iou_score - -ve | Dear @qubvel
Thank you for this amazing library.
I tried to build a UNet model with ResNet152 as a backbone and 1 class 'person'.
I used input images of 512x512 as input and corresponding binary masks.
I used your CamVid notebook sample code as a base and tuned it to make it work for my dataset.
However, I am getting dice loss and iou score as negative.
`dice_loss - -10.63, iou_score - -1.209`
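One sanity check worth noting: IoU on binary {0,1} masks is mathematically bounded to [0, 1], so a negative score typically means the inputs are not actually binary (e.g. raw logits, or masks not scaled to 0/1). A minimal pure-Python sketch of the bound:

```python
def iou(pred, target, eps=1e-7):
    # IoU over binary {0,1} masks: intersection / union, bounded to [0, 1]
    # by construction. Negative values imply inputs outside that range.
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    return (inter + eps) / (union + eps)

print(round(iou([1, 1, 0, 0], [1, 0, 1, 0]), 3))  # 0.333
```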
Have you ever come across such an issue?
Can you suggest some solution for this? | closed | 2020-07-16T09:12:55Z | 2022-02-20T01:53:51Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/234 | [
"Stale"
] | PareshKamble | 8 |
sammchardy/python-binance | api | 1,004 | futures_socket() doesn't work in testnet | I'm trying to get updates from the futures account in testnet using futures_socket() while opening and closing orders; however, it generates no output for any action.
```python
api = ...
secret = ...

from binance import AsyncClient, BinanceSocketManager
import asyncio

async def order_trade(client, symbol):
    order = await client.futures_create_order(symbol=symbol, side='BUY', type='MARKET', quantity='0.1')
    #order = await client.get_all_tickers()
    print(order)

async def main():
    client = await AsyncClient.create(api, secret, testnet=True)
    bm = BinanceSocketManager(client)
    async with bm.futures_socket() as stream:
        while True:
            print('connected?')
            loop.call_soon(asyncio.create_task, order_trade(client, symbol))
            res = await stream.recv()
            print(res)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
```
Environment
- Python version: 3.8.8
- OS: Mac
- python-binance version 1.0.10

| closed | 2021-09-02T23:25:27Z | 2022-07-19T11:28:08Z | https://github.com/sammchardy/python-binance/issues/1004 | [] | wckr-t | 1 |
public-apis/public-apis | api | 3,210 | Public Api test validation for Microsoft Edge | Any suggestions or verified data on this topic?? | closed | 2022-06-29T21:21:02Z | 2022-07-15T21:28:06Z | https://github.com/public-apis/public-apis/issues/3210 | [] | Vmc43hub | 1 |
noirbizarre/flask-restplus | flask | 119 | Add ability to document response headers | The library has the ability to quickly document the expected [request headers](https://flask-restplus.readthedocs.org/en/stable/documenting.html#headers) but currently there is no great way to document the response headers. There was some stuff added in https://github.com/noirbizarre/flask-restplus/commit/a2b73da8d3023cf9450ecc320a6eb1c8f79bb9d6 to be able to do it but only with error handlers.
I was thinking the best place to do it would be in [`response()`](https://github.com/noirbizarre/flask-restplus/blob/master/flask_restplus/namespace.py#L253).
Seems like it might be better to convert responses to a `dict` rather than a `tuple` which has to get unpacked and makes this a bit difficult.
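A toy illustration of the tuple-vs-dict point (names here are mine, not flask-restplus internals): positional tuples need length-guessing to unpack, while a dict payload is self-describing, so adding a `headers` field stays backwards-compatible:

```python
def normalize_response(resp):
    # Tuple form: positional unpacking has to guess what each slot means.
    if isinstance(resp, tuple):
        data = resp[0]
        code = resp[1] if len(resp) > 1 else 200
        headers = resp[2] if len(resp) > 2 else {}
    else:
        # Dict form: fields are named, so a new `headers` key is opt-in.
        data = resp.get("data")
        code = resp.get("code", 200)
        headers = resp.get("headers", {})
    return data, code, headers

print(normalize_response(({"ok": True}, 201, {"X-Total": "5"})))
```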
Perhaps this is part of the _todo_ planned:
https://github.com/noirbizarre/flask-restplus/blob/master/flask_restplus/swagger.py#L384
| closed | 2016-01-18T03:26:01Z | 2017-03-07T09:40:51Z | https://github.com/noirbizarre/flask-restplus/issues/119 | [
"bug",
"enhancement"
] | awiddersheim | 4 |
mwaskom/seaborn | matplotlib | 2,884 | Would it be possible to optionally use jax.gaussian_kde instead of scipy? | [Now that Jax has](https://github.com/google/jax/pull/11237) a GPU-accelerated version of `gaussian_kde`, would it be possible to use that instead of the scipy or the internal version of this function? | closed | 2022-06-29T05:12:10Z | 2022-06-29T21:30:10Z | https://github.com/mwaskom/seaborn/issues/2884 | [] | NeilGirdhar | 2 |
gunthercox/ChatterBot | machine-learning | 1,900 | Typo | closed | 2020-01-20T03:21:03Z | 2020-04-27T21:31:47Z | https://github.com/gunthercox/ChatterBot/issues/1900 | [
"invalid"
] | scottwedge | 0 | |
strawberry-graphql/strawberry | asyncio | 2,883 | GraphQL query timeouts | <!--- Provide a general summary of the changes you want in the title above. -->
On top of providing security via query complexity analysis and query depth analysis, I believe having query timeouts is an important feature.
## Feature Request Type
- [ ] Core functionality
- [ ] Alteration (enhancement/optimization) of existing feature(s)
- [x] New behavior
## Description
Backend design is a hard process and inevitably introduces unexpected, hard-to-debug issues,
especially ones involving the database or filesystem the service pulls information from.
Amateur devs could write SQL queries that _may_ halt the DB, or perform file operations that are not sustainable.
So I propose having the following feature set:
1. Having the query timeout and kill the respective event_loop would be a safer option in these cases.
2. In addition, there could be a **propagate timeout** function that would run a cleanup of an external DB query that's taking too long to execute.
3. Since returning a GraphQL error may break the spec, we could return a plain HTTP response with a custom 5xx error.
4. Global default timeout
5. Granular timeout control, meaning, we can define a safe timeout for each query.
I'd love to know what the rest of the community thinks about this suggestion.
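A rough sketch of what features (1), (3) and (4) could look like at the execution layer (names are illustrative, not Strawberry's API — the cancellation itself is just `asyncio.wait_for`):

```python
import asyncio

async def execute_with_timeout(resolver_coro, timeout_s: float):
    # (1)/(4): a global per-query timeout that cancels the in-flight work;
    # (3): surface the failure as a plain error payload instead of a
    # spec-shaped GraphQL error.
    try:
        return await asyncio.wait_for(resolver_coro, timeout=timeout_s)
    except asyncio.TimeoutError:
        return {"error": "query timed out", "status": 503}

async def slow_resolver():
    await asyncio.sleep(0.05)  # stands in for a slow DB call
    return {"data": "ok"}

print(asyncio.run(execute_with_timeout(slow_resolver(), timeout_s=0.01)))
```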
<!-- A few sentences describing what it is. --> | open | 2023-06-23T00:22:35Z | 2025-03-20T15:56:15Z | https://github.com/strawberry-graphql/strawberry/issues/2883 | [] | XChikuX | 0 |
dask/dask | pandas | 11,149 | a tutorial for distributed text deduplication | Can you provide an example of distributed text deduplication based on dask, such as:
- https://github.com/xorbitsai/xorbits/blob/main/python/xorbits/experimental/dedup.py
- https://github.com/ChenghaoMou/text-dedup/blob/main/text_dedup/minhash_spark.py
- https://github.com/FlagOpen/FlagData/blob/main/flagdata/deduplication/minhash.py | open | 2024-05-27T07:38:30Z | 2024-06-03T07:29:57Z | https://github.com/dask/dask/issues/11149 | [
"documentation"
] | simplew2011 | 5 |
deepset-ai/haystack | machine-learning | 8,319 | Add workflow to verify release notes are formatted correctly when added in PRs | The release process can fail if the release notes YAML files are not correctly formatted.
An example failure: https://github.com/deepset-ai/haystack/actions/runs/10669412195/job/29571286619#step:6:111
This is annoying, as we usually discover the issue only when drafting a new release, which slows down the whole process.
We should add a workflow that checks that the release notes files added in a PR are formatted correctly, to prevent this kind of issue from blocking the release process. | closed | 2024-09-02T15:47:01Z | 2024-09-04T15:37:33Z | https://github.com/deepset-ai/haystack/issues/8319 | [
"topic:CI",
"P2"
] | silvanocerza | 0 |
jupyter-widgets-contrib/ipycanvas | jupyter | 7 | MultiCanvas ImportError on Binder | On my Binder instance, in the notebook MultiCanvas.ipynb, the command `from ipycanvas import MultiCanvas` leads to
```
ImportError: cannot import name 'MultiCanvas' from 'ipycanvas' (/srv/conda/envs/notebook/lib/python3.7/site-packages/ipycanvas/__init__.py)
``` | closed | 2019-09-13T13:22:26Z | 2019-09-14T05:58:53Z | https://github.com/jupyter-widgets-contrib/ipycanvas/issues/7 | [] | lexnederbragt | 2 |
ageitgey/face_recognition | python | 1,542 | facial recognition wrong person | I created an actor database with hundreds of people and used a TV drama to extract faces for recognition. When I did face recognition on continuous frames of the same person, I found that the first few frames were correctly recognized, but then a mouth movement was recognized as another character, and the similarity was above 0.99. I set the parameter det_prob_threshold to 0.8 when creating the template and performing face recognition. I also tried adjusting it from 0.5 to 0.95, but the problem still persisted. Are there any other parameters that can be adjusted? | open | 2023-12-01T09:36:34Z | 2023-12-01T09:36:34Z | https://github.com/ageitgey/face_recognition/issues/1542 | [] | zhouyong297137551 | 0
PablocFonseca/streamlit-aggrid | streamlit | 195 | how to get the details of selected rows in a grid with a grouped row when selecting the group | When using a rowGroup and selecting a grouped row, streamlit returns a dictionary that does not contain the details of the selected rows inside the group (not even the group name).
Here is a reproducible example:
```python
import streamlit as st
from st_aggrid import AgGrid, ColumnsAutoSizeMode
import pandas as pd
#_ Input data
df = pd.DataFrame({
'Category': ['Fruit', 'Fruit', 'Vegetable', 'Vegetable'],
'Items': ['Apple', 'Banana', 'Tomato', 'Carrots'],
'Price': [1.04, 1.15, 1.74, 1.5]})
#_ Ag Grid table
st.markdown('# Issue: how to get group selection?')
st.write("Try selecting an aggregate, and then an atomic record")
grid_options = {
"columnDefs": [
{"field": "Category", "rowGroup": True, "hide": True},
{"field": "Items"},
{"field": "Price"},
],
"rowSelection": "single",
}
#_ Playing with response
response = AgGrid(
df,
grid_options,
columns_auto_size_mode=ColumnsAutoSizeMode.FIT_ALL_COLUMNS_TO_VIEW,
)
if response['selected_rows']:
selection=response['selected_rows'][0]
st.write("Current selection is provided as a nested dictionary, requesting `['selected_rows'][0]` value of AgGrid response:")
st.write(selection)
if "Items" in selection:
st.markdown('#### Good!')
Category = selection['Category']
Item = selection['Items']
Price = selection['Price']
st.write(f"We know everything about current selection: you picked a `{Category}` called `{Item}`, with price `{Price}`!")
else:
st.markdown('#### Bad!')
nodeId = response['selected_rows'][0]['_selectedRowNodeInfo']['nodeId']
st.write(f"All we know is that a node with Id `{nodeId}` is selected.\n\r How do we know if you're looking for a `Fruit` or a `Vegetable`?")
```
when selecting the `Fruit` group the output is:
```
{
"_selectedRowNodeInfo": {
"nodeRowIndex": 0,
"nodeId": "row-group-0"
}
}
```
I would like to know if there is some way to get additional details on selected rows (which are Apple and Banana in this case). There is rowIndex, and I could get them from there, but with multiple grouped rows it is not clear how to map a row index to group details.
I asked this question also as a Stack Overflow question (with bounty): https://stackoverflow.com/questions/75523779/how-to-get-details-of-group-selection-in-streamlit-aggrid
I'd be happy to get an answer there, in which case I would close this issue and link the answer.
I think it is useful to have the question tracked here as well, but if it is not appropriate feel free to close.
Thanks a lot for your work on streamlit-aggrid, it is great!
| open | 2023-03-01T11:51:25Z | 2023-03-01T11:51:25Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/195 | [] | pietroppeter | 0 |
holoviz/panel | matplotlib | 7,052 | AttributeError: 'CompositeWidget' object has no attribute '_composite' | panel==latest `main` branch
While trying to document the `CompositeWidget` in https://github.com/holoviz/panel/pull/7051 I discovered that it's not fully "robust": you cannot populate the `_composite` layout from a method marked `on_init=True`:
```python
import param
import panel as pn
from panel.widgets import CompositeWidget
pn.extension()
class ButtonGroup(CompositeWidget):
"""A widget that allows selecting between multiple options by clicking a button.
The value will trigger an event if a button is re-clicked.
"""
value = param.Parameter(label="Value", doc="The selected value")
options = param.Selector(allow_None=False, doc="The available options")
@param.depends("options", watch=True, on_init=True)
def _update_composite(self):
buttons = []
with pn.config.set(sizing_mode="stretch_width"):
for option in self.options:
button = pn.widgets.Button(
name=option,
on_click=pn.bind(self._update_value, value=option),
margin=(5, 5),
)
buttons.append(button)
self._composite[:] = buttons
def _update_value(self, event, value):
if self.value != value:
self.value = value
else:
self.param.trigger("value")
button_group = ButtonGroup(options=["Option 1", "Option 2", "Option 3"], width=300)
pn.Column(button_group, button_group.param.value).servable()
```
```bash
AttributeError: 'ButtonGroup' object has no attribute '_composite'
Traceback (most recent call last):
File "/home/jovyan/repos/private/panel/panel/io/handlers.py", line 405, in run
exec(self._code, module.__dict__)
File "/home/jovyan/repos/private/panel/script.py", line 39, in <module>
button_group = ButtonGroup(options=["Option 1", "Option 2", "Option 3"], width=300)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/script.py", line 17, in __init__
super().__init__(**params)
File "/home/jovyan/repos/private/panel/panel/widgets/base.py", line 203, in __init__
super().__init__(**params)
File "/home/jovyan/repos/private/panel/panel/widgets/base.py", line 115, in __init__
super().__init__(**params)
File "/home/jovyan/repos/private/panel/panel/reactive.py", line 597, in __init__
super().__init__(**params)
File "/home/jovyan/repos/private/panel/panel/reactive.py", line 125, in __init__
super().__init__(**params)
File "/home/jovyan/repos/private/panel/panel/viewable.py", line 704, in __init__
super().__init__(**params)
File "/home/jovyan/repos/private/panel/panel/viewable.py", line 543, in __init__
super().__init__(**params)
File "/home/jovyan/repos/private/panel/panel/viewable.py", line 302, in __init__
super().__init__(**params)
File "/home/jovyan/repos/private/panel/.venv/lib/python3.11/site-packages/param/parameterized.py", line 4194, in __init__
self.param._update_deps(init=True)
File "/home/jovyan/repos/private/panel/.venv/lib/python3.11/site-packages/param/parameterized.py", line 2146, in _update_deps
m()
File "/home/jovyan/repos/private/panel/.venv/lib/python3.11/site-packages/param/depends.py", line 53, in _depends
return func(*args, **kw)
^^^^^^^^^^^^^^^^^
File "/home/jovyan/repos/private/panel/script.py", line 30, in _update_composite
self._composite[:] = buttons
^^^^^^^^^^^^^^^
AttributeError: 'ButtonGroup' object has no attribute '_composite'
```
The problem is that the `_composite` layout is not defined until after `__init__`:

For now the workaround is to set the contents of `_composite` in the `__init__` instead.
neuml/txtai | nlp | 881 | No Issue, just cool | closed | 2025-02-27T18:10:47Z | 2025-02-28T02:55:14Z | https://github.com/neuml/txtai/issues/881 | [] | leoProbisky | 1 | |
ansible/ansible | python | 84,327 | An error occurred (FilterLimitExceeded) when calling the DescribeInstances operation: The maximum length for a filter value is 255 characters | ### Summary
When I try to launch a new EC2 instance using the `ec2_instance` module:
```yaml
- name: launch ec2 instance
ec2_instance:
state: "running"
region: "{{ region }}"
key_name: "{{ keypair }}"
network:
assign_public_ip: true
groups: "{{ group }}"
instance_type: "{{ ec2_instance_type }}"
vpc_subnet_id: "{{ vpc_subnet }}"
image_id: "{{ vpc_image }}"
```
it fails with the following error:
```
botocore.exceptions.ClientError: An error occurred (FilterLimitExceeded) when calling the DescribeInstances operation: The maximum length for a filter value is 255 characters
```
### Issue Type
Bug Report
### Component Name
ec2_instance
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.13]
config file = /home/user/configuration-management/ansible.cfg
configured module search path = ['/usr/share/ansible']
ansible python module location = /home/user/venv/ansible/v2.16/lib/python3.12/site-packages/ansible
ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections
executable location = /home/user/venv/ansible/v2.16/bin/ansible
python version = 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (/home/user/venv/ansible/v2.16/bin/python)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
```
Ubuntu 24.04.1 LTS
```
### Steps to Reproduce
```yaml
- name: launch ec2 instance
ec2_instance:
state: "running"
region: "{{ region }}"
key_name: "{{ keypair }}"
network:
assign_public_ip: true
groups: "{{ group }}"
instance_type: "{{ ec2_instance_type }}"
vpc_subnet_id: "{{ vpc_subnet }}"
image_id: "{{ vpc_image }}"
```
### Expected Results
Expect an new EC2 instance to get created in the specified VPC subnet using the specified AMI id.
### Actual Results
```console
botocore.exceptions.ClientError: An error occurred (FilterLimitExceeded) when calling the DescribeInstances operation: The maximum length for a filter value is 255 characters
fatal: [localhost]: FAILED! => changed=false
module_stderr: |-
Traceback (most recent call last):
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1731902914.5204961-1393456-107206627790774/AnsiballZ_ec2_instance.py", line 107, in <module>
_ansiballz_main()
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1731902914.5204961-1393456-107206627790774/AnsiballZ_ec2_instance.py", line 99, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1731902914.5204961-1393456-107206627790774/AnsiballZ_ec2_instance.py", line 47, in invoke_module
runpy.run_module(mod_name='ansible_collections.amazon.aws.plugins.modules.ec2_instance', init_globals=dict(_module_fqn='ansible_collections.amazon.aws.plugins.modules.ec2_instance', _modlib_path=modlib_path),
File "<frozen runpy>", line 226, in run_module
File "<frozen runpy>", line 98, in _run_module_code
File "<frozen runpy>", line 88, in _run_code
File "/tmp/ansible_ec2_instance_payload_h0fabx8h/ansible_ec2_instance_payload.zip/ansible_collections/amazon/aws/plugins/modules/ec2_instance.py", line 2418, in <module>
File "/tmp/ansible_ec2_instance_payload_h0fabx8h/ansible_ec2_instance_payload.zip/ansible_collections/amazon/aws/plugins/modules/ec2_instance.py", line 2390, in main
File "/tmp/ansible_ec2_instance_payload_h0fabx8h/ansible_ec2_instance_payload.zip/ansible_collections/amazon/aws/plugins/modules/ec2_instance.py", line 1723, in find_instances
File "/usr/lib/python3/dist-packages/botocore/paginate.py", line 348, in search
for page in self:
File "/usr/lib/python3/dist-packages/botocore/paginate.py", line 269, in __iter__
response = self._make_request(current_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/botocore/paginate.py", line 357, in _make_request
return self._method(**current_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (FilterLimitExceeded) when calling the DescribeInstances operation: The maximum length for a filter value is 255 characters
module_stdout: ''
msg: |-
MODULE FAILURE
See stdout/stderr for the exact error
rc: 1
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-11-18T04:44:16Z | 2024-12-02T14:00:02Z | https://github.com/ansible/ansible/issues/84327 | [
"bug",
"bot_closed",
"affects_2.16"
] | icicimov | 6 |
pytorch/pytorch | python | 149,124 | `torch.multinomial` fails under multi-worker DataLoader with a CUDA error: `Assertion cumdist[size - 1] > 0` failed | ### 🐛 Describe the bug
When using `torch.multinomial` in a Dataset/IterableDataset within a `DataLoader` that has multiple workers (num_workers > 0), an assertion error is thrown from a CUDA kernel:
```bash
pytorch\aten\src\ATen\native\cuda\MultinomialKernel.cu:112: block: [0,0,0], thread: [0,0,0] Assertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed.
```
A minimal reproducible example is below. Observe that if you switch `num_workers=2` to `num_workers=0`, the code runs successfully:
```py
import torch
import nltk
from torch.utils.data import IterableDataset, DataLoader
class SimpleDataset(IterableDataset):
def __init__(self, data, word2idx, window_size, num_neg_samples, neg_sampling_dist):
self.data = data
self.word2idx = word2idx
self.window_size = window_size
self.num_neg_samples = num_neg_samples
self.neg_sampling_dist = neg_sampling_dist.to("cuda")
def __iter__(self):
for line in self.data:
tokens = nltk.word_tokenize(line)
token_ids = [
self.word2idx.get(token, self.word2idx["<unk>"]) for token in tokens
]
# neg_sampling_dist = self.neg_sampling_dist.to("cuda")
# torch.cuda.synchronize()
neg_sampling_dist = self.neg_sampling_dist.clone().detach().to('cuda')
torch.cuda.synchronize()
for i, center in enumerate(token_ids):
start = max(0, i - self.window_size)
end = min(len(token_ids), i + self.window_size + 1)
for j in range(start, end):
if i != j:
context = token_ids[j]
negative_context = torch.multinomial(
neg_sampling_dist,
self.num_neg_samples,
replacement=True,
)
yield center, context, negative_context
if __name__ == "__main__":
vocab = {"<unk>": 1, "word": 2, "example": 3, "test": 4}
word2idx = {word: idx for idx, word in enumerate(vocab.keys())}
vocab_size = len(word2idx)
freq_arr = torch.zeros(vocab_size, dtype=torch.float32)
for word, idx in word2idx.items():
freq_arr[idx] = vocab[word]
neg_sampling_dist = freq_arr / freq_arr.sum()
data = ["This is a test sentence.", "Another example of a sentence."]
dataset = SimpleDataset(
data,
word2idx,
window_size=1,
num_neg_samples=2,
neg_sampling_dist=neg_sampling_dist,
)
dataloader = DataLoader(dataset, batch_size=2, num_workers=2)
for batch in dataloader:
print(batch)
```
### Versions
PyTorch version: 2.4.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 (10.0.19045 64 位)
GCC version: (GCC) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.2
Libc version: N/A
Python version: 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070
Nvidia driver version: 561.19
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
Manufacturer: GenuineIntel
Family: 198
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2592
MaxClockSpeed: 2592
L2CacheSize: 1536
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] microtorch==0.5.0
[pip3] minitorch==0.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] onnx==1.17.0
[pip3] onnx-tf==1.9.0
[pip3] optree==0.14.0
[pip3] pytorch-fid==0.3.0
[pip3] torch==2.4.1+cu118
[pip3] torch-fidelity==0.3.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchaudio==2.4.1+cu118
[pip3] torcheval==0.0.7
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.19.1+cu118
cc @andrewkho @divyanshk @VitalyFedyunin @dzhulgakov @albanD | open | 2025-03-13T14:23:35Z | 2025-03-13T15:21:17Z | https://github.com/pytorch/pytorch/issues/149124 | [
"triaged",
"module: data",
"module: python frontend"
] | yewentao256 | 0 |
nltk/nltk | nlp | 3,154 | 3.8.1: sphinx warnings `reference target not found` | On building my packages I'm using the `sphinx-build` command with the `-n` switch, which shows warnings about missing references. These are not critical issues.
You can look at fixes for this kind of issue in other projects:
https://github.com/RDFLib/rdflib-sqlalchemy/issues/95
https://github.com/RDFLib/rdflib/pull/2036
https://github.com/click-contrib/sphinx-click/commit/abc31069
https://github.com/frostming/unearth/issues/14
https://github.com/jaraco/cssutils/issues/21
https://github.com/latchset/jwcrypto/pull/289
https://github.com/latchset/jwcrypto/pull/289
https://github.com/pypa/distlib/commit/98b9b89f
https://github.com/pywbem/pywbem/pull/2895
https://github.com/sissaschool/elementpath/commit/bf869d9e
https://github.com/sissaschool/xmlschema/commit/42ea98f2
https://github.com/sqlalchemy/sqlalchemy/commit/5e88e6e8
| open | 2023-05-16T16:51:57Z | 2023-05-17T21:18:01Z | https://github.com/nltk/nltk/issues/3154 | [
"documentation",
"enhancement"
] | kloczek | 1 |
JaidedAI/EasyOCR | machine-learning | 327 | Error occurs when I install | error: package directory 'libfuturize\tests' does not exist | closed | 2020-12-10T07:09:44Z | 2022-03-02T09:24:10Z | https://github.com/JaidedAI/EasyOCR/issues/327 | [] | AndyJMR | 3
torchbox/wagtail-grapple | graphql | 264 | `N*M + 1` problem when fetching image renditions | When we fetch multiple image renditions using the `srcSet` parameter on image queries, we end up having an `N * M + 1` situation, where `N` is the total number of images to retrieve renditions for and `M` is the number of renditions to generate.
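To make the `N * M + 1` shape concrete (illustrative numbers, not measurements from Grapple):

```python
# One query for the page's image list, then one rendition query per
# (image, rendition) pair requested via `srcSet`.
def per_rendition_queries(n_images: int, m_renditions: int) -> int:
    return 1 + n_images * m_renditions

print(per_rendition_queries(10, 3))  # 31 queries for 10 images x 3 sizes
```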
I've opened a draft PR with a test which confirms this behaviour here: #263. | closed | 2022-08-31T10:21:50Z | 2022-10-31T14:10:00Z | https://github.com/torchbox/wagtail-grapple/issues/264 | [] | Tijani-Dia | 1 |
deepfakes/faceswap | machine-learning | 1,365 | あ | closed | 2023-12-30T08:39:20Z | 2023-12-30T13:54:34Z | https://github.com/deepfakes/faceswap/issues/1365 | [] | alicerei | 0 | |
2noise/ChatTTS | python | 279 | Are there any other recommended open-source TTS projects with streaming or low-latency responses? ChatTTS is really slow | I run ChatTTS on an A100, and generating long text takes a very long time. I later tried splitting the text and generating it segment by segment, but the speed is still not good. I previously implemented speech synthesis based on PaddleSpeech; streaming worked well, but the voice quality was rather poor. Are there any TTS recommendations with decent synthesis quality and acceptable speed? | closed | 2024-06-06T11:59:58Z | 2024-10-06T16:34:57Z | https://github.com/2noise/ChatTTS/issues/279 | [
"ad"
] | chwljy | 6 |
skypilot-org/skypilot | data-science | 4,859 | [Tests] Client server API version compatibility tests | With the new client-server architecture, we need to add tests that verify the functionality of clients and servers running different API versions:
* New client, old server
* Old client, new server | open | 2025-02-28T23:08:57Z | 2025-02-28T23:08:57Z | https://github.com/skypilot-org/skypilot/issues/4859 | [] | romilbhardwaj | 0
K3D-tools/K3D-jupyter | jupyter | 282 | Regression in 2.9.5 | There seems to be a regression in 2.9.5 when setting object opacity. See below.
**Expected: v2.9.4**
Green object opacity is set to 0.4

**2.9.5**

**Workaround**: If I toggle object visibility off and on again, it will be displayed correctly as expected
| closed | 2021-05-18T12:14:02Z | 2021-07-26T09:33:17Z | https://github.com/K3D-tools/K3D-jupyter/issues/282 | [
"Next release"
] | esalehim | 6 |
vitalik/django-ninja | django | 633 | Get response schema from type hints instead of decorator parameter | **Is your feature request related to a problem? Please describe.**
Would be a little cleaner to be able to derive the response schema from the function's type hints instead of requiring an additional parameter to the decorator. When using type hints (and mypy) pervasively, it would help to eliminate the possibility of a mismatch between the return type of the request handler and the value of the `response` parameter.
**Describe the solution you'd like**
Instead of:
```python
@api.post("/somewhere", response=ResponseSchema)
def somewhere(request, data: RequestBodySchema):
```
Cleaner:
```python
@api.post("/somewhere")
def somewhere(request: HttpRequest, data: RequestBodySchema) -> ResponseSchema:
...
return ResponseSchema(...)
```
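A rough sketch of how such a framework could pick up the schema (the helper name is hypothetical, not django-ninja's internals):

```python
from typing import get_type_hints

def response_model_from_hints(view_func):
    # Hypothetical helper: derive the response schema from the handler's
    # return annotation; returns None when the handler is unannotated.
    return get_type_hints(view_func).get("return")

def somewhere(request, data) -> dict:
    return {}

print(response_model_from_hints(somewhere))  # <class 'dict'>
```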
The same process of inspecting the function parameter types can also be used to determine the response annotation type. | closed | 2022-12-16T19:32:29Z | 2022-12-17T20:18:57Z | https://github.com/vitalik/django-ninja/issues/633 | [] | ghost | 1 |
omnilib/aiomultiprocess | asyncio | 5 | Check for existing asyncio event loops after forking | It seems there are some cases where there is already an existing, active event loop after forking, which causes an exception when the child process tries to create a new event loop. It should check for an existing event loop first, and only create one if there's no active loop available.
Target function: https://github.com/jreese/aiomultiprocess/blob/master/aiomultiprocess/core.py#L93
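A sketch of the proposed check (illustrative only — asyncio's loop-policy behavior has shifted across Python versions since this was filed):

```python
import asyncio

def ensure_event_loop() -> asyncio.AbstractEventLoop:
    # Reuse an inherited, still-open loop if the fork left one behind;
    # otherwise create and install a fresh one.
    try:
        loop = asyncio.get_event_loop()
    except RuntimeError:
        loop = None
    if loop is None or loop.is_closed():
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    return loop

print(ensure_event_loop().is_closed())  # False
```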
See #4 for context. | open | 2018-07-30T18:51:25Z | 2018-10-20T02:58:55Z | https://github.com/omnilib/aiomultiprocess/issues/5 | [
"enhancement",
"good first issue"
] | amyreese | 1 |
sepandhaghighi/samila | matplotlib | 77 | Add Notebook Examples | #### Description
We can have some `*.ipynb` files providing some examples from Samila use. | closed | 2021-11-22T15:02:19Z | 2022-03-21T14:28:32Z | https://github.com/sepandhaghighi/samila/issues/77 | [
"enhancement"
] | sadrasabouri | 0 |
graphistry/pygraphistry | jupyter | 1 | Replace NaNs with nulls since node cannot parse JSON with NaNs | closed | 2015-06-23T21:06:19Z | 2015-08-06T13:54:24Z | https://github.com/graphistry/pygraphistry/issues/1 | [
"bug"
] | thibaudh | 1 | |
plotly/dash | plotly | 2,541 | [BUG] Inconsistent/buggy partial plot updates using Patch | **Example scenario / steps to reproduce:**
- App has several plots, all of which have the same x-axis coordinate units
- When one plot is rescaled (x-axis range modified, e.g. zoom in), all plots should rescale the same way
- Rather than rebuild each plot, we can now implement a callback using `Patch`. This way, the heavy lifting of initially rendering the plots happens only once, and when a plot is rescaled, only the layout of each plot is updated.
```python
@callback(
    output=[Output(dict(type='plot', id=plot_id), 'figure', allow_duplicate=True) for plot_id in plot_ids],
    inputs=[Input(dict(type='plot', id=ALL), 'relayoutData')],
    prevent_initial_call=True
)
def update(inputs):
    plot_updates = [Patch() for _ in plot_ids]
    for p in plot_updates:
        p.layout.xaxis.range = get_range(dash.callback_context.triggered)  # implemented elsewhere
    return plot_updates
```
**Unexpected behavior:**
Upon rescale of a plot, only _some_ of the other plots actually update in the UI. Which plots update in response to a rescale is inconsistent and seemingly random (sometimes they all update and the app appears to work as I expect, sometimes only a few of them update, etc). I have verified that the callback/Patch _does_ update the range property in every figure data structure, so it seems like there's just some inconsistency with listening for the update (or maybe I made an error with how I've implemented this).
Happy to provide a screen recording if it would help.
```
dash 2.9.3
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
| closed | 2023-05-25T01:16:45Z | 2024-07-24T17:08:29Z | https://github.com/plotly/dash/issues/2541 | [] | abehr | 2 |
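The synchronized-rescale idea from the report can be checked without Dash by treating each figure as a plain dict, which is ultimately what a `Patch` mutates (the helper name is hypothetical and this is a sketch, not Dash internals):

```python
def sync_xaxis_range(figures, new_range):
    """Apply one x-axis range to every figure dict, mimicking what the
    per-figure Patch objects in the callback are meant to do."""
    for fig in figures:
        fig.setdefault("layout", {}).setdefault("xaxis", {})["range"] = list(new_range)
    return figures
```

If every figure dict ends up with the same range, the data side of the callback is correct, which points the inconsistency at the UI re-render rather than the update logic.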
ultralytics/yolov5 | machine-learning | 12,506 | Make number of channels in convolution configurable | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.
### Description
It would be great if the number of layer per convolution could be configurable. At the moment, it is hard-coded to 8.
Some ASICs might have a number of core different from 8, and therefore the optimal number of channel in a convolution should be different from a multiple of 8 to get the best performance.
At the moment, the only way to optimize the model for the number of cores is to edit from source the hard-coded value. Making it configurable would make it easier for users to optimize the model without digging into the code.
### Use case
Maximize usage of SoC for Deep Learning
### Additional
I would be happy to open a PR if you would want this feature.
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2023-12-14T13:33:36Z | 2024-01-14T01:14:42Z | https://github.com/ultralytics/yolov5/issues/12506 | [
"enhancement",
"Stale"
] | Corallo | 2 |
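The rounding the request wants configurable can be sketched as a small helper (YOLOv5 ships a similar `make_divisible` utility; the exact signature here is illustrative):

```python
import math

def make_divisible(x: int, divisor: int = 8) -> int:
    """Round a channel count up to the nearest multiple of `divisor`;
    exposing `divisor` instead of hard-coding 8 is the requested change."""
    return math.ceil(x / divisor) * divisor
```

A SoC with, say, 6 cores could then request channel counts rounded to multiples of 6 without editing the source.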
ray-project/ray | data-science | 51,280 | [Serve] slashes in deployment name cause actor failure | ### What happened + What you expected to happen
Creating a Serve deployment with a name containing a slash (such as `TextGenerationModel.options(name="huawei-noah/TinyBERT_General_4L_312D")`) leads to actor failures. The error occurs because the slash in the name is likely being used as a path separator in log files.
Error: `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/ray/session_2025-03-11_16-02-48_264712_45753/logs/serve/replica_default_huawei-noah/TinyBERT_General_4L_312D_eo3vqu7d.log'`
### Versions / Dependencies
ray, version 2.43.0
### Reproduction script
```python
from ray import serve
# The problematic model name containing a slash
model_with_slash = "huawei-noah/TinyBERT_General_4L_312D"
model_without_slash = "distilgpt2" # This one works fine
@serve.deployment
class MyModel:
def __init__(self, model_name):
self.model_name = model_name
def __call__(self, text):
return f"Generated text using {self.model_name}: {text}"
def main():
serve.start()
print("Creating deployment with model name that doesn't contain a slash (works):")
working_app = MyModel.options(name=model_without_slash).bind(model_without_slash)
print("\nCreating deployment with model name that contains a slash (fails):")
try:
failing_app = MyModel.options(name=model_with_slash).bind(model_with_slash)
serve.run(failing_app)
except Exception as e:
print(f"Error occurred: {type(e).__name__}")
print(f"Error message: {str(e)}")
if __name__ == "__main__":
main()
```
### Issue Severity
Low: It annoys or frustrates me. | open | 2025-03-11T23:05:06Z | 2025-03-11T23:05:06Z | https://github.com/ray-project/ray/issues/51280 | [
"bug",
"triage",
"serve"
] | crypdick | 0 |
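Until the log-path handling is fixed, one workaround is to sanitize the deployment name before passing it to `.options()` (the function name below is hypothetical, not part of Ray's API):

```python
def safe_deployment_name(name: str) -> str:
    """Replace path separators so the name can't be interpreted as a
    directory component when Serve builds its log-file path."""
    return name.replace("/", "--").replace("\\", "--")
```

Applied to the reproduction above, `MyModel.options(name=safe_deployment_name(model_with_slash))` avoids the `FileNotFoundError`.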
ultralytics/ultralytics | deep-learning | 19,374 | Are pretrained models available for classification, oriented bounding boxes, pose estimation, and instance segmentation, and where can they be downloaded? | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
Are pretrained models available yet for classification, oriented bounding boxes, pose estimation, and instance segmentation? Where can I download them?
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-22T15:05:25Z | 2025-02-23T13:26:10Z | https://github.com/ultralytics/ultralytics/issues/19374 | [
"OBB",
"segment",
"classify",
"pose"
] | CachCheng | 3 |
simple-login/app | flask | 1,197 | Inconsistent Cache-Control header | Please note that this is only for bug report.
For help on your account, please reach out to us at hi[at]simplelogin.io. Please make sure to check out [our FAQ](https://simplelogin.io/faq/) that contains frequently asked questions.
For feature request, you can use our [forum](https://github.com/simple-login/app/discussions/categories/feature-request).
For self-hosted question/issue, please ask in [self-hosted forum](https://github.com/simple-login/app/discussions/categories/self-hosting-question)
## Prerequisites
- [x] I have searched open and closed issues to make sure that the bug has not yet been reported.
## Bug report
**Describe the bug**
Static assets, e.g., `https://app.simplelogin.io/static/node_modules/font-awesome/css/font-awesome.css`, have 2 `Cache-Control` headers:
```
Cache-Control: public, max-age=43200
Cache-Control: no-store, no-cache, must-revalidate
```
The former asks the browser or the reverse proxy to cache those assets, whereas the latter says don't.
**Expected behavior**
Only one consistent `Cache-Control` header and it should allow caching static assets.
**Screenshots**
If applicable, add screenshots to help explain your problem.

| open | 2022-07-29T05:19:00Z | 2022-08-01T07:01:10Z | https://github.com/simple-login/app/issues/1197 | [] | proletarius101 | 1 |
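The expected behaviour (exactly one consistent `Cache-Control` value per response) can be sketched as a simple routing rule, using the values from the report; the function is illustrative, not SimpleLogin's code:

```python
def cache_control_for(path: str) -> str:
    """Emit exactly one Cache-Control value: cacheable for static assets,
    uncacheable for everything else."""
    if path.startswith("/static/"):
        return "public, max-age=43200"
    return "no-store, no-cache, must-revalidate"
```

Setting the header once at this layer (instead of letting the app and the reverse proxy each add their own) removes the contradiction.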
jofpin/trape | flask | 145 | can i use trape without google api key? | open | 2019-04-03T19:05:44Z | 2019-04-04T14:31:52Z | https://github.com/jofpin/trape/issues/145 | [] | cybergost | 1 | |
python-visualization/folium | data-visualization | 1,661 | Not able to add multiple markers on the Map | **Describe the bug**
For some reason `folium.Marker` only creates one marker; it should create multiple markers when called inside a for loop. I have tried multiple variations of the code: saving index.html inside the for loop, using `add_to(map)`, and creating a feature group and adding the markers to it, but none works.
**To Reproduce**
```
[ Include a code snippet that produces your bug. Make it standalone, we should be able to run it. ]
import folium
import pandas as pd
read_file = pd.read_csv ('Volcanoes.txt')
Latitude = read_file["LAT"]
Longitde = read_file["LON"]
State = read_file["LOCATION"]
Latitude_list = Latitude.to_list()
Longitude_list = Longitde.to_list()
state_list = State.to_list()
print(Latitude_list)
print(Longitude_list)
print(state_list)
m = folium.Map(location=[Latitude_list[0],Longitude_list[0]])
fg = folium.FeatureGroup(name="US MAP")
m.save("index.html")
tooltip = "Click me!"
for lat,lon, st in zip(Latitude_list,Longitude_list,state_list):
fg.add_child(folium.Marker([lat,lon], popup=st,tooltip=tooltip))
m.add_child(fg)
m.save("index.html")
```
[ Include a data sample or link to your data if necessary to run the code ]
```
OLCANX020,NUMBER,NAME,LOCATION,STATUS,ELEV,TYPE,TIMEFRAME,LAT,LON
509.000000000000000,1201-01=,Baker,US-Washington,Historical,3285.000000000000000,Stratovolcanoes,D3,48.7767982,-121.8109970
511.000000000000000,1201-02-,Glacier Peak,US-Washington,Tephrochronology,3213.000000000000000,Stratovolcano,D4,48.1118011,-121.1110001
513.000000000000000,1201-03-,Rainier,US-Washington,Dendrochronology,4392.000000000000000,Stratovolcano,D3,46.8698006,-121.7509995
515.000000000000000,1201-05-,St. Helens,US-Washington,Historical,2549.000000000000000,Stratovolcano,D1,46.1997986,-122.1809998
516.000000000000000,1201-04-,Adams,US-Washington,Tephrochronology,3742.000000000000000,Stratovolcano,D6,46.2057991,-121.4909973
517.000000000000000,1201-06-,West Crater,US-Washington,Radiocarbon,1329.000000000000000,Volcanic field,D7,45.8797989,-122.0810013
518.000000000000000,1201-07-,Indian Heaven,US-Washington,Radiocarbon,1806.000000000000000,Shield volcanoes,D7,45.9297981,-121.8209991
519.000000000000000,1202-01-,Hood,US-Oregon,Historical,3426.000000000000000,Stratovolcano,D3,45.3737984,-121.6910019
```
**Expected behavior**
A clear and concise description of what you expected to happen.
As explained
**Environment (please complete the following information):**
- Browser [e.g. chrome, firefox] Chrome
- Jupyter Notebook or html files? Jupyter Notebook and also HTML file
- Python version (check it with `import sys; print(sys.version_info)`) Python 3.8.10
- folium version (check it with `import folium; print(folium.__version__)`)folium 0.13.0
- branca version (check it with `import branca; print(branca.__version__)`) Branca 0.6.0
**Additional context**
Add any other context about the problem here.
**Possible solutions**
List any solutions you may have come up with.
folium is maintained by volunteers. Can you help making a fix for this issue?
| closed | 2022-11-23T01:13:38Z | 2022-11-23T11:38:50Z | https://github.com/python-visualization/folium/issues/1661 | [] | Crimson-Zero | 1 |
plotly/plotly.py | plotly | 4,086 | 5.13.1: test suite is failing in `_plotly_utils/tests/validators/test_integer_validator.py` unit | I'm packaging your module as an rpm package so I'm using the typical PEP517 based build, install and test cycle used on building packages from non-root account.
- `python3 -sBm build -w --no-isolation`
- because I'm calling `build` with `--no-isolation` I'm using during all processes only locally installed modules
- install .whl file in </install/prefix>
- run pytest with $PYTHONPATH pointing to sitearch and sitelib inside </install/prefix>
- build is performed in env which is *`cut off from access to the public network`* (pytest is executed with `-m "not network"`)
Looks like something is wrong when numpy 1.24.2 is used.
In the below output there are also some deprecation warnings.
Here is pytest output:
<details>
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-plotly-5.13.1-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-plotly-5.13.1-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -m 'not network' --ignore plotly/tests/test_optional/test_figure_factory/test_figure_factory.py
==================================================================================== test session starts ====================================================================================
platform linux -- Python 3.8.16, pytest-7.2.1, pluggy-1.0.0
rootdir: /home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly, configfile: pytest.ini
plugins: anyio-3.6.2
collected 2568 items / 1 error
========================================================================================== ERRORS ===========================================================================================
_________________________________________________________ ERROR collecting _plotly_utils/tests/validators/test_integer_validator.py _________________________________________________________
_plotly_utils/tests/validators/test_integer_validator.py:77: in <module>
@pytest.mark.parametrize("val", [-2, -123, np.iinfo(np.int).min])
/usr/lib64/python3.8/site-packages/numpy/__init__.py:305: in __getattr__
raise AttributeError(__former_attrs__[attr])
E AttributeError: module 'numpy' has no attribute 'int'.
E `np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
E The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
E https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
===================================================================================== warnings summary ======================================================================================
_plotly_utils/tests/validators/test_enumerated_validator.py:17
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/_plotly_utils/tests/validators/test_enumerated_validator.py:17: DeprecationWarning:
invalid escape sequence \d
_plotly_utils/tests/validators/test_enumerated_validator.py:29
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/_plotly_utils/tests/validators/test_enumerated_validator.py:29: DeprecationWarning:
invalid escape sequence \d
plotly/tests/test_core/test_subplots/test_make_subplots.py:1945
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1945: DeprecationWarning:
invalid escape sequence \(
plotly/tests/test_core/test_subplots/test_make_subplots.py:1947
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1947: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:1948
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1948: DeprecationWarning:
invalid escape sequence \(
plotly/tests/test_core/test_subplots/test_make_subplots.py:1957
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1957: DeprecationWarning:
invalid escape sequence \(
plotly/tests/test_core/test_subplots/test_make_subplots.py:1959
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1959: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:1960
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:1960: DeprecationWarning:
invalid escape sequence \(
plotly/tests/test_core/test_subplots/test_make_subplots.py:2002
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:2002: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:2006
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:2006: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:2010
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:2010: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_core/test_subplots/test_make_subplots.py:2014
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_core/test_subplots/test_make_subplots.py:2014: DeprecationWarning:
invalid escape sequence \.
plotly/tests/test_optional/test_utils/test_utils.py:18
/home/tkloczko/rpmbuild/BUILD/plotly.py-5.13.1/packages/python/plotly/plotly/tests/test_optional/test_utils/test_utils.py:18: FutureWarning:
pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================== short test summary info ==================================================================================
ERROR _plotly_utils/tests/validators/test_integer_validator.py - AttributeError: module 'numpy' has no attribute 'int'.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================== 13 warnings, 1 error in 11.62s ===============================================================================
```
</details>
Here is list of installed modules in build env
<details>
```console
Package Version
-------------------- --------------
anyio 3.6.2
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
asttokens 2.2.1
attrs 22.2.0
Babel 2.11.0
backcall 0.2.0
beautifulsoup4 4.11.2
bleach 6.0.0
Brlapi 0.8.4
build 0.9.0
cffi 1.15.1
charset-normalizer 3.0.1
comm 0.1.2
contourpy 1.0.7
cycler 0.11.0
debugpy 1.6.6
decorator 5.1.1
defusedxml 0.7.1
distro 1.8.0
entrypoints 0.4
exceptiongroup 1.0.0
executing 1.2.0
fastjsonschema 2.16.1
fonttools 4.38.0
gpg 1.18.0-unknown
html5lib 1.1
idna 3.4
importlib-metadata 6.0.0
importlib-resources 5.12.0
iniconfig 2.0.0
ipykernel 6.20.2
ipython 8.6.0
ipython-genutils 0.2.0
jedi 0.18.2
Jinja2 3.1.2
json5 0.9.12
jsonschema 4.17.3
jupyter_client 7.4.9
jupyter_core 5.1.3
jupyter-server 1.23.3
jupyterlab 3.5.1
jupyterlab-pygments 0.1.2
jupyterlab_server 2.18.0
kiwisolver 1.4.4
libcomps 0.1.19
louis 3.24.0
MarkupSafe 2.1.2
matplotlib 3.6.3
matplotlib-inline 0.1.6
mistune 2.0.5
nbclassic 0.4.8
nbclient 0.7.2
nbconvert 7.2.9
nbformat 5.7.3
nest-asyncio 1.5.6
notebook 6.5.2
notebook_shim 0.2.2
numpy 1.24.2
olefile 0.46
packaging 23.0
pandas 1.5.2
pandocfilters 1.5.0
parso 0.8.3
pep517 0.13.0
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.4.0
pip 22.3.1
pkgutil_resolve_name 1.3.10
platformdirs 2.6.0
pluggy 1.0.0
ply 3.11
prometheus-client 0.16.0
prompt-toolkit 3.0.36
psutil 5.9.2
ptyprocess 0.7.0
pure-eval 0.2.2
pycparser 2.21
Pygments 2.14.0
PyGObject 3.43.1.dev0
pyparsing 3.0.9
pyrsistent 0.19.3
pytest 7.2.1
python-dateutil 2.8.2
pytz 2022.4
pyzmq 24.0.1
requests 2.28.2
rpm 4.17.0
SciPy 1.8.1
Send2Trash 1.8.0
setuptools 65.6.3
six 1.16.0
sniffio 1.2.0
soupsieve 2.4
stack-data 0.6.2
tenacity 8.0.1
terminado 0.17.1
tinycss2 1.2.1
tomli 2.0.1
tornado 6.2
traitlets 5.8.1
urllib3 1.26.12
wcwidth 0.2.6
webencodings 0.5.1
websocket-client 1.5.1
wheel 0.38.4
xarray 2022.12.0
zipp 3.15.0
```
</details>
| closed | 2023-02-27T20:44:26Z | 2024-07-11T14:22:19Z | https://github.com/plotly/plotly.py/issues/4086 | [] | kloczek | 7 |
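The collection error comes from the removed `np.int` alias. A sketch of the fix for the failing parametrize line, assuming the test meant a platform/explicit integer type:

```python
import numpy as np

# np.iinfo(np.int).min fails on NumPy >= 1.24 because the alias was removed.
# Supported spellings are the builtin int, np.int_ (platform default),
# or an explicit width such as np.int64:
min_default = np.iinfo(np.int_).min
min_64 = np.iinfo(np.int64).min
```

In the test file this would mean writing `@pytest.mark.parametrize("val", [-2, -123, np.iinfo(np.int64).min])` instead of the deprecated alias.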
huggingface/datasets | pytorch | 7,387 | Dynamic adjusting dataloader sampling weight | Hi,
Thanks for your wonderful work! I'm wondering whether there is a way to dynamically adjust the sampling weight of each example in the dataset during training. Looking forward to your reply, thanks again. | open | 2025-02-10T03:18:47Z | 2025-03-07T14:06:54Z | https://github.com/huggingface/datasets/issues/7387 | [] | whc688 | 3 |
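Outside of datasets' own APIs, per-example sampling weights can be sketched with the standard library; calling the helper again with updated weights between epochs gives the dynamic behaviour asked about (the helper is illustrative, not a datasets feature):

```python
import random

def sample_indices(weights, k, seed=0):
    """Draw k dataset indices in proportion to per-example weights;
    call again with an updated weights list to change the distribution."""
    rng = random.Random(seed)
    return rng.choices(range(len(weights)), weights=weights, k=k)
```

The sampled indices can then be fed to `Dataset.select` or a custom sampler in the training loop.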
Guovin/iptv-api | api | 437 | After installing ffmpeg, version 1.4.9 still cannot filter the channel sources | 

May I ask why the channel sources still cannot be filtered even though ffmpeg is installed? | closed | 2024-10-22T10:52:42Z | 2024-10-22T12:26:36Z | https://github.com/Guovin/iptv-api/issues/437 | [
"invalid",
"question"
] | lzedwin | 4 |
hankcs/HanLP | nlp | 596 | Elasticsearch is extremely popular now; can anyone provide a tokenization plugin that combines Elasticsearch with HanLP? Thanks | closed | 2017-08-04T02:54:08Z | 2017-08-06T03:32:48Z | https://github.com/hankcs/HanLP/issues/596 | [
"duplicated"
] | hubiao1 | 1 | |
stanfordnlp/stanza | nlp | 599 | Create Amharic support | **Is your feature request related to a problem? Please describe.**
Request Amharic support which does not exist in stanza
**Describe the solution you'd like**
Add Amharic support
**Describe alternatives you've considered**
spaCy; still a work in progress for handling multi-word tokens.
https://github.com/explosion/spaCy/discussions/6648#discussioncomment-266720
**Additional context**
Would like to contribute more for this language
| closed | 2021-01-18T18:48:19Z | 2021-01-25T01:19:07Z | https://github.com/stanfordnlp/stanza/issues/599 | [
"enhancement",
"data issue"
] | yosiasz | 9 |
ranaroussi/yfinance | pandas | 1,234 | Use HTTPX instead of Requests | **Describe the problem**
Requests and other libraries without HTTP/2 support do not perform well and can run into bot protections in some cases.
**Describe the solution**
Replacing Requests with HTTPX (along with some request-header settings) can improve the approach and deliver better results.
**Additional context**
| closed | 2022-12-11T11:42:49Z | 2023-02-02T18:49:17Z | https://github.com/ranaroussi/yfinance/issues/1234 | [] | bergginu | 1 |