| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
psf/black | python | 4,495 | string-processing f-string debug expressions quotes changed when using conversion | **Describe the bug**
When using the unstable style, f-string quotes get incorrectly changed if the expression contains a conversion. This changes program behavior. As an example, `"" f'{""=!r}'` currently formats to `f"{''=!r}"`.
```pycon
>>> print("" f'{""=!r}')
""=''
>>> print(f"{''=!r}")
''=''
```
**To Reproduce**
Format this code
```py
"" f'{""=!r}'
```
[Playground link](https://black.vercel.app/?version=stable&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4ACXAFtdAD2IimZxl1N_Wk7-Dfg-d5s6AnMmL7B_WZZRlN_6aBCr8bFyFffmq3_Lm33Nvx2BWLIsbCFGc_IwSfk4cIkXOuIJOKWEI_BYc9LTyr-qC_3OnwK3-YSdkaLJQAAAALNG27MBBhCJAAF3mAEAAAAQ4riMscRn-wIAAAAABFla)
**Expected behavior**
The program behavior should be unchanged.
**Additional context**
I found this while investigating how to fix #4493/#4494. The issue comes in two parts:
- String concatenation allows for f-string quotes to be changed.
- This is the same issue that leads to both #4493 and #4494.
- Both still have the same issue even if the first string isn't an f-string.
- The regex used to detect quotes in debug f-string expressions is flawed
- Here's the current regex: `.*[\'\"].*(?<![!:=])={1}(?!=)(?![^\s:])`
- While looking at the [f-string lexical analysis section](https://docs.python.org/3/reference/lexical_analysis.html#f-strings), I noticed it is missing logic for the conversions (`!s`, `!r`, `!a`)
The second part is fairly easy to fix: the regex just needs to be updated.
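As a sketch of what that update could look like (a hypothetical fix, not a tested patch for black), the final lookahead can be extended so a conversion (`!s`, `!r`, `!a`) after the `=` also counts as a debug expression:

```python
import re

# Current pattern, as quoted above
current = r".*['\"].*(?<![!:=])={1}(?!=)(?![^\s:])"
# Hypothetical fix: also accept a !s / !r / !a conversion after the '='
fixed = r".*['\"].*(?<![!:=])=(?!=)(?=$|[\s:]|![sra])"

inner = '""=!r'  # inner text of f'{""=!r}'
print(bool(re.search(current, inner)))  # False: conversion not detected
print(bool(re.search(fixed, inner)))    # True
```

The negative lookbehind still rejects the `!=` operator, so `"x" != y` is not misclassified as a debug expression.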
The first part is what's giving me trouble. Based on the fact that [all the f-string formatting code is commented out](https://github.com/psf/black/blob/f54f34799b52dbe48c5f9d5f31ad0f9bf82d0fa5/src/black/linegen.py#L524-L555), I assume there has been some sort of mismatch in intent, since part of that is `normalize_fstring_quotes`. To me, this looks like a source of code duplication, since `normalize_fstring_quotes` will need to handle all these same edge cases. That leads to the easiest way to solve this, which is to disallow merges that change an f-string's quote. As for anything past that, I'm not sure what the best way is to prevent the code in merging and normalizing from desyncing.
| open | 2024-10-22T20:20:02Z | 2024-10-25T20:24:37Z | https://github.com/psf/black/issues/4495 | [
"T: bug",
"F: strings"
] | MeGaGiGaGon | 2 |
developmentseed/lonboard | data-visualization | 352 | Feature request: export to Protomaps? | Lonboard is amazing for exploratory data analysis - thank you!
However, I'm not sure how best to publish the results of an analysis.
Would it be feasible to export a lonboard map to the protomaps pmtiles format? (E.g. for embedding interactively on a static website) | closed | 2024-02-09T13:56:24Z | 2024-02-29T20:30:26Z | https://github.com/developmentseed/lonboard/issues/352 | [] | jaanli | 3 |
tfranzel/drf-spectacular | rest-api | 1,006 | Auto schema generation doesn't support pydantic schema | **Describe the bug**
I currently use `SQLModel` and `Pydantic` to support queries to my database and serialization of said data into json. Pydantic comes with a handy `schema` feature for all BaseModels that generates either a json or dict schema.
When I use a single model with no nested fields and pass the model's dict schema to the `@extend_schema` decorator as one of the response params, e.g. `responses={200: my_model.schema()}`, this works fine. However, if I have a model with another model as one of its fields (see [here](https://docs.pydantic.dev/1.10/usage/schema/#schema-customization) for an example), the schema generated from that isn't interpreted correctly by `drf-spectacular`.
Specifically, the problem seems to be that `Pydantic` generates its documentation for nested models inside of a `definitions` key. I'm wondering if support could be added to `drf-spectacular` to move anything declared under the `definitions` keyword to the `components/schema` section of the api docs. `Pydantic` does add support for customizing the `$ref` location/prefix for models referenced elsewhere, so this wouldn't be an issue.
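A rough sketch of the remapping being asked for (hypothetical helper, not drf-spectacular API): lift everything under `definitions` out of the model schema and rewrite the `$ref` prefixes to the components location:

```python
import json

def lift_definitions(schema: dict) -> tuple[dict, dict]:
    """Move Pydantic's `definitions` into a components-style dict and
    rewrite $ref values to point at #/components/schemas/."""
    definitions = schema.pop("definitions", {})
    text = json.dumps(schema)
    text = text.replace("#/definitions/", "#/components/schemas/")
    return json.loads(text), definitions

schema = {
    "title": "MainModel",
    "properties": {"foo": {"$ref": "#/definitions/FooBarModel"}},
    "definitions": {"FooBarModel": {"type": "object"}},
}
fixed, components = lift_definitions(schema)
print(fixed["properties"]["foo"]["$ref"])  # #/components/schemas/FooBarModel
print(list(components))                    # ['FooBarModel']
```

The returned `components` dict would then be merged into the document's `components/schemas` section.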
| closed | 2023-06-18T12:28:07Z | 2023-07-17T11:27:52Z | https://github.com/tfranzel/drf-spectacular/issues/1006 | [] | sydney-runkle | 29 |
autogluon/autogluon | data-science | 4,427 | Fix Colab and Kaggle Import on Source Install | (Help Wanted) If anyone knows how to fix the issue below so we no longer need to restart the runtime after a source install, please let us know.
Currently on Kaggle and Colab, the following fails:
```
!git clone https://github.com/autogluon/autogluon
!cd autogluon && pip install -e common/
from autogluon.common import FeatureMetadata
```
Exception:
```
ImportError: cannot import name 'FeatureMetadata' from 'autogluon.common' (unknown location)
```
```
import sys
print(sys.path)
# --> autogluon.common is missing in sys.path
```
On Kaggle, this works:
```
!git clone https://github.com/autogluon/autogluon
!cd autogluon && pip install -e common/
# Run -> Restart & Clear Cell Outputs -> Continue to import statement
from autogluon.common import FeatureMetadata
```
On Colab, this works:
```
!git clone https://github.com/autogluon/autogluon
!cd autogluon && pip install -e common/
# Runtime -> Restart session -> Continue to import statement
from autogluon.common import FeatureMetadata
```
Note: You can replace `pip install -e common/` with `./full_install.sh` which will get the same end result.
### Potential Solutions
#### Remove `-e` (editable install flag)
The following works:
```
!git clone https://github.com/autogluon/autogluon
!cd autogluon && pip install common/
from autogluon.common import FeatureMetadata
```
This removes the `-e` (editable install) flag from the common install. I'm unsure what drawbacks this has.
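A possible explanation (an assumption, not verified): editable installs register the package through a `.pth`/import-hook file in `site-packages`, which is only processed when the `site` module initializes at interpreter startup, which would be why restarting the runtime also fixes it. If that is the cause, re-running site initialization in the live kernel might avoid the restart:

```python
import site
import sys

# Re-process .pth files that were added after the kernel started.
# site.main() is the normal startup path-configuration entry point.
before = len(sys.path)
site.main()
after = len(sys.path)
print(before, "->", after)
```

If this works, it could be run in the notebook cell right after `pip install -e common/`, before the import.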
### Related Links
- https://stackoverflow.com/questions/57838013/modulenotfounderror-after-successful-pip-install-in-google-colaboratory | open | 2024-08-24T04:21:00Z | 2025-03-07T20:36:25Z | https://github.com/autogluon/autogluon/issues/4427 | [
"bug",
"help wanted",
"env: kaggle",
"install",
"env: colab",
"priority: 0"
] | Innixma | 1 |
polakowo/vectorbt | data-visualization | 683 | Exits do not trigger consistently when using limit orders with from_signals | To replicate, let's say I have a 5min dataframe for AAPL where the index includes ETH session (04:00 to 19:55 US/Eastern), and I have a column 'vwap' representing the anchored VWAP, which is np.nan outside of the RTH session.
Generate an entry at 9:30, and an exit at 15:30. Without limit orders, this works fine:
```python
pf = vbt.Portfolio.from_signals(
    df['close'],
    entries = df.index.time == dt.time(9,30),
    exits = df.index.time >= dt.time(15,30),
    freq='5min'
)
pf.trades.plot()
```

You can see that on each day, each entry is paired with a 15:30 exit.
With a limit order, exits are inconsistently enforced:
```python
pf = vbt.Portfolio.from_signals(
    df['close'],
    entries = df.index.time == dt.time(9,30),
    exits = df.index.time >= dt.time(15,30),
    price=df['vwap'],
    order_type="limit",
    freq='5min'
)
pf.trades.plot()
```

The 1st and 5th entries exited on the first bar of the next day. The last entry exited at 9:30 on the next day. Some days don't have entries since VWAP isn't hit, which is correct.
I thought the issue might be with open limit orders, so I tried playing around with different variations of limit_tif, limit_expiry and time_delta_format to get the orders to expire ASAP, but none of that worked.
For reference, my VWAP column looks like this for a single day:

| closed | 2024-01-17T22:01:55Z | 2024-01-18T14:13:33Z | https://github.com/polakowo/vectorbt/issues/683 | [] | rxhh | 2 |
browser-use/browser-use | python | 832 | buildDomTree.js may not use getEventListeners | ### Bug Description
I've found elements like this: `<div data-v-555aab2b="">I Accept</div>` that lack clear interactive features, but they do have click events. When I try to get the listeners using the Chrome DevTools API through JavaScript code, it returns an empty value.
This should be a relatively common situation. It's possible to retrieve these listeners within the F12 developer mode, but playwright page.evaluate lacks the necessary permissions. This fundamental issue can be resolved through the CDP (Chrome DevTools Protocol) or other means.
```javascript
// Helper function to safely get event listeners
function getEventListeners(el) {
  try {
    // Try to get listeners using Chrome DevTools API
    return window.getEventListeners?.(el) || {};
  } catch (e) {
    // (fallback shown in the screenshot below)
  }
}
```
<img width="544" alt="Image" src="https://github.com/user-attachments/assets/1e341f9a-305d-40a5-9d47-ba86e5542c93" />
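To illustrate the CDP route mentioned above, here is a hypothetical helper (not part of buildDomTree.js) that asks the browser for real listeners via the `DOMDebugger.getEventListeners` CDP method, which works outside the DevTools console. It assumes a Playwright async `page` object:

```python
import asyncio

async def get_click_listeners(page, selector: str):
    """Hypothetical sketch: fetch an element's click listeners over CDP,
    instead of window.getEventListeners (DevTools-console only).
    Assumes `page` is a Playwright async Page."""
    cdp = await page.context.new_cdp_session(page)
    # Resolve the element to a CDP RemoteObject so we can query it
    remote = await cdp.send(
        "Runtime.evaluate",
        {"expression": f"document.querySelector({selector!r})"},
    )
    object_id = remote["result"]["objectId"]
    listeners = await cdp.send(
        "DOMDebugger.getEventListeners", {"objectId": object_id}
    )
    return [l for l in listeners["listeners"] if l["type"] == "click"]

print(asyncio.iscoroutinefunction(get_click_listeners))  # True
```

Something like this could feed the interactivity check without relying on `window.getEventListeners` being present in the page context.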
### Reproduction Steps
Just use the html code to test the function.
```html
<!DOCTYPE html>
<html>
<head>
  <title>Clickable Div Example</title>
  <style>
    div[data-v-555aab2b] {
      border: 1px solid #ccc;
      padding: 10px;
      cursor: pointer;
    }
  </style>
</head>
<body>
  <div data-v-555aab2b="">I Accept</div>
  <script>
    const divElement = document.querySelector('div[data-v-555aab2b]');
    divElement.addEventListener('click', () => {
      console.log('Div with data-v-555aab2b clicked!');
      alert('Div with data-v-555aab2b clicked!');
    });
  </script>
</body>
</html>
```
### Code Sample
```python
url = "file:///test.html"
initial_actions = [
    {'go_to_url': {'url': url}},
]
agent = Agent(
    task="test",
    initial_actions=initial_actions,
)
agent.run()
```
### Version
pip 0.1.37
### LLM Model
Gemini 1.5 Pro
### Operating System
macos 13
### Relevant Log Output
```shell
``` | closed | 2025-02-23T10:42:48Z | 2025-03-11T02:22:17Z | https://github.com/browser-use/browser-use/issues/832 | [
"bug"
] | awes61 | 3 |
FujiwaraChoki/MoneyPrinterV2 | automation | 15 | Error when running on Mac | 
Hi, I got an error when running on Mac.
What should I do? | closed | 2024-02-19T05:28:59Z | 2024-12-31T08:26:23Z | https://github.com/FujiwaraChoki/MoneyPrinterV2/issues/15 | [] | s4-hub | 4 |
biolab/orange3 | scikit-learn | 6,129 | Orange installed from conda/pip does not have an icon (on Mac) | ### Discussed in https://github.com/biolab/orange3/discussions/6122
<div type='discussions-op-text'>
<sup>Originally posted by **DylanZDD** September 4, 2022</sup>
<img width="144" alt="Screen Shot 2022-09-04 at 12 26 04" src="https://user-images.githubusercontent.com/44270787/188297386-c463907c-9e7f-45ea-b46f-b0ad9b6f8f23.png">
<img width="1431" alt="Screen Shot 2022-09-04 at 12 26 13" src="https://user-images.githubusercontent.com/44270787/188297398-7584db2e-be45-4b6b-839a-f20de5185e50.png">
</div>
A quick search led me here:
https://stackoverflow.com/questions/33134594/set-tkinter-python-application-icon-in-mac-os-x
I think other platforms have similar problems. | closed | 2022-09-05T11:49:28Z | 2023-01-20T08:39:42Z | https://github.com/biolab/orange3/issues/6129 | [
"bug",
"snack"
] | markotoplak | 0 |
collerek/ormar | fastapi | 1,031 | How to query a string in all table columns? | ### Discussed in https://github.com/collerek/ormar/discussions/911
<div type='discussions-op-text'>
<sup>Originally posted by **lucashahnndev** October 29, 2022</sup>
I apologize if something is wrong because I'm Brazilian and I'm using a translator.
I searched the documentation and didn't find anything similar, or at least I didn't understand it! I would like to make a query without knowing which column to search, i.e., run the query against all columns of the table.</div>
amidaware/tacticalrmm | django | 1,182 | Stuck at Mesh Central not ready yet | **Server Info (please complete the following information):**
- OS: Ubuntu 20.04
- Browser: Chrome
- RMM Version (as shown in top left of web UI): Fresh install (Latest Version)
**Installation Method:**
- Standard
**Describe the bug**
Fresh install stuck at Mesh Central not ready yet
**Screenshots**

**Additional context**
I searched and in most cases the cause was said to be insufficient RAM. My server has 16 GB of RAM and 500 GB of storage.
Any idea what I could try?
| closed | 2022-06-22T17:39:22Z | 2022-06-22T18:17:38Z | https://github.com/amidaware/tacticalrmm/issues/1182 | [] | Aghiad90 | 3 |
pennersr/django-allauth | django | 4,068 | WebAuthn Login POST API - Handling X-Session-Token for Initial Login Request (App Usage) | I'm encountering an issue with the WebAuthn login process when using the app client of the API in Django-Allauth. According to the documentation, the X-Session-Token header is required when using the app client for API calls.
While I am successfully able to perform actions such as adding and deleting WebAuthn tokens for a user account, I'm running into a problem when trying to log in using a Passkey. Specifically, the POST request to the WebAuthn login endpoint is returning a 400 error with the message Incorrect code.
Error Details:
```
{
  "url": "/api/_allauth/app/v1/auth/webauthn/login",
  "statusCode": 400,
  "statusMessage": "Bad Request",
  "data": {
    "status": 400,
    "errors": [
      {
        "message": "Incorrect code.",
        "code": "incorrect_code",
        "param": "credential"
      }
    ]
  }
}
```
Debugging Information
After extensive debugging, I've determined that the issue likely relates to the X-Session-Token header. Here's what I observed:
1. Login with Password Flow:
- Logging in with a password and being redirected to /authenticate/webauthn allows successful login using the WebAuthn passkey.
- In this scenario, I receive a 401 response from the login attempt, which includes a session_token in the error response. I then set this session_token and proceed to log in successfully.
2. Direct WebAuthn Login Flow:
- When attempting to log in directly using WebAuthn (without a password first), the login GET method does not return a session_token, and hence I cannot include the X-Session-Token in the initial POST request.
- This is the GET response from `webauthn/login`
```
{
  status: 200,
  data: {
    request_options: {
      publicKey: {
        challenge: "QTc7Vf1NF2iI-LlK7A1ZlaIH4GwdcjPVvgrVxV3gbtY",
        rpId: "webside.gr",
        allowCredentials: [],
        userVerification: "preferred"
      }
    }
  }
}
```
3. Workaround:
- By logging in using a password first, then proceeding with WebAuthn without refreshing the browser (SPA), I can successfully log in using the passkey.
Main Question
How should the X-Session-Token be handled in the initial WebAuthn login API call? Since this is the first API call I make, there's no session token available to include in the request headers. Is there a specific flow or additional API call I should make to obtain this token, or is this a potential bug in the library?
Code Example
Here is the relevant code snippet:
```
async function onSubmit() {
  try {
    loading.value = true
    const optResp = await getWebAuthnRequestOptionsForLogin()
    const jsonOptions = optResp?.data.request_options
    if (!jsonOptions) {
      throw new Error('No creation options')
    }
    const options = parseRequestOptionsFromJSON(jsonOptions)
    const credential = await get(options)
    session.value = await loginUsingWebAuthn({
      credential,
    })
    await performPostLoginActions()
  }
  catch (error) {
    console.error('=== Error ===', error)
    toast.add({
      title: t('common.error.default'),
      color: 'red',
    })
  }
  finally {
    await finalizeLogin()
  }
}
``` | closed | 2024-08-23T15:31:03Z | 2024-08-23T22:48:17Z | https://github.com/pennersr/django-allauth/issues/4068 | [
"Unconfirmed"
] | vasilistotskas | 3 |
jmcnamara/XlsxWriter | pandas | 266 | Issues hyperlinking | Hello,
I am using XlsxWriter to export some information into an XLSX file.
I am using the latest version of XlsxWriter and Python 2.7.6.
The issue comes when I print out the following information into a field of the excel file:
_http://www.test.com/…/test-my-stuff-well…/ blablabla blablabla_
As you can see, there is a URL and text. So when XlsxWriter tries to hyperlink the URL, an error occurs that makes the Excel file creation fail or forces the Excel app to recover the file when opening it.
My solution was to disable the hyperlinking using:
_workbook = xlsxwriter.Workbook(output, {'strings_to_urls': False})_
Is there a way to handle this situation without disabling the hyperlinking?
| closed | 2015-06-12T15:45:13Z | 2015-06-15T14:17:49Z | https://github.com/jmcnamara/XlsxWriter/issues/266 | [
"question"
] | clopezcapo | 7 |
microsoft/JARVIS | deep-learning | 215 | How to obtain the metadata file p0_models.jsonl for Hugging Face LLMs under the data directory | How can I obtain the metadata file p0_models.jsonl (under the data directory) for the LLMs on Hugging Face? The number of models on Hugging Face has grown a lot, yet after filtering this file contains only 673. Could you export the latest version of this data? | open | 2023-06-27T08:23:01Z | 2023-06-27T08:23:01Z | https://github.com/microsoft/JARVIS/issues/215 | [] | elven2016 | 0 |
mirumee/ariadne | api | 294 | Error importing module (circular import) when the local script is called graphql.py | **Description**: when trying to import the ariadne module (either as `import ariadne` or `from ariadne import ...`) when your own module file is named `graphql.py`, it causes the following error:
```
Exception has occurred: ImportError
cannot import name 'gql' from partially initialized module 'ariadne' (most likely due to a circular import) (/usr/local/lib/python3.8/site-packages/ariadne/__init__.py)
File "/workspaces/project/graphql.py", line 1, in <module>
from ariadne import gql
File "/workspaces/project/graphql.py", line 1, in <module>
from ariadne import gql
```
**Workaround**: Renaming your file from `graphql.py` to anything else resolves this error. | closed | 2020-01-15T15:39:08Z | 2020-01-15T15:58:10Z | https://github.com/mirumee/ariadne/issues/294 | [] | v1b1 | 3 |
vaexio/vaex | data-science | 1,275 | [QUESTION] How to handle dates in filters or how to use boolean arrays in select? | **How to handle dates in filters or how to use boolean arrays in select?**
I'd like to filter by a column with dates, given some date range or a boolean array. Using pandas I'm able to do either; however, with vaex I either get back a `NoneType` (with boolean arrays) or get an error message when directly comparing vaex's date column with a date.
I think for me the option to just throw in a boolean array will be more useful, but I'm also wondering if I can use date comparison directly with vaex.
**Example**
Given a Dataframe
```
import numpy as np
import pandas as pd
np.random.seed(0)
arrays = np.concatenate(np.array([range(10), range(10)]))
df = pd.DataFrame(
    pd.date_range("20200101", freq="D", periods=arrays.shape[0]),
    index=arrays,
    columns=["date"],
)
df.index.names = ["id"]
df.head(3)
```
returns
```
date
id
1 2020-01-01
2 2020-01-02
3 2020-01-03
```
I try to select a date range from this data, using Pandas I can do
```
pdsel = df.date < pd.Timestamp("20200103")
df.loc[pdsel]
```
returns
```
date
id
1 2020-01-01
2 2020-01-02
```
now moving on to vaex
```
import vaex
vaex_df = vaex.from_pandas(df, copy_index=True)
```
first, just using the boolean output from `pdsel` returns `None`
```
type(vaex_df[pdsel])
```
returns
```
NoneType
```
and finally using vaex itself results in a lengthy error message
```
vaex_df[vaex_df.date < pd.Timestamp("20200103")]
```
returns (just a tiny bit of the whole error message)
```
[...]
File "<string>", line 1
(date < 2020-01-03 00:00:00)
^
SyntaxError: leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers
File "<string>", line 1
(date < 2020-01-03 00:00:00)
^
SyntaxError: leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers
[...]
``` | closed | 2021-03-20T22:52:24Z | 2021-03-21T14:18:23Z | https://github.com/vaexio/vaex/issues/1275 | [] | Eisbrenner | 4 |
zama-ai/concrete-ml | scikit-learn | 875 | Significant Accuracy Decrease After FHE Execution | ## Summary
What happened/what you expected to happen?
## Description
We've observed significant accuracy discrepancies when running our model with different FHE settings. The original PyTorch model achieves 63% accuracy. With FHE disabled, the accuracy drops to 50%, and with FHE execution enabled, it further decreases to 32%. The compilation uses a dummy input of shape (1, 100) with random values (numpy.random.randn(1, 100).astype(numpy.float32)). Since the accuracy with FHE disabled matches the quantized model's accuracy, it suggests that the accuracy loss from 63% to 50% is likely due to quantization. However, the substantial drop to 32% when enabling FHE execution indicates a potential issue with the FHE implementation or configuration that requires further investigation.
- versions affected: concrete-ml 1.6.1
- python version: 3.10
- config (optional: HW, OS):
- workaround (optional): if you’ve a way to workaround the issue
- proposed fix (optional): if you’ve a way to fix the issue
Step by step procedure someone should follow to trigger the bug:
<details><summary>minimal POC to trigger the bug</summary>
<p>
```python
print("Minimal POC to reproduce the bug")
```




And the above screenshots show my compile process and the process of running the compiled model on the encrypted data.
I guess that the issue is related to FHE execution.
And I wrote this file based on this open source code: https://github.com/zama-ai/concrete-ml/blob/main/docs/advanced_examples/ClientServer.ipynb
</p>
</details>
| open | 2024-09-19T12:50:45Z | 2025-03-04T15:02:35Z | https://github.com/zama-ai/concrete-ml/issues/875 | [
"bug"
] | Sarahfbb | 7 |
pywinauto/pywinauto | automation | 1,201 | How to use Pre-supplied Tests? | I tried the tests like below:
```
truncation.TruncationTest(app.windows(visible_only=True))
missalignment.MissalignmentTest(app.windows(visible_only=True))
```
but they don't seem to work as expected; I do have some controls with truncation, yet they report nothing.
I have also tried your examples/notepad_fast.py (where I found code about tests)
`bugs = app.PageSetupDlg.run_tests('Truncation')`
It does not seem to run as expected; I mean it never runs the code in the `if win.ref:` branch.
And what is this **win.ref** used for? | open | 2022-04-06T07:37:11Z | 2022-05-12T14:04:49Z | https://github.com/pywinauto/pywinauto/issues/1201 | [
"question"
] | saimadao | 2 |
mlfoundations/open_clip | computer-vision | 171 | Fp16 or Fp32 in the inference phase? | Hello~
Thanks for the great work!
And I'd like to know the impact of using fp16 rather than fp32 in the inference phase (e.g., for ViT-H-14). I find that all the models in OpenAI CLIP are converted to fp16 automatically. So does it matter here?
Thank you~
Zhiliang. | closed | 2022-09-21T17:42:04Z | 2022-09-21T18:23:56Z | https://github.com/mlfoundations/open_clip/issues/171 | [] | pengzhiliang | 2 |
microsoft/JARVIS | pytorch | 72 | I seems to have deploy the server successfully, but a 404 error is returned in the web API | Dear Jarvis Team
I'm new to posting issues on Github, Please excuse any unclear descriptions.
I followed the Guidance for server as follow:
```
# setup env
cd server
conda create -n jarvis python=3.8
conda activate jarvis
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt
# download models
cd models
sh download.sh # required when `inference_mode` is `local` or `hybrid`
# run server
cd ..
python models_server.py --config config.yaml # required when `inference_mode` is `local` or `hybrid`
python awesome_chat.py --config config.yaml --mode server # for text-davinci-003
```
I believe I successfully downloaded the models, and I tried running the command: `python models_server.py --config config.yaml`
I got the following output:

I changed the config.yaml file as follows (mainly from localhost to 0.0.0.0, because I run it on a server machine):

But when I tried to test the model according to the guidance, I got a 404 error in return. The guidance shows that I can send a request to the port to "communicate" with my deployed model on the server, as follows:
```
# request
curl --location 'http://localhost:8004/tasks' \
--header 'Content-Type: application/json' \
--data '{
"messages": [
{
"role": "user",
"content": "based on pose of /examples/d.jpg and content of /examples/e.jpg, please show me a new image"
}
]
}'
# response
[{"args":{"image":"/examples/d.jpg"},"dep":[-1],"id":0,"task":"openpose-control"},{"args":{"image":"/examples/e.jpg"},"dep":[-1],"id":1,"task":"image-to-text"},{"args":{"image":"<GENERATED>-0","text":"<GENERATED>-1"},"dep":[1,0],"id":2,"task":"openpose-text-to-image"}]
```
So I wrote a small Python script:
```
import requests
import json
url = 'http://localhost:8005/hugginggpt'
headers = {'Content-Type': 'application/json'}
data = {
    "messages": [
        {
            "role": "user",
            "content": "please generate a video based on \"Spiderman is surfing\""
        }
    ]
}
response = requests.post(url, headers=headers, data=json.dumps(data))
if response.status_code == 200:
    result = json.loads(response.content)
    print(result)
else:
    print('Request failed with status code: %d' % response.status_code)
```
And I got the following error:
```
Request failed with status code: 404
```
Similarly, I get the same message when I use my browser to access port 8005 of the ip of the server where I have deployed jarvis:

Is there anything I forgot to do for a successful deployment?
Many thanks!
| closed | 2023-04-06T13:17:07Z | 2023-04-06T23:07:23Z | https://github.com/microsoft/JARVIS/issues/72 | [] | Daeda1used | 13 |
collerek/ormar | fastapi | 789 | Joins result in an error with tables that have a period in their name | **Describe the bug**
https://github.com/collerek/ormar/blob/a27d7673a5354429bef4158297d76a58522c1579/ormar/queryset/join.py#L112
Because `SqlJoin._on_clause` expects a `from_clause` string in the format of "table_name.column_name", if your table name has a period in it, the string will be chunked into more than two parts, causing an error when it cannot set the two variables.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a model with a foreign key and a tablename with a period in it
2. Attempt to filter data over the foreign key relation, which should result in a join and then in an error
```ValueError: too many values to unpack (expected 2) ```
**Expected behavior**
Ormar handles the period without issue.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Versions (please complete the following information):**
- Database backend used: MySQL and SQLite
- Python version: 3.10
- `ormar` version: 0.11.2
- `pydantic` version: 1.9.1
**Solution**
One solution I would propose, since users generally don't use periods in their column names, is to reserve the last chunk for the column name and do something like this:
```
parts = from_clause.split(".")
table = ".".join(parts[:-1])
column = parts[-1]
```
However, a more robust solution would be to just keep table name and column name separate strings until they are run through `quotter()`.
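A quick sanity check of the split-based approach on a table name containing a period (illustrative only, not ormar's actual code path):

```python
def split_on_clause(from_clause: str):
    """Split 'table.name.column' so only the last chunk is the column."""
    parts = from_clause.split(".")
    table = ".".join(parts[:-1])
    column = parts[-1]
    return table, column

print(split_on_clause("my.table.id"))      # ('my.table', 'id')
print(split_on_clause("plain_table.name")) # ('plain_table', 'name')
```

`str.rsplit(".", 1)` would express the same intent in one call.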
Let me know if there is a preferred method to solve this and I'll open a pull request. | closed | 2022-08-19T19:53:51Z | 2023-09-06T20:13:04Z | https://github.com/collerek/ormar/issues/789 | [
"bug"
] | pmdevita | 2 |
LAION-AI/Open-Assistant | python | 3,749 | Potential Information Leakage | In the source code, sensitive information like `api_key` is inserted into the log. It is a potential security issue as described in [cwe-532](https://cwe.mitre.org/data/definitions/532.html). The `api_key` could be redacted.
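A minimal redaction sketch (a hypothetical helper, not from the codebase) that masks the key before it reaches the log:

```python
def redact(value: str, keep: int = 4) -> str:
    """Mask all but the last `keep` characters of a secret."""
    if not value:
        return value
    return "*" * max(len(value) - keep, 0) + value[-keep:]

api_key = "sk-1234567890abcdef"
print(f"created api_key {redact(api_key)}")  # created api_key ***************cdef
```

The logging calls linked below could then log `redact(api_key)` instead of the raw value.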
The leakage could happen in
[1](https://github.com/LAION-AI/Open-Assistant/blob/f1e6ed9526f5817531f3ab85441a40b3671ddccb/backend/oasst_backend/api/v1/admin.py#L44)
[2](https://github.com/LAION-AI/Open-Assistant/blob/f1e6ed9526f5817531f3ab85441a40b3671ddccb/inference/server/main.py#L119)
[3](https://github.com/LAION-AI/Open-Assistant/blob/f1e6ed9526f5817531f3ab85441a40b3671ddccb/inference/server/oasst_inference_server/worker_utils.py#L58) | open | 2024-03-11T14:52:20Z | 2024-03-11T14:52:20Z | https://github.com/LAION-AI/Open-Assistant/issues/3749 | [] | nevercodecorrect | 0 |
ansible/ansible | python | 84,495 | "creates" is ignored when multiple files are specified, and the files are outside current directory | ### Summary
The "creates" constraint in `shell` behaves differently depending on where the listed files are located.
When a single file is given, it appears to behave as expected, i.e., if the file already exists then the task is not performed.
When multiple files are given and they are in the task's working directory, the behavior is as above.
When multiple files are given and they are outside the task's working directory, the task runs even though the files already exist.
See playbook in section "Steps to reproduce". Run that playbook twice to reproduce.
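Until this is resolved, a possible workaround (a sketch, not tested here) is to check the files explicitly with `stat` and guard the shell task with `when`:

```yaml
- name: Check whether the target files already exist
  ansible.builtin.stat:
    path: "{{ item }}"
  loop:
    - /home/ubuntu/a
    - /home/ubuntu/b
  register: creates_stat

- name: Create multiple files in ~ only when at least one is missing
  ansible.builtin.shell: |
    touch /home/ubuntu/a /home/ubuntu/b
  when: creates_stat.results | rejectattr('stat.exists') | list | length > 0
```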
### Issue Type
Bug Report
### Component Name
shell
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.3]
config file = None
configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/ubuntu/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
```
### OS / Environment
Ubuntu 24.04
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts:
    - '127.0.0.1'
  tasks:
    - name: success - Create single file in ~
      args:
        creates: /home/ubuntu/a
      ansible.builtin.shell: |
        touch /home/ubuntu/a
    - name: success - Create multiple files in current directory
      args:
        creates:
          - a
          - b
      ansible.builtin.shell: |
        touch a b
    - name: failure - Create multiple files in ~
      args:
        creates:
          - /home/ubuntu/a
          - /home/ubuntu/b
      ansible.builtin.shell: |
        touch /home/ubuntu/a /home/ubuntu/b
```
### Expected Results
See the section "Summary".
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
ansible-playbook [core 2.16.3]
config file = None
configured module search path = ['/home/ubuntu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/ubuntu/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-playbook
python version = 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
No config file found; using defaults
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: main.yml *************************************************************
1 plays in ansible/main.yml
PLAY [127.0.0.1] ***************************************************************
TASK [Gathering Facts] *********************************************************
task path: /home/ubuntu/.dotfiles/ansible/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp `"&& mkdir "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.0545905-2753-218625252217575 `" && echo ansible-tmp-1735297572.0545905-2753-218625252217575="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.0545905-2753-218625252217575 `" ) && sleep 0'
Using module file /usr/lib/python3/dist-packages/ansible/modules/setup.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27508expt46y/tmpn0vwkjic TO /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.0545905-2753-218625252217575/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.0545905-2753-218625252217575/ /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.0545905-2753-218625252217575/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.0545905-2753-218625252217575/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.0545905-2753-218625252217575/ > /dev/null 2>&1 && sleep 0'
ok: [127.0.0.1]
TASK [success - Create single file in ~] ***************************************
task path: /home/ubuntu/.dotfiles/ansible/main.yml:5
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp `"&& mkdir "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.6977189-2832-40327463316247 `" && echo ansible-tmp-1735297572.6977189-2832-40327463316247="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.6977189-2832-40327463316247 `" ) && sleep 0'
Using module file /usr/lib/python3/dist-packages/ansible/modules/command.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27508expt46y/tmp3s5a50wb TO /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.6977189-2832-40327463316247/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.6977189-2832-40327463316247/ /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.6977189-2832-40327463316247/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.6977189-2832-40327463316247/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.6977189-2832-40327463316247/ > /dev/null 2>&1 && sleep 0'
ok: [127.0.0.1] => {
"changed": false,
"cmd": "touch /home/ubuntu/a\n",
"delta": null,
"end": null,
"invocation": {
"module_args": {
"_raw_params": "touch /home/ubuntu/a\n",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": "/home/ubuntu/a",
"executable": null,
"expand_argument_vars": true,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "Did not run command since '/home/ubuntu/a' exists",
"rc": 0,
"start": null,
"stderr": "",
"stderr_lines": [],
"stdout": "skipped, since /home/ubuntu/a exists",
"stdout_lines": [
"skipped, since /home/ubuntu/a exists"
]
}
TASK [success - Create multiple files in current directory] ********************
task path: /home/ubuntu/.dotfiles/ansible/main.yml:11
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp `"&& mkdir "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.8988683-2857-167062530684105 `" && echo ansible-tmp-1735297572.8988683-2857-167062530684105="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.8988683-2857-167062530684105 `" ) && sleep 0'
Using module file /usr/lib/python3/dist-packages/ansible/modules/command.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27508expt46y/tmp65ji5wt4 TO /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.8988683-2857-167062530684105/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.8988683-2857-167062530684105/ /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.8988683-2857-167062530684105/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.8988683-2857-167062530684105/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1735297572.8988683-2857-167062530684105/ > /dev/null 2>&1 && sleep 0'
ok: [127.0.0.1] => {
"changed": false,
"cmd": "touch a b\n",
"delta": null,
"end": null,
"invocation": {
"module_args": {
"_raw_params": "touch a b\n",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": "['a', 'b']",
"executable": null,
"expand_argument_vars": true,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "Did not run command since '['a', 'b']' exists",
"rc": 0,
"start": null,
"stderr": "",
"stderr_lines": [],
"stdout": "skipped, since ['a', 'b'] exists",
"stdout_lines": [
"skipped, since ['a', 'b'] exists"
]
}
TASK [failure - Create multiple files in ~] ************************************
task path: /home/ubuntu/.dotfiles/ansible/main.yml:19
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp `"&& mkdir "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1735297573.0390477-2882-220125337962637 `" && echo ansible-tmp-1735297573.0390477-2882-220125337962637="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1735297573.0390477-2882-220125337962637 `" ) && sleep 0'
Using module file /usr/lib/python3/dist-packages/ansible/modules/command.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-27508expt46y/tmpa3nihfbn TO /home/ubuntu/.ansible/tmp/ansible-tmp-1735297573.0390477-2882-220125337962637/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1735297573.0390477-2882-220125337962637/ /home/ubuntu/.ansible/tmp/ansible-tmp-1735297573.0390477-2882-220125337962637/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/ubuntu/.ansible/tmp/ansible-tmp-1735297573.0390477-2882-220125337962637/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1735297573.0390477-2882-220125337962637/ > /dev/null 2>&1 && sleep 0'
changed: [127.0.0.1] => {
"changed": true,
"cmd": "touch /home/ubuntu/a /home/ubuntu/b\n",
"delta": "0:00:00.003488",
"end": "2024-12-27 11:06:13.155383",
"invocation": {
"module_args": {
"_raw_params": "touch /home/ubuntu/a /home/ubuntu/b\n",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": "['/home/ubuntu/a', '/home/ubuntu/b']",
"executable": null,
"expand_argument_vars": true,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2024-12-27 11:06:13.151895",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": []
}
PLAY RECAP *********************************************************************
127.0.0.1 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | open | 2024-12-27T11:19:47Z | 2025-01-14T15:48:48Z | https://github.com/ansible/ansible/issues/84495 | [
"module",
"bug",
"affects_2.16",
"data_tagging"
] | erikv85 | 7 |
Sanster/IOPaint | pytorch | 180 | I have two suggestions | First, thank you for sharing the project, it's very nice.
details:
1) lama.py
When I already have a model named big-lama.pt, it should not be downloaded from the remote server again. I suggest changing the LAMA_MODEL_URL handling to support a local path, like this:
```
url_or_path = LAMA_MODEL_URL
if os.path.exists(url_or_path):
    model_path = url_or_path
else:
    model_path = download_model(url_or_path)
```
2) server.py
It's maybe a bug: when I run the code, the debug console says `cache_timeout=0` is not supported (possibly because newer Flask versions renamed `send_file`'s `cache_timeout` argument to `max_age`). After I deleted the `cache_timeout=0`, everything was OK. | open | 2023-01-12T03:05:28Z | 2023-01-12T03:05:28Z | https://github.com/Sanster/IOPaint/issues/180 | [] | aqie13 | 0
sqlalchemy/alembic | sqlalchemy | 514 | shell script doesn't work for python paths with spaces | **Migrated issue, originally created by Nonprofit Metrics**
Upon installing Alembic into a Python Virtualenv, where a space exists in the python path, the "alembic" shell command fails with the message "bad interpreter: No such file or directory". I'm using alembic 1.0.1, just installed from pip.
It appears the error is from the shebang line in the script (...bin/alembic), which for me reads:
```
#!"/Users/me/Directory With Spaces/myvirtualenv/bin/python2.7"
```
No way of escaping the path or spaces works at all. The way I got it to work is to replace the shebang line with:
```
#!/bin/sh
'''exec' "/Users/me/Directory With Spaces/myvirtualenv/bin/python2.7" "$0" "$@"
' '''
```
This is the way celery and others solve the problem.
I couldn't figure out where setup.py generated the shell "bin/alembic" file, or else I would've submitted a pull request.
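For anyone else digging: my understanding (an assumption from typical setuptools behavior, not traced through Alembic's build) is that `bin/alembic` isn't written by hand at all — pip generates it at install time, shebang included, from a `console_scripts` entry point roughly like this:

```python
# Hypothetical sketch: the "name = module:function" spec that pip expands
# into the bin/alembic launcher script (the exact target is my assumption).
entry_points = {
    "console_scripts": [
        "alembic = alembic.config:main",
    ],
}

# pip parses the spec and renders the launcher, including its shebang line.
name, target = (part.strip() for part in entry_points["console_scripts"][0].split("="))
module, func = target.split(":")
print(name, module, func)  # alembic alembic.config main
```

If that's right, the shebang fix would need to land in whatever pip/setuptools uses to render that launcher, which matches the `/bin/sh` trick above.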
Thanks!
| closed | 2018-10-24T00:34:53Z | 2018-10-24T01:15:02Z | https://github.com/sqlalchemy/alembic/issues/514 | [
"bug",
"low priority",
"command interface"
] | sqlalchemy-bot | 7 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 700 | NotADirectoryError occurs when using combine_A_and_B.py | I am preparing my own datasets and **I want to combine my own sketch and photo images into one picture for training on the pix2pix model**. I read the ["Prepare your own datasets for pix2pix" instruction](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/tips.md#prepare-your-own-datasets-for-cyclegan) carefully; I also searched for similar issues but found no help, so I am opening a new issue.
### My folder **directories** are as follows:
the **sketch images** are in the
**/pytorch-CycleGAN-and-pix2pix/datasets/combine/A/train**
and there are 72 images named like frame_0028.jpg.
the **photo images** are in the
**/pytorch-CycleGAN-and-pix2pix/datasets/combine/B/train**
and there are 72 corresponding object photo images with same name in A.
and the **output folder**:
**/pytorch-CycleGAN-and-pix2pix/datasets/combine/AB**
### My command attempts
My first attempt is the command:
**python datasets/combine_A_and_B.py --fold_A ./datasets/combine/A --fold_B ./datasets/combine/B --fold_AB ./datasets/combine/AB**
but the terminal says:
[fold_A] = ./datasets/combine/A
[fold_B] = ./datasets/combine/B
[fold_AB] = ./datasets/combine/AB
[num_imgs] = 1000000
[use_AB] = False
Traceback (most recent call last):
File "datasets/combine_A_and_B.py", line 22, in <module>
img_list = os.listdir(img_fold_A)
**NotADirectoryError: [Errno 20] Not a directory: './datasets/combine/A/.DS_Store'**
I think maybe I should **specify the train folder**, so my second attempt is:
**python datasets/combine_A_and_B.py --fold_A ./datasets/combine/A/train --fold_B ./datasets/combine/B/train --fold_AB ./datasets/combine/AB**
the terminal is still unhappy:
[fold_A] = ./datasets/combine/A/train
[fold_B] = ./datasets/combine/B/train
[fold_AB] = ./datasets/combine/AB
[num_imgs] = 1000000
[use_AB] = False
Traceback (most recent call last):
File "datasets/combine_A_and_B.py", line 22, in <module>
img_list = os.listdir(img_fold_A)
**NotADirectoryError: [Errno 20] Not a directory: './datasets/combine/A/train/frame_0028.jpg'**
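While writing this up, I noticed the first traceback points at macOS's hidden `.DS_Store` file, so I suspect the script lists everything in the folder and then treats plain files as split directories. A rough, untested sketch of the kind of guard I have in mind (the function name is mine, not from the repo):

```python
import os
import tempfile

def list_split_dirs(fold_a):
    """Return only real subdirectories of fold_A (e.g. train/), skipping
    stray files like macOS's .DS_Store that os.listdir also yields."""
    return sorted(
        d for d in os.listdir(fold_a)
        if not d.startswith(".") and os.path.isdir(os.path.join(fold_a, d))
    )

# Tiny self-check with a throwaway layout mimicking datasets/combine/A
with tempfile.TemporaryDirectory() as root:
    os.mkdir(os.path.join(root, "train"))
    open(os.path.join(root, ".DS_Store"), "w").close()
    print(list_split_dirs(root))  # ['train']
```

If that guess is right, I believe the first command (pointing `--fold_A` at `./datasets/combine/A`) is the intended usage, since the script expects the split folders one level below.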
Could anyone give me some advice? Thanks a lot! | closed | 2019-07-13T03:31:52Z | 2023-07-11T09:17:54Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/700 | [] | diaosiji | 4
streamlit/streamlit | python | 10,678 | Clickable Container | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Hi,
it would be very nice if I could make a `st.container` clickable (with an `on_click` event), so I could build clickable boxes with awesome content. :-)
Best Regards
### Why?
Because I have already tried to build a clickable tile with Streamlit, and it is very difficult. This feature would make it much easier.
### How?
_No response_
### Additional Context
_No response_ | open | 2025-03-07T12:00:23Z | 2025-03-07T12:56:13Z | https://github.com/streamlit/streamlit/issues/10678 | [
"type:enhancement",
"feature:st.container",
"area:events"
] | alex-bork | 1 |
nltk/nltk | nlp | 2,876 | TreebankWordDetokenizer is not always an inverse of TreebankWordTokenizer | For example, for the following script:
```python
import nltk
import nltk.tokenize
tokens = nltk.tokenize.TreebankWordTokenizer().tokenize("I wanna watch something")
print(tokens)
sentence = nltk.tokenize.treebank.TreebankWordDetokenizer().detokenize(tokens)
print(sentence)
```
The result is:
```
['I', 'wan', 'na', 'watch', 'something']
I wannawatch something
```
The detokenized sentence should be: `I wanna watch something` | closed | 2021-11-04T12:38:16Z | 2021-11-06T23:34:16Z | https://github.com/nltk/nltk/issues/2876 | [
"bug",
"tokenizer"
] | kazet | 1 |
mwaskom/seaborn | data-visualization | 3,707 | Issue with facet grid and legends | Here is an MRE:
```python
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
np.random.seed(1)
n_samples = 1000
data = {
    'metric': np.random.rand(n_samples),
    'method': np.random.choice(['Method A', 'Method B', 'Method C'], n_samples),
    'criterion': np.random.choice(['Criterion 1', 'Criterion 2'], n_samples),
    'category': np.random.choice(['Category X', 'Category Y'], n_samples)
}
generic_df = pd.DataFrame(data)
# Plot using Seaborn's FacetGrid
g = sns.FacetGrid(generic_df, col='criterion', row='category',
margin_titles=True, height=3, aspect=1.2)
g.map_dataframe(sns.histplot, x='metric', hue='method', bins=30, multiple='stack')
g.set_axis_labels('Metric', 'Frequency')
g.set_titles(col_template='{col_name}', row_template='{row_name}')
g.fig.subplots_adjust(top=0.89)
g.fig.suptitle('Histogram of Metric Grouped by Criterion and Category')
g.add_legend()
# Display the plot
plt.show()
```
I can move `hue='method'` over to `sns.FacetGrid` and the legend will display, but that changes the bar plot so it is not properly stacked. | closed | 2024-06-05T21:27:21Z | 2024-06-06T18:31:58Z | https://github.com/mwaskom/seaborn/issues/3707 | [] | vsbuffalo | 2 |
TracecatHQ/tracecat | fastapi | 886 | Broken link in user interface | **Describe the bug**
The "Find playbook" button in the web UI has a broken link to GitHub.
The button leads to https://github.com/TracecatHQ/tracecat/tree/main/playbooks, but this folder doesn't exist in the main branch.
**To reproduce**
1. Open tracecat web ui and go to "Workflows"
2. Click on "Find playbook" button
**Expected behavior**
Link to existing resources
**Screenshots**
<img width="950" alt="Image" src="https://github.com/user-attachments/assets/bcaaa370-a436-40d0-be65-a2b3835a854f" />

**Environment (please complete the following information):**
- Tracecat version 0.26.1 (commit `d652ab70e15773289058dab588fdddac636eb8a7`)
- Cloned repo and deployed with docker compose
| closed | 2025-02-22T15:23:33Z | 2025-02-23T14:19:54Z | https://github.com/TracecatHQ/tracecat/issues/886 | [
"frontend",
"triage"
] | szymon-romanko | 1 |
vllm-project/vllm | pytorch | 14,443 | [Bug]: External Launcher producing NaN outputs on Large Models when Collocating with Model Training | ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.10.16 (main, Feb 5 2025, 05:53:54) [GCC 11.5.0 20240719 (Red Hat 11.5.0-2)] (64-bit runtime)
Python platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
Nvidia driver version: 550.54.14
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.49.0
[pip3] triton==3.1.0
```
</details>
### 🐛 Describe the bug
We are observing the following when collocating a relatively large model (e.g. 72B) using the external launcher together with a model-sharding framework (e.g., FSDP).
Motivation: for RLHF, collocating is desirable, but when the model is large, we expect to shard both the vLLM model and the model in training so that we can fit into GPU memory.
- for VLLM, it would be TP (or PP)
- for the model in training, it would be a framework like FSDP or DeepSpeed
Bug: we are observing NaNs coming from the TPed vLLM model when the model size becomes big. We have a reproduction script in this [gist](https://gist.github.com/fabianlim/66dba9138d399f4c6c70674d92503444) which demonstrates the following behavior
- in this script, we demonstrate two scenarios, i) the model in train is sharded with FSDP, ii) the model is unsharded. This is controlled by setting `shard_train_model=True` and `False`, respectively.
- For demonstration purposes: the model in train is a very small OPT model. In actual use-cases, the model in train is usually the same as the VLLM model. But this is purposely done to make the script runnable in 4 x 80GB GPUs, and also to demonstrate this observation is independent on the size of the model in train.
Setting `shard_train_model=False`, the outputs are correct:
```
# text (0): We can visualize the spheres with radii 11, 13, and 19
# text (1): First, let's denote $c = \log_{2^b}(2^{100
# text (2): First, we need to find the dimensions of the rhombus. A key property of the rh
# text (3): First, I’ll start by simplifying the given equation. We are given the equation: $\sqrt
```
Setting `shard_train_model=True`, the outputs become garbled due to NaN's creeping into the tensors:
```
# - the strange characters caused by NaN's coming from TP reduce
# text (0): We can visualize the spheres with radii 11, 13, and 19
# text (1): !!!!!!!!!!!!!!!!!!!!
# text (2): !!!!!!!!!!!!!!!!!!!!
# text (3): !!!!!!!!!!!!!!!!!!!!
```
Reproduction code found in this [gist](https://gist.github.com/fabianlim/66dba9138d399f4c6c70674d92503444).
**Work-around**
In the actual use case, turning off FSDP for the model-in-train is out of the question. I have found another workaround, which is to set `max_num_seq=1`; this slows down my inference but at least still allows me to fit all models on the GPUs. But this is not ideal, as there is no more continuous batching.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-07T15:16:32Z | 2025-03-07T15:22:24Z | https://github.com/vllm-project/vllm/issues/14443 | [
"bug"
] | fabianlim | 0 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 508 | Have problems getting it working on a Devuan distribution. | Hello, I am having this output on Devuan (it directly didn't work on my old Linux Mint :C) when running python3 demo_cli.py:
2020-08-24 22:00:26.726664: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-08-24 22:00:26.726717: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "demo_cli.py", line 4, in <module>
from synthesizer.inference import Synthesizer
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/inference.py", line 1, in <module>
from synthesizer.tacotron2 import Tacotron2
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/tacotron2.py", line 3, in <module>
from synthesizer.models import create_model
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/models/__init__.py", line 1, in <module>
from .tacotron import Tacotron
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/models/tacotron.py", line 4, in <module>
from synthesizer.models.helpers import TacoTrainingHelper, TacoTestHelper
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/models/helpers.py", line 3, in <module>
from tensorflow.contrib.seq2seq import Helper
ModuleNotFoundError: No module named 'tensorflow.contrib'
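For context on that last error — this is my reading, not something from the repo docs — `tensorflow.contrib` only ever existed in TensorFlow 1.x and was removed in the 2.x line, so installing the latest TensorFlow instead of the pinned one would produce exactly this `ModuleNotFoundError`. A tiny guard showing the version check I mean:

```python
def supports_tf_contrib(version):
    """tensorflow.contrib shipped only with TensorFlow 1.x and was
    dropped entirely starting with 2.0."""
    major = int(version.split(".")[0])
    return major < 2

print(supports_tf_contrib("1.15.4"))  # True
print(supports_tf_contrib("2.3.0"))   # False
```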
Seems to be something with the TensorFlow version? The exact version pinned in requirements.txt gives me an install error, so I just installed the latest one :c | closed | 2020-08-25T04:19:52Z | 2020-08-26T02:02:50Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/508 | [] | afantasialiberal | 2
kornia/kornia | computer-vision | 2,158 | Augmentation Sampling Speed Improvement | In general, for all samplings, I am thinking of using one shared generic sampler in place of the per-parameter uniform samplers. However, this would lose the ability to customize each operation. For example:
```python
class ColorJiggleGenerator(RandomGeneratorBase):
    ...

    def make_samplers(self, device: torch.device, dtype: torch.dtype) -> None:
        brightness = _range_bound(self.brightness, 'brightness', center=1.0, bounds=(0, 2), device=device, dtype=dtype)
        contrast: Tensor = _range_bound(self.contrast, 'contrast', center=1.0, device=device, dtype=dtype)
        saturation: Tensor = _range_bound(self.saturation, 'saturation', center=1.0, device=device, dtype=dtype)
        hue: Tensor = _range_bound(self.hue, 'hue', bounds=(-0.5, 0.5), device=device, dtype=dtype)

        _joint_range_check(brightness, "brightness", (0, 2))
        _joint_range_check(contrast, "contrast", (0, float('inf')))
        _joint_range_check(hue, "hue", (-0.5, 0.5))
        _joint_range_check(saturation, "saturation", (0, float('inf')))

        self.brightness_sampler = Uniform(brightness[0], brightness[1], validate_args=False)
        self.contrast_sampler = Uniform(contrast[0], contrast[1], validate_args=False)
        self.hue_sampler = Uniform(hue[0], hue[1], validate_args=False)
        self.saturation_sampler = Uniform(saturation[0], saturation[1], validate_args=False)
        self.randperm = partial(torch.randperm, device=device, dtype=dtype)

    def forward(self, batch_shape: torch.Size, same_on_batch: bool = False) -> Dict[str, Tensor]:
        batch_size = batch_shape[0]
        _common_param_check(batch_size, same_on_batch)
        _device, _dtype = _extract_device_dtype([self.brightness, self.contrast, self.hue, self.saturation])
        brightness_factor = _adapted_rsampling((batch_size,), self.brightness_sampler, same_on_batch)
        contrast_factor = _adapted_rsampling((batch_size,), self.contrast_sampler, same_on_batch)
        hue_factor = _adapted_rsampling((batch_size,), self.hue_sampler, same_on_batch)
        saturation_factor = _adapted_rsampling((batch_size,), self.saturation_sampler, same_on_batch)
        return dict(
            brightness_factor=brightness_factor.to(device=_device, dtype=_dtype),
            contrast_factor=contrast_factor.to(device=_device, dtype=_dtype),
            hue_factor=hue_factor.to(device=_device, dtype=_dtype),
            saturation_factor=saturation_factor.to(device=_device, dtype=_dtype),
            order=self.randperm(4).to(device=_device, dtype=_dtype).long(),
        )
```
To something like:
```python
class ColorJiggleGenerator(RandomGeneratorBase):
    def make_samplers(self, device: torch.device, dtype: torch.dtype) -> None:
        self._brightness = _range_bound(self.brightness, 'brightness', center=1.0, bounds=(0, 2), device=device, dtype=dtype)
        self._contrast: Tensor = _range_bound(self.contrast, 'contrast', center=1.0, device=device, dtype=dtype)
        self._saturation: Tensor = _range_bound(self.saturation, 'saturation', center=1.0, device=device, dtype=dtype)
        self._hue: Tensor = _range_bound(self.hue, 'hue', bounds=(-0.5, 0.5), device=device, dtype=dtype)

        _joint_range_check(self._brightness, "brightness", (0, 2))
        _joint_range_check(self._contrast, "contrast", (0, float('inf')))
        _joint_range_check(self._hue, "hue", (-0.5, 0.5))
        _joint_range_check(self._saturation, "saturation", (0, float('inf')))

        self.randperm = partial(torch.randperm, device=device, dtype=dtype)
        self.generic_sampler = Uniform(0, 1)

    def forward(self, batch_shape: torch.Size, same_on_batch: bool = False) -> Dict[str, Tensor]:
        batch_size = batch_shape[0]
        _common_param_check(batch_size, same_on_batch)
        _device, _dtype = _extract_device_dtype([self.brightness, self.contrast, self.hue, self.saturation])
        generic_factors = _adapted_rsampling((batch_size * 4,), self.generic_sampler, same_on_batch).to(device=_device, dtype=_dtype)
        # Rescale each [0, 1) slice into its parameter range: low + u * (high - low)
        brightness_factor = self._brightness[0] + generic_factors[:batch_size] * (self._brightness[1] - self._brightness[0])
        return dict(
            brightness_factor=brightness_factor,
            contrast_factor=...,
            hue_factor=...,
            saturation_factor=...,
            order=self.randperm(4).to(device=_device, dtype=_dtype).long(),
        )
```
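To make the rescaling step concrete, here is a plain-Python sketch (standard library only, not kornia code) of drawing one flat batch of `[0, 1)` uniforms and mapping slices of it into each factor's range with `low + u * (high - low)`:

```python
import random

def rescale(u, low, high):
    """Map a [0, 1) uniform sample into [low, high)."""
    return low + u * (high - low)

random.seed(0)
batch_size = 4
# One flat draw covering all four factors, instead of four sampler calls.
generic = [random.random() for _ in range(batch_size * 4)]

brightness = [rescale(u, 0.8, 1.2) for u in generic[:batch_size]]
contrast = [rescale(u, 0.7, 1.3) for u in generic[batch_size:2 * batch_size]]

assert all(0.8 <= b < 1.2 for b in brightness)
assert all(0.7 <= c < 1.3 for c in contrast)
```

The single-draw version trades several small sampler calls for one larger one plus cheap elementwise math, which is where any speedup would come from.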
I think this might improve a bit of the sampling speed. | closed | 2023-01-19T12:48:05Z | 2023-01-26T11:55:05Z | https://github.com/kornia/kornia/issues/2158 | [
"help wanted"
] | shijianjian | 1 |
vllm-project/vllm | pytorch | 14,659 | [Bug]: `subprocess.CalledProcessError` when building docker image from source on AMD MI210 | ### Your current environment
I have trouble building docker image right now.
### 🐛 Describe the bug
The raised error:
> 1913.1 File "/usr/local/lib/python3.12/dist-packages/setuptools/_distutils/cmd.py", line 339, in run_command
> 1913.1 self.distribution.run_command(command)
> 1913.1 File "/usr/local/lib/python3.12/dist-packages/setuptools/dist.py", line 999, in run_command
> 1913.1 super().run_command(command)
> 1913.1 File "/usr/local/lib/python3.12/dist-packages/setuptools/_distutils/dist.py", line 1002, in run_command
> 1913.1 cmd_obj.run()
> 1913.1 File "/usr/local/lib/python3.12/dist-packages/setuptools/_distutils/command/build.py", line 136, in run
> 1913.1 self.run_command(cmd_name)
> 1913.1 File "/usr/local/lib/python3.12/dist-packages/setuptools/_distutils/cmd.py", line 339, in run_command
> 1913.1 self.distribution.run_command(command)
> 1913.1 File "/usr/local/lib/python3.12/dist-packages/setuptools/dist.py", line 999, in run_command
> 1913.1 super().run_command(command)
> 1913.1 File "/usr/local/lib/python3.12/dist-packages/setuptools/_distutils/dist.py", line 1002, in run_command
> 1913.1 cmd_obj.run()
> 1913.1 File "/app/vllm/setup.py", line 267, in run
> 1913.1 super().run()
> 1913.1 File "/usr/local/lib/python3.12/dist-packages/setuptools/command/build_ext.py", line 99, in run
> 1913.1 _build_ext.run(self)
> 1913.1 File "/usr/local/lib/python3.12/dist-packages/setuptools/_distutils/command/build_ext.py", line 365, in run
> 1913.1 self.build_extensions()
> 1913.1 File "/app/vllm/setup.py", line 238, in build_extensions
> 1913.1 subprocess.check_call(["cmake", *build_args], cwd=self.build_temp)
> 1913.1 File "/usr/lib/python3.12/subprocess.py", line 415, in check_call
> 1913.1 raise CalledProcessError(retcode, cmd)
> 1913.1 subprocess.CalledProcessError: Command '['cmake', '--build', '.', '-j=192', '--target=_moe_C', '--target=_rocm_C', '--target=_C']' returned non-zero exit status 1.
> ------
> Dockerfile.rocm:40
> --------------------
> 39 | # Build vLLM
> 40 | >>> RUN cd vllm \
> 41 | >>> && python3 -m pip install -r requirements/rocm.txt \
> 42 | >>> && python3 setup.py clean --all \
> 43 | >>> && if [ ${USE_CYTHON} -eq "1" ]; then python3 setup_cython.py build_ext --inplace; fi \
> 44 | >>> && python3 setup.py bdist_wheel --dist-dir=dist
> 45 | FROM scratch AS export_vllm
> --------------------
> ERROR: failed to solve: process "/bin/sh -c cd vllm && python3 -m pip install -r requirements/rocm.txt && python3 setup.py clean --all && if [ ${USE_CYTHON} -eq \"1\" ]; then python3 setup_cython.py build_ext --inplace; fi && python3 setup.py bdist_wheel --dist-dir=dist" did not complete successfully: exit code: 1
It seems that the error `subprocess.CalledProcessError` indicates that the `cmake` command executed within the Dockerfile failed.
**Are the arguments passed to `cmake` correct?**
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | closed | 2025-03-12T07:12:41Z | 2025-03-20T06:37:09Z | https://github.com/vllm-project/vllm/issues/14659 | [
"bug"
] | luciaganlulu | 1 |
Evil0ctal/Douyin_TikTok_Download_API | api | 313 | [BUG] Can't download video from douyin | I used the sample Python code, and it returned the following error when downloading the video:
URL: https://www.douyin.com/video/6914948781100338440
ERROR
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jame/Code/home/video/download.py", line 12, in <module>
asyncio.run(hybrid_parsing(url=input("Paste Douyin/TikTok/Bilibili share URL here: ")))
File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/jame/Code/home/video/download.py", line 8, in hybrid_parsing
result = await api.hybrid_parsing(url)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jame/.local/share/virtualenvs/video-hF2q1l9e/lib/python3.11/site-packages/douyin_tiktok_scraper/scraper.py", line 467, in hybrid_parsing
data = await self.get_douyin_video_data(video_id) if url_platform == 'douyin' \
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jame/.local/share/virtualenvs/video-hF2q1l9e/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jame/.local/share/virtualenvs/video-hF2q1l9e/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jame/.local/share/virtualenvs/video-hF2q1l9e/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x103763b90 state=finished raised ValueError>] | closed | 2023-11-02T08:57:37Z | 2024-02-07T03:45:27Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/313 | [
"BUG",
"enhancement"
] | nhannguyentrong | 6 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 72 | predict issue | After the model finished training, the following error appeared when running predict:
Traceback (most recent call last):
File "/Users/mengxiangyu/Desktop/faster_rcnn/predict.py", line 47, in <module>
model.load_state_dict(torch.load(train_weights)["model"])
File "/Users/mengxiangyu/anaconda3/envs/th1.6/lib/python3.7/site-packages/torch/serialization.py", line 577, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "/Users/mengxiangyu/anaconda3/envs/th1.6/lib/python3.7/site-packages/torch/serialization.py", line 241, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: [enforce fail at inline_container.cc:144] . PytorchStreamReader failed reading zip archive: failed finding central directory
What is the cause? Is there a problem with the loaded model?
sammchardy/python-binance | api | 717 | Client.get_all_coins_info() method missing? | **Describe the bug**
Client.get_all_coins_info() method is missing (it's not shown in autocomplete, and calling it raises `AttributeError: 'Client' object has no attribute 'get_all_coins_info'`)
**To Reproduce**
```
import binance.client as binance
client = binance.Client()
x = client.get_all_coins_info()
```
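A quick way to confirm whether the installed release actually ships the method (stdlib only; `has_method` is a throwaway helper written for this check, not part of python-binance):

```python
import importlib

def has_method(module_name, cls_name, method):
    """Return True/False if the class can be inspected, or None when the import fails."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return None
    cls = getattr(mod, cls_name, None)
    return cls is not None and callable(getattr(cls, method, None))

# None if python-binance is not installed; False on 0.7.5-era releases;
# True once a release ships the documented method
print(has_method("binance.client", "Client", "get_all_coins_info"))
```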
**Expected behavior**
Expect to see the response as specified in the docs: [link](https://python-binance.readthedocs.io/en/latest/binance.html?highlight=get_all_coins_info#binance.client.Client.get_all_coins_info)
**Environment (please complete the following information):**
- Python version: 3.9.1
- Virtual Env: conda
- OS: Windows 10
- python-binance version: 0.7.5
**Logs or Additional context**
N/A
| open | 2021-03-02T22:27:08Z | 2021-04-07T21:56:17Z | https://github.com/sammchardy/python-binance/issues/717 | [] | BGriffy78 | 4 |
flairNLP/flair | pytorch | 2,742 | StackedEmbeddings with PooledFlairEmbeddings returning TypeError | **Describe the bug**
While running the given code for the best configuration for CoNLL-03 NER on English at the following page: [link](https://github.com/flairNLP/flair/blob/b3797096742699997e77d12a66c82310361990a4/resources/docs/EXPERIMENTS.md) I get a TypeError (see screenshot). I have tested several embeddings (Word and Flair) and found that this TypeError only occurs when running with PooledFlairEmbeddings. I have inspected the source code where the error occurs (token.py), but could not come up with a solution.
**Screenshots**


**Environment:**
- Windows
- python 3.10
- flair 0.11
| closed | 2022-04-25T13:36:46Z | 2022-05-04T11:45:41Z | https://github.com/flairNLP/flair/issues/2742 | [
"bug"
] | Nuveyla | 2 |
Lightning-AI/LitServe | rest-api | 286 | Example in documentation on how to setup an OpenAI-spec API with LlamaIndex-RAG | ## 🚀 Feature
A new page of documentation explaining how to expose an LlamaIndex RAG using an OpenAI-compatible API.
### Motivation
It took me a good 6 hours to put together these two tutorials: [LlamaIndex RAG API](https://lightning.ai/lightning-ai/studios/deploy-a-private-llama-3-1-rag-api) and [OpenAI spec](https://lightning.ai/docs/litserve/features/open-ai-spec) to expose my LlamaIndex app with an OpenAI-spec API. Maybe I'm a bit stupid but I think this should be a pretty common use-case so I wanted to write a new page in Litserve's documentation. But I couldn't find the docs source code here so I'm writing an issue instead.
## Code
### server.py
```python
from simple_llm import SimpleLLM
import litserve as ls
class LlamaIndexAPI(ls.LitAPI):
def setup(self, device):
self.llm = SimpleLLM()
def predict(self, messages):
for token in self.llm.stream(messages):
yield token
if __name__ == "__main__":
api = LlamaIndexAPI()
server = ls.LitServer(api, spec=ls.OpenAISpec(), stream=True)
server.run(port=8000)
```
### simple_llm.py
```python
from llama_index.llms.openai import OpenAI
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.llms import ChatMessage
class SimpleLLM(object):
def __init__(self):
reader = SimpleDirectoryReader(input_dir="data")
docs = reader.load_data()
index = VectorStoreIndex.from_documents(docs, show_progress=True)
llm = OpenAI(model="gpt-4o-mini", temperature=0)
self.engine = index.as_chat_engine(streaming=True, similarity_top_k=2, llm=llm)
def stream(self, messages_dict):
messages = [
ChatMessage(
role=message["role"],
content=message["content"],
)
for message in messages_dict
]
return self.engine.stream_chat(
messages[-1].content, chat_history=messages[:-1]
).response_gen
```
### test.py
```python
from openai import OpenAI
endpoint = "http://localhost:8000/v1"
client = OpenAI(base_url=endpoint, api_key="lit")
response = client.chat.completions.create(
model="lit",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Give me your favourite colour"},
{"role": "assistant", "content": "I quite like green."},
{"role": "user", "content": "Why is it so?."},
],
stream=True
)
for chunk in response:
print(chunk.choices[0].delta.content)
``` | closed | 2024-09-22T16:11:01Z | 2024-10-16T09:59:16Z | https://github.com/Lightning-AI/LitServe/issues/286 | [
"enhancement",
"help wanted"
] | PierreMesure | 4 |
lexiforest/curl_cffi | web-scraping | 143 | What does the acurl parameter do and what does it affect? | What does the `acurl` parameter do, and what does it affect? | closed | 2023-10-17T20:40:07Z | 2023-10-20T11:21:14Z | https://github.com/lexiforest/curl_cffi/issues/143 | [] | r00t-Taurus | 1 |
kynan/nbstripout | jupyter | 159 | `--dry-run` should exit non-0 if files would be updated | I am using `nbstripout` as a pre-commit hook with the `--dry-run` option so that verification fails with a list of files that need to be updated **without** actually updating the files. But `--dry-run` always exits with `0` (success), which means the hook doesn't work with `--dry-run`.
IMO, `--dry-run` should return non-0 (ideally `1`) if any file would be changed by running `nbstripout`. | closed | 2021-10-29T23:54:00Z | 2022-01-02T16:55:38Z | https://github.com/kynan/nbstripout/issues/159 | [
"resolution:duplicate",
"type:enhancement"
] | joaonc | 1 |
autogluon/autogluon | computer-vision | 3,983 | Is Conv-LoRA available? | ## Problem
I've read this paper (Conv-LoRA), whose code links to a directory in this repository. However, I noticed that this directory has already been deleted. Is Conv-LoRA still available in this repository? Where can I find the code? Many thanks for your reply.
## References
- Conv-LoRA https://arxiv.org/pdf/2401.17868.pdf
- Missing directory: https://github.com/autogluon/autogluon/tree/master/examples/automm/Conv-LoRA
| closed | 2024-03-15T07:18:06Z | 2024-04-17T00:34:49Z | https://github.com/autogluon/autogluon/issues/3983 | [
"enhancement",
"module: multimodal"
] | iamjinchen | 1 |
ultralytics/ultralytics | machine-learning | 19,172 | Yolo11obb recommended number of epochs while training | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm training a yolo11obb model with a dataset based on the Waymo Open Dataset. It consists of over 150k training images and about 40k validation images.
I've done a few training runs with different settings and observed two things:
- A smaller batch size (32 vs 256) gives me a better mAP.
- A smaller number of epochs run multiple times gives me a better mAP (for example, 5x20 epochs continuing from last.pt gives me a better mAP than 1x100 epochs).
Can someone explain why this is happening and what the recommended way of training is?
### Additional
_No response_ | open | 2025-02-11T01:58:52Z | 2025-02-12T21:47:47Z | https://github.com/ultralytics/ultralytics/issues/19172 | [
"question",
"OBB"
] | Berni11 | 4 |
vitalik/django-ninja | pydantic | 851 | Add schema field at runtime | Hi...
Is there any way to add a ModelField to an existing schema class at runtime?
Thank you very much.
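For what it's worth, since django-ninja's `Schema` is built on pydantic, one runtime approach is pydantic's `create_model`, which derives a new schema class with extra fields. A minimal sketch using a plain `BaseModel` as a stand-in for a ninja `Schema` (`ItemSchema`/`ExtendedSchema` are made-up names):

```python
from pydantic import BaseModel, create_model

class ItemSchema(BaseModel):          # stand-in for a django-ninja Schema
    name: str

# derive a new class at runtime: inherits ItemSchema's fields and adds `price`
ExtendedSchema = create_model(
    "ExtendedSchema",
    __base__=ItemSchema,
    price=(float, 0.0),               # (annotation, default)
)

obj = ExtendedSchema(name="widget")
print(obj.name, obj.price)  # widget 0.0
```

Note this creates a new class rather than mutating the existing one, which is usually safer with pydantic's cached internals.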
| closed | 2023-09-12T08:55:26Z | 2023-09-14T14:38:42Z | https://github.com/vitalik/django-ninja/issues/851 | [] | aegeavaz | 12 |
InstaPy/InstaPy | automation | 5,942 | possible posts: 0 | Getting `possible posts: 0`, using a simplified [quickstart](https://github.com/InstaPy/instapy-quickstart). Similar to [#2358](https://github.com/timgrossmann/InstaPy/issues/2358).
Changing `skip_top_posts` doesn't help either.
Can you still like by tags?
<img width="746" alt="Screen Shh" src="https://user-images.githubusercontent.com/75656359/101442243-c934d280-38cf-11eb-92cb-63bbd459ffbd.png">
| closed | 2020-12-08T05:04:40Z | 2021-01-04T01:08:17Z | https://github.com/InstaPy/InstaPy/issues/5942 | [] | blue-tomato6 | 4 |
graphql-python/graphene-django | graphql | 1,314 | Input of type date required when filtering using double underscore day, month, or year in date fields. |
**Current behavior**
Given a node like this:
```python
class EventNode(DjangoObjectType):
class Meta:
model = Event
filter_fields = {
"event_date": ["exact", "year", "month", "day"],
"status": ["exact"],
}
interfaces = (relay.Node,)
```
and a query structure like this:
```graph
query filterEvents{
allEvents(){
edges{
node{
id
eventDate
}
}
}
}
```
In order to perform a filter for all events in the year 2022, I tried the following:
```
allEvents(eventDate_Year:"2022")
> "message": "['{\"event_date__year\": [{\"message\": \"Enter a number.\", \"code\": \"invalid\"}]}']",
allEvents(eventDate_Year:2022)
> "message": "Argument \"eventDate_Year\" has invalid value 2022.\nExpected type \"Date\", found 2022.",
allEvents(eventDate_Year:"2022-02-14")
> "message": "['{\"event_date__year\": [{\"message\": \"Enter a number.\", \"code\": \"invalid\"}]}']",
allEvents(eventDate_Year:"+2022")
> "message": "ISO 8601 extended year representation not supported."
```
I tried using `django_filters`, but the `eventDate_Year`, `eventDate_Month`, and `eventDate_Day` arguments were missing from the resulting API.
```python
class EventFilter(django_filters.FilterSet):
class Meta:
model = Event
fields = "__all__"
filter_fields = {
"event_date": [ "exact","year","month", "day" ],
"status": ["exact"],
}
class EventNode(DjangoObjectType):
class Meta:
model = Event
filterset_class = EventFilter
interfaces = (relay.Node,)
```
**Expected Behavior**
I was expecting `eventDate_Year` to behave like Django's double-underscore field lookups, e.g. `date__year`, and take either a string or a number as input, e.g.
```python
Model.objects.filter(date_field__year="2022")
```
It is not possible to pass year only as a valid date.
**What is the motivation / use case for changing the behavior?**
I would like to be able to filter a large data set by time period. So if only data from April 2022 is required, that is what is asked for. It doesn't seem very efficient to explicitly declare the same filters for every date field in every node, instead of specifying in the filter fields. Specifying a from-to date filter might be a viable solution, but it wouldn't work if data for say the 5th of every month in 2022 is required.
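For illustration, the lookup semantics being asked for mirror Django's `__year`/`__month`/`__day` lookups; over plain `datetime.date` values they amount to:

```python
from datetime import date

events = [date(2022, 2, 5), date(2022, 4, 5), date(2022, 4, 14), date(2021, 4, 5)]

in_2022     = [d for d in events if d.year == 2022]                   # eventDate_Year: 2022
april_2022  = [d for d in events if d.year == 2022 and d.month == 4]  # year + month
fifths_2022 = [d for d in events if d.year == 2022 and d.day == 5]    # "5th of every month in 2022"
print(len(in_2022), len(april_2022), len(fifths_2022))  # 3 2 2
```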
**Please tell us about your environment:**
- Version: 2.15.0
- Platform:
- Local: Arch Linux 5.16.13-arch1-1
- Remotes: Ubuntu 20.04
**Other information**
I am not sure if this is a bug or a problem with my implementation. Before creating this issue, [I asked on Stack Overflow](https://stackoverflow.com/questions/71424540/how-to-filter-by-year-month-or-day-in-django-graphene-query), but got no response.
"🐛bug"
] | keystroke3 | 3 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,324 | . | closed | 2024-12-22T14:36:52Z | 2024-12-22T14:37:07Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1324 | [] | RollingDreams | 0 | |
littlecodersh/ItChat | api | 381 | How do I send stickers... | Before submitting, please make sure you have checked the following!
- [x] You can log in to the WeChat account in a browser, but cannot log in using `itchat`
- [x] I have read and followed the instructions in the [documentation][document]
- [x] This problem has not been reported in [issues][issues]; otherwise, please report it under the existing issue
- [x] This problem is really about `itchat`, not another project.
- [x] If your problem is about stability, consider trying the [itchatmp][itchatmp] project, which has very low requirements on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
```
[paste the full log here]
```
Your itchat version is: `[fill in the version number here]`. (It can be obtained via `python -c "import itchat;print(itchat.__version__)"`)
Any other content or a more detailed description of the problem can be added below:
> [your content]
[document]: https://github.com/soimort/you-get/wiki/FAQ
[issues]: https://github.com/soimort/you-get/issues
[itchatmp]: https://github.com/littlecodersh/itchatmp
| closed | 2017-05-25T13:12:11Z | 2017-05-29T01:25:19Z | https://github.com/littlecodersh/ItChat/issues/381 | [
"question"
] | JanzenLiu | 1 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 9 | What is the meaning of self.include_top? | I'm confused about "self.include_top" in Test5/model in class ResNet(nn.Module). Could you please give me some information about it?
Thank you in advance~ | closed | 2020-03-01T10:29:18Z | 2020-03-08T10:29:41Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/9 | [] | antenna-before | 3 |
coqui-ai/TTS | deep-learning | 3,729 | [Feature request] Allow the use of `logging` instead of `print` | **🚀 Feature Description**
The `print` function is used in several places, most noticeably (to me) in `utils.synthesizer.Synthesizer.tts`, with lines like:
```python
print(f" > Processing time: {process_time}")
print(f" > Real-time factor: {process_time / audio_time}")
```
This is great when messing around, but it'd be nice to have the option to use different types of loggers (or even just the root). For instance, if I have a distributed application, I can have this writing to something that would send the messages through a pubsub setup so that another application may read and interpret the output in real time.
**Solution**
`utils.synthesizer.Synthesizer`'s signature can be changed to look like:
```python
def __init__(
self,
tts_checkpoint: str = "",
tts_config_path: str = "",
tts_speakers_file: str = "",
tts_languages_file: str = "",
vocoder_checkpoint: str = "",
vocoder_config: str = "",
encoder_checkpoint: str = "",
encoder_config: str = "",
vc_checkpoint: str = "",
vc_config: str = "",
model_dir: str = "",
voice_dir: str = None,
use_cuda: bool = False,
logger: logging.Logger = None
) -> None:
```
and the `tts` function can look like:
```python
if self.__logger:
self.__logger.info(f" > Processing time: {process_time}")
self.__logger.info(f" > Real-time factor: {process_time / audio_time}")
else:
print(f" > Processing time: {process_time}")
print(f" > Real-time factor: {process_time / audio_time}")
```
A `Protocol` for the logger might work better than just the hint of `logging.Logger` - it'd allow programmers to put in some wackier functionality, such as writing non-loggers that just so happen to have a similar signature.
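A minimal sketch of that `Protocol` idea (the names `SupportsInfo` and `PubSubLogger` are hypothetical, not part of TTS):

```python
from typing import Any, List, Protocol, runtime_checkable

@runtime_checkable
class SupportsInfo(Protocol):
    """Anything with an `info(msg)` method could serve as the synthesizer's logger."""
    def info(self, msg: str, *args: Any) -> None: ...

class PubSubLogger:
    """Not a logging.Logger, but structurally compatible."""
    def __init__(self) -> None:
        self.messages: List[str] = []
    def info(self, msg: str, *args: Any) -> None:
        self.messages.append(msg)  # e.g. forward to a pubsub channel instead

logger = PubSubLogger()
print(isinstance(logger, SupportsInfo))  # True: structural check, no inheritance needed
logger.info(" > Processing time: 0.5")
```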
**Alternative Solutions**
An alternative solution would be to pass the writing function to `tts` itself, something like:
```python
def tts(
self,
text: str = "",
speaker_name: str = "",
language_name: str = "",
speaker_wav=None,
style_wav=None,
style_text=None,
reference_wav=None,
reference_speaker_name=None,
split_sentences: bool = True,
logging_function: typing.Callable[[str], typing.Any] = None,
**kwargs,
) -> List[int]:
...
if logging_function:
logging_function(f" > Processing time: {process_time}")
logging_function(f" > Real-time factor: {process_time / audio_time}")
else:
print(f" > Processing time: {process_time}")
print(f" > Real-time factor: {process_time / audio_time}")
```
This will enable code like:
```python
def output_sound(text: str, output_path: pathlib.Path, connection: Redis):
from TTS.api import TTS
speech_model = TTS(DEFAULT_MODEL).to("cpu")
    speech_model.tts_to_file(text=text, speaker="p244", file_path=str(output_path), logging_function=connection.publish)
```
**Additional context**
I don't believe that `utils.synthesizer.Synthesizer.tts` is the only location of the standard `print` function. A consistent solution should be applied there.
The parameter for the logging functionality will need to be passed through the objects and functions that lead to the current `print` statements. For instance, `TTS.api.TTS.tts_to_file` would require a `logging_function` parameter if it were to pass the function through to `self.synthesizer.tts` within its `tts` function.
The general vibe of the solutions I've provided is that pre-existing code behaves no differently, making the new functionality purely opt-in.
I haven't written anything using a progress bar like the one this uses, so I can't speak to that, aside from the fact that it might need to be excluded.
| closed | 2024-05-10T14:31:42Z | 2024-07-18T23:47:12Z | https://github.com/coqui-ai/TTS/issues/3729 | [
"wontfix",
"feature request"
] | christophertubbs | 4 |
Johnserf-Seed/TikTokDownload | api | 73 | Can it download TikTok videos? | closed | 2021-12-25T08:52:35Z | 2022-02-16T16:12:59Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/73 | [
"需求建议(enhancement)"
] | Michael-YYang | 1 | |
firerpa/lamda | automation | 73 | LDPlayer emulator CRITICAL failed (1) | 

X86_64 | open | 2024-01-19T05:21:20Z | 2025-02-12T02:10:31Z | https://github.com/firerpa/lamda/issues/73 | [] | WolfMoss | 4 |
deepset-ai/haystack | machine-learning | 8,946 | Add MCPTool | We should add a Tool to Haystack that allows calling MCP servers as an Agent's Tool.
https://modelcontextprotocol.io/quickstart/client
Once we have an MCPTool implemented, we could share an example with one of the many tools available: https://github.com/modelcontextprotocol/servers
### Tasks
- [x] The code is documented with docstrings and was merged in the `main` branch
- [x] Docs are published at https://docs.haystack.deepset.ai/
- [x] There is a Github workflow running the tests for the integration nightly and at every PR
- [x] A new label named like `integration:<your integration name>` has been added to the list of labels for this [repository](https://github.com/deepset-ai/haystack-core-integrations/labels)
- [x] The [labeler.yml](https://github.com/deepset-ai/haystack-core-integrations/blob/main/.github/labeler.yml) file has been updated
- [x] The package has been released on PyPI
- [x] An integration tile has been added to https://github.com/deepset-ai/haystack-integrations
- [x] The integration has been listed in the [Inventory section](https://github.com/deepset-ai/haystack-core-integrations#inventory) of this repo README
- [x] There is an example available to demonstrate the feature
- [ ] The feature was announced through social media | open | 2025-03-03T08:52:00Z | 2025-03-12T19:13:07Z | https://github.com/deepset-ai/haystack/issues/8946 | [
"P1"
] | julian-risch | 1 |
unit8co/darts | data-science | 2,163 | [BUG] Model page indicates that probabilistic forecasting is not applicable for AutoARIMA and StatsforecastAutoCES | In the model page (https://github.com/unit8co/darts?tab=readme-ov-file#forecasting-models) we can see that probabilistic forecasting is marked as not applicable for AutoARIMA and StatsforecastAutoCES, but pmdarima and statsforecast support prediction intervals for those models, so I'm wondering why it is indicated as not applicable?
| open | 2024-01-13T11:16:16Z | 2024-01-19T15:03:22Z | https://github.com/unit8co/darts/issues/2163 | [
"feature request"
] | Jonathan-87 | 1 |
CTFd/CTFd | flask | 2,457 | Preconfigure or automate installations of ctfd | I am not sure if it is possible (at least I have not seen it documented) to automate installations of ctfd.
My goal is to automatically deploy CTFd as a Docker service that is ready to run, and to achieve this I need to automate:
1. Disabling of the setup wizard
2. Configuration of a admin user and set or acquire a admin token (via api calls or environment variables to the docker container)
Alternatively:
1. Be able to pre-configure the values of the setup wizard and "admin token".
- Either via environment variables or by hard coding them in the build phase of the docker image.
Is this in anyway possible ? | closed | 2024-01-25T14:29:02Z | 2025-01-17T13:01:52Z | https://github.com/CTFd/CTFd/issues/2457 | [] | jonakarl | 8 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,060 | Quality of generated audio | When using a recording from the LibriSpeech downloaded dataset, a good ratio of the generated audio pieces sound good and accurate. However, whenever I record some audio and use that, no matter who the speaker is, all the generated audio pieces sound the same. Is there any way I can fix this, or am I not understanding how to use this tool correctly? I've seen others on YouTube using the tool the same way I am and the resulting audio clips sound far better than my own, | open | 2022-05-01T20:45:45Z | 2022-10-21T17:38:13Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1060 | [] | aryanpanpalia | 3 |
thunlp/OpenPrompt | nlp | 91 | Model output without grad. | I followed the tutorial code 0_basic.py, modified the Template and Verbalizer, and then defined the model, but in the end my model output had no gradient.
The code is shown below:
```
from openprompt.plms import load_plm  # import needed to run this snippet
plm, tokenizer, model_config, WrapperClass = load_plm("t5", "t5-base")
```
```
from openprompt.prompts import ManualTemplate  # import needed to run this snippet
promptTemplate = ManualTemplate(
text = '{"placeholder": "text_a"} {"text": "In this sentence,"} {"placeholder": "text_b"} {"text": "is a"} {"mask"} ',
tokenizer = tokenizer,
)
```
```
from openprompt.prompts import ManualVerbalizer  # import needed to run this snippet
promptVerbalizer = ManualVerbalizer(tokenizer,
num_classes=2,
#classes = classes,
label_words = [
["person"],
["NA"]
]
)
```
```
from openprompt import PromptForClassification  # import needed to run this snippet
prompt_model = PromptForClassification(plm=plm, template=promptTemplate, verbalizer=promptVerbalizer, freeze_plm=False)
```
Computing the loss from the model output:
```
import torch  # import needed to run this snippet
loss_func = torch.nn.CrossEntropyLoss()
logits = prompt_model(inputs)
labels = inputs['label']
loss = loss_func(logits, labels)
```
```
---> 34 loss.backward()
35 tot_loss += loss.item()
36 optimizer.step()
~/anaconda3/envs/prompt-dev/lib/python3.7/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
305 create_graph=create_graph,
306 inputs=inputs)
--> 307 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
308
309 def register_hook(self, hook):
~/anaconda3/envs/prompt-dev/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
154 Variable._execution_engine.run_backward(
155 tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 156 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
157
158
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
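For context, this error means the tensor handed to `loss.backward()` is detached from the autograd graph. A minimal reproduction of the same failure mode, independent of OpenPrompt:

```python
import torch

x = torch.ones(3, requires_grad=True)
good = (x * 2).sum()   # grad_fn present: backward() would work
bad = good.detach()    # detach() (or running under torch.no_grad) severs the graph

msg = ""
try:
    bad.backward()
except RuntimeError as err:
    msg = str(err)
print(good.grad_fn is not None, bad.grad_fn is None)  # True True
print(msg)  # the same "element 0 of tensors does not require grad ..." error
```

So the thing to check is whether `logits` still has a `grad_fn` (i.e. nothing between the model and the loss detaches it, and the model's parameters have `requires_grad=True`).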
| closed | 2022-01-03T06:26:31Z | 2022-01-12T09:47:25Z | https://github.com/thunlp/OpenPrompt/issues/91 | [] | hebicheng | 3 |
plotly/dash-core-components | dash | 529 | react-markdown ugrade has broken containerProps | According to this, `containerProps` has been removed as of react-markdown 3.0.0: https://github.com/rexxars/react-markdown/blob/master/CHANGELOG.md#breaking-1
As a result, dash-core-components 0.45.0 changes the behaviour of the following app, as `containerProps` can no longer be set on `dcc.Markdown`:
```python
import dash
import dash_core_components as dcc
import dash_html_components as html
app = dash.Dash()
app.layout = html.Div([
dcc.Markdown(
children='Background',
containerProps={'style': {'background-color': 'Salmon'}}
)
])
if __name__ == '__main__':
app.run_server()
```
| closed | 2019-04-18T15:25:47Z | 2019-04-29T12:47:33Z | https://github.com/plotly/dash-core-components/issues/529 | [] | slishak | 1 |
horovod/horovod | tensorflow | 3,281 | Containerized horovod | Hi all,
I have a problem running Horovod in a containerized environment.
I'm running it on the host, trying a single machine first:
```
horovodrun -np 4 -H localhost:4 python keras_mnist_advanced.py
2021-11-18 00:12:14.851827: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:2021-11-18 00:12:17.677706: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,2]<stderr>:2021-11-18 00:12:17.677699: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,0]<stderr>:2021-11-18 00:12:17.678337: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,3]<stderr>:2021-11-18 00:12:17.715725: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
[1,1]<stderr>:Traceback (most recent call last):
[1,1]<stderr>: File "keras_mnist_advanced.py", line 3, in <module>
[1,1]<stderr>: import keras
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/__init__.py", line 20, in <module>
[1,1]<stderr>: from . import initializers
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 124, in <module>
[1,1]<stderr>: populate_deserializable_objects()
[1,1]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
[1,1]<stderr>: generic_utils.populate_dict_with_module_objects(
[1,1]<stderr>:AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
[1,2]<stderr>:Traceback (most recent call last):
[1,2]<stderr>: File "keras_mnist_advanced.py", line 3, in <module>
[1,2]<stderr>: import keras
[1,2]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/__init__.py", line 20, in <module>
[1,2]<stderr>: from . import initializers
[1,2]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 124, in <module>
[1,2]<stderr>: populate_deserializable_objects()
[1,2]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
[1,2]<stderr>: generic_utils.populate_dict_with_module_objects(
[1,2]<stderr>:AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
[1,0]<stderr>:Traceback (most recent call last):
[1,0]<stderr>: File "keras_mnist_advanced.py", line 3, in <module>
[1,0]<stderr>: import keras
[1,0]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/__init__.py", line 20, in <module>
[1,0]<stderr>: from . import initializers
[1,0]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 124, in <module>
[1,0]<stderr>: populate_deserializable_objects()
[1,0]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
[1,0]<stderr>: generic_utils.populate_dict_with_module_objects(
[1,0]<stderr>:AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
[1,3]<stderr>:Traceback (most recent call last):
[1,3]<stderr>: File "keras_mnist_advanced.py", line 3, in <module>
[1,3]<stderr>: import keras
[1,3]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/__init__.py", line 20, in <module>
[1,3]<stderr>: from . import initializers
[1,3]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 124, in <module>
[1,3]<stderr>: populate_deserializable_objects()
[1,3]<stderr>: File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 82, in populate_deserializable_objects
[1,3]<stderr>: generic_utils.populate_dict_with_module_objects(
[1,3]<stderr>:AttributeError: module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[44863,1],2]
Exit code: 1
--------------------------------------------------------------------------
```
Any idea on how to fix this?
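For reference, this particular `AttributeError` is commonly caused by a standalone `keras` package whose version does not match the `tensorflow` it is paired with. A stdlib-only sketch to check both versions (`installed_version` is a throwaway helper, not part of Horovod):

```python
from importlib import metadata

def installed_version(dist):
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None

print("tensorflow:", installed_version("tensorflow"))
print("keras:", installed_version("keras"))
# If the two diverge, either pin keras to the matching release, or change the
# script to `from tensorflow import keras` so only the bundled version is used.
```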
| closed | 2021-11-18T00:13:28Z | 2021-11-18T17:09:04Z | https://github.com/horovod/horovod/issues/3281 | [] | dimanzt | 1 |
mwouts/itables | jupyter | 218 | How to show a table with both head and tail? | i.e.:
Is there a config such that:
instead of `df.head(5).append(df.tail(5))`,
just `df`, and **both head and tail** are shown (with `...` in middle)?
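As a point of comparison, the plain-pandas workaround is to concatenate both ends yourself (`DataFrame.append` is deprecated in recent pandas, so `pd.concat` is used here):

```python
import pandas as pd

df = pd.DataFrame({"x": range(100)})
# both ends glued together; the "..." separator row is not included
preview = pd.concat([df.head(5), df.tail(5)])
print(len(preview))  # 10
```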
| open | 2024-01-27T05:44:12Z | 2024-01-30T20:16:51Z | https://github.com/mwouts/itables/issues/218 | [] | Norlandz | 4 |
dynaconf/dynaconf | flask | 1,038 | [RFC] add `@get` converter | I need to add a new converter called `@get` to be used to simply call `settings.get` with specified key and use whatever value is there with the current type.
Example:
```python
from dynaconf import Dynaconf
settings = Dynaconf(settings_files=["a.py", "b.py"])
# a.py
THING = "value"
# b.py
ANOTHER_THING = "@get THING"
```
This is different than using `this` inside a lazy `@format/@jinja` because those 2 would cast to string by default or to specified combo like `@int @format {this[THING]}`.
The `@get` would just attempt to perform a `settings.get` and use whatever value is there; it will also allow a default to be specified.
```python
THING = "@get THING @int 789"
```
The above will translate to
```python
settings.get("THING", default="789", cast="@int")
```
Not sure if this must be lazily evaluated.
This feature will allow the fix of https://github.com/pulp/pulp_ansible/pull/1130
## Implementation
On https://github.com/dynaconf/dynaconf/blob/6322c4dc4afe0c91160cbdfb069119ca815a60a1/dynaconf/utils/parse_conf.py#L242-L278
```python
"@get": lambda value: Lazy(value, formatter=getter_formatter)
```
then implement `getter_formatter` as a function that takes `value` (e.g. `"THING @int 789"`), parses it, and resolves it to a `settings.get()` call.
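A minimal sketch of what such a `getter_formatter` could look like. The parsing convention, the `settings` argument, and the signature are my assumptions, not dynaconf's actual API:

```python
import re

def getter_formatter(value, settings, **context):
    # Hypothetical parser for "<KEY> [@cast] [default]", e.g. "THING @int 789".
    match = re.match(
        r"(?P<key>\S+)(?:\s+(?P<cast>@\w+))?(?:\s+(?P<default>.+))?$", value
    )
    parts = match.groupdict()
    # Delegate to settings.get so the value keeps whatever type it has there.
    return settings.get(parts["key"], default=parts["default"], cast=parts["cast"])
```

So `"@get THING @int 789"` would resolve to `settings.get("THING", default="789", cast="@int")`, matching the translation shown above.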
| closed | 2024-01-03T20:56:30Z | 2024-01-12T21:19:02Z | https://github.com/dynaconf/dynaconf/issues/1038 | [
"Not a Bug",
"RFC"
] | rochacbruno | 3 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 21 | Batch Beam Search Problem | In the [Beam.py-L30-L31](https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/master/transformer/Beam.py#L30-L31):
```python
self.next_ys = [self.tt.LongTensor(size).fill_(Constants.PAD)]
self.next_ys[0][0] = Constants.BOS
```
It seems that only the top hypothesis gets "BOS" as its start while all the other hypotheses get "PAD". Why don't all the hypotheses start with "BOS"?
And in the [Beam.py-L65-L68](https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/master/transformer/Beam.py#L65-L68):
```python
# End condition is when top-of-beam is EOS.
if self.next_ys[-1][0] == Constants.EOS:
self.done = True
self.all_scores.append(self.scores)
```
you set the end condition to be when the top of the beam is "EOS". Why the top of the beam instead of the whole beam?
| closed | 2017-07-24T02:00:00Z | 2017-10-24T18:42:19Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/21 | [] | ZiJianZhao | 3 |
autokey/autokey | automation | 507 | Empty substitution with abbreviations | ## Classification:
Bug
## Reproducibility:
Always
## Version
AutoKey version: 0.90.4
Used GUI (Gtk, Qt, or both): Gtk
If the problem is known to be present in more than one version, please list all of those.
Installed via: PPA
Linux Distribution: Ubuntu 18.04 on elementary OS 5
## Summary
An abbreviation triggers the substitution (and the abbreviation is removed), but nothing gets added
## Steps to Reproduce (if applicable)
Create a phrase on AutoKey
Add an abbreviation for the phrase
Type the abbreviation (possibly elsewhere)
## Expected Results
The phrase should be pasted
## Actual Results
The abbreviation is removed but no text gets pasted

```
/usr/lib/python2.7/dist-packages/autokey/gtkapp.py:24: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded.
from gi.repository import Gtk, Gdk, GObject, GLib
/usr/lib/python2.7/dist-packages/autokey/gtkui/notifier.py:19: PyGIWarning: Notify was imported without specifying a version first. Use gi.require_version('Notify', '0.7') before import to ensure that the right version gets loaded.
from gi.repository import Gtk, Gdk, Notify
/usr/lib/python2.7/dist-packages/autokey/gtkui/notifier.py:28: PyGIWarning: AppIndicator3 was imported without specifying a version first. Use gi.require_version('AppIndicator3', '0.1') before import to ensure that the right version gets loaded.
from gi.repository import AppIndicator3
/usr/lib/python2.7/dist-packages/autokey/gtkui/configwindow.py:20: PyGIWarning: GtkSource was imported without specifying a version first. Use gi.require_version('GtkSource', '3.0') before import to ensure that the right version gets loaded.
from gi.repository import Gtk, Pango, GtkSource, Gdk, Gio
2021-01-29 11:00:12,720 INFO - root - Initialising application
2021-01-29 11:00:12,727 INFO - root - Initialise global hotkeys
2021-01-29 11:00:12,728 INFO - config-manager - Loading config from existing file: /home/neville/.config/autokey/autokey.json
2021-01-29 11:00:12,729 DEBUG - config-manager - Loading folder at '/home/neville/.config/autokey/data/Basics'
2021-01-29 11:00:12,730 DEBUG - config-manager - Loading folder at '/home/neville/.config/autokey/data/Sample Scripts'
2021-01-29 11:00:12,732 INFO - config-manager - Configuration changed - rebuilding in-memory structures
2021-01-29 11:00:12,733 DEBUG - inotify - Adding watch for /home/neville/.config/autokey/data/Basics
2021-01-29 11:00:12,733 DEBUG - inotify - Adding watch for /home/neville/.config/autokey/data/Sample Scripts
2021-01-29 11:00:12,733 INFO - config-manager - Successfully loaded configuration
2021-01-29 11:00:12,733 DEBUG - inotify - Adding watch for /home/neville/.config/autokey/data
2021-01-29 11:00:12,733 DEBUG - inotify - Adding watch for /home/neville/.config/autokey
2021-01-29 11:00:12,734 DEBUG - config-manager - Global settings: {'showTrayIcon': True, 'sortByUsageCount': True, 'scriptGlobals': {}, 'undoUsingBackspace': True, 'notificationIcon': u'autokey-status', 'enableQT4Workaround': False, 'promptToSave': False, 'menuTakesFocus': False, 'interfaceType': u'XRecord', 'windowDefaultSize': [808, 500], 'showToolbar': True, 'serviceRunning': True, 'columnWidths': [150, 50, 100], 'workAroundApps': u'.*VirtualBox.*|krdc.Krdc', 'hPanePosition': 204, 'isFirstRun': True}
2021-01-29 11:00:12,734 INFO - service - Starting service
2021-01-29 11:00:12,759 DEBUG - interface - Modifier masks: {'<capslock>': 2, '<meta>': 8, '<alt_gr>': 128, '<numlock>': 16, '<hyper>': 64, '<ctrl>': 4, '<shift>': 1, '<alt>': 8, '<super>': 64}
2021-01-29 11:00:12,792 DEBUG - interface - Alt-Grid: XK_Alt_R, 65514
2021-01-29 11:00:12,792 DEBUG - interface - [(92, 0), (92, 2)]
2021-01-29 11:00:12,792 DEBUG - interface - X Server Keymap
2021-01-29 11:00:12,792 DEBUG - interface - [\] : [(51, 0), (51, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [|] : [(51, 1), (51, 3), (94, 4), (94, 6)]
2021-01-29 11:00:12,793 DEBUG - interface - [`] : [(49, 0), (49, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [1] : [(10, 0), (10, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [2] : [(11, 0), (11, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [3] : [(12, 0), (12, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [4] : [(13, 0), (13, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [5] : [(14, 0), (14, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [6] : [(15, 0), (15, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [7] : [(16, 0), (16, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [8] : [(17, 0), (17, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [9] : [(18, 0), (18, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [0] : [(19, 0), (19, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [-] : [(20, 0), (20, 2)]
2021-01-29 11:00:12,793 DEBUG - interface - [=] : [(21, 0), (21, 2)]
2021-01-29 11:00:12,794 DEBUG - interface - [~] : [(49, 1), (49, 3)]
2021-01-29 11:00:12,794 DEBUG - interface - [!] : [(10, 1), (10, 3)]
2021-01-29 11:00:12,794 DEBUG - interface - [@] : [(11, 1), (11, 3)]
2021-01-29 11:00:12,794 DEBUG - interface - [#] : [(12, 1), (12, 3)]
2021-01-29 11:00:12,794 DEBUG - interface - [$] : [(13, 1), (13, 3)]
2021-01-29 11:00:12,794 DEBUG - interface - [%] : [(14, 1), (14, 3)]
2021-01-29 11:00:12,794 DEBUG - interface - [^] : [(15, 1), (15, 3)]
2021-01-29 11:00:12,794 DEBUG - interface - [&] : [(16, 1), (16, 3)]
2021-01-29 11:00:12,795 DEBUG - interface - [*] : [(17, 1), (17, 3)]
2021-01-29 11:00:12,795 DEBUG - interface - [(] : [(187, 0), (18, 1), (187, 2), (18, 3)]
2021-01-29 11:00:12,795 DEBUG - interface - [)] : [(188, 0), (19, 1), (188, 2), (19, 3)]
2021-01-29 11:00:12,795 DEBUG - interface - [q] : [(24, 0), (24, 2)]
2021-01-29 11:00:12,795 DEBUG - interface - [w] : [(25, 0), (25, 2)]
2021-01-29 11:00:12,795 DEBUG - interface - [e] : [(26, 0), (26, 2)]
2021-01-29 11:00:12,795 DEBUG - interface - [r] : [(27, 0), (27, 2)]
2021-01-29 11:00:12,795 DEBUG - interface - [t] : [(28, 0), (28, 2)]
2021-01-29 11:00:12,795 DEBUG - interface - [y] : [(29, 0), (29, 2)]
2021-01-29 11:00:12,795 DEBUG - interface - [u] : [(30, 0), (30, 2)]
2021-01-29 11:00:12,795 DEBUG - interface - [i] : [(31, 0), (31, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [o] : [(32, 0), (32, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [p] : [(33, 0), (33, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [[] : [(34, 0), (34, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - []] : [(35, 0), (35, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [a] : [(38, 0), (38, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [s] : [(39, 0), (39, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [d] : [(40, 0), (40, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [f] : [(41, 0), (41, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [g] : [(42, 0), (42, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [h] : [(43, 0), (43, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [j] : [(44, 0), (44, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [k] : [(45, 0), (45, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [l] : [(46, 0), (46, 2)]
2021-01-29 11:00:12,796 DEBUG - interface - [;] : [(47, 0), (47, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - ['] : [(48, 0), (48, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [z] : [(52, 0), (52, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [x] : [(53, 0), (53, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [c] : [(54, 0), (54, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [v] : [(55, 0), (55, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [b] : [(56, 0), (56, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [n] : [(57, 0), (57, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [m] : [(58, 0), (58, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [,] : [(59, 0), (59, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [.] : [(60, 0), (60, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [/] : [(61, 0), (61, 2)]
2021-01-29 11:00:12,797 DEBUG - interface - [Q] : [(24, 1), (24, 3)]
2021-01-29 11:00:12,797 DEBUG - interface - [W] : [(25, 1), (25, 3)]
2021-01-29 11:00:12,797 DEBUG - interface - [E] : [(26, 1), (26, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [R] : [(27, 1), (27, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [T] : [(28, 1), (28, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [Y] : [(29, 1), (29, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [U] : [(30, 1), (30, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [I] : [(31, 1), (31, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [O] : [(32, 1), (32, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [P] : [(33, 1), (33, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [{] : [(34, 1), (34, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [}] : [(35, 1), (35, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [A] : [(38, 1), (38, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [S] : [(39, 1), (39, 3)]
2021-01-29 11:00:12,798 DEBUG - interface - [D] : [(40, 1), (40, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [F] : [(41, 1), (41, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [G] : [(42, 1), (42, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [H] : [(43, 1), (43, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [J] : [(44, 1), (44, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [K] : [(45, 1), (45, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [L] : [(46, 1), (46, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [:] : [(47, 1), (47, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - ["] : [(48, 1), (48, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [Z] : [(52, 1), (52, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [X] : [(53, 1), (53, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [C] : [(54, 1), (54, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [V] : [(55, 1), (55, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [B] : [(56, 1), (56, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [N] : [(57, 1), (57, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [M] : [(58, 1), (58, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [<] : [(94, 0), (59, 1), (94, 2), (59, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [>] : [(60, 1), (94, 1), (60, 3), (94, 3)]
2021-01-29 11:00:12,799 DEBUG - interface - [?] : [(61, 1), (61, 3)]
2021-01-29 11:00:12,800 DEBUG - iomediator - Set modifier <capslock> to False
2021-01-29 11:00:12,800 DEBUG - iomediator - Set modifier <numlock> to True
2021-01-29 11:00:12,801 DEBUG - interface - Grabbing hotkey: [u'<super>'] u'k'
2021-01-29 11:00:12,801 DEBUG - interface - Grabbing hotkey: [u'<shift>', u'<super>'] u'k'
2021-01-29 11:00:12,810 INFO - interface - XRecord interface thread starting
2021-01-29 11:00:12,813 INFO - service - Service now marked as running
2021-01-29 11:00:12,853 DEBUG - phrase-menu - Sorting phrase menu by usage count
2021-01-29 11:00:12,858 INFO - root - Entering main()
2021-01-29 11:00:21,574 DEBUG - service - Received mouse click - resetting buffer
2021-01-29 11:00:24,170 DEBUG - service - Raw key: u'-', modifiers: [], Key: -
2021-01-29 11:00:24,171 DEBUG - service - Window visible title: u'Mozilla Firefox', Window class: u'Navigator.Firefox'
2021-01-29 11:00:24,171 DEBUG - service - No phrase/script matched hotkey
2021-01-29 11:00:24,171 DEBUG - service - Input stack at end of handle_keypress: [u'-']
2021-01-29 11:00:24,355 DEBUG - service - Raw key: u'-', modifiers: [], Key: -
2021-01-29 11:00:24,356 DEBUG - service - Window visible title: u'Mozilla Firefox', Window class: u'Navigator.Firefox'
2021-01-29 11:00:24,356 DEBUG - service - No phrase/script matched hotkey
2021-01-29 11:00:24,356 DEBUG - service - Input stack at end of handle_keypress: [u'-', u'-']
2021-01-29 11:00:24,811 DEBUG - iomediator - <shift> pressed
2021-01-29 11:00:25,020 DEBUG - service - Raw key: u'.', modifiers: ['<shift>'], Key: >
2021-01-29 11:00:25,021 DEBUG - service - Window visible title: u'Mozilla Firefox', Window class: u'Navigator.Firefox'
2021-01-29 11:00:25,021 DEBUG - service - No phrase/script matched hotkey
2021-01-29 11:00:25,022 DEBUG - service - Input stack at end of handle_keypress: [u'-', u'-', u'>']
2021-01-29 11:00:25,081 DEBUG - iomediator - <shift> released
2021-01-29 11:00:25,217 DEBUG - service - Raw key: ' ', modifiers: [], Key:
2021-01-29 11:00:25,217 DEBUG - service - Window visible title: u'Mozilla Firefox', Window class: u'Navigator.Firefox'
2021-01-29 11:00:25,218 DEBUG - service - No phrase/script matched hotkey
2021-01-29 11:00:25,219 DEBUG - service - Input stack at end of handle_keypress: []
2021-01-29 11:00:25,219 DEBUG - service - Ignored locking error in handle_keypress
2021-01-29 11:00:25,220 DEBUG - iomediator - Send via event interface
2021-01-29 11:00:25,221 DEBUG - interface - Send special key: ['<backspace>']
2021-01-29 11:00:25,223 DEBUG - interface - Send special key: ['<backspace>']
2021-01-29 11:00:25,225 DEBUG - interface - Send special key: ['<backspace>']
2021-01-29 11:00:25,227 DEBUG - interface - Send special key: ['<backspace>']
2021-01-29 11:00:25,228 DEBUG - interface - Sending string: u'\u2192 '
2021-01-29 11:00:25,230 DEBUG - interface - Characters requiring remapping: [u'\u2192']
2021-01-29 11:00:25,230 DEBUG - interface - Remapping with keycodes in the range: [8, 93, 97, 103, 120, 132, 149, 154, 168, 178, 183, 184, 197, 202]
2021-01-29 11:00:25,232 DEBUG - interface - Ignored keymap change event
2021-01-29 11:00:25,236 DEBUG - interface - Recorded keymap change event
2021-01-29 11:00:25,436 DEBUG - interface - Ungrabbing hotkey: [u'<super>'] u'k'
2021-01-29 11:00:25,437 DEBUG - interface - Ignored keymap change event
2021-01-29 11:00:25,437 DEBUG - interface - Ignored keymap change event
2021-01-29 11:00:25,437 DEBUG - interface - Ignored keymap change event
2021-01-29 11:00:25,440 DEBUG - interface - Ignored keymap change event
2021-01-29 11:00:25,441 DEBUG - interface - Ignored keymap change event
2021-01-29 11:00:25,441 DEBUG - interface - Ignored keymap change event
2021-01-29 11:00:25,441 DEBUG - interface - Ignored keymap change event
2021-01-29 11:00:25,441 DEBUG - interface - Ignored keymap change event
2021-01-29 11:00:25,460 DEBUG - interface - Ungrabbing hotkey: [u'<shift>', u'<super>'] u'k'
2021-01-29 11:00:25,514 DEBUG - interface - Modifier masks: {'<capslock>': 2, '<meta>': 8, '<alt_gr>': 128, '<numlock>': 16, '<hyper>': 64, '<ctrl>': 4, '<shift>': 1, '<alt>': 8, '<super>': 64}
2021-01-29 11:00:25,549 DEBUG - interface - Alt-Grid: XK_Alt_R, 65514
2021-01-29 11:00:25,549 DEBUG - interface - [(92, 0), (92, 2), (92, 4)]
2021-01-29 11:00:25,549 DEBUG - interface - X Server Keymap
2021-01-29 11:00:25,549 DEBUG - interface - [\] : [(51, 0), (51, 2), (51, 4)]
2021-01-29 11:00:25,549 DEBUG - interface - [|] : [(51, 1), (51, 3), (94, 4), (51, 5), (94, 6)]
2021-01-29 11:00:25,550 DEBUG - interface - [`] : [(49, 0), (49, 2), (49, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [1] : [(10, 0), (10, 2), (10, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [2] : [(11, 0), (11, 2), (11, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [3] : [(12, 0), (12, 2), (12, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [4] : [(13, 0), (13, 2), (13, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [5] : [(14, 0), (14, 2), (14, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [6] : [(15, 0), (15, 2), (15, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [7] : [(16, 0), (16, 2), (16, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [8] : [(17, 0), (17, 2), (17, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [9] : [(18, 0), (18, 2), (18, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [0] : [(19, 0), (19, 2), (19, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [-] : [(20, 0), (20, 2), (20, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [=] : [(21, 0), (21, 2), (21, 4)]
2021-01-29 11:00:25,550 DEBUG - interface - [~] : [(49, 1), (49, 3), (49, 5)]
2021-01-29 11:00:25,550 DEBUG - interface - [!] : [(10, 1), (10, 3), (10, 5)]
2021-01-29 11:00:25,550 DEBUG - interface - [@] : [(11, 1), (11, 3), (11, 5)]
2021-01-29 11:00:25,550 DEBUG - interface - [#] : [(12, 1), (12, 3), (12, 5)]
2021-01-29 11:00:25,550 DEBUG - interface - [$] : [(13, 1), (13, 3), (13, 5)]
2021-01-29 11:00:25,550 DEBUG - interface - [%] : [(14, 1), (14, 3), (14, 5)]
2021-01-29 11:00:25,550 DEBUG - interface - [^] : [(15, 1), (15, 3), (15, 5)]
2021-01-29 11:00:25,551 DEBUG - interface - [&] : [(16, 1), (16, 3), (16, 5)]
2021-01-29 11:00:25,551 DEBUG - interface - [*] : [(17, 1), (17, 3), (17, 5)]
2021-01-29 11:00:25,551 DEBUG - interface - [(] : [(187, 0), (18, 1), (187, 2), (18, 3), (187, 4), (18, 5)]
2021-01-29 11:00:25,551 DEBUG - interface - [)] : [(188, 0), (19, 1), (188, 2), (19, 3), (188, 4), (19, 5)]
2021-01-29 11:00:25,551 DEBUG - interface - [q] : [(24, 0), (24, 2), (24, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [w] : [(25, 0), (25, 2), (25, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [e] : [(26, 0), (26, 2), (26, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [r] : [(27, 0), (27, 2), (27, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [t] : [(28, 0), (28, 2), (28, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [y] : [(29, 0), (29, 2), (29, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [u] : [(30, 0), (30, 2), (30, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [i] : [(31, 0), (31, 2), (31, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [o] : [(32, 0), (32, 2), (32, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [p] : [(33, 0), (33, 2), (33, 4)]
2021-01-29 11:00:25,551 DEBUG - interface - [[] : [(34, 0), (34, 2), (34, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - []] : [(35, 0), (35, 2), (35, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [a] : [(38, 0), (38, 2), (38, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [s] : [(39, 0), (39, 2), (39, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [d] : [(40, 0), (40, 2), (40, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [f] : [(41, 0), (41, 2), (41, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [g] : [(42, 0), (42, 2), (42, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [h] : [(43, 0), (43, 2), (43, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [j] : [(44, 0), (44, 2), (44, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [k] : [(45, 0), (45, 2), (45, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [l] : [(46, 0), (46, 2), (46, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [;] : [(47, 0), (47, 2), (47, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - ['] : [(48, 0), (48, 2), (48, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [z] : [(52, 0), (52, 2), (52, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [x] : [(53, 0), (53, 2), (53, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [c] : [(54, 0), (54, 2), (54, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [v] : [(55, 0), (55, 2), (55, 4)]
2021-01-29 11:00:25,552 DEBUG - interface - [b] : [(56, 0), (56, 2), (56, 4)]
2021-01-29 11:00:25,553 DEBUG - interface - [n] : [(57, 0), (57, 2), (57, 4)]
2021-01-29 11:00:25,553 DEBUG - interface - [m] : [(58, 0), (58, 2), (58, 4)]
2021-01-29 11:00:25,553 DEBUG - interface - [,] : [(59, 0), (59, 2), (59, 4)]
2021-01-29 11:00:25,553 DEBUG - interface - [.] : [(60, 0), (60, 2), (60, 4)]
2021-01-29 11:00:25,553 DEBUG - interface - [/] : [(61, 0), (61, 2), (61, 4)]
2021-01-29 11:00:25,553 DEBUG - interface - [Q] : [(24, 1), (24, 3), (24, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [W] : [(25, 1), (25, 3), (25, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [E] : [(26, 1), (26, 3), (26, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [R] : [(27, 1), (27, 3), (27, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [T] : [(28, 1), (28, 3), (28, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [Y] : [(29, 1), (29, 3), (29, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [U] : [(30, 1), (30, 3), (30, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [I] : [(31, 1), (31, 3), (31, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [O] : [(32, 1), (32, 3), (32, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [P] : [(33, 1), (33, 3), (33, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [{] : [(34, 1), (34, 3), (34, 5)]
2021-01-29 11:00:25,553 DEBUG - interface - [}] : [(35, 1), (35, 3), (35, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [A] : [(38, 1), (38, 3), (38, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [S] : [(39, 1), (39, 3), (39, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [D] : [(40, 1), (40, 3), (40, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [F] : [(41, 1), (41, 3), (41, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [G] : [(42, 1), (42, 3), (42, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [H] : [(43, 1), (43, 3), (43, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [J] : [(44, 1), (44, 3), (44, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [K] : [(45, 1), (45, 3), (45, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [L] : [(46, 1), (46, 3), (46, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [:] : [(47, 1), (47, 3), (47, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - ["] : [(48, 1), (48, 3), (48, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [Z] : [(52, 1), (52, 3), (52, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [X] : [(53, 1), (53, 3), (53, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [C] : [(54, 1), (54, 3), (54, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [V] : [(55, 1), (55, 3), (55, 5)]
2021-01-29 11:00:25,554 DEBUG - interface - [B] : [(56, 1), (56, 3), (56, 5)]
2021-01-29 11:00:25,555 DEBUG - interface - [N] : [(57, 1), (57, 3), (57, 5)]
2021-01-29 11:00:25,555 DEBUG - interface - [M] : [(58, 1), (58, 3), (58, 5)]
2021-01-29 11:00:25,555 DEBUG - interface - [<] : [(94, 0), (59, 1), (94, 2), (59, 3), (59, 5)]
2021-01-29 11:00:25,555 DEBUG - interface - [>] : [(60, 1), (94, 1), (60, 3), (94, 3), (60, 5)]
2021-01-29 11:00:25,555 DEBUG - interface - [?] : [(61, 1), (61, 3), (61, 5)]
```
## Notes
---
| closed | 2021-01-29T08:05:15Z | 2022-12-10T19:46:05Z | https://github.com/autokey/autokey/issues/507 | [
"bug",
"autokey-gtk",
"phrase expansion"
] | aphilas | 12 |
pandas-dev/pandas | pandas | 60,384 | DOC: Missing type hint for squeeze method | ### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://github.com/pandas-dev/pandas/blob/main/pandas/core/generic.py
### Documentation problem
The squeeze method is missing a type hint.
### Suggested fix for documentation
Adding a type hint to the squeeze method to be consistent with the rest of the code. | closed | 2024-11-21T06:33:11Z | 2024-11-26T21:28:40Z | https://github.com/pandas-dev/pandas/issues/60384 | [
"Typing"
] | JessJohn0 | 1 |
microsoft/nni | tensorflow | 5,726 | Mismatched hyperparameters between web server display and their actual values | **Describe the issue**:
**Environment**:
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04.4 LTS (GNU/Linux 5.13.0-30-generic x86_64)
- Server OS (for remote mode only):
- Python version: 3.11
- PyTorch/TensorFlow version: 2.1.2
- Is conda/virtualenv/venv used?: Conda
- Is running in Docker?: No
**Configuration**:
- Experiment config (remember to remove secrets!):
```yaml
experimentName: MRNN hyper-param searching
authorName: WenjieDu
trialConcurrency: 1
trainingServicePlatform: local
searchSpacePath: MRNN_ETTm1_tuning_space.json
multiThread: true
useAnnotation: false
tuner:
builtinTunerName: Random
trial:
command: enable_tuning=1 pypots-cli tuning --model pypots.imputation.MRNN --train_set ../../data/ettm1/train.h5 --val_set ../../data/ettm1/val.h5
codeDir: .
gpuNum: 1
localConfig:
useActiveGpu: true
maxTrialNumPerGpu: 20
gpuIndices: 3
```
- Search space:
```json
{
"n_steps": {"_type":"choice","_value":[60]},
"n_features": {"_type":"choice","_value":[7]},
"patience": {"_type":"choice","_value":[10]},
"epochs": {"_type":"choice","_value":[200]},
"rnn_hidden_size": {"_type":"choice","_value":[16,32,64,128,256,512]},
"lr":{"_type":"loguniform","_value":[0.0001,0.01]}
}
```
**Log message**:
- nnimanager.log:
```
[2023-12-27 16:16:42] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 7,
hyperParameters: {
value: '{"parameter_id": 7, "parameter_source": "algorithm", "parameters": {"n_steps": 60, "n_features": 7, "patience": 10, "epochs": 200, "rnn_hidden_size": 32, "lr": 0.0008698020401037771}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2023-12-27 16:16:42] INFO (LocalV3.local) Created trial XsB6F
```
- dispatcher.log:
```
[2023-12-27 16:15:06] INFO (numexpr.utils/MainThread) Note: detected 128 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2023-12-27 16:15:06] INFO (numexpr.utils/MainThread) Note: NumExpr detected 128 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
[2023-12-27 16:15:06] INFO (numexpr.utils/MainThread) NumExpr defaulting to 8 threads.
[2023-12-27 16:15:06] INFO (nni.tuner.random/MainThread) Using random seed 220808582
[2023-12-27 16:15:06] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started
[2023-12-27 16:15:06] INFO (nni.runtime.msg_dispatcher/Thread-1 (command_queue_worker)) Initial search space: {'n_steps': {'_type': 'choice', '_value': [60]}, 'n_features': {'_type': 'choice', '_value': [7]}, 'patience': {'_type': 'choice', '_value': [10]}, 'epochs': {'_type': 'choice', '_value': [200]}, 'rnn_hidden_size': {'_type': 'choice', '_value': [16, 32, 64, 128, 256, 512]}, 'lr': {'_type': 'loguniform', '_value': [0.0001, 0.01]}}
```
- nnictl stdout and stderr:
```
2023-12-27 16:16:44 [INFO]: Have set the random seed as 2204 for numpy and pytorch.
2023-12-27 16:16:44 [INFO]: The tunner assigns a new group of params: {'n_steps': 60, 'n_features': 7, 'patience': 10, 'epochs': 200, 'rnn_hidden_size': 256, 'lr': 0.0054442307300676335}
2023-12-27 16:16:45 [INFO]: No given device, using default device: cuda
2023-12-27 16:16:45 [WARNING]: ‼️ saving_path not given. Model files and tensorboard file will not be saved.
2023-12-27 16:16:48 [INFO]: MRNN initialized with the given hyperparameters, the number of trainable parameters: 401,619
2023-12-27 16:16:48 [INFO]: Option lazy_load is set as False, hence loading all data from file...
2023-12-27 16:16:52 [INFO]: Epoch 001 - training loss: 1.3847, validating loss: 1.3214
```
**How to reproduce it?**:
Note that in nnimanager.log, the `lr` of trial XsB6F is `0.0008698020401037771`, which is also the value displayed on the local web page, but in the nnictl stdout log the actual `lr` received by the model is `0.0054442307300676335`, so they are mismatched. This is not an isolated case: I notice that the hyperparameters of some trials differ between what nnimanager reports and their actual values, while others match and are fine. | open | 2023-12-27T09:33:25Z | 2024-07-16T03:02:25Z | https://github.com/microsoft/nni/issues/5726 | [] | WenjieDu | 4 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,305 | Song | Today is Saturday; give the coordinates, because today we're heading out | open | 2024-07-19T17:25:52Z | 2024-07-19T17:25:52Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1305 | [] | 4ndr32xd | 0 |
numba/numba | numpy | 9,029 | Segfault when using assert with parallel=True | The following code triggers a segfault. Tested on Ubuntu 22.04 with Numba 0.56.4 and Numba 0.57.0.
```python
from numba import njit
import numpy as np
@njit(parallel=True)
def foo():
n = 1000
for _ in range(n):
for _ in range(n):
values = np.zeros(3)
np.max(values)
assert n >= 0
foo()
```
- [x] I have tried using the latest released version of Numba (most recent is
visible in the change log (https://github.com/numba/numba/blob/main/CHANGE_LOG).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'. | closed | 2023-06-21T13:41:57Z | 2023-12-01T11:29:53Z | https://github.com/numba/numba/issues/9029 | [
"ParallelAccelerator",
"bug - segfault"
] | 99991 | 8 |
Yorko/mlcourse.ai | seaborn | 360 | Add demo gif to README | __Disclaimer: This is a bot__
It looks like your repo is trending. The [github_trending_videos](https://www.instagram.com/github_trending_videos/) Instagram account automatically shows the demo gifs of trending repos on GitHub.
Your README doesn't seem to have any demo gifs. Add one, and the next time the parser runs it will pick it up and post it to its Instagram feed. If you don't want this, just close this issue and we won't bother you again.
"invalid"
] | va3093 | 0 |
skforecast/skforecast | scikit-learn | 629 | Custom predictors are inefficient for window features | The way that custom predictors are calculated in `ForecasterAutoregCustom` is potentially inefficient.
See the [relevant code snippet](https://github.com/JoaquinAmatRodrigo/skforecast/blob/270dcb26923eec62f8be827422de30a16bfccbd4/skforecast/ForecasterAutoregCustom/ForecasterAutoregCustom.py#L387C2-L396C36).
The features are computed in a for loop and appended to a list:
```Python
X_train = []
y_train = []
for i in range(len(y) - self.window_size):
train_index = np.arange(i, self.window_size + i)
test_index = self.window_size + i
X_train.append(self.fun_predictors(y=y_values[train_index]))
y_train.append(y_values[test_index])
```
Whilst this is very generic, it is inefficient for the most common use case of window features (e.g., using the rolling mean, standard deviation, etc. at various different window sizes). There are optimised ways for computing these rolling statistics which take the whole series as input and outputs all the features across time (not just for one time period as required by the current implementation of `ForecasterAutoregCustom`).
Pandas and Numpy have efficient implementations of rolling statistics. They can also be further speeded up with Numba (see the window ops stuff [here](https://nixtlaverse.nixtla.io/mlforecast/docs/how-to-guides/lag_transforms_guide.html)).
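For illustration (a sketch, not skforecast's API): a rolling statistic can be computed for every time step in one vectorized pass over the whole series, instead of recomputing it window by window inside a Python loop:

```python
import numpy as np
import pandas as pd

y = pd.Series(np.arange(10, dtype=float))
window_size = 3
# Rolling mean over a window of 3, shifted by 1 so each row only uses past
# values; computed for all time steps at once rather than one per iteration.
rolling_mean = y.rolling(window_size).mean().shift(1)
```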
Some options I think that would be helpful for `skforecast` could be
1) offer an additional way to derive features from the target variable so a user can supply a more efficient way of calculating window features
2) integrate window features in the API itself in a similar way that lag features are so that a more efficient way of calculating them can be included under the hood
On a side note, in my opinion it is counterintuitive that `ForecasterAutoreg` supports a `lags` argument while `ForecasterAutoregCustom` requires the user to create the lag features themselves in the custom function they provide.
Best wishes,
Kishan | closed | 2024-01-26T17:35:45Z | 2024-11-12T10:14:25Z | https://github.com/skforecast/skforecast/issues/629 | [] | KishManani | 7 |
scikit-optimize/scikit-optimize | scikit-learn | 1182 | Inclusion of PRIMA solvers in Scikit-Optimize | Dear Scikit-Optimize maintainers,
This is Dr. Zaikun Zhang from The Hong Kong Polytechnic University. Together with Professor [N.I.M. Gould](https://www.numerical.rl.ac.uk/people/nimg/), I am responsible for maintaining the renowned derivative-free optimization solvers of the late Professor [M.J.D. Powell](https://www.zhangzk.net/powell.html), namely COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA. I am the author of [PRIMA](http://www.libprima.net/), which provides the reference implementation for these solvers. They are widely used by engineers and scientists. For instance, see Section 1 of [a recent paper on Powell's solvers](https://arxiv.org/pdf/2302.13246.pdf) as well as the Google searches of [COBYLA](https://www.google.com/search?q=cobyla) and [BOBYQA](https://www.google.com/search?q=bobyqa).
Since your package is exactly oriented to derivative-free / black-box optimization, it might be desirable to include the PRIMA solvers. I will be happy to assist on the Fortran side if you would like to do so.
Note that, even though the old Fortran 77 implementation of the aforementioned solvers is truly a masterpiece, it contains many bugs (mostly due to the language itself), which can lead to segmentation faults or infinite loops. For example, see [Section 4.4 of the above paper](https://arxiv.org/pdf/2302.13246.pdf) and [many GitHub issues](https://github.com/libprima/prima#bug-fixes). It is strongly discouraged to use the Fortran 77 version of these solvers anymore.
Thanks and regards,
Zaikun ZHANG
Ph.D. and Assistant Professor
Dept. App. Math., Hong Kong Polytechnic University | open | 2023-09-20T02:55:53Z | 2023-10-03T11:48:12Z | https://github.com/scikit-optimize/scikit-optimize/issues/1182 | [] | zaikunzhang | 5 |
plotly/dash | data-science | 3,143 | Accessing prop_name/component_name via Deprecated loading_state in Dash 3.0 | In the Dash 3.0, the `loading_state` attribute has been deprecated/removed. In previous versions, we relied on accessing `prop_name` and `component_name` through `loading_state` to implement custom loading animation components that matched component-specific behaviors. | closed | 2025-01-30T02:46:55Z | 2025-01-30T11:20:14Z | https://github.com/plotly/dash/issues/3143 | [] | CNFeffery | 2 |
apify/crawlee-python | web-scraping | 295 | HTTP API for Spider | `Scrapy` offers an HTTP API through a third-party library called `ScrapyRT`, which exposes an HTTP API for spiders. By sending a request to `ScrapyRT` with the spider name and URL, you receive the items collected by the spider from that URL.
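For illustration (hypothetical names only; this is not ScrapyRT's or Crawlee's actual API), the shape of such a service is a thin HTTP dispatch layer over a crawl function:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Hypothetical spider registry: name -> callable returning scraped items.
SPIDERS = {"quotes": lambda url: [{"url": url, "text": "stub item"}]}

def run_spider(name: str, url: str) -> list:
    """Dispatch to the named spider and return the items it collected."""
    if name not in SPIDERS:
        raise ValueError(f"unknown spider {name!r}")
    return SPIDERS[name](url)

class SpiderAPI(BaseHTTPRequestHandler):
    # GET /crawl?spider=quotes&url=https://example.com
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        items = run_spider(query["spider"][0], query["url"][0])
        body = json.dumps({"items": items}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 9080), SpiderAPI).serve_forever()  # run the service
```

A built-in version would presumably replace the stub registry with real crawler instances.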
It would be great if `Crawlee` could provide similar functionality out of the box. It's like turning a website into a real-time API. | closed | 2024-07-14T04:02:07Z | 2024-09-02T03:43:32Z | https://github.com/apify/crawlee-python/issues/295 | [
"enhancement",
"t-tooling"
] | Ehsan-U | 1 |
mage-ai/mage-ai | data-science | 5,487 | [BUG] Plots in notebooks are shown only once (first time) | ### Mage version
v0.9.74
### Describe the bug
Plots (e.g., from matplotlib.pyplot) in Python notebooks are shown only briefly, and only the first time; after that the plot is missing. The same happens on https://demo.mage.ai
### To reproduce
_No response_
### Expected behavior
_No response_
### Screenshots
Plots are shown all the time.
### Operating system
_No response_
### Additional context
_No response_ | open | 2024-10-10T10:35:29Z | 2024-10-10T10:36:22Z | https://github.com/mage-ai/mage-ai/issues/5487 | [
"bug"
] | kzemis | 0 |
psf/black | python | 4,138 | f-string with internal same-quote expression | Verified with https://black.vercel.app/?version=main
**Describe the bug**
Black fails to parse format strings (f-strings) where an expression reuses the quote character of the enclosing string.
**To Reproduce**
Default black settings with either of these
```python
f"{""}"
```
**Error**
> cannot use --safe with this file; failed to parse source file AST: f-string: expecting '}' (<unknown>, line 1)
This could be caused by running Black with an older Python version that does not support new syntax used in your source file.
```python
f"{'"'}"
```
**Error**
> Cannot parse: 1:5: f"{'"'}"
```python
f'''{'''"""'''}'''
```
**Error**
> Cannot parse: 1:8: EOF in multi-line string
**Expected behavior**
The code can be parsed successfully, just as Python `3.10`, `3.11`, and `3.12` can successfully evaluate the code.
**Environment**
<!-- Please complete the following information: -->
- Black's version: `main` and `23.10.1`
- OS and Python version: Python `3.10`, `3.11`, and `3.12` on `Ubuntu 22.04.2` in **WSL**
**Additional context**
Using `f"""{""}"""` works fine so long as the triple quotes aren't present internally.
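For reference, the same-quote form requires the PEP 701 tokenizer (Python 3.12+); these equivalent spellings parse on every version:

```python
# f"{""}" reuses the outer quote inside the expression, which only the
# PEP 701 tokenizer (Python 3.12+) accepts.  These spellings are portable:
a = f"{''}"          # switch the inner quote style
b = f"""{""}"""      # widen the outer quotes so the inner " is distinct
assert a == "" and b == ""
```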
| closed | 2023-12-31T17:39:43Z | 2023-12-31T21:48:42Z | https://github.com/psf/black/issues/4138 | [
"T: bug"
] | quittle | 1 |
Zeyi-Lin/HivisionIDPhotos | fastapi | 51 | Error when running the demo commands | My Python environment is **3.10.14**
The `.onnx` files have already been placed in the root directory; I haven't tracked down the problem yet. Any guidance would be appreciated.
Running `python inference.py -i images/test.jpg -o ./idphoto.png -s '(413,295)'`
```
正在加载抠图模型...
抠图采用本地模型
image_matting 函数花费的时间为 0.22.
face_number_detection_mtcnn 函数花费的时间为 0.32.
Traceback (most recent call last):
File "C:\Users\49201\working\HivisionIDPhotos\inference.py", line 57, in <module>
) = IDphotos_create(
File "C:\Users\49201\working\HivisionIDPhotos\hivisionai\hycv\vision.py", line 19, in wrapper
return_param = func(*args, **kw)
File "C:\Users\49201\working\HivisionIDPhotos\src\face_judgement_align.py", line 638, in IDphotos_create
result_image_hd, result_image_standard, clothing_params = idphoto_cutting(
File "C:\Users\49201\working\HivisionIDPhotos\hivisionai\hycv\vision.py", line 36, in wrapper
return_param = func(*args, **kw)
File "C:\Users\49201\working\HivisionIDPhotos\src\face_judgement_align.py", line 376, in idphoto_cutting
width_height_ratio = standard_size[0] / standard_size[1] # 高宽比
TypeError: unsupported operand type(s) for /: 'str' and 'str'
```
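The `TypeError` suggests the `-s '(413,295)'` argument reaches `standard_size` as a raw string. A minimal, hypothetical sketch (not the project's actual code) of parsing it into an integer tuple before use:

```python
import ast

def parse_size(arg: str) -> tuple:
    # "(413,295)" -> (413, 295); validates that it is a pair of ints.
    value = ast.literal_eval(arg)
    if (not isinstance(value, tuple) or len(value) != 2
            or not all(isinstance(v, int) for v in value)):
        raise ValueError(f"expected '(height,width)', got {arg!r}")
    return value

print(parse_size("(413,295)"))
```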
Running `python inference.py -t add_background -i ./idphoto.png -o ./idhoto_ab.jpg -c '(0,0,0)' -k 30`
```
正在加载抠图模型...
[ WARN:0@0.620] global loadsave.cpp:244 cv::findDecoder imread_('./idphoto.png'): can't open/read file: check file path/integrity
Traceback (most recent call last):
File "C:\Users\49201\working\HivisionIDPhotos\inference.py", line 89, in <module>
result_image = add_background(input_image, bgr=color)
File "C:\Users\49201\working\HivisionIDPhotos\hivisionai\hycv\vision.py", line 302, in add_background
height, width = input_image.shape[0], input_image.shape[1]
AttributeError: 'NoneType' object has no attribute 'shape'
```
Running `python inference.py -t generate_layout_photos -i ./idhoto_ab.jpg -o ./idhoto_layout.jpg -s '(413,295)' -k 200`
```
正在加载抠图模型...
[ WARN:0@0.621] global loadsave.cpp:244 cv::findDecoder imread_('./idhoto_ab.jpg'): can't open/read file: check file path/integrity
Traceback (most recent call last):
File "C:\Users\49201\working\HivisionIDPhotos\inference.py", line 106, in <module>
typography_arr, typography_rotate = generate_layout_photo(
File "C:\Users\49201\working\HivisionIDPhotos\src\layoutCreate.py", line 70, in generate_layout_photo
layout_mode, centerBlockWidth, centerBlockHeight = judge_layout(input_width, input_height, PHOTO_INTERVAL_W,
File "C:\Users\49201\working\HivisionIDPhotos\src\layoutCreate.py", line 12, in judge_layout
centerBlockHeight_temp = input_height * i + PHOTO_INTERVAL_H * (i-1)
TypeError: can only concatenate str (not "int") to str
``` | closed | 2024-09-05T03:17:01Z | 2024-09-05T06:14:10Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/51 | [] | shen774411223d | 5 |
Integuru-AI/Integuru | api | 4 | two factor authentication | Can you please add a note about the workflow required when the destination site uses 2FA? | closed | 2024-10-29T20:13:33Z | 2024-11-21T03:13:57Z | https://github.com/Integuru-AI/Integuru/issues/4 | [] | lockmeister | 2
ivy-llc/ivy | pytorch | 28,566 | Fix Frontend Failing Test: jax - reduction_ops.torch.mean | closed | 2024-03-12T16:55:36Z | 2024-03-16T15:28:11Z | https://github.com/ivy-llc/ivy/issues/28566 | [
"Sub Task"
] | ZenithFlux | 0 | |
miguelgrinberg/flasky | flask | 225 | Flask_script issue | I cloned flasky and installed Flask-Script, but the terminal said `No module named flask_script`.
| closed | 2016-12-28T12:31:36Z | 2016-12-29T14:31:01Z | https://github.com/miguelgrinberg/flasky/issues/225 | [
"question"
] | Insofan | 5 |
harry0703/MoneyPrinterTurbo | automation | 156 | huggingface_hub.utils._errors. | huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the files on the Hub and we cannot find the appropriate snapshot folder for the specified revision on the local disk. Please check your internet connection and try again. | closed | 2024-04-03T04:55:18Z | 2024-04-08T02:46:02Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/156 | [] | AIzhiqing | 1 |
aiortc/aiortc | asyncio | 1016 | P2P RTC connection using a STUN server. | I'm trying to make an echo P2P connection using a STUN server.
I understood that it should go like this:
Both clients need to create an RTCPeerConnection.
Each client creates a data channel on their RTCPeerConnection.
Each client creates an offer, sets it as their local description, and sends it to the server.
The server forwards the offer from one client to the other.
Upon receiving an offer, a client sets it as their remote description, creates an answer, sets it as their local description, and sends it back to the server.
The server forwards the answer from one client to the other.
Upon receiving an answer, a client sets it as their remote description.
and I'm getting an error.
```error
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\*****\AppData\Local\Programs\Python\Python39\lib\threading.py", line 980, in _bootstrap_inner
self.run()
File "C:\Users\*****\AppData\Local\Programs\Python\Python39\lib\threading.py", line 917, in run
self._target(*self._args, **self._kwargs)
File "C:\*****\rtc_tests\c.py", line 27, in run
loop.run_until_complete(self.client())
File "C:\Users\*****\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 647, in run_until_complete
return future.result()
File "C:\*****\rtc_tests\c.py", line 62, in client
await pc.setRemoteDescription(RTCSessionDescription(sdp=offer_sdp, type="offer"))
File "C:\*****\venv\lib\site-packages\aiortc\rtcpeerconnection.py", line 827, in setRemoteDescription
self.__validate_description(description, is_local=False)
File "C:\*****\venv\lib\site-packages\aiortc\rtcpeerconnection.py", line 1244, in __validate_description
raise InvalidStateError(
aiortc.exceptions.InvalidStateError: Cannot handle offer in signaling state "have-local-offer"
```
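A likely cause: both clients run the same script, so both call `createOffer()`, and each then receives the other's offer while already in `have-local-offer`. A minimal sketch of letting only one peer offer (the `signaling` object with `send`/`recv` is hypothetical; the `pc` calls are the standard RTCPeerConnection ones):

```python
async def negotiate(pc, signaling, is_offerer: bool):
    """Run one side of offer/answer; only the offerer calls createOffer()."""
    if is_offerer:
        await pc.setLocalDescription(await pc.createOffer())
        await signaling.send(pc.localDescription)              # -> other peer
        await pc.setRemoteDescription(await signaling.recv())  # their answer
    else:
        await pc.setRemoteDescription(await signaling.recv())  # their offer
        await pc.setLocalDescription(await pc.createAnswer())
        await signaling.send(pc.localDescription)              # -> other peer
```

The server (or a simple rule such as "first client to connect offers") would assign `is_offerer`, so the two peers never sit in `have-local-offer` at the same time.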
Here is my server that relays the data between the two clients:
```py
import asyncio
import websockets
from aiortc import RTCPeerConnection, RTCSessionDescription, RTCIceServer, RTCConfiguration
clients = set()
async def handle_client(websocket, path):
print(f"Connection established from {websocket.remote_address}")
clients.add(websocket)
other_client = None
while other_client is None:
await asyncio.sleep(0.1)
for client in clients:
if client != websocket and client.open:
other_client = client
break
print(f"Found other client: {other_client.remote_address}")
data = await websocket.recv()
await other_client.send(data)
print('send first set of data')
data = await websocket.recv()
await other_client.send(data)
print("Connection established")
# Keep the connection open until it is closed
await asyncio.Event().wait()
print(f"Connection closed by {websocket.remote_address}")
async def server():
server = await websockets.serve(handle_client, "localhost", 12345)
print("WebSocket server listening on ws://localhost:12345")
try:
await asyncio.Future() # Run the server indefinitely
except KeyboardInterrupt:
print("Server terminated by user.")
server.close()
await server.wait_closed()
if __name__ == "__main__":
asyncio.run(server())
```
Here is the client:
```py
import asyncio
import websockets
import threading
import queue
from aiortc import RTCPeerConnection, RTCSessionDescription, RTCConfiguration, RTCIceServer
class SyncClient:
def __init__(self):
self.loop = asyncio.new_event_loop()
self.queue = queue.Queue()
self.thread = threading.Thread(target=self.run, args=(self.loop,))
self.channel = None
def start(self):
self.thread.start()
def stop(self):
self.loop.call_soon_threadsafe(self.loop.stop)
self.thread.join()
def send_message(self, message):
self.queue.put(message)
def run(self, loop):
asyncio.set_event_loop(loop)
loop.run_until_complete(self.client())
loop.close()
def on_datachannel(self, channel):
self.channel = channel
channel.on("message", self.on_message)
print("Data channel created successfully")
def on_message(self, message):
print(f"Received message from server: {message}")
async def client(self):
uri = "ws://localhost:12345"
async with websockets.connect(uri) as websocket:
print(f"Connected to {uri}")
# Create a new RTCPeerConnection
configuration = RTCConfiguration(
iceServers=[
RTCIceServer(urls=["stun:stun.l.google.com:19302"]),
]
)
pc = RTCPeerConnection(configuration=configuration)
# Create a data channel
pc.createDataChannel("data", ordered=True)
pc.on("datachannel", self.on_datachannel)
# Create an offer
offer = await pc.createOffer()
await pc.setLocalDescription(offer)
# Send the offer to the server
await websocket.send(offer.sdp)
# Receive the offer from the server
offer_sdp = await websocket.recv()
await pc.setRemoteDescription(RTCSessionDescription(sdp=offer_sdp, type="offer"))
print('send first set of data')
# Create and send the answer to the server
answer = await pc.createAnswer()
await pc.setLocalDescription(answer)
print(f"Sending answer to server: {pc.localDescription.sdp}")
await websocket.send(pc.localDescription.sdp)
# Receive the answer from the server
answer_sdp = await websocket.recv()
await pc.setRemoteDescription(RTCSessionDescription(sdp=answer_sdp, type="answer"))
# Send a message to the server
while self.channel is None:
await asyncio.sleep(0.1)
print("starting to send queued messages")
while True:
try:
message = self.queue.get(block=False)
self.channel.send(message)
except queue.Empty:
await asyncio.sleep(0.1)
if __name__ == "__main__":
client = SyncClient()
client.start()
while True:
msg = input("Enter message: ")
if msg == "quit":
break
client.send_message(msg)
client.stop()
``` | closed | 2023-12-23T22:03:00Z | 2024-01-07T09:35:04Z | https://github.com/aiortc/aiortc/issues/1016 | [] | Godwhitelight | 0 |
yunjey/pytorch-tutorial | pytorch | 78 | RuntimeError: cuda runtime error (30) : unknown error at /opt/conda/conda-bld/pytorch_1501972792122/work/pytorch-0.1.12/torch/lib/THC/THCGeneral.c:66 | I get this error upon executing the [rnn.cuda()](https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/02-intermediate/recurrent_neural_network/main-gpu.py#L59) command. Previous CUDA-based PyTorch code was running fine. Could it be a bug in pytorch (version 0.1.12_1)? | open | 2017-11-06T08:57:22Z | 2019-02-24T12:49:18Z | https://github.com/yunjey/pytorch-tutorial/issues/78 | [] | mgarbade | 2
fastapi/sqlmodel | pydantic | 252 | how to auto generate created_at, updated_at, deleted_at... field with SQLModel | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
> I need some sample codes, thank you
```
### Description
I want to have, let's say, three extra columns for created_time, updated_time, deleted_time; their values are set by different operations, just as the column names suggest.
I'm new to ORM, and SQLAlchemy seems to support this.
How to achieve this using SQLModel?
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
Python 3.9.9
### Additional Context
_No response_ | closed | 2022-02-26T08:38:00Z | 2025-02-09T10:37:21Z | https://github.com/fastapi/sqlmodel/issues/252 | [
"question"
] | mr-m0nst3r | 17 |
axnsan12/drf-yasg | django | 393 | Rendering existing OpenAPI specifications | In our project, we have a documentation-first flow. So you have to write OpenAPI specs first and only then you can start developing that feature. That's why I don't really need a "swagger generation".
I want to specify a path to the `.yml` file or a folder containing them and serve it. Does this package have such a feature?
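For context — if generation is not needed, serving a pre-written spec takes very little code. A hypothetical, framework-agnostic helper that loads a hand-written spec from a folder (a view would then just return it with `content_type="application/yaml"`):

```python
from pathlib import Path

def load_spec(spec_dir: Path, name: str) -> str:
    """Read a hand-written OpenAPI spec, refusing paths outside spec_dir."""
    base = spec_dir.resolve()
    path = (base / name).resolve()
    if base not in path.parents:
        raise ValueError(f"{name!r} resolves outside the spec folder")
    return path.read_text()
```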
Thank you! | closed | 2019-06-27T05:16:36Z | 2019-06-28T09:41:19Z | https://github.com/axnsan12/drf-yasg/issues/393 | [] | progremir | 2 |
TencentARC/GFPGAN | deep-learning | 350 | Project dependencies may have API risk issues | Hi. In **GFPGAN**, inappropriate dependency version constraints can cause risks.
Below are the dependencies and version constraints that the project is using:
```
basicsr>=1.4.2
facexlib>=0.2.5
numpy
tqdm
```
The version constraint **==** will introduce the risk of dependency conflicts because the scope of dependencies is too strict.
The version constraints **no upper bound** and **\*** introduce the risk of missing-API errors, because the latest versions of the dependencies may remove some APIs.
After further analysis, in this project,
The version constraint of dependency **tqdm** can be changed to *>=3.4.0,<=4.64.0*.
The above modification suggestions can reduce the dependency conflicts as much as possible,
and introduce the latest version as much as possible without calling Error in the projects.
The invocation of the current project includes all the following methods.
In version tqdm-3.1.4, the API **tqdm.tqdm.set_description**, which is used by the current project in **gfpgan/models/gfpgan_model.py**, is missing.
<img width="494" alt="image" src="https://user-images.githubusercontent.com/109138844/224530410-0d31f4e3-5b75-4110-8967-836f5bbb8d9a.png">
<details>
<summary>The calling methods from the tqdm</summary>
<pre>
tqdm.tqdm.close
tqdm.tqdm.update
tqdm.tqdm
tqdm.tqdm.set_description
</pre>
</details>
<details>
<summary>The calling methods from the all methods</summary>
<pre>
basicsr.archs.srvgg_arch.SRVGGNetCompact
torchvision.transforms.functional.adjust_saturation
z.permute.contiguous.detach
join
torch.nn.functional.avg_pool2d
basicsr.data.degradations.random_add_jpg_compression
basicsr.data.transforms.augment
self.model_ema
self.stylegan_decoder
NormStyleCode.extend
os.path.splitext
float
Upsample
i_block.i_level.self.down.block
conv2
crt_v.size
os.environ.get
tqdm.tqdm.close
_minimal_ext_cmd.strip
f.read.strip.split
ConstantInput
subprocess.Popen
basicsr.archs.stylegan2_arch.EqualConv2d
shutil.rmtree
z_q.permute.contiguous
self.net_g.eval
k.transpose.transpose.permute
basicsr.losses.gan_loss.r1_penalty
self.final_conv
self.opt.items
torch.nn.functional.leaky_relu
self.mid.attn_1
clean_folder
self.skip
self.ResBlock.super.__init__
y.self.fc.view
os.path.dirname
gt_gray.self.network_identity.detach
self.output.detach
self.cri_pix
main
EqualConv2d
self.net_d_left_eye
basicsr.utils.registry.MODEL_REGISTRY.register
numpy.min
self.conv_out
line.replace
numpy.array
basicsr.utils.registry.ARCH_REGISTRY.register
self.condition_scale.append
self.FFHQDegradationDataset.super.__init__
self.opt.build_network.to
self.activation
unet_skips.insert
self.k
i.self.conv_body_up.size
stylegan2_bilinear_arch.EqualConv2d
self.GFPGANv1Clean.super.__init__
gfpgan.GFPGANer.enhance
self.encode
self.net_g.train
self.to_rgbs.append
f.read.strip
self.mid.block_1
self.cri_gan
ori_k.replace.replace
cropped_face_t.unsqueeze.to.unsqueeze
torch.nn.functional.conv2d
self.proj_out.permute
torch.nn.Module
open
modify_checkpoint
self.StyleGAN2GeneratorCSFT.super.__init__
self.FacialComponentDiscriminator.super.__init__
self.bn5.view
torch.nn.ModuleList
self.face_helper.clean_all
basicsr.archs.stylegan2_arch.ScaledLeakyReLU
torch.nn.Conv2d
self.ToRGB.super.__init__
self.embedding.weight.t
real_d_pred.detach.mean
self.conv5.clone
self.down.append
self.net_d_mouth.train
self.io_backend_opt.pop
feat_gt.detach
self.n_e.indices.shape.torch.zeros.to
getattr
z.permute.contiguous.view
torch.nn.ModuleList.append
line.split
torch.split
self.cri_perceptual
self.encoder.named_parameters
torch.nn.ReLU
i.self.condition_shift.clone
basicsr.ops.fused_act.fused_leaky_relu
l_d_r1.detach.mean
self.gray_resize_for_identity.unsqueeze
torch.nn.Sigmoid
v.transpose.reshape
self.net_g_ema
self.conv5.size
torch.clamp.round
cog.Input
self.construct_img_pyramid
locals
self.network_identity
style_code.view.view
self.right_eyes.detach
self.color_jitter
NormStyleCode
torchvision.transforms.functional.adjust_hue
gfpganv1_arch.ResUpBlock
self.save_training_state
self.optimizer_d.step
torch.cat
torch.save
cv2.imwrite
half_len_mouth.mean_mouth.half_len_mouth.mean_mouth.np.hstack.astype
torch.exp
self.encoder
_minimal_ext_cmd
self.conv_body_first
self.optimizer_d_left_eye.step
torch.nn.BatchNorm1d
k.transpose.transpose.reshape
z_q.permute.contiguous.detach
torch.nn.Dropout
self.prelu
self._initialize_best_metric_results
i_level.self.up.upsample
q.transpose.mul_
v.endswith
torchvision.transforms.functional.adjust_brightness
torch.clamp
self.proj_out.view
len
hasattr
self.face_enhancer.enhance
torch.nn.LeakyReLU
self.temb_proj
self.conv2
torch.sigmoid
self.VectorQuantizer.super.__init__
self.cri_gan.backward
self.proj_out.transpose
print
self.constant_input
self.fc5
numpy.prod
self.conv4
self.net_d_right_eye.train
self.gfpgan.eval
basicsr.utils.download_util.load_file_from_url.startswith
self.n_e.min_encoding_indices.shape.torch.zeros.to
ResnetBlock
basicsr.utils.img2tensor
self.net_d.train
self.bn3
self.net_d_right_eye.parameters
self.optimizer_g.step
restored_face.astype.astype
self.Bottleneck.super.__init__
self.get_component_coordinates.append
basicsr.archs.stylegan2_arch.ConvLayer
self.proj_out.matmul
basicsr.utils.registry.DATASET_REGISTRY.register
min_encoding_indices.unsqueeze.unsqueeze
self.weight.repeat
ResBlock
weight.pow.sum
self.bn0
self.net_d_left_eye.train
self.gt.detach.cpu
half_len_right_eye.mean_right_eye.half_len_right_eye.mean_right_eye.np.hstack.astype
tqdm.tqdm.set_description
data.to
w.h.b.out.new_empty.normal_
math.log
z_q.permute.contiguous.view
i.self.condition_shift
reversed
args.input.endswith
torch.nn.BatchNorm2d
os.path.isfile
rois_eyes.torch.cat.to
self.StyleGAN2GeneratorSFT.super.__init__
out.strip.decode
torchvision.transforms.functional.normalize
torch.nn.Embedding
self.register_parameter
numpy.mean
train_opt.pop
StyleGAN2GeneratorCSFT
self.net_d
self.construct_img_pyramid.insert
ToRGB
numpy.hstack
self.network_identity.eval
i_block.i_level.self.up.block
torch.log
cropped_face_t.unsqueeze.to
self.to_rgb1
super
self.net_d.parameters
_comp_style
self.cri_component
basicsr.utils.FileClient.get
glob.glob
exec
self.nin_shortcut
self.metric_results.keys
self.EqualLinear.super.__init__
numpy.concatenate
self.left_eyes.detach
basicsr.metrics.calculate_metric
ScaledLeakyReLU
x.view.transpose
json.load
self.downsample
write_version_py
self.gray_resize_for_identity
basicsr.losses.gan_loss.r1_penalty.backward
layers.append
isinstance
self.conv1
self.optimizer_d.zero_grad
self.optimizer_d_mouth.step
warnings.warn
v.transpose.transpose
numpy.clip
i.self.toRGB
StyleGAN2GeneratorBilinearSFT
json.load.values
torch.randn
env.subprocess.PIPE.cmd.subprocess.Popen.communicate
Downsample
torch.tensor
torch.no_grad
weight.view.pow
torch.randn.append
self.face_helper.align_warp_face
sorted
checkpoint_bilinear.items
self.quantize
conv2.view
self._make_layer
q.transpose.permute
os.system
time.asctime
q.transpose.transpose
self.layer4
self.bn5
x.isdigit
f.write
z_q.permute.contiguous.permute
ori_v.size
z.permute.contiguous.permute
self.GFPGANv1.super.__init__
torch.load
get_git_hash
self.gt.detach
self.final_conv.size
cog.Path
math.sqrt.view
stylegan2_bilinear_arch.ConvLayer
self.ModulatedConv2d.super.__init__
self.embedding.weight.min_encodings.torch.matmul.view
ConvUpLayer
cv2.resize
gfpgan.archs.restoreformer_arch.RestoreFormer
torch.rsqrt.view
self.get_component_coordinates
torch.matmul
self._log_validation_metric_values
self.conv_body_down.append
k.transpose.transpose.transpose
torchvision.ops.roi_align
cv2.filter2D
i_block.i_level.self.up.attn
self.bn1
conv2.size
torch.rsqrt
self._gram_mat
self.GFPGANModel.super.__init__
gfpgan.archs.gfpgan_bilinear_arch.GFPGANBilinear
self.bn4
self.StyleGAN2GeneratorBilinear.super.__init__
x.view.bmm
hue.hue.torch.tensor.uniform_.item
self.StyleGAN2GeneratorBilinearSFT.super.__init__
self.StyleConv.super.__init__
self.net_g.named_parameters
torch.nn.functional.linear
torch.nn.PReLU
argparse.ArgumentParser
self.condition_shift.append
enumerate
train_opt.build_loss.to
self.quant_conv
self.modulation
self.net_d_right_eye
StyleConv
int
out_channels.torch.zeros.fill_
self.proj_out
torch.nn.functional.pad
self.metric_results.items
self.gfpgan.load_state_dict
self.optimizer_d_right_eye.step
VectorQuantizer
basicsr.ops.fused_act.FusedLeakyReLU
stylegan2_bilinear_arch.ResBlock
self.ScaledLeakyReLU.super.__init__
z.z_q.detach
os.path.join
self.post_quant_conv
self.n_e.indices.shape.torch.zeros.to.float
self.optimizer_d_mouth.zero_grad
self.bn5.size
self.net_d_left_eye.parameters
self.face_helper.get_inverse_affine
contrast.contrast.torch.tensor.uniform_.item
normal_params.append
self.opt.get
torch.nn.Upsample
self.face_helper.add_restored_face
os.listdir
self.SEBlock.super.__init__
os.path.basename
self.save_network
setuptools.setup
facexlib.utils.face_restoration_helper.FaceRestoreHelper
gfpgan.archs.gfpganv1_clean_arch.GFPGANv1Clean.state_dict
self._update_best_metric_result
self.layer3
self.embedding.weight.data.uniform_
self.init_training_settings
self.net_d.detach
nonlinearity
ConvLayer
torch.nn.Parameter
self.loc_left_eyes.new_full
self.ResUpBlock.super.__init__
os.path.islink
EqualLinear
f.read
self.gt_folder.endswith
self.conv5.view
self.face_helper.read_image
self.fc
self.decoder.named_parameters
torch.stack
self.conv
self.get_roi_regions
self.bg_upsampler.enhance
StyleGAN2GeneratorSFT
zip
math.sqrt
x.size
self.cri_component.backward
compile
self.get_optimizer
Normalize
self.ConstantInput.super.__init__
self.conv5
get_hash
weight.view.view
basicsr.losses.gan_loss.r1_penalty.detach
rois_mouths.torch.cat.to
self.opt.keys
math.sqrt.new_empty
self.gfpgan.squeeze
self.ConvUpLayer.super.__init__
self.optimizer_d_left_eye.zero_grad
torch.tensor.uniform_
i.self.condition_scale
self.net_g
self.se
torch.nn.GroupNorm
SEBlock
self.activate
numpy.tile
q.transpose.reshape
rois_eyes.torch.cat.to.append
self.mouths.detach
basicsr.utils.imfrombytes
numpy.max
torch.nn.MaxPool2d
tempfile.mkdtemp
tqdm.tqdm.update
torch.from_numpy
basicsr.utils.tensor2img
self.IRBlock.super.__init__
i.self.conv_body_down
range
torch.nn.functional.softmax
stylegan2_bilinear_arch.EqualLinear
basicsr.data.data_util.paths_from_folder
x.view
crt_v.view
x.self.avg_pool.view
self.reduce_loss_dict
self.up.insert
opt.get
list
os.path.exists
ori_k.replace.split
gfpgan.archs.gfpganv1_arch.GFPGANv1
self.conv_in
basicsr.archs.stylegan2_arch.EqualLinear
basicsr.utils.download_util.load_file_from_url
self.conv_body_up.append
self.style_mlp.unsqueeze
latent_in.self.style_mlp.mean
self.net_d.named_parameters
i.self.conv_body_up.view
stylegan2_bilinear_arch.ScaledLeakyReLU
i_block.i_level.self.down.attn
torch.zeros
basicsr.utils.get_root_logger
realesrgan.utils.RealESRGANer
style_code.view.size
ResUpBlock
self.decode
readme
loc.torch.from_numpy.float
torch.nn.functional.interpolate
self.GFPGANBilinear.super.__init__
dict
self.conv3
basicsr.archs.rrdbnet_arch.RRDBNet
self.bn2
w_.transpose.contiguous
torchvision.transforms.functional.adjust_contrast
MultiHeadDecoderTransformer
self.optimizers.append
z.permute.contiguous
self.test
torch.nn.Linear
random.randint
self.style_conv1
f.read.format
tuple
ModulatedConv2d
gfpgan.GFPGANer
setuptools.find_packages
saturation.saturation.torch.tensor.uniform_.item
self.setup_schedulers
ValueError
l_g_total.backward
self.net_d_mouth
self.stylegan_decoder.named_parameters
f.readlines
basicsr.utils.get_root_logger.warning
MultiHeadEncoder
self.stylegan_decoder.load_state_dict
self.loc_left_eyes.size
k.transpose.transpose
torch.nn.init.xavier_normal_
self.norm2
importlib.import_module
self.final_linear
train_opt.get
x.view.view
self.gfpgan
torch.nn.AdaptiveAvgPool2d
style_truncation.append
format
NormStyleCode.append
self.layer1
rlt_feats.append
to_rgb
self.feed_data
i_level.self.down.downsample
get_requirements
torch.cuda.empty_cache
self.net_g_ema.eval
self.layer2
self.print_network
torch.nn.functional.leaky_relu_.size
basicsr.utils.scandir
get_version
basicsr.data.degradations.random_add_gaussian_noise
self.ConvLayer.super.__init__
self.style_mlp
torch.device
os.path.realpath
self.face_helper.paste_faces_to_input_image
self.gfpgan.to
self.style_convs.append
self.optimizer_g.zero_grad
self.load_network
self.RestoreFormer.super.__init__
self.norm1
collections.OrderedDict
brightness.brightness.torch.tensor.uniform_.item
basicsr.data.degradations.random_mixed_kernels
self.dropout
self.face_helper.get_face_landmarks_5
os.path.abspath
tqdm.tqdm
os.path.isdir
self.conv_shortcut
conv1
conv3x3
MultiHeadAttnBlock
torch.nn.functional.leaky_relu_
super.__init__
self.file_client.get
self.maxpool
str
self.optimizer_d_right_eye.zero_grad
conv2.new_empty
self.quantize.named_parameters
block
basicsr.archs.arch_util.default_init_weights
os.unlink
torch.randperm
self.modulated_conv
realesrgan.RealESRGANer
self.network_identity.parameters
map
i.self.condition_scale.clone
style.self.modulation.view
basicsr.archs.stylegan2_arch.ResBlock
self.mid.block_2
torch.min
styles.unsqueeze.repeat
out_rgbs.append
basicsr.utils.imwrite
self.ResNetArcFace.super.__init__
i.self.conv_body_up
torch.mean
v.transpose.permute
self.q
half_len_left_eye.mean_left_eye.half_len_left_eye.mean_left_eye.np.hstack.astype
self.EqualConv2d.super.__init__
argparse.ArgumentParser.parse_args
self.output.detach.cpu
self.color_jitter_pt
criterion
cv2.imread
self.relu
basicsr.utils.FileClient
self.noises.register_buffer
cv2.cvtColor
basicsr.utils.get_root_logger.info
self.nondist_validation
self.cri_l1
in_channels.out_channels.torch.randn.div_
self.n_e.indices.shape.torch.zeros.to.scatter_
self.StyleGAN2GeneratorClean.super.__init__
conditions.append
self.BasicBlock.super.__init__
self.modules
basicsr.train.train_pipeline
fake_d_pred.detach.mean
argparse.ArgumentParser.add_argument
self.decoder
os.makedirs
torch.sum
shift.shift.np.random.uniform.astype
self.toRGB.append
numpy.random.uniform
rois_mouths.torch.cat.to.append
torch.nn.Sequential
self.avg_pool
self.norm_out
gfpgan.archs.gfpganv1_clean_arch.GFPGANv1Clean
tb_logger.add_scalar
basicsr.archs.build_network
self.post_quant_conv.named_parameters
self.net_d_mouth.parameters
torch.nn.init.constant_
self.v
self.setup_optimizers
torch.cuda.is_available
basicsr.losses.build_loss
self.model_to_device
</pre>
</details>
@developer
Could you please help me check this issue?
May I submit a pull request to fix it?
Thank you very much.
| open | 2023-03-12T07:21:52Z | 2023-03-12T07:21:52Z | https://github.com/TencentARC/GFPGAN/issues/350 | [] | PyDeps | 0 |
CTFd/CTFd | flask | 2,415 | Change challenge from static to dynamic | Hi,
Is it possible to change a static challenge to a dynamic one?
I don't want to recreate the challenge because I would lose all the correct submissions.
Thank you. | open | 2023-10-18T11:53:14Z | 2023-10-18T11:53:14Z | https://github.com/CTFd/CTFd/issues/2415 | [] | Whidix | 0 |
Miserlou/Zappa | django | 2,065 | Cannot import name '_levenshtein' from 'Levenshtein' | When I try to get my app deployed, I get an ImportError for the levenshtein package
## Context
I am using the `fast-autocomplete` package, which is dependent upon the `levenshtein` package. When I deploy, this is the full traceback of the error:
```
[ERROR] ImportError: cannot import name '_levenshtein' from 'Levenshtein' (/var/task/Levenshtein/__init__.py)
Traceback (most recent call last):
File "/var/task/handler.py", line 609, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 240, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 134, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/var/lang/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/var/task/app.py", line 5, in <module>
from fast_autocomplete import AutoComplete
File "/var/task/fast_autocomplete/__init__.py", line 11, in <module>
from fast_autocomplete.dwg import AutoComplete
File "/var/task/fast_autocomplete/dwg.py", line 11, in <module>
from Levenshtein import distance as levenshtein_distance
File "/var/task/Levenshtein/__init__.py", line 1, in <module>
from Levenshtein import _levenshtein
```
This package seems to be quite commonly used, yet I have only found [one similar issue](https://github.com/Miserlou/Zappa/issues/1132), and I don't understand why it was closed.
## Expected Behavior
Should be able to import Levenshtein into my lambda function
## Actual Behavior
Levenshtein is not imported
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Add `python-Levenshtein` to my project
2. `zappa deploy/update`
3. Get the importError
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.51.0
* Operating System and Python version: Mac OS X 10.15.1 Python 3.7
* The output of `pip freeze`:
```
aniso8601==8.0.0
argcomplete==1.11.1
boto3==1.12.23
botocore==1.15.23
certifi==2019.11.28
cfn-flip==1.2.2
chardet==3.0.4
click==7.1.1
docutils==0.15.2
durationpy==0.5
fast-autocomplete==0.6.0
Flask==1.1.1
Flask-RESTful==0.3.8
future==0.18.2
hjson==3.0.1
idna==2.9
importlib-metadata==1.5.0
itsdangerous==1.1.0
Jinja2==2.11.1
jmespath==0.9.5
kappa==0.6.0
MarkupSafe==1.1.1
pip-tools==4.5.1
placebo==0.9.0
python-dateutil==2.6.1
python-Levenshtein==0.12.0
python-slugify==4.0.0
pytz==2019.3
PyYAML==5.3
requests==2.23.0
s3transfer==0.3.3
six==1.14.0
text-unidecode==1.3
toml==0.10.0
tqdm==4.43.0
troposphere==2.6.0
urllib3==1.25.8
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.51.0
zipp==3.1.0
```
* Your `zappa_settings.json`:
```
{
"dev": {
"app_function": "app.app",
"profile_name": "zappa",
"project_name": "lambda-related-",
"runtime": "python3.7",
"s3_bucket": "",
"slim_handler": false,
"manage_roles": false,
"role_name": "ZappaRole",
        "role_arn": ""
}
}
```
Thank you! | closed | 2020-03-18T17:25:28Z | 2020-03-19T16:32:34Z | https://github.com/Miserlou/Zappa/issues/2065 | [] | Robinspecteur | 1 |
sinaptik-ai/pandas-ai | data-visualization | 1,404 | Add the option for no parsing. | ### 🚀 The feature
I would like a response value in the following format:
{"type": "dataframe", "value": "xyz"}
### Motivation, pitch
This would allow downstream programmatic handling of the different potential responses types.
### Alternatives
_No response_
### Additional context
As it stands, I believe it works like this:
If no response_parser is passed to the SmartDataframe, the response is parsed to include only result["value"]. I think the default should be to return both the type and the value, with a config option to parse just the value.
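To make the requested shape concrete, here is a minimal pass-through sketch in plain Python (the function name and `raw` flag are illustrative only, not the actual pandasai API):

```python
# Hypothetical sketch of the requested behavior -- names are illustrative,
# not the real pandasai interface.
def parse_response(result: dict, raw: bool = False):
    """Return the full {"type": ..., "value": ...} dict when raw=True,
    otherwise fall back to the current behavior of returning only the value."""
    if raw:
        return result  # downstream code can branch on result["type"]
    return result["value"]
```

Downstream code could then dispatch on `result["type"]` ("dataframe", "plot", "string", ...) instead of guessing from the parsed value.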
Also, as it stands I have not contributed to this project but I would like the opportunity to implement this change if possible? | closed | 2024-10-21T15:27:53Z | 2025-01-28T16:01:46Z | https://github.com/sinaptik-ai/pandas-ai/issues/1404 | [
"enhancement"
] | aevo98765 | 3 |
piskvorky/gensim | machine-learning | 3,192 | FastText models `.save()`d from 4.0+ slower to load; gain less benefit from mmap | [Reported in forum thread: https://groups.google.com/g/gensim/c/xaGvo0j8yv0/m/VI74_Fp7AAAJ]
User identically-trained models in `gensim-3.8.3` and `gensim-4.0.1`. As expected, the files on disk from from 4.0.1 save are much smaller. Unexpectedly, loading the 4.0.1 save, using the `mmap` option, takes a few minutes, while the 3.8.3 save loads in a matter of seconds.
The likely cause is a change to the 4.0.0+ `.save()` routines to avoid saving the `.vectors` array - because it's fully re-calculable from the other `.vectors_vocab` and `.vectors_ngrams` data. The code in 4.0 first [ignores the `.vectors` array on save](https://github.com/RaRe-Technologies/gensim/blob/a93067d2ea78916cb587552ba0fd22727c4b40ab/gensim/models/fasttext.py#L1079), then [notices it's missing on load](https://github.com/RaRe-Technologies/gensim/blob/a93067d2ea78916cb587552ba0fd22727c4b40ab/gensim/models/fasttext.py#L1025), and then [uses `.adjust_vectors()` to re-fill the `.vectors` array](https://github.com/RaRe-Technologies/gensim/blob/a93067d2ea78916cb587552ba0fd22727c4b40ab/gensim/models/fasttext.py#L1172).
In a larger model, those re-calculations take enough time to be noticeable. Further, because this new `.vectors` array was freshly-allocated, it will never get any of the benefits that memmapping it from disk might have provided (never loading unaccessed ranges; never page-out writing redundant unchanged data in low-memory conditions; sharing the same RAM between processes loading the same model).
Despite the redundant storage involved, more users probably prefer fast-loads & the potential for interprocess sharing than strictly minimal on-disk formats. So the old behavior – writing `.vectors` to disk & reloading it – should be supported, and probably also the default. (Simply removing `vectors` from the `ignore` list in `FastTextKeyedVectors._save_specials()` is likely enough to restore the old behavior.)
Perhaps, a new option for smaller-but-slower-to-load saves could be supported, or a doc-note added with a way for users to achieve the same space efficiency another way. (I think, but have not tested, that manually adding `vectors` to the `.save()` method `ignore` parameter might be enough.)
| open | 2021-07-13T17:31:46Z | 2021-12-04T06:12:19Z | https://github.com/piskvorky/gensim/issues/3192 | [
"bug",
"performance"
] | gojomo | 0 |
docarray/docarray | pydantic | 1,508 | Fixing CI for documentation | closed | 2023-05-08T12:05:40Z | 2023-05-08T12:31:45Z | https://github.com/docarray/docarray/issues/1508 | [] | samsja | 0 | |
BlinkDL/RWKV-LM | pytorch | 195 | Runtime error | When running the chat.py file, the following error is reported:

python 3.10
cuda 12.1
torch 2.0 | closed | 2023-10-31T07:20:43Z | 2023-11-05T07:15:02Z | https://github.com/BlinkDL/RWKV-LM/issues/195 | [] | surviveMiao | 3 |
xuebinqin/U-2-Net | computer-vision | 189 | Training with EG1800 dataset using alpha mask | I trained the model with [EG1800](https://onedrive.live.com/?authkey=%21ADkS4V32BUmspOg&cid=F5111408123B1D9C&id=F5111408123B1D9C%2115035&parId=F5111408123B1D9C%2115033&action=locate) dataset which contains ground truth mask with values [0.0....1.0] i.e alpha. It contains 1500 train and 300 validation images. After 100 epochs the train loss was around 0.2; but val loss was around 35. I used the provided weights to initialize the training process. It seems to overfit to the training set. I used a similar code for validation also. Is it required that the mask should be strictly binary? | open | 2021-04-10T05:36:23Z | 2021-04-10T05:36:23Z | https://github.com/xuebinqin/U-2-Net/issues/189 | [] | anilsathyan7 | 0 |
hbldh/bleak | asyncio | 617 | start_notify needs asyncio.sleep | * bleak version: 0.12.1
* Python version: 3.8.0
* Operating System: Windows 10 Enterprise
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### Description
Not sure if this is a bug, undocumented quirk, or just something I missed, but:
When I use `start_notify`, it seems to only work when `asyncio.sleep` is occurring. (As opposed to `time.sleep`, for instance, or waiting via another concurrency library.) This is fine for demos and examples, but seems like usually you'd want to e.g. keep handling messages until some event signaled an end. For instance, I'm using pycsp to coordinate different threads, and blocking to wait for the signal to stop causes no bluetooth messages to be handled. Is this expected behavior? I didn't see it mentioned in the docs. What's the suggested fix? I suppose I could `asyncio.sleep(1)` in a loop and check for stop every time, but that's kinda gross.
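One event-loop-friendly alternative (a generic asyncio sketch, not bleak-specific) is to block on an `asyncio.Event` instead of sleeping in a loop: notification callbacks keep running because the loop stays in control, and any other task (or thread, via `call_soon_threadsafe`) sets the event to end the wait.

```python
import asyncio

async def run_until_stopped(stop_event: asyncio.Event) -> str:
    # In real code, client.start_notify(...) would be called before this wait;
    # notification handlers keep firing because the event loop stays in control.
    await stop_event.wait()
    # ...and client.stop_notify(...) would be called here.
    return "stopped"

async def main() -> str:
    stop = asyncio.Event()
    # Simulate another component signaling shutdown after 10 ms.
    asyncio.get_running_loop().call_later(0.01, stop.set)
    return await run_until_stopped(stop)

if __name__ == "__main__":
    print(asyncio.run(main()))  # → stopped
```

Anything that blocks the thread without yielding to the loop (`time.sleep`, a blocking queue `get` from another concurrency library) starves the callbacks, which matches the behavior described above.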
### What I Did
If you want to reproduce it, open the enable_notifications.py example and replace `asyncio.sleep` with `time.sleep` (and import time). The messages don't get processed during the sleep, and instead pile up and are processed all at once at the end. | closed | 2021-08-05T23:09:40Z | 2021-08-09T16:22:36Z | https://github.com/hbldh/bleak/issues/617 | [
"question"
] | Erhannis | 4 |
arogozhnikov/einops | tensorflow | 292 | [BUG] batchsize of dataloading | If I set the dataloader batch_size to larger than 1, the whole program may need to be modified. I am not sure whether I am right.
| closed | 2023-11-25T14:42:04Z | 2024-09-15T14:44:27Z | https://github.com/arogozhnikov/einops/issues/292 | [] | wwwy-binary | 0 |
whitphx/streamlit-webrtc | streamlit | 1,201 | get device : chrome webpage show 'component error' | When starting `streamlit run app.py` with the URL http://localhost:8501, the webpage could get the camera device correctly.
When starting `streamlit run app.py` with the (IP) URL http://10.146.11.214:8501, the webpage could not show the camera device list; it shows 'component error'.
| closed | 2023-02-21T08:19:51Z | 2023-03-02T05:56:36Z | https://github.com/whitphx/streamlit-webrtc/issues/1201 | [] | hcgprague | 5 |
QingdaoU/OnlineJudge | django | 206 | Problem after updating to Version: 20181215 | Seven days ago I finished preparing the contest problems, and the Special Judge tests all passed.
Three days ago I updated to Version: 20181215.
Yesterday when the contest started, on the problems that use a Special Judge, every submission whose code did not get CE ended in a System Error.
During the contest I tweaked the spj code a little to narrow down the failure point; it seems to be here:
```cpp
input = fopen(args[1], "r");
user_output = fopen(args[2], "r");
if(input == NULL || user_output == NULL){
printf("Failed to open output file\n");
close_file(input);
close_file(user_output);
return ERROR;
}
```
But I am not sure whether the problem is with input or with user_output.
My current workaround is to downgrade to Version: 20180815.
huggingface/datasets | deep-learning | 6,668 | Chapter 6 - Issue Loading `cnn_dailymail` dataset | ### Describe the bug
So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code:
`dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")`
Error Message:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[4], line 4
1 #hide_output
2 from datasets import load_dataset
----> 4 dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")
7 # dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0", trust_remote_code=True)
8 print(f"Features: {dataset['train'].column_names}")
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2583 # Build dataset for splits
2584 keep_in_memory = (
2585 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2586 )
-> 2587 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
2588 # Rename and cast features to match task schema
2589 if task is not None:
2590 # To avoid issuing the same warning twice
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1244, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1241 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS)
1243 # Create a dataset for each of the given splits
-> 1244 datasets = map_nested(
1245 partial(
1246 self._build_single_dataset,
1247 run_post_process=run_post_process,
1248 verification_mode=verification_mode,
1249 in_memory=in_memory,
1250 ),
1251 split,
1252 map_tuple=True,
1253 disable_tqdm=True,
1254 )
1255 if isinstance(datasets, dict):
1256 datasets = DatasetDict(datasets)
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:477, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)
466 mapped = [
467 map_nested(
468 function=function,
(...)
474 for obj in iterable
475 ]
476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:
--> 477 mapped = [
478 _single_map_nested((function, obj, types, None, True, None))
479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
480 ]
481 else:
482 with warnings.catch_warnings():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:478, in <listcomp>(.0)
466 mapped = [
467 map_nested(
468 function=function,
(...)
474 for obj in iterable
475 ]
476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:
477 mapped = [
--> 478 _single_map_nested((function, obj, types, None, True, None))
479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
480 ]
481 else:
482 with warnings.catch_warnings():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:370, in _single_map_nested(args)
368 # Singleton first to spare some computation
369 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 370 return function(data_struct)
372 # Reduce logging to keep things readable in multiprocessing with tqdm
373 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1274, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory)
1271 split = Split(split)
1273 # Build base dataset
-> 1274 ds = self._as_dataset(
1275 split=split,
1276 in_memory=in_memory,
1277 )
1278 if run_post_process:
1279 for resource_file_name in self._post_processing_resources(split).values():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1348, in DatasetBuilder._as_dataset(self, split, in_memory)
1346 if self._check_legacy_cache():
1347 dataset_name = self.name
-> 1348 dataset_kwargs = ArrowReader(cache_dir, self.info).read(
1349 name=dataset_name,
1350 instructions=split,
1351 split_infos=self.info.splits.values(),
1352 in_memory=in_memory,
1353 )
1354 fingerprint = self._get_dataset_fingerprint(split)
1355 return Dataset(fingerprint=fingerprint, **dataset_kwargs)
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\arrow_reader.py:254, in BaseReader.read(self, name, instructions, split_infos, in_memory)
252 if not files:
253 msg = f'Instruction "{instructions}" corresponds to no data!'
--> 254 raise ValueError(msg)
255 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
**ValueError: Instruction "validation" corresponds to no data!**
```
Looks like the data is not being loaded. Any advice would be appreciated. Thanks!
### Steps to reproduce the bug
Run all cells of Chapter 6 notebook.
### Expected behavior
Data should load correctly without any errors.
### Environment info
- `datasets` version: 2.17.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.18
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | open | 2024-02-16T04:40:56Z | 2024-02-16T04:40:56Z | https://github.com/huggingface/datasets/issues/6668 | [] | hariravichandran | 0 |
deezer/spleeter | deep-learning | 134 | FileNotFoundError on basic spleeter separate | I was just trying out the following simple command line:
`C:\Users\Léo\Downloads>spleeter separate -i celine.mp3`
It unfortunately fails for me due to a FileNotFoundError. I made sure celine.mp3 does exist in Downloads though. Any idea what could cause it? Here's the stack trace (most recent call last):
```
File "C:\Tools\Scripts\spleeter-script.py", line 11, in <module>
load_entry_point('spleeter==1.4.5', 'console_scripts', 'spleeter')()
File "c:\tools\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\tools\lib\site-packages\spleeter\__main__.py", line 46, in main
entrypoint(arguments, params)
File "c:\tools\lib\site-packages\spleeter\commands\separate.py", line 43, in entrypoint
synchronous=False
File "c:\tools\lib\site-packages\spleeter\separator.py", line 122, in separate_to_file
sample_rate=self._sample_rate)
File "c:\tools\lib\site-packages\spleeter\audio\ffmpeg.py", line 63, in load
probe = ffmpeg.probe(path)
File "c:\tools\lib\site-packages\ffmpeg\_probe.py", line 20, in probe
p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
File "c:\tools\lib\subprocess.py", line 775, in __init__
restore_signals, start_new_session)
File "c:\tools\lib\subprocess.py", line 1178, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] Le fichier spécifié est introuvable
```
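For what it's worth, the `FileNotFoundError` is raised by `subprocess.Popen` inside `ffmpeg.probe`, so on Windows it usually means the `ffmpeg` executable itself (not `celine.mp3`) cannot be found on `PATH`. A quick stdlib check (a diagnostic sketch, not part of spleeter):

```python
import shutil

# If this prints None, the ffmpeg executable is not on PATH -- which is
# exactly the condition that makes subprocess.Popen raise FileNotFoundError
# inside ffmpeg.probe(), regardless of whether the mp3 exists.
print(shutil.which("ffmpeg"))
```

If it prints None, installing ffmpeg and/or adding its bin directory to `PATH` should resolve the error.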
Thanks for your time (and for this amazing library!!) | closed | 2019-11-24T20:26:47Z | 2019-11-25T14:19:29Z | https://github.com/deezer/spleeter/issues/134 | [
"bug",
"invalid",
"wontfix",
"RTMP"
] | LogyLeo | 2 |
voila-dashboards/voila | jupyter | 1,295 | Templates: The page_config setup is copied in all templates, it should be done in the base one | ## Description
The following page_config setup is copied in the lab/classic/reveal... templates:
```jinja2
{# Copy so we do not modify the page_config with updates. #}
{% set page_config_full = page_config.copy() %}
{%- set kernel_id = kernel_start(nb) -%}
{# Set a dummy variable - we just want the side effect of the update. #}
{% set _ = page_config_full.update(baseUrl=resources.base_url, kernelId=kernel_id) %}
<script id="jupyter-config-data" type="application/json">
{{ page_config_full | tojson }}
</script>
```
We should probably have this part in the base template instead and inherited from it automatically.
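One possible shape (template and block names here are hypothetical, just a sketch of the idea) is to move the snippet into a block of the shared base template, so child templates inherit it and only override it when they genuinely need to:

```jinja2
{# base.html — hypothetical shared template #}
{% block page_config %}
{# Copy so we do not modify the page_config with updates. #}
{% set page_config_full = page_config.copy() %}
{%- set kernel_id = kernel_start(nb) -%}
{% set _ = page_config_full.update(baseUrl=resources.base_url, kernelId=kernel_id) %}
<script id="jupyter-config-data" type="application/json">
  {{ page_config_full | tojson }}
</script>
{% endblock page_config %}
```

Child templates would then `{% extends "base.html" %}` and drop their local copies, keeping the kernel-start and page-config logic in one place.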
| open | 2023-02-28T15:46:26Z | 2023-02-28T15:46:26Z | https://github.com/voila-dashboards/voila/issues/1295 | [
"enhancement"
] | martinRenou | 0 |
getsentry/sentry | django | 87,340 | Disable Sentry hotkeys within Create Issue Modal | ### Problem Statement
Filing on behalf of a user on a [zendesk ticket](https://sentry.zendesk.com/agent/tickets/147820)
Hitting ctrl + k in Sentry opens the command palette search bar. If you open this during the Create Issue modal, it will close the modal and open the palette instead. The user then loses all work done up to that point in the issue modal. Reopening the modal creates a new fetch request for a fields in the modal and the user needs to start fresh.
ctrl + k is a default mac keybinding that deletes text so accidentally hitting ctrl + k is not going to be uncommon
### Solution Brainstorm
Disabling Sentry hotkeys or at least the ones that can close/open other modals can help this poor user experience.
Or maybe we can add a confirmation prompt for when the window is closed?
### Product Area
Issues | closed | 2025-03-18T20:52:41Z | 2025-03-19T16:56:31Z | https://github.com/getsentry/sentry/issues/87340 | [
"Product Area: Issues"
] | Fwang36 | 2 |
lux-org/lux | jupyter | 467 | [BUG] `LuxSeries.unique` returns incorrect values after subsetting | **Describe the bug**
`LuxSeries`, the wrapper around pandas Series, does not compute the unique values correctly for series corresponding to subsets of a dataframe.
**To Reproduce**
Invent some data:
```
data = pd.DataFrame([['a', 1, 2], ['b', 2, 3], ['c', -1, 17]], columns=['foo', 'bar', 'baz'])
```
View it, no need to click on the lux button or anything.
```
data
```
Now create a subset of this data from `bar > 0` and select `foo` column only.
```
data = data[data['bar'] > 0]['foo']
```
`data` is now a Series with two values in it `'a'` and `'b'`.
In the notebook I view it again and it produces the correct output, no need to click anything:
```
data
```
However running the `.unique()` function on the series:
```
data.unique()
```
Returns _all_ values including `['a', 'b', 'c']`, when it should forget about the value `'c'` due to subsetting.
See [gist](https://gist.github.com/lukauskas/65d32167f831b0233c80aeacfc1b197d) and screenshot below
**Expected behavior**
`data.unique()` should return only `['a', 'b']`.
**Screenshots**
<img width="853" alt="image" src="https://user-images.githubusercontent.com/108413/158380713-7968d6e9-b202-455a-a60e-742d19c8d351.png">
**Debugging information**
```
Package Versions
----------------
Version
python 3.10.2
lux 0.5.1
pandas 1.4.0
luxwidget 0.1.11
matplotlib 3.5.1
altair 4.2.0
IPython 8.0.1
ipykernel 6.9.0
ipywidgets 7.6.5
jupyter_client 7.1.2
jupyter_core 4.9.1
jupyter_server 1.13.5
jupyterlab 3.3.0
nbclient 0.5.10
nbconvert 6.4.1
nbformat 5.1.3
notebook 6.4.8
qtconsole 5.2.2
traitlets 5.1.1
Widget Setup
-------------
✅ Jupyter Lab Running
✅ luxwidget is enabled
```
**Additional context**
This actually can cause some very nasty and silent errors in the analyses that depend on this `.unique()` operator, as only the import of `lux` is needed to redefine behaviour. | open | 2022-03-15T12:49:04Z | 2022-03-20T15:19:42Z | https://github.com/lux-org/lux/issues/467 | [] | lukauskas | 1 |
ydataai/ydata-profiling | jupyter | 843 | Summarize Categorical Spark Type | Branch : spark-branch
Categorical Types are currently not being summarized in spark
Feature:
We want to generate summaries for them properly, including simple statistics (similar to pandas).
| closed | 2021-10-04T15:24:43Z | 2021-10-23T08:09:17Z | https://github.com/ydataai/ydata-profiling/issues/843 | [
"spark :zap:"
] | chanedwin | 1 |