| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
horovod/horovod | deep-learning | 3,342 | Multi worker inference in Databricks | Multi-worker training in Horovod is great, but I am facing a problem with inference, specifically in Databricks. Is there any way to do inference in Databricks? And how do I accumulate results from each worker so they can be stored with Databricks operations? | open | 2022-01-03T12:59:51Z | 2022-01-09T07:21:43Z | https://github.com/horovod/horovod/issues/3342 | [
"enhancement"
] | tanmoyio | 1 |
allenai/allennlp | nlp | 4,835 | Better documentation for creating training configs | * The guide gives an example of a training config, but not an exhaustive list of what fields are possible to include.
* The `Vocabulary` and `Trainer` fields are general, while for the model and dataset reader, the fields will depend on specific classes.
* It will be useful to have this documented for someone looking to get started with AllenNLP.
Create a dedicated page in the docs or add a link [here](https://docs.allennlp.org/master/api/commands/train/#from_partial_objects). | open | 2020-12-02T20:12:54Z | 2021-08-17T15:35:38Z | https://github.com/allenai/allennlp/issues/4835 | [] | AkshitaB | 1 |
3b1b/manim | python | 1,131 | Fading Animation for the opacity of a VMobject | Hi,
I want to change the opacity of a VMobject with an animation and don't know how to manage this.
Does anybody know a solution for my problem?
Thank you in advance for your time and valuable help.
| closed | 2020-06-08T19:12:12Z | 2020-06-09T14:23:54Z | https://github.com/3b1b/manim/issues/1131 | [] | 21Manu09 | 2 |
A3M4/YouTube-Report | seaborn | 42 | It seems like the new format of playlist likes is in .csv | File "report.py", line 22, in <module>
from parse import HTML
File "C:\Users\XXXXXX\Downloads\YouTube-Report-master\YouTube-Report-master\parse.py", line 48, in <module>
raise OSError("Required directories do not exist: %s"%(missing))
OSError: Required directories do not exist: ['C:\\Users\\XXXXXX\\Downloads\\YouTube-Report-master\\YouTube-Report-master\\Takeout/YouTube/playlists/likes.json'] | open | 2021-11-13T00:26:31Z | 2023-01-23T18:28:24Z | https://github.com/A3M4/YouTube-Report/issues/42 | [] | User1391 | 1 |
RobertCraigie/prisma-client-py | pydantic | 697 | `db pull` overwrites the Python client generator nondeterministically | ## Bug description
Context: I'm trying to use prisma-client-py with the Node Prisma Client so that the two clients can share the same schema.
The setup looks something like this:
```
generator client_py {
  provider             = "prisma-client-py"
  recursive_type_depth = -1
  interface            = "asyncio"
}

generator client_js {
  provider = "prisma-client-js"
}

datasource db {
  ...
}

model ...
```
Bug: When I run `npx prisma db pull` using the Node Prisma CLI (from our node project directory), two strange things happen:
1. `recursive_type_depth`'s value gets wrapped in a string
2. The positions of the `recursive_type_depth` and `interface` fields MIGHT swap
Here's an example of what might result:
```
generator client_py {
  provider             = "prisma-client-py"
  interface            = "asyncio"
  recursive_type_depth = "-1"
}

generator client_js {
  provider = "prisma-client-js"
}

datasource db {
  ...
}

model ...
```
The swapping is nondeterministic and happens randomly; the `recursive_type_depth` value is always wrapped.
## How to reproduce
1. Create two generators, as shown above, with a datasource
2. Run `prisma db pull`
## Expected behavior
The generator code should not be affected.
## Prisma information
See above
## Environment & setup
- OS: Mac OS, Linux
- Database: PostgreSQL
- Python version: 3.11.2
- Prisma version:
```
prisma : 4.10.1
prisma client python : 0.8.0
platform : darwin
expected engine version : aead147aa326ccb985dcfed5b065b4fdabd44b19
installed extras : []
```
^^ I'm using 4.10.1 as opposed to 4.8 because of the newly-added `--generator` flag when running `prisma generate`; makes having multiple generators share a schema more ergonomic.
| open | 2023-02-11T19:50:23Z | 2023-02-12T18:39:27Z | https://github.com/RobertCraigie/prisma-client-py/issues/697 | [
"bug/2-confirmed",
"kind/bug",
"topic: external",
"level/unknown"
] | john-sungjin | 2 |
airtai/faststream | asyncio | 1,508 | Chore: update dependencies and fix failing tests in python 3.8 | closed | 2024-06-07T06:02:22Z | 2024-06-07T06:02:58Z | https://github.com/airtai/faststream/issues/1508 | [] | kumaranvpl | 0 | |
Kludex/mangum | asyncio | 33 | Drop 3.6 support | I don't think this project should need to support 3.6 since AWS Lambda supports 3.7. I would have started from this perspective, but initially I included the Azure Functions adapter which only supported 3.6.
| closed | 2019-02-03T04:29:40Z | 2019-06-17T02:02:44Z | https://github.com/Kludex/mangum/issues/33 | [
"improvement"
] | jordaneremieff | 4 |
streamlit/streamlit | data-visualization | 10,156 | Make dataframe search filter the rows, instead of jumping to the result | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Today, searching in a dataframe (e.g. via the search icon in the toolbar) jumps to the results it finds. But there's another possible UI here, where we filter the rows and only show the ones where a result was found. This is e.g. how search works in Notion databases. Would be cool to add that.
### Why?
If you have lots of rows, it's sometimes nicer to just see the rows that actually contain results.
### How?
Not sure if we should change this behavior, or add a parameter for it, or have a UI affordance (e.g. a checkbox in the search popup).
### Additional Context
_No response_ | open | 2025-01-10T16:06:37Z | 2025-01-10T16:06:51Z | https://github.com/streamlit/streamlit/issues/10156 | [
"type:enhancement",
"feature:st.dataframe",
"feature:st.data_editor"
] | jrieke | 1 |
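The row-filtering behavior requested above can, until it exists natively, be approximated app-side by filtering the data before passing it to `st.dataframe`. A minimal stand-in sketch over plain records — the case-insensitive substring matching here is an assumption, not Streamlit's actual search semantics:

```python
def filter_rows(records, query):
    """Keep only rows where any field contains the query (case-insensitive)."""
    q = query.lower()
    return [r for r in records if any(q in str(v).lower() for v in r.values())]

rows = [
    {"name": "Alice", "city": "Berlin"},
    {"name": "Bob", "city": "Lisbon"},
    {"name": "Carol", "city": "Bern"},
]
# "ber" matches "Berlin" and "Bern", so Alice and Carol remain
matches = filter_rows(rows, "ber")
```

With pandas the same idea is roughly `df[df.astype(str).apply(lambda r: r.str.contains(q, case=False).any(), axis=1)]`.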
httpie/cli | api | 1,395 | Y | If you have a general question, please consider asking on Discord: https://httpie.io/chat
| closed | 2022-05-08T17:57:14Z | 2022-05-08T17:57:21Z | https://github.com/httpie/cli/issues/1395 | [
"new"
] | coderj001 | 0 |
Ehco1996/django-sspanel | django | 863 | Question: is subscription for traditional vmess clients no longer supported? | Running the latest code on the dev branch.
I am currently using V2rayN and V2rayA as subscription clients, and I can no longer find the option.
Is it no longer supported? | closed | 2023-08-25T09:08:30Z | 2023-08-27T05:56:58Z | https://github.com/Ehco1996/django-sspanel/issues/863 | [
"question"
] | neilbowman666 | 1 |
awesto/django-shop | django | 80 | Price modifers should be language-aware | In both cart and order modifiers, there should be a way to make the label part display in a multilingual fashion. Right now, using gettext works, but the result of the gettext call gets saved in the database.
Therefore another user seeing the same order will see the label as translated in the original language.
| closed | 2011-07-07T09:01:34Z | 2016-02-02T14:09:37Z | https://github.com/awesto/django-shop/issues/80 | [] | chrisglass | 1 |
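A common fix for the problem above is to persist a language-neutral key (or the untranslated string) and defer translation to render time — in Django, that is what `gettext_lazy` plus storing the raw key achieves. A framework-free sketch of the pattern, with a plain dict standing in for gettext catalogs:

```python
# Stand-in for gettext catalogs; in Django these would be .po/.mo files.
CATALOG = {
    "en": {"cart.shipping": "Shipping"},
    "de": {"cart.shipping": "Versand"},
}

def render_label(stored_key, language):
    """Translate at display time; the database only ever holds `stored_key`."""
    # Fall back to English, then to the raw key, if no translation exists.
    return CATALOG.get(language, CATALOG["en"]).get(stored_key, stored_key)
```

Each viewer then sees the label in their own language, instead of the language that happened to be active when the order was saved.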
autokey/autokey | automation | 526 | pasting images with autokey | Is there a way to get autokey to paste an image from local storage? (Think pasting an email footer)
| open | 2021-04-02T18:22:52Z | 2021-04-03T19:37:54Z | https://github.com/autokey/autokey/issues/526 | [] | roof98 | 7 |
httpie/cli | api | 1,474 | `http -v` should show escaped Unicode as-is | In the two commands below, both shows the same JSON.
```sh
$ http -v httpbin.org/post a:='"あ"'
...
{
"a": "あ"
}
...
$ echo '{"a": "あ"}' | http -v httpbin.org/post
...
{
"a": "あ"
}
...
```
But actually they send different data:


In the first command, I expect it to show `\u3042`, but it seems parse the escaped Unicode and show the parsed letter `あ`.
i.e. it should output:
```sh
$ http -v httpbin.org/post a:='"あ"'
...
{
"a": "\u3042"
}
...
$ echo '{"a": "あ"}' | http -v httpbin.org/post
...
{
"a": "あ"
}
...
```
See also https://github.com/httpie/httpie/issues/814. | open | 2023-01-21T07:52:04Z | 2023-01-27T05:01:55Z | https://github.com/httpie/cli/issues/1474 | [
"bug",
"new"
] | wataash | 0 |
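For context on the report above: Python's own `json` module escapes non-ASCII by default when serializing, which is exactly the `\u3042`-vs-`あ` difference shown. This sketch only illustrates that stdlib behavior and makes no claim about HTTPie's internals:

```python
import json

payload = {"a": "あ"}
escaped = json.dumps(payload)                  # default: ensure_ascii=True
raw = json.dumps(payload, ensure_ascii=False)  # keeps the literal character

# escaped == '{"a": "\\u3042"}'  while  raw == '{"a": "あ"}'
```

Both strings decode back to the same object; whether a tool re-serializes with `ensure_ascii` on or off determines which of the two forms the user sees.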
microsoft/nni | machine-learning | 5,407 | stop being disrespectful to real Engineers (Mechanical Engineering is the ONLY Engineering) | Death to you microsoft [TRASHrosoft] devs and employees, you cheap loser shills. Death to you for openly promoting donkeys who work for the garbage & useless so called "health" "sector". Death to you for being such stupid slaves and praising those daft morons. Death to you for promoting those stupid childish ego pigs, who work for such a garbage "sector". Death to every moron who works for or supports that dumpster "sector". Death to you for promoting those dumb clowns in your trashy office 365 calendar app and your garbage OS. Death to you microsoft clown employees, you donkey scums.
Death to you for openly promoting and praising "universities" and praising those disgusting 2/6 letter stupid "titles" those monkeys use. Death to you and death to every loser donkey who uses that disgusting & racist 6 letter donkey word used by the trasholes ["univeristies"].
Death to you for openly promoting donkeys who work for the garbage & useless so called "health" "sector". Death to you for being such stupid mainstream shills and praising those daft morons. Death to you for promoting those stupid childish ego pigs, who work for such a garbage "sector". Death to you for openly praising and promoting the UK's ugly & useless & trashhole "national" clown "service". Death to EVERY donkey who supports those donkeys. You're all nothing but useless slaves, you TRASHrosoft scumbags. Microsoft is utter trash, and so is windows and every garbage trashware that those monkey micorsoft devs have ever touched or made.
Microsoft has absolutely no potential, and is a complete and utter laughable failure. Death to you stupid microTRASH script kiddos. Death to your trashy software. Death to every idiot who uses the stupid 6-letter "prefixes" when mentioning first names. Death to all "universities" [DUMPSTERversities].
Death to you microsoft [microGARBAGE] devs and employees. Have a sad life and painful death, microsoft workers, you clown slaves.
And one other thing....
Death to you scums for being disrespectful to engineering and claiming to be engineers. None of you microsoft devs/employees are engineers, never were and you'll never be close. Microsoft knows ABSOLUTELY NOTHING about what engineering is or what it entails. You microsoft scriptkiddos clowns are just scummy wannabes who will never even come close to an engineer. There is no such thing as "software" "engineering". That is the stupidest and most pathetic lie ever. Quit being disrespectful to Engineering, you dirty rats. You know nothing about what Engineering entails. Engineering means designing, building and producing real mechanical systems, like turbojets and vehicular engines. It has nothing to do with scriptkiddies like you microsoft [TRASHrosoft] monkeys sitting behind a keyboard and writing code.
Software dev is easy and for dumb kids. And it has absolutely nothing to do with a serious and real subject like Engineering (Mechanical Engineering is the ONLY Engineering). Quit making false claims against engineering. And stop being disrespectful to real Engineers (Mechanical Engineering is the ONLY Engineering). Death to you dumb kids for making lies against engineering.
Hope your dirty and cheap throats get slit, microsoft loser employees and devs.
@MicrosoftIsGarbage
| closed | 2023-02-27T18:01:29Z | 2023-02-28T02:41:28Z | https://github.com/microsoft/nni/issues/5407 | [] | ghost | 0 |
tableau/server-client-python | rest-api | 731 | Custom usage statistics | Hi,
Could you please suggest how I can get view usage statistics not only for the whole period, but also for the last 1/3/12 months?
Thanks in advance,
Pasha | open | 2020-11-14T17:50:22Z | 2022-02-14T16:50:46Z | https://github.com/tableau/server-client-python/issues/731 | [
"Server-Side Enhancement"
] | paul23093 | 1 |
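Since the REST API discussed above only exposes a total usage count per view, per-period numbers generally have to be derived client-side — e.g. by snapshotting totals periodically and diffing, or by filtering dated usage records. A generic date-window filter; the record shape is an assumption, not a `tableauserverclient` object:

```python
from datetime import datetime, timedelta

def usage_in_window(events, months):
    """Count usage events newer than roughly `months` back (30-day months)."""
    cutoff = datetime.now() - timedelta(days=30 * months)
    return sum(1 for e in events if e["timestamp"] >= cutoff)

events = [
    {"view": "Sales", "timestamp": datetime.now() - timedelta(days=10)},
    {"view": "Sales", "timestamp": datetime.now() - timedelta(days=100)},
]
# one event falls inside the last month; both fall inside the last 12 months
```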
idealo/imagededup | computer-vision | 183 | On demand duplicate check during runtime with a 'growing' BKTree | What I would like to achieve I about the following:
```python
EXISTING_HASHES: set = set()
def is_duplicate(img_bytes: bytes):
if get_hash(img_bytes) in EXISTING_HASHES:
return True
return False
def main():
image_bytes = get_new_image()
if is_duplicate(image_bytes)
return
with open(file) as f:
f.write(image_bytes)
``` | open | 2022-11-05T16:08:17Z | 2023-04-21T10:44:05Z | https://github.com/idealo/imagededup/issues/183 | [
"enhancement"
] | sla-te | 0 |
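A self-contained version of the growing-set pattern sketched in the issue above, using an exact content hash from the standard library as a stand-in for imagededup's perceptual hashes — with a real perceptual hash you would query the BK-tree for near matches instead of exact set membership:

```python
import hashlib

EXISTING_HASHES = set()

def get_hash(img_bytes):
    # Stand-in: an exact hash. imagededup would produce a perceptual hash here,
    # so visually similar (not byte-identical) images could also match.
    return hashlib.sha256(img_bytes).hexdigest()

def is_duplicate(img_bytes):
    h = get_hash(img_bytes)
    if h in EXISTING_HASHES:
        return True
    EXISTING_HASHES.add(h)  # grow the index as new images arrive at runtime
    return False
```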
sebp/scikit-survival | scikit-learn | 340 | Adding AIC/BIC information | **Is your feature request related to a problem? Please describe.**
I'm not sure if this is 100% relevant to penalized Cox regressions, but would it be useful to add AIC and BIC metrics if cross-validation is inappropriate, e.g. due to data sparsity or for the reason of replicating other work?
In the same vein, a forward and backward selection method could also be employed (but that would probably go in another feature request).
**Describe the solution you'd like**
Add an AIC and BIC metric to the existing set of metrics.
**Describe alternatives you've considered**
As an alternative, one could also compute the AIC/BIC oneself outside of this codebase.
**References and existing implementations**
An existing implementation of this can be found in the scikit-learn package [here](https://github.com/scikit-learn/scikit-learn/blob/a7cd0ca44d0af64bc22a7213371e3c11d94dbbe8/sklearn/linear_model/_least_angle.py#L2305-L2309)
One reference paper that employs this with backward selection is here
https://doi.org/10.1016/j.jtcvs.2017.11.095
Again, not sure if the other criteria already implemented are perhaps more appropriate. If so, let me know. | open | 2023-01-31T10:47:15Z | 2023-02-01T17:23:34Z | https://github.com/sebp/scikit-survival/issues/340 | [
"enhancement"
] | matzhaugen | 1 |
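For reference while the request above is open: the criteria themselves are one-liners once the fitted model's log (partial) likelihood is available. A sketch of the standard definitions — note that for Cox models the effective `n` in the BIC is often taken as the number of events rather than the number of observations, a modeling choice this sketch does not decide:

```python
from math import log

def aic(log_likelihood, n_params):
    # AIC = 2k - 2 ln(L)
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    # BIC = k ln(n) - 2 ln(L)
    return n_params * log(n_obs) - 2 * log_likelihood
```

Stepwise (forward/backward) selection would then just pick, at each step, the candidate model minimizing the chosen criterion.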
aiortc/aiortc | asyncio | 186 | Bind to event when ice connection is lost | I have set up a simple example server using only a data channel at the moment. I want to be able to stop the RTCPeerConnection on the server when the ice connection to the client is lost (e.g. when the client refreshes the web page or closes the tab). This shows up in the logs on the server as
`INFO:ice:Connection(0) Consent to send expired
DEBUG:ice:Connection(0) protocol(0) connection_lost(None)`
How can I listen to these events? I am wondering if this is possible with built-in methods or if I need to implement my own heartbeat. | closed | 2019-06-27T06:38:34Z | 2023-12-31T10:09:49Z | https://github.com/aiortc/aiortc/issues/186 | [] | langep | 3 |
Johnserf-Seed/TikTokDownload | api | 226 | [Beginner help] After downloading the source files and running ./build.bat, there is no exe file in dist | As the title says.
I am currently using the executables from the release, but it seems I cannot configure them manually, e.g. the download output path.
Can anyone clarify this? | open | 2022-10-02T06:10:02Z | 2022-10-18T16:14:44Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/226 | [
"额外求助(help wanted)",
"无效(invalid)"
] | htkhgsj | 1 |
dfki-ric/pytransform3d | matplotlib | 258 | Clarification of UrdfTransformManager field | A required input for the method `UrdfTransformManager.plot_visuals` is a `frame`; however, the documentation does not indicate more than that it is a `hashable` object. Most likely it requires a predefined object from `pytransform3d`, but I am unable to find it. Thank you! Great library btw :) | closed | 2023-07-17T11:12:53Z | 2023-07-17T11:55:02Z | https://github.com/dfki-ric/pytransform3d/issues/258 | [] | oarriaga | 1 |
gradio-app/gradio | deep-learning | 10,107 | Chatbot scroll resetting when displaying graphs | ### Describe the bug
I have a gradio app that sometimes needs to display plots (using bokeh). This works mostly fine, but when continuing the conversation after the plot has been displayed, the scroll bar resets and there is some flickering when the chat message history is updated.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```
import gradio as gr
from gradio import ChatMessage
import bokeh.plotting
import bokeh.models
import bokeh.themes
import bokeh.io
import bokeh.resources

def create_line_plot(x_data, y_data):
    fig = bokeh.plotting.figure(
        title='Title',
        width=1500  # Will be shrunk down according to window size
    )
    fig.line(
        x=x_data,
        y=y_data
    )
    return gr.Plot(fig)

def add_message(history, message):
    history.append(
        ChatMessage(
            role='user',
            content=message
        )
    )
    return history, message

from time import sleep

def bot_stream(history, input_msg):
    texts = [
        'Text before plot',
        'text after plot'
    ]
    msg = ChatMessage(
        role='assistant',
        content=''
    )
    history.append(msg)
    for c in texts[0]:
        msg.content += c
        yield history
        sleep(0.05)
    if input_msg == 'plot':
        history.append(ChatMessage(
            role='assistant',
            content=create_line_plot([1, 2, 3, 4], [1, 2, 3, 4])
        ))
        yield history
    else:
        print(input_msg)
    msg2 = ChatMessage(
        role='assistant',
        content=''
    )
    history.append(msg2)
    for c in texts[1]:
        msg2.content += c
        yield history
        sleep(0.05)

try:
    with open('./static/head.html', 'r') as head_f:
        HEAD = head_f.read()
except FileNotFoundError:
    HEAD = None

with gr.Blocks(
    fill_height=True,
    fill_width=True
) as demo:
    chatbot = gr.Chatbot(
        label="Chatbot",
        elem_id='chatbot',
        elem_classes=['gr-chatbot'],
        scale=5,
        height=200,
        min_width=200,
        type='messages',
        bubble_full_width=True
    )
    with gr.Group():
        with gr.Row():
            chat_input = gr.Textbox(
                container=False,
                show_label=False,
                label="Message",
                placeholder="Type a message...",
                scale=7,
                autofocus=True
            )
            submit_btn = gr.Button(
                'Send',
                elem_id='send-btn',
                scale=1
            )
    submit_listeners = [chat_input.submit, submit_btn.click]
    for listener in submit_listeners:
        chat_msg = listener(
            add_message, [chatbot, chat_input], [chatbot, chat_input]
        )
        bot_msg = chat_msg.then(
            bot_stream, [chatbot, chat_input], chatbot, api_name="bot_response"
        )

if __name__ == "__main__":
    demo.launch(server_name='0.0.0.0', server_port=8085, debug=True, show_error=True)
```
### Screenshot

### Logs
_No response_
### System Info
```shell
aiofiles==23.2.1
annotated-types==0.7.0
anyio==4.6.2.post1
bokeh==3.6.1
certifi==2024.8.30
charset-normalizer==3.4.0
click==8.1.7
contourpy==1.3.1
fastapi==0.115.5
ffmpy==0.4.0
filelock==3.16.1
fsspec==2024.10.0
gradio==5.7.1
gradio_client==1.5.0
h11==0.14.0
httpcore==1.0.7
httpx==0.28.0
huggingface-hub==0.26.3
idna==3.10
Jinja2==3.1.4
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
numpy==2.1.3
orjson==3.10.12
packaging==24.2
pandas==2.2.3
pillow==11.0.0
pydantic==2.10.2
pydantic_core==2.27.1
pydub==0.25.1
Pygments==2.18.0
python-dateutil==2.9.0.post0
python-multipart==0.0.12
pytz==2024.2
PyYAML==6.0.2
requests==2.32.3
rich==13.9.4
ruff==0.8.1
safehttpx==0.1.6
semantic-version==2.10.0
shellingham==1.5.4
six==1.16.0
sniffio==1.3.1
starlette==0.41.3
tomlkit==0.12.0
tornado==6.4.2
tqdm==4.67.1
typer==0.14.0
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
uvicorn==0.32.1
websockets==12.0
xyzservices==2024.9.0
```
### Severity
Blocking usage of gradio | open | 2024-12-03T13:01:03Z | 2024-12-04T15:19:02Z | https://github.com/gradio-app/gradio/issues/10107 | [
"bug"
] | lippings | 1 |
jschneier/django-storages | django | 1,046 | Django 3.2 support | Hello! I hope you're doing well @jschneier @jdufresne!
I'm reaching out as part of the Open edX effort to update the [platform](https://github.com/edx/edx-platform/) to Django 3.2, `django-storages` is one of its dependencies to be updated. [Here](https://github.com/edx/upgrades/issues/44) is the issue!
After some research, I found two essential issues that can help with the Django 3.2 support:
- Fix all CI: https://github.com/jschneier/django-storages/pull/1042
- Fix CI and add Django 3.2 support: https://github.com/jschneier/django-storages/pull/1005
What can we do to help move this forward? | closed | 2021-08-27T13:03:43Z | 2021-10-07T02:27:06Z | https://github.com/jschneier/django-storages/issues/1046 | [] | mariajgrimaldi | 5 |
pydantic/logfire | pydantic | 541 | Am I forced to authenticate manually on local environment? | ### Question
Hi there,
In a FastAPI Project, I'd like to avoid authenticating manually with `logfire auth` command, even on local environment.
I configured my project this way:
**Settings**
```python
class Settings(BaseSettings):
    LOGFIRE_SEND_TO_LOGFIRE: bool
    LOGFIRE_TOKEN: str | None = None

    model_config = SettingsConfigDict(
        env_file=BASE_DIR / ".env",
        env_ignore_empty=True,
        env_prefix="FAPI_",
        case_sensitive=True,
        extra="forbid",
    )
```
**.env file**
```
FAPI_LOGFIRE_SEND_TO_LOGFIRE=true
FAPI_LOGFIRE_TOKEN=<my-real-write-token>
```
**main.py file**
```python
app = FastAPI(..)

if settings.LOGFIRE_SEND_TO_LOGFIRE:
    logfire.configure(send_to_logfire=True)
    logger.configure(handlers=[logfire.loguru_handler()])
    logfire.instrument_fastapi(app)
```
No matter what I tried, when running `fastapi dev` I get a `LogfireConfigError: You are not authenticated. Please run 'logfire auth' to authenticate.` error.
But in the [documentation](https://logfire.pydantic.dev/docs/#instrument), it seems I should be able to authenticate via token, even during development.
What am I missing here? Thanks | closed | 2024-10-25T00:01:01Z | 2024-10-25T09:19:37Z | https://github.com/pydantic/logfire/issues/541 | [
"Question"
] | ddahan | 2 |
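One likely explanation for the issue above, offered as an assumption to verify: `logfire` looks for the unprefixed `LOGFIRE_TOKEN` environment variable, while the `FAPI_`-prefixed value is only visible to the pydantic settings object. Bridging the two (or passing the token straight to `logfire.configure`, whose documentation describes a `token` argument) would then avoid `logfire auth`:

```python
import os

def bridge_token(settings_token):
    """Expose the app-level setting under the name logfire itself looks for."""
    if settings_token:
        os.environ.setdefault("LOGFIRE_TOKEN", settings_token)

# hypothetical usage in main.py, before configuring:
# bridge_token(settings.LOGFIRE_TOKEN)
# logfire.configure(send_to_logfire=True)   # or: logfire.configure(token=...)
```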
Textualize/rich | python | 2,842 | [REQUEST] Document Syntax.stylize_range | Rewording for https://github.com/Textualize/rich/pull/2605/files | open | 2023-03-04T09:22:33Z | 2023-03-04T09:50:11Z | https://github.com/Textualize/rich/issues/2842 | [
"documentation"
] | willmcgugan | 1 |
deepset-ai/haystack | nlp | 8,107 | docs: clean up docstrings of DocumentJoiner | closed | 2024-07-29T06:51:39Z | 2024-07-29T12:39:45Z | https://github.com/deepset-ai/haystack/issues/8107 | [] | agnieszka-m | 0 | |
pytorch/pytorch | deep-learning | 149,153 | ProcessGroupNCCL: ncclCommAbort hangs with NCCL 2.25.1-1 | ### 🐛 Describe the bug
ncclCommAbort hangs when using NCCL 2.25.1-1 with PyTorch nightly. This is fixed with NCCL 2.26.2-1, which was released yesterday (2025-03-12).
Full details (repro + stack traces) in https://gist.github.com/d4l3k/16a19b475952bc40ddd7f2febcc297b7
Relevant stack traces:
```
thread #16, name = 'python', stop reason = signal SIGSTOP
frame #0: 0x00007fb0b7f0792d libc.so.6`syscall + 29
frame #1: 0x00007fb08faef142 libstdc++.so.6`std::__atomic_futex_unsigned_base::_M_futex_wait_until_steady(this=<unavailable>, __addr=0x00007fac98000b00, __val=2147483648, __has_timeout=true, __s=<unavailable>, __ns=(__r = 711393434)) at futex.cc:217:18
frame #2: 0x00007fb090db0b85 libtorch_cuda.so`c10d::ProcessGroupNCCL::waitForFutureOrTimeout(std::future<bool>&, std::chrono::duration<long, std::ratio<1l, 1000l>> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>> const&, c10d::C10dLoggingData&, bool) + 725
frame #3: 0x00007fb090db1068 libtorch_cuda.so`c10d::ProcessGroupNCCL::abort() + 664
frame #4: 0x00007fb0af488edc libtorch_python.so`void pybind11::cpp_function::initialize<pybind11::cpp_function::cpp_function<void, c10d::Backend, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::call_guard<pybind11::gil_scoped_release>, char [65]>(void (c10d::Backend::*)(), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::call_guard<pybind11::gil_scoped_release> const&, char const (&) [65])::'lambda'(c10d::Backend*), void, c10d::Backend*, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::call_guard<pybind11::gil_scoped_release>, char [65]>(void&&, c10d::Backend (*)(), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::call_guard<pybind11::gil_scoped_release> const&, char const (&) [65])::'lambda1'(pybind11::detail::function_call&)::_FUN(pybind11::detail::function_call&) + 188
frame #5: 0x00007fb0aeb8866e libtorch_python.so`pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 2062
frame #6: 0x00000000004fc697 python3.10`cfunction_call(func='0x7fb039dbd260', args=<unavailable>, kwargs=<unavailable>) at methodobject.c:543:19
thread #17, name = 'python', stop reason = signal SIGSTOP
frame #0: 0x00007fb0b7ed4895 libc.so.6`clock_nanosleep@GLIBC_2.2.5 + 101
frame #1: 0x00007fb0b7ed9487 libc.so.6`__nanosleep + 23
frame #2: 0x00007fb0b7f05319 libc.so.6`usleep + 73
frame #3: 0x00007fb0937e944b libtorch_cuda.so`asyncJobLaunch(asyncJobsMain=0x00007fad3c004598, groupAbortFlag=0x00007fad3c004590) at group.cc:382:36
frame #4: 0x00007fb0937e9e54 libtorch_cuda.so`groupLaunch(job_=0x00007fad3c0045b0, simInfo=0x0000000000000000) at group.cc:423:3
frame #5: 0x00007fb0937eb0e5 libtorch_cuda.so`ncclGroupEndInternal(simInfo=0x0000000000000000) at group.cc:573:7
frame #6: 0x00007fb0937f4239 libtorch_cuda.so`ncclCommAbort(comm=<unavailable>) at init.cc:2098:3
frame #7: 0x00007fb090d83907 libtorch_cuda.so`c10d::NCCLComm::abort(std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>>) + 599
frame #8: 0x00007fb090da3ddb libtorch_cuda.so`c10d::ProcessGroupNCCL::abortCommsFromMap(std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>, std::shared_ptr<c10d::NCCLComm>, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>>, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>>, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>> const, std::shared_ptr<c10d::NCCLComm>>>>&, std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>> const&) + 75
frame #9: 0x00007fb090daea91 libtorch_cuda.so`c10d::ProcessGroupNCCL::abortComms(std::optional<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>>> const&) + 129
frame #10: 0x00007fb090daf4ff libtorch_cuda.so`std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<bool>, std::__future_base::_Result_base::_Deleter>, std::thread::_Invoker<std::tuple<c10d::ProcessGroupNCCL::abort()::'lambda0'()>>, bool>>::_M_invoke(std::_Any_data const&) + 47
frame #11: 0x00007fb090c083eb libtorch_cuda.so`std::__future_base::_State_baseV2::_M_do_set(std::function<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> ()>*, bool*) + 27
frame #12: 0x00007fb0b7e8f5c8 libc.so.6`__pthread_once_slow + 232
frame #13: 0x00007fb090da7c66 libtorch_cuda.so`std::__future_base::_Async_state_impl<std::thread::_Invoker<std::tuple<c10d::ProcessGroupNCCL::abort()::'lambda0'()>>, bool>::_M_run() + 214
frame #14: 0x00007fb08faf0e95 libstdc++.so.6`std::execute_native_thread_routine(__p=<unavailable>) at thread.cc:104:18
frame #15: 0x00007fb0b7e8a3b2 libc.so.6`start_thread + 722
frame #16: 0x00007fb0b7f0f430 libc.so.6`__clone3 + 48
thread #18, name = 'python', stop reason = signal SIGSTOP
frame #0: 0x00007fb0b7e86f4a libc.so.6`__futex_abstimed_wait_common + 202
frame #1: 0x00007fb0b7e8bec4 libc.so.6`__pthread_clockjoin_ex + 324
frame #2: 0x00007fb0937f004f libtorch_cuda.so`::commReclaim(ncclAsyncJob *) [inlined] commFree(comm=0x000000005a762f20) at init.cc:194:5
frame #3: 0x00007fb0937efe00 libtorch_cuda.so`::commReclaim(ncclAsyncJob *) [inlined] commCleanup(comm=0x000000005a762f20) at init.cc:1926:3
frame #4: 0x00007fb0937efa4a libtorch_cuda.so`commReclaim(job_=<unavailable>) at init.cc:2013:31
frame #5: 0x00007fb0937e8db8 libtorch_cuda.so`ncclAsyncJobMain(arg=0x00007fad3c0333b0) at group.cc:73:26
frame #6: 0x00007fb0b7e8a3b2 libc.so.6`start_thread + 722
frame #7: 0x00007fb0b7f0f430 libc.so.6`__clone3 + 48
```
### Versions
PyTorch main
NCCL 2.25.1-1
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @c-p-i-o | open | 2025-03-13T20:33:26Z | 2025-03-19T18:06:51Z | https://github.com/pytorch/pytorch/issues/149153 | [
"module: dependency bug",
"oncall: distributed",
"module: nccl",
"module: c10d",
"bug"
] | d4l3k | 7 |
Gerapy/Gerapy | django | 67 | 任务管理中定时任务未能执行? | 任务管理中定时任务未能执行?所有配置均配置了,定好的时间并不能运行爬虫项目,希望大佬能告知解决办法 | closed | 2018-07-10T06:15:02Z | 2018-09-20T13:31:31Z | https://github.com/Gerapy/Gerapy/issues/67 | [] | lceBoss | 6 |
kizniche/Mycodo | automation | 1,135 | AnyLeaf pH Probe Won't Calibrate | Hey Kyle
Before I explain the issue I'm having, I would like to say thank you for creating such an awesome open-source piece of software. Very easy to use and so useful. My issue: I'm trying to calibrate my AnyLeaf pH probe, but I keep getting this error after I click the calibrate, slot 2 button. I enter my first buffer of 4.0, hit the calibrate, slot 1 button and then notice my reading changes to around 4 pH. I enter my second buffer solution of 7.0, hit the calibrate, slot 2 button and then I have no data on the live data viewing page for that input. The following logs:
2022-01-07 18:20:25,467 - ERROR - mycodo.inputs.anyleaf_ph_9ad5adea - InputModule raised an exception when taking a reading: float division by zero
2022-01-07 18:20:29,797 - ERROR - mycodo.inputs.anyleaf_ph_9ad5adea - InputModule raised an exception when taking a reading: float division by zero
2022-01-07 18:20:44,855 - ERROR - mycodo.inputs.anyleaf_ph_9ad5adea - InputModule raised an exception when taking a reading: float division by zero
2022-01-07 18:20:44,856 - ERROR - mycodo.controllers.controller_input_9ad5adea - StopIteration raised 3 times. Possibly could not read input. Ensure it's connected properly and detected.
Now, if I clear the calibration slots, deactivate my input, then reactivate my input, I get a reading and I have communication again with the probe. So far I've tried:
Calibrating 1 slot at a time and deactivating then activating the input, and it still fails
Tried Calibrating from a different slot first, and it still fails.
Tried Calibrating all three slots, and this still fails
Any help would be appreciated
Thanks Mate!
| closed | 2022-01-07T18:36:37Z | 2022-05-20T17:04:04Z | https://github.com/kizniche/Mycodo/issues/1135 | [] | bviolante | 6 |
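A `float division by zero` during reading is consistent with a two-point calibration whose slope denominator collapsed — i.e. both stored calibration points carry (near-)identical sensor voltages. The underlying arithmetic, sketched generically (the anyleaf library's actual internals may differ):

```python
def ph_from_two_point(v, pt1, pt2):
    """Linear two-point calibration; each point is (voltage, known_pH)."""
    (v1, ph1), (v2, ph2) = pt1, pt2
    if v2 == v1:
        # identical voltages would make the slope below divide by zero --
        # the symptom reported in the Mycodo log; recalibrating both slots
        # with the probe settled in each buffer avoids it
        raise ValueError("calibration points have identical voltages")
    slope = (ph2 - ph1) / (v2 - v1)
    return ph1 + slope * (v - v1)
```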
ray-project/ray | pytorch | 51,487 | CI test windows://python/ray/tests:test_streaming_generator_2 is consistently_failing | CI test **windows://python/ray/tests:test_streaming_generator_2** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aad4-a541-45a9-b1ef-d27f9a1da383
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4168-a0da-6cbdc8cbd2df
DataCaseName-windows://python/ray/tests:test_streaming_generator_2-END
Managed by OSS Test Policy | closed | 2025-03-18T23:08:16Z | 2025-03-19T21:55:16Z | https://github.com/ray-project/ray/issues/51487 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 2 |
influxdata/influxdb-client-python | jupyter | 626 | AttributeError: 'WriteApi' object has no attribute '_subject' | ### Specifications
* Client Version: 1.39.0
* InfluxDB Version: 2.7.5
* Platform: Macos 14.0
* python version: 3.11
### Code sample to reproduce problem
```python
from influxdb_client import InfluxDBClient, Point, WritePrecision
import pandas as pd
from datetime import datetime
from multiprocessing import Pool

url = "http://localhost:8086"
token = "**********************"
org = "grafana_on_mac"
bucket = "grafana_test"
client = InfluxDBClient(url=url, org=org, token=token)
csv_file = "covid_19_india.csv"

def convert_to_utc(date_str, time_str):
    datetime_str = f"{date_str} {time_str}"
    dt_obj = datetime.strptime(datetime_str, "%Y-%m-%d %I:%M %p")
    return dt_obj.isoformat()

def insert_data_to_influxdb(row):
    try:
        point = Point("covid_stats") \
            .tag("State", row['State/UnionTerritory']) \
            .time(convert_to_utc(row['Date'], row['Time']), WritePrecision.NS) \
            .field("Cured", int(row['Cured'])) \
            .field("Deaths", int(row['Deaths'])) \
            .field("Confirmed", int(row['Confirmed']))
        client.write_api().write(bucket=bucket, record=point)
    except Exception as e:
        print(f"Error occurred during insertion: {e}")

def parallel_insertion(data):
    try:
        with Pool() as pool:
            pool.map(insert_data_to_influxdb, data.to_dict('records'))
        print("Data insertion complete.")
    except Exception as e:
        print(f"Error occurred during parallel insertion: {e}")

try:
    data = pd.read_csv(csv_file)
    parallel_insertion(data)
finally:
    client.close()
```
### Expected behavior
The values from the csv file should be added to the influx DB
### Actual behavior
Started seeing below error and finally nothing was written to the DB:
```
AttributeError: 'WriteApi' object has no attribute '_subject'
Error occurred during insertion: Invalid value for `api_client`, must be defined.
Exception ignored in: <function WriteApi.__del__ at 0x10638e980>
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/influxdb_client/client/write_api.py", line 411, in __del__
if self._subject:
^^^^^^^^^^^^^
AttributeError: 'WriteApi' object has no attribute '_subject'
Error occurred during insertion: Invalid value for `api_client`, must be defined.
Exception ignored in: <function WriteApi.__del__ at 0x10638e980>
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/influxdb_client/client/write_api.py", line 411, in __del__
if self._subject:
^^^^^^^^^^^^^
```
### Additional info
minimal csv info:
<img width="647" alt="image" src="https://github.com/influxdata/influxdb-client-python/assets/9888307/19569f97-8511-4784-88d5-22fae5bbfc35">
| closed | 2024-01-07T19:45:03Z | 2024-01-08T07:23:02Z | https://github.com/influxdata/influxdb-client-python/issues/626 | [
"bug"
] | jalees | 1 |
benbusby/whoogle-search | flask | 1,181 | Internal server error (500) | **Describe the bug**
ImportError: Error relocating /usr/local/lib/python3.12/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so: FIPS_mode_set: symbol not found
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [x] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [ ] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [x] Version [linux/arm/v7 0.9.0 ]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
| open | 2024-10-01T01:10:52Z | 2025-02-24T03:17:59Z | https://github.com/benbusby/whoogle-search/issues/1181 | [
"bug"
] | amduong | 2 |
JaidedAI/EasyOCR | pytorch | 997 | Training Custom Plates Dataset | Hello,
We have a dataset consisting of street images containing shop plates with different fonts, sizes, languages, and styles, and we want to use EasyOCR for plate detection. To improve the results, we want to train on our custom dataset.
1- As you mentioned in the "How to train your custom model" section, the training method depends on "TextRecognitionDataGenerator", but this method doesn't suit our needs. Is there a way to train on our own images rather than generated text, and if so, what are the steps (tool for labeling images, ...), and how can we turn the result into a model?
2- Can we use two models (the default model and our custom model) in EasyOCR detection at the same time?
Thanks in advance. | open | 2023-04-26T05:29:15Z | 2023-04-30T06:00:24Z | https://github.com/JaidedAI/EasyOCR/issues/997 | [] | soheer | 1 |
mljar/mercury | data-visualization | 8 | Convert Notebook to REST API | # Problem
* How to reuse those models in microservices in Kubernetes (Containers)
# Solution
* Reuse Mercury's capability through an API-driven interview
* Instead of a Form, users would use an API
* Either HTTP REST/GRPC
* Container-level through Environment vars | closed | 2022-01-17T07:35:42Z | 2022-05-06T13:04:40Z | https://github.com/mljar/mercury/issues/8 | [
"enhancement"
] | marcellodesales | 7 |
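The API-driven shape this feature request describes can be sketched with the standard library alone. Everything below is hypothetical scaffolding (Mercury does not expose this today): `run_notebook` stands in for executing the notebook's parameterized entry point.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

def run_notebook(params):
    # Hypothetical stand-in for executing the notebook with `params`
    # bound to its widget/environment inputs.
    return {"received": params}

class NotebookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run the notebook with it.
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(run_notebook(params)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo server quiet

def serve(port=0):
    # port=0 lets the OS pick a free port (container-friendly default).
    server = HTTPServer(("127.0.0.1", port), NotebookHandler)
    Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In a container, the port would come from an environment variable, matching the container-level configuration point in the list above.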
Lightning-AI/pytorch-lightning | pytorch | 20,253 | Cannot turn off sampler injection at inference time. | ### Bug description
I want to use a custom distributed batch sampler at inference time.
The sampler looks like this:
```python
from typing import Iterator, Optional

from torch.utils.data import Dataset, DistributedSampler
from torch.utils.data.distributed import T_co

class DistributedInferenceBatchSampler(DistributedSampler):
def __init__(self, dataset: Dataset,
batch_size: int = 1,
num_replicas: Optional[int] = None,
rank: Optional[int] = None,
shuffle: bool = False,
seed: int = 0,
drop_last: bool = False,
) -> None:
super().__init__(dataset, num_replicas=num_replicas, rank=rank,
shuffle=shuffle, seed=seed, drop_last=drop_last)
# do stuff
# sort data indices by datapoint length, batch up
# subsample batches for current rank
        self.batches = ...  # nested list [[b1_1, b1_2, ...], [b2_1, b2_2, ...], ...]
def __iter__(self) -> Iterator[T_co]:
return iter(self.batches)
def __len__(self) -> int:
return len(self.batches)
```
I use this dataloader inside my data module:
```python
from pytorch_lightning import LightningDataModule
from torch.utils.data import DataLoader

class DataModule(LightningDataModule):
# ...
def predict_dataloader(self, rank=None, num_replicas=None):
# define Dataset 'data'
bsampler = DistributedInferenceBatchSampler(dataset=data,
batch_size=4,
num_replicas=num_replicas,
rank=rank)
data_loader = DataLoader(data,
batch_sampler=bsampler)
return data_loader
```
I'm running inference like so:
```python
trainer = Trainer( strategy='auto',
use_distributed_sampler=False, # using custom distributed batchsampler
                    accelerator="gpu",
deterministic=False,
enable_progress_bar=True,
enable_model_summary=True,
devices=devices
)
trainer.predict(model=self._model, datamodule=self._datamodule)
```
However, Lightning tries to replace my batch sampler despite the `use_distributed_sampler=False` flag, because it always does so in predict mode, and it fails because the sampler doesn't have the same signature as a PyTorch `BatchSampler`.
I've tried wrapping my custom `DistributedInferenceBatchSampler` like so:
```python
from pytorch_lightning import LightningDataModule
from torch.utils.data import BatchSampler, DataLoader

class BatchSamplerWrapper(BatchSampler):
def __init__(self, sampler, batch_size=1, drop_last=False):
self.sampler = sampler
self.batch_size = batch_size # ignored
self.drop_last = drop_last # ignored
def __iter__(self):
for batch in self.sampler:
yield batch
def __len__(self):
return len(self.sampler)
class DataModule(LightningDataModule):
# ...
def predict_dataloader(self, rank=None, num_replicas=None):
# define Dataset 'data'
bsampler = DistributedInferenceBatchSampler(dataset=data,
batch_size=4,
num_replicas=num_replicas,
rank=rank)
wrapper = BatchSamplerWrapper(bsampler, batch_size=4, drop_last=False)
data_loader = DataLoader(data,
batch_sampler=wrapper)
return data_loader
```
However, Lightning replaces my `bsampler` inside the wrapper with a `torch.utils.data.sampler.SequentialSampler` which leads to `BatchSamplerWrapper.__iter__()` not having the intended behaviour. It returns an `int` rather than a list of `int`s, leading to:
```
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Error executing job with overrides: []
Traceback (most recent call last):
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/codebase/IgFlow/multiflow/experiments/inference.py", line 18, in sample
run.sample()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/codebase/IgFlow/multiflow/experiments/model_run.py", line 125, in sample
trainer.predict(model=self._model, datamodule=self._datamodule)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 864, in predict
return call._call_and_handle_interrupt(
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 102, in launch
return function(*args, **kwargs)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 903, in _predict_impl
results = self._run(model, ckpt_path=ckpt_path)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 989, in _run
results = self._run_stage()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1030, in _run_stage
return self.predict_loop.run()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator
return loop_run(self, *args, **kwargs)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/loops/prediction_loop.py", line 119, in run
batch, batch_idx, dataloader_idx = next(data_fetcher)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/loops/fetchers.py", line 127, in __next__
batch = super().__next__()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/loops/fetchers.py", line 56, in __next__
batch = next(self.iterator)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/utilities/combined_loader.py", line 326, in __next__
out = next(self._iterator)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/pytorch_lightning/utilities/combined_loader.py", line 132, in __next__
out = next(self.iterators[0])
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/_utils.py", line 644, in reraise
raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/data/nagagpu03/not-backed-up/nvme00/vavourakis/miniforge3/envs/mflow/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
TypeError: 'int' object is not iterable
```
I just want to turn off this sampler-replacing behaviour. I have a similar setup during training (rather than inference) and that works fine (no wrappers required, either).
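Given that the failure is a signature mismatch, one workaround is to give the custom batch sampler the stock `BatchSampler` signature `(sampler, batch_size, drop_last)` so the re-instantiation step succeeds. The following is a framework-free sketch of the idea, not a documented Lightning API; the re-instantiation line mimics what the injection step does:

```python
class CompatibleBatchSampler:
    """Accepts the stock BatchSampler signature so a framework can
    re-instantiate it via type(bs)(sampler, batch_size, drop_last)."""

    def __init__(self, sampler, batch_size, drop_last):
        self.sampler = sampler
        self.batch_size = batch_size
        self.drop_last = drop_last

    def __iter__(self):
        batch = []
        for idx in self.sampler:   # custom batching logic would go here
            batch.append(idx)
            if len(batch) == self.batch_size:
                yield batch
                batch = []
        if batch and not self.drop_last:
            yield batch

    def __len__(self):
        n = len(self.sampler)
        return n // self.batch_size if self.drop_last else -(-n // self.batch_size)

original = CompatibleBatchSampler(range(10), batch_size=4, drop_last=False)
# Mimic the framework's re-instantiation of a user batch sampler:
reinjected = type(original)(sampler=range(10), batch_size=4, drop_last=False)
print(list(reinjected))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Deriving the batches from the `sampler` argument (rather than precomputing them in `__init__` from other state) is what keeps the re-instantiated copy equivalent to the original.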
### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
_No response_
### More info
_No response_ | open | 2024-09-06T11:29:47Z | 2024-09-06T11:30:00Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20253 | [
"bug",
"needs triage",
"ver: 2.1.x"
] | ovavourakis | 0 |
pyeventsourcing/eventsourcing | sqlalchemy | 249 | monotonic | `time.time()` returns a float (time in seconds since `epoch`).
`time.monotonic()` returns a float (a clock that never goes backwards)
```
datetime.fromtimestamp(monotonic(), timezone.utc)
```
will always generate *1970-01-02 19:19:18:508179+00:00* because the monotonic value is < epoch
_"The epoch is the point where the time starts, and is platform dependent. For Unix, the epoch is January 1, 1970, 00:00:00 (UTC). To find out what the epoch is on a given platform, look at time.gmtime(0)."_

| closed | 2022-04-16T18:54:52Z | 2022-04-16T22:31:47Z | https://github.com/pyeventsourcing/eventsourcing/issues/249 | [] | austinnichols101 | 2 |
custom-components/pyscript | jupyter | 71 | Feature Request: a decorator to declare a function/class as "native" | Would it be possible to use a decorator to indicate that a function/class should remain native (i.e. not "pyscript") so that it can be called from other native (pure python) code?
As an example of what I mean, in the below code, I'd much prefer to define `PyscriptWatchdogEventHandler` directly in `watchdog.py` with some kind of decorator on it to indicate that it shouldn't be "pyscriptified".
https://gist.github.com/dlashua/f7d88f9a5afdcf7af17ce24266925a0b
Related Question: The purpose of the above code is to automatically reload pyscript files when they change (because I'm lazy). I use an input_boolean to indicate this functionality should be enabled that I can turn off when I'm not developing to save resources. Have I missed a situation that could occur and leave me with memory leaks or other unintended effects? Is there a better way to do this? | closed | 2020-11-02T17:15:37Z | 2021-02-01T12:42:15Z | https://github.com/custom-components/pyscript/issues/71 | [] | dlashua | 4 |
Guovin/iptv-api | api | 92 | The output file is identical to the demo after running | After following your tutorial, the resulting user_result.txt file is exactly the same as user_demo.txt.
Is this expected, or did I do something wrong?
| closed | 2024-04-23T02:10:58Z | 2024-04-24T08:24:23Z | https://github.com/Guovin/iptv-api/issues/92 | [
"duplicate"
] | AomoDa | 1 |
OpenVisualCloud/CDN-Transcode-Sample | dash | 98 | [Feature][Q4'19] Investigate Support of CMK for CPU isolation and pinning | Support manually set the CPU and Memory resource | closed | 2019-11-26T01:46:22Z | 2019-12-25T06:06:42Z | https://github.com/OpenVisualCloud/CDN-Transcode-Sample/issues/98 | [
"enhancement"
] | wenquan-mao | 0 |
huggingface/pytorch-image-models | pytorch | 2,132 | [BUG] AttributeError: module 'torch._C' has no attribute 'set_grad_enabled' | ```shell
File "/home/ahalev/miniconda3/envs/eye-image-env/lib/python3.8/site-packages/timm/loss/asymmetric_loss.py", line 40, in forward
torch._C.set_grad_enabled(False)
AttributeError: module 'torch._C' has no attribute 'set_grad_enabled'
```
New versions of torch deprecated `torch._C.set_grad_enabled` in favor of [torch.set_grad_enabled](https://pytorch.org/docs/stable/generated/torch.set_grad_enabled.html), leading to an AttributeError here:
https://github.com/huggingface/pytorch-image-models/blob/59b3d86c1d69fe85ccb5bbdb2f1711eadae6e4a7/timm/loss/asymmetric_loss.py#L39-L40 | closed | 2024-04-03T04:41:07Z | 2024-04-09T17:14:15Z | https://github.com/huggingface/pytorch-image-models/issues/2132 | [] | ahalev | 0 |
onnx/onnx | machine-learning | 5,810 | Assertion `false` failed: No Adapter From Version $19 for Constant | # Bug Report
### Is the issue related to model conversion?
<!-- If the ONNX checker reports issues with this model then this is most probably related to the converter used to convert the original framework model to ONNX. Please create this bug in the appropriate converter's GitHub repo (pytorch, tensorflow-onnx, sklearn-onnx, keras-onnx, onnxmltools) to get the best help. -->
Yes, it is. However, the issue does not appear when converting my TensorFlow model to Onnx, but only later when I run `onnx.version_converter.convert_version()` to convert between Onnx Opset versions. Therefore, I decided to post the issue in this repo.
### Describe the bug
My goal is to convert a TensorFlow model to Onnx with a "current" version of Onnx. I managed to convert my models using Onnx version 1.12.0, but not with versions 1.15.0 and 1.14.1.
When I tried the newest version I came across the bug reported in this issue: https://github.com/onnx/tensorflow-onnx/issues/2262. As suggested in the comments I downgraded Onnx to 1.14.1, which is not a fix I like, but I tried it anyway. Afterwards, this first issue was solved, but I ran into another one:
> RuntimeError: /github/workspace/onnx/version_converter/BaseConverter.h:68: adapter_lookup: Assertion `false` failed: No Adapter From Version $19 for Constant
In my particular case I tried to use the function `onnx.version_converter.convert_version()` to convert to opset version 16, but I also don't mind if it is a version >=16. Therefore, I also tried the target opset versions 18 and 19. However, it appears that for versions below 19 I will always run into the issue reported here and if I use version 19 I run into the following exception:
> ValueError: make_sure failure: Opset 19 is not supported yet. Please use a lower opset
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): Linux Ubuntu 22.04.3 LTS
- ONNX version (*e.g. 1.13*): 1.14.1
- Python version: 3.11.5
- GCC/Compiler version (if compiling from source): /
- CMake version: /
- Protobuf version: 3.20.3
- Visual Studio version (if applicable): /
### Reproduction instructions
```
import onnx
from onnx import version_converter
m_path = "<PATH TO SOME ONNX MODEL>"
onnx_model = onnx.load(m_path)
converted_model = version_converter.convert_version(onnx_model , 16)
```
### Expected behavior
The conversion should work as it did for earlier versions of Onnx.
| open | 2023-12-18T13:36:51Z | 2024-10-26T21:47:05Z | https://github.com/onnx/onnx/issues/5810 | [
"topic: converters",
"contributions welcome"
] | RabJon | 15 |
mckinsey/vizro | plotly | 567 | Custom action targeting custom graph inputs / graph function parameters | ### Question
Hey team,
I have a question regarding the use of custom actions.
I would like to be able to click on a cell on my AgGrid element, and for the value of the click to update a parameter in the function that builds my graph.
However, I am not sure how to target a parameter within a function that builds the Graph component.
Below is an example where I am able to control the text in a card element (as in one of your standard published examples).
Instead, I would like to control specific inputs to my function (the commented-out `mainfig.question` example). To be clearer: I would like this custom action to work like a parameter, able to change the `question` input to my custom graph.
How is it possible to target custom graph inputs through custom actions? If that is not possible, is it possible to target the selected parameter somehow?
Thank you so much for the amazing work on this tool, and for the help in advance!
### Code/Examples
```
import vizro.models as vm
import vizro.plotly.express as px
from vizro import Vizro
from vizro.actions import filter_interaction
from vizro.models.types import capture
import pandas as pd
from vizro.tables import dash_ag_grid
import vizro.plotly.express as px
import json # Testing
data = pd.read_csv('./develop/lsurvey.csv')
questions = pd.DataFrame(data['question'].drop_duplicates())
segments = pd.DataFrame(list(data[data['question_type']=='single_choice']['question'].unique()) + ['none'], columns = ['segment'])
df = px.data.iris()
@capture("action")
def my_custom_action(clickedcell: dict):
"""Custom action."""
# text = json.dumps(clickedcell)
# return lambda: f"You clicked on cell with value: {text}" # Return an update function
return (
json.dumps(clickedcell["value"])
if clickedcell
else "Click on a point on the above graph(func)."
)
@capture("graph")
def build_selected_figure(data_frame, question, segmentation):
fig=px.scatter_matrix(
data_frame = data_frame
, dimensions=["sepal_length", "sepal_width", "petal_length", "petal_width"]
, color="species"
, title = question
    )
    return fig
page = vm.Page(
title="Example of a custom action with UI inputs and outputs",
path="custom_actions",
id="custom_actions",
layout=vm.Layout(
grid=[[0, 1], [2, 2]],
row_gap="25px",
),
components=[
vm.AgGrid(
id="cell-selection-simple-click-callback",
figure=dash_ag_grid(
id="cell-selection-simple-click-callback-grid",
data_frame=data,
rowData=data.to_dict("records"),
columnDefs=[{"field": i} for i in data.columns],
defaultColDef={"filter": True},
columnSize="sizeToFit",
getRowId="params.data.State",
dashGridOptions={"animateRows": False},
),
actions=[
vm.Action(
function=my_custom_action(),
inputs=["cell-selection-simple-click-callback-grid.cellClicked"],
#outputs=["mainfig.question"],
                    outputs=["my_card.children"]
),
],
),
vm.Card(id="my_card", text="Click on a point on the above graph."),
vm.Graph(id = 'mainfig'
, figure=build_selected_figure(data_frame = df, question = "How satisfied are you with the product overall?", segmentation = 'none'),
),
],
controls=[
vm.Parameter(
id="segment",
targets=['mainfig.question'],
selector=CustomRadioItems(
# options= all_segments,
options = questions,
title='questionz',
),
),
    ],
)
dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
```
### Other information
_No response_
### Which package?
vizro
### Package version
_No response_
### Python version
_No response_
### OS
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-07-04T13:27:04Z | 2024-07-09T14:36:01Z | https://github.com/mckinsey/vizro/issues/567 | [
"General Question :question:"
] | szt-ableton | 7 |
comfyanonymous/ComfyUI | pytorch | 6,389 | Is the image preview of the image loading options not outside the window? | ### Your question
Since PR#6324, the image preview in the image loading options seems to be embedded in the window? Instead of being displayed outside the window like before? I'm still a bit unaccustomed to this layout; the image selection window list is too small.


### Logs
_No response_
### Other
_No response_ | closed | 2025-01-08T06:17:17Z | 2025-01-09T14:35:14Z | https://github.com/comfyanonymous/ComfyUI/issues/6389 | [
"User Support",
"Custom Nodes Bug"
] | Myoko | 9 |
STVIR/pysot | computer-vision | 206 | How should the learning rate be adjusted with the number of GPUs? | In the 8-GPU and 16-GPU configs, only the batch size differs. Does the learning rate need to be adjusted accordingly? | closed | 2019-10-14T06:46:07Z | 2019-12-18T08:59:31Z | https://github.com/STVIR/pysot/issues/206 | [] | kongbia | 5
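A common answer to the question above is the linear scaling rule (a heuristic from the large-batch training literature, not something the pysot configs themselves state): scale the learning rate by the same factor as the global batch size, often with a warmup phase. The batch numbers below are hypothetical:

```python
def scaled_lr(base_lr, base_global_batch, new_global_batch):
    # Linear scaling rule: lr grows in proportion to the global batch size.
    return base_lr * new_global_batch / base_global_batch

# Same per-GPU batch on twice as many GPUs -> double the learning rate.
print(scaled_lr(0.005, 8 * 32, 16 * 32))  # 0.01
```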
benbusby/whoogle-search | flask | 1,204 | [QUESTION] when behind a reverse proxy, search "my ip" get container's internal ip instead of client's remote ip | Hi, I'm using caddy and cloudflare to reverse proxy whoogle, my caddyfile is here:
```
search.example.com {
reverse_proxy http://whoogle-search:5000 { # whoogle-search is the container name
header_up X-Real-IP {remote}
}
}
```
When I search "my ip", the result shows the internal container IP 172.20.0.3 instead of the client's remote IP. I checked the Caddy logs; the remote IP (both Cloudflare's and the real client's) should have been sent to Whoogle:
```
{"level":"debug","ts":1733982155.3036063,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"whoogle-search:5000","duration":0.2426664,"request":{"remote_ip":"172.
71.210.247","remote_port":"11862","client_ip":"172.71.210.247","proto":"HTTP/2.0","method":"GET","host":"search.example.com","uri":"/element?url=gAAAAABnWnfKtJH6gl4CYzc1QmN66eAfwFRvgQiEyHE5
DrXsJZbOztDnlgvq-RyR2lTrhEKvK6_i9e7SS6i_2RD_GQRYW79HmTa_8D8O5vX1LAGArzz2I-M=&type=image/x-icon","headers":{"X-Forwarded-Host":["search.example.com"],"Cf-Ray":["8f0b6454dbd284d2-HKG"],"Cf-Vi
sitor":["{\"scheme\":\"https\"}"],"Accept-Encoding":["gzip, br"],"Cf-Connecting-Ip":["REDACTED"],"Sec-Ch-Ua-Mobile":["?0"],"Sec-Ch-Ua-Platform":["\"Windows\""],"Sec-Ch-Ua":["\"Micros
oft Edge\";v=\"131\", \"Chromium\";v=\"131\", \"Not_A Brand\";v=\"24\""],"Cookie":["REDACTED"],"Cdn-Loop":["cloudflare; loops=1"],"X-Real-Ip":["172.71.210.247:11862"],"User-Agent":["Mozilla
/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0"],"X-Forwarded-Proto":["https"],"Sec-Fetch-Site":["same-origin"],"Acce
pt-Language":["en-US,en;q=0.9"],"X-Forwarded-For":["172.71.210.247"],"Accept":["image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8"],"Cf-Ipcountry":["CN"],"Sec-Fetch-Mode":["n
o-cors"],"Sec-Fetch-Dest":["image"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"search.example.com"}},"headers":{"Content-Type":["image/png"],"Set-
Cookie":["REDACTED"],"Vary":["Cookie"],"X-Frame-Options":["DENY"],"Cache-Control":["max-age=86400"],"Content-Length":["4286"],"Date":["Thu, 12 Dec 2024 05:42:35 GMT"],"Server":["waitress"],
"X-Content-Type-Options":["nosniff"]},"status":200}
```
How can I get "my ip" return my client's real ip? | closed | 2024-12-12T06:13:14Z | 2024-12-19T07:50:32Z | https://github.com/benbusby/whoogle-search/issues/1204 | [
"question"
] | paxprot | 1 |
huggingface/datasets | machine-learning | 6,755 | Small typo on the documentation | ### Describe the bug
There is a small typo on https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938
It should be `caching is enabled`.
### Steps to reproduce the bug
Please visit
https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938
### Expected behavior
`caching is enabled`
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.10.0 | closed | 2024-03-24T21:47:52Z | 2024-04-02T14:01:19Z | https://github.com/huggingface/datasets/issues/6755 | [
"good first issue"
] | fostiropoulos | 3 |
FactoryBoy/factory_boy | sqlalchemy | 463 | SubFactory being created despite existing object passed in | If I have a factory like this:
```python
class EvaluationFactory(factory.alchemy.SQLAlchemyModelFactory):
class Meta:
model = models.Evaluation
sqlalchemy_session = models.db.session
sqlalchemy_session_persistence = 'flush'
professor = factory.SubFactory(ProfessorFactory)
```
And call it like this:
```python
prof = factories.ProfessorFactory()
factories.EvaluationFactory(professor=prof)
```
There will now be 2 professors in the system. One is the one I created, and one is one that was automatically created by EvaluationFactory. The Evaluation that got created is correctly linked with the Professor I created. Why is an extra professor created? Seems like a bug to me (since it's created but not even used).
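The contract this report expects (the sub-object default is only built when no value is passed) can be stated in plain Python. This is a stdlib sketch of the expectation, not of factory_boy's internals:

```python
created = []

def make_professor():
    # Stand-in for ProfessorFactory(): records every instantiation.
    created.append("professor")
    return {"kind": "professor"}

def make_evaluation(professor=None):
    # Stand-in for EvaluationFactory(professor=...): the SubFactory
    # default should only fire when the caller passes nothing.
    if professor is None:
        professor = make_professor()
    return {"professor": professor}

prof = make_professor()
evaluation = make_evaluation(professor=prof)
print(len(created))  # 1 -- no extra professor should be created
```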
| open | 2018-03-22T23:50:21Z | 2021-02-09T16:13:59Z | https://github.com/FactoryBoy/factory_boy/issues/463 | [] | fgblomqvist | 4 |
vitalik/django-ninja | pydantic | 547 | [BUG] TypeError: 'coroutine' object is not subscriptable [async pagination] | ```Python
async def search(request, keywords: str):
item = await sync_to_async(list)(Movie.objects.all())
if item:
return item
```
Error code after running
```
Traceback (most recent call last):
File "D:\Program Files (x86)\Anaconda3\envs\django\lib\site-packages\ninja\operation.py", line 99, in run
result = self.view_func(request, **values)
File "D:\Program Files (x86)\Anaconda3\envs\django\lib\site-packages\ninja\pagination.py", line 145, in view_with_pagination
result = paginator.paginate_queryset(
File "D:\Program Files (x86)\Anaconda3\envs\django\lib\site-packages\ninja\pagination.py", line 69, in paginate_queryset
"items": queryset[offset : offset + limit],
TypeError: 'coroutine' object is not subscriptable
``` | closed | 2022-09-01T07:21:00Z | 2024-04-30T11:42:38Z | https://github.com/vitalik/django-ninja/issues/547 | [] | ddzyx | 14 |
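The `'coroutine' object is not subscriptable` in the traceback above is the generic symptom of a coroutine reaching code that expects a list or queryset: the paginator slices the un-awaited return value. Minimal stdlib reproduction:

```python
import asyncio

async def fetch_items():
    return [1, 2, 3]

coro = fetch_items()          # calling, not awaiting -> coroutine object
try:
    coro[0:2]                 # what a paginator's queryset[offset:limit] does
except TypeError as exc:
    print(exc)                # 'coroutine' object is not subscriptable
finally:
    coro.close()              # silence the "never awaited" warning

items = asyncio.run(fetch_items())   # awaiting yields the real list
print(items[0:2])             # [1, 2]
```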
developmentseed/lonboard | data-visualization | 484 | PathStyleExtension in lonboard | As noted in #473 my use case is plotting link volumes on NZ roads. One "feature" of the road geometries (which come from OpenStreetMap) is that lane geometries lie on top of each other. This doesn't look good in the visualisation so we want to split paths with some offset. One fix seems to be the `deck.gl` `PathStyleExtension` with the `offset:true` option. I was wondering if there is a way to include this in a `lonboard` map even though it is not available in the [current set](https://developmentseed.org/lonboard/latest/api/layer-extensions/)?
```
new deck.PathLayer({
widthMinPixels: 1,
capRounded: true,
data: pathData,
getPath: (d) => d.path,
getColor: (d) => d.color,
getWidth: (d) => d.width,
getOffset: -0.5,
extensions: [
new deck.PathStyleExtension({
offset: true
})
],
parameters: {
depthTest: false
}
})
``` | closed | 2024-04-23T22:10:08Z | 2024-05-02T18:22:08Z | https://github.com/developmentseed/lonboard/issues/484 | [] | shriv | 3 |
cvat-ai/cvat | computer-vision | 9,138 | Help needed! I want to run just server and proxy my UI running locally to the server, FORBIDDEN | Hey, I want to run just the server, wants to do some enhancements in the UI, but when i ran docker-compose as instructed in via : https://docs.cvat.ai/docs/contributing/development-environment/
it says backend sever not running the requests are forbidden, can anyone please help. | closed | 2025-02-23T15:36:15Z | 2025-02-24T19:12:36Z | https://github.com/cvat-ai/cvat/issues/9138 | [
"question"
] | prabaljainn | 13 |
ansible/awx | automation | 15,665 | Remote work unit is gone | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
When trying to add a new instance with the same IP/hostname after reinstallation of the remote host I get the following error in AWX:
```
Receptor error from 10.131.254.154, detail:
Remote work unit is gone
```
The logs show:
```
2024-11-26 06:56:00,714 WARNING [d4328228e83442d4b8d0100c1fcbf803] awx.main.tasks.receptor While releasing work: WorkUnitCancelError 'error cancelling remote unit: unknown work unit ' on node '10.131.254.154' with state 'Failed' work unit id '10'
2024-11-26 06:56:00,718 INFO [d4328228e83442d4b8d0100c1fcbf803] awx.main.tasks.system Failed to find capacity of new or lost execution node 10.131.254.154, errors:
Receptor error from 10.131.254.154, detail:
Remote work unit is gone
2024-11-26 06:56:05,014 INFO [d4328228e83442d4b8d0100c1fcbf803] awx.main.tasks.receptor Reaping orphaned work unit G9gDUxVA with params --worker-info
2024-11-26 06:56:05,681 ERROR [d4328228e83442d4b8d0100c1fcbf803] awx.main.dispatch Worker failed to run task awx.main.tasks.system.awx_receptor_workunit_reaper(*[], **{}
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/dispatch/worker/task.py", line 103, in perform_work
result = self.run_callable(body)
^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/dispatch/worker/task.py", line 78, in run_callable
return _call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/tasks/system.py", line 693, in awx_receptor_workunit_reaper
administrative_workunit_reaper(receptor_work_list)
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/awx/main/tasks/receptor.py", line 222, in administrative_workunit_reaper
receptor_ctl.simple_command(f"work release {unit_id}")
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/receptorctl/socket_interface.py", line 83, in simple_command
return self.read_and_parse_json()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/lib/awx/venv/awx/lib64/python3.11/site-packages/receptorctl/socket_interface.py", line 60, in read_and_parse_json
raise RuntimeError(text[7:])
RuntimeError: error cancelling remote unit: unknown work unit 10
```
The receptor logs shows:
```
ERROR 2024/11/26 06:57:05 Error locating unit: 10
ERROR 2024/11/26 06:57:05 : unknown work unit 10
```
How do I resolve this? I can't change the IP of the receptor.
### AWX version
24.6.1
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [ ] Collection
- [X] CLI
- [X] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
2.15.9
### Operating system
Ubuntu
### Web browser
_No response_
### Steps to reproduce
Reinstall an instance bundle.
### Expected results
The receptor will join the mesh.
### Actual results
Receptor can't join the mesh.
### Additional information
_No response_ | closed | 2024-11-26T06:58:50Z | 2025-02-26T16:44:12Z | https://github.com/ansible/awx/issues/15665 | [
"type:bug",
"component:api",
"needs_triage",
"community"
] | xibriz | 4 |
akfamily/akshare | data-science | 5,649 | AKShare interface issue report | The `stock_zh_a_hist` interface has become very slow | Can anyone tell me why the interface is responding so slowly now? A single request takes more than ten seconds, but it was still fast yesterday.
 | closed | 2025-02-17T05:28:04Z | 2025-02-17T06:43:40Z | https://github.com/akfamily/akshare/issues/5649 | [
"bug"
] | vivienleigh123 | 2 |
microsoft/hummingbird | scikit-learn | 557 | Remove implicit deletion of model files from .load() method | **Issue**
The `load()` method of the `PyTorchSklearnContainer` class implicitly deletes the original model file whose location is passed in by the caller. This implicit deletion is invisible to callers, and it has side effects that lead to race conditions and block any concurrency setup for our inference servers in production.
1. When running inside a Docker container, the deletion of the original model file by the load method means the container must be restarted in order to load the model again. In other words, a Hummingbird model can be loaded only once within a Docker container's lifecycle.
2. We can't run more than one process for the inference server: if we do, the first process to load the model deletes the original model file location, causing a race condition for the second process onwards.
**Desired Behavior**
The `load()` method of the `PyTorchSklearnContainer` class (and of the other container classes, since the issue applies to all of them) should not delete the original model file location supplied by the caller, thereby allowing the model to be loaded multiple times and allowing more than one inference-server process.
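Until the library changes, one caller-side workaround is to hand the loader a disposable copy so only the copy gets deleted. A minimal sketch (the `load_fn` parameter is a stand-in for whatever destructive loader is used, e.g. `PyTorchSklearnContainer.load`; it is not Hummingbird API):

```python
import os
import shutil
import tempfile

def load_without_deleting(load_fn, model_path):
    """Copy the model file into a private temp dir and load the copy,
    so a destructive load() only removes the copy, never the original."""
    tmp_dir = tempfile.mkdtemp()
    tmp_path = os.path.join(tmp_dir, os.path.basename(model_path))
    shutil.copy2(model_path, tmp_path)
    try:
        return load_fn(tmp_path)
    finally:
        # clean up whatever the loader left behind
        shutil.rmtree(tmp_dir, ignore_errors=True)
```

Because every call gets its own temp directory, multiple worker processes can load the same original file concurrently.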
**Environment**
We are running the inference using a Gunicorn server in front of python app that loads the hummingbird model. The app is containerized as docker image and deployed in Kubernetes Cluster where each pod runs two Gunicorn worker processes to handle concurrency.
**Installed Version**
```
hummingbird-ml==0.4.1
torch==1.5.1
```
Please let me know if you need more information. Thanks.
Regards,
Akshay Jain | closed | 2021-12-13T16:53:34Z | 2021-12-14T19:25:04Z | https://github.com/microsoft/hummingbird/issues/557 | [] | akshjain83 | 7 |
hyperspy/hyperspy | data-visualization | 3,178 | wrong "Import error" description | #### Describe the bug
the returned ImportError gives wrong names for the packages to install (with conda)
The error message points to "hyperspy_gui_ipywidgets" and "hyperspy_gui_traitsui", but the conda install command needs "hyperspy-gui-traitsui" or "hyperspy-gui-ipywidgets". This is also not consistent with the pip commands.
#### To Reproduce
```
ImportError Traceback (most recent call last)
Cell In[2], line 1
----> 1 hs.load()
File ~\hyperspy-RELEASE_next_minor_230616\hyperspy\io.py:363, in load(filenames, signal_type, stack, stack_axis, new_axis_name, lazy, convert_units, escape_square_brackets, stack_metadata, load_original_metadata, show_progressbar, **kwds)
361 from hyperspy.signal_tools import Load
362 load_ui = Load()
--> 363 get_gui(load_ui, toolkey="hyperspy.load")
364 if load_ui.filename:
365 filenames = load_ui.filename
File ~\hyperspy-RELEASE_next_minor_230616\hyperspy\ui_registry.py:69, in get_gui(self, toolkey, display, toolkit, **kwargs)
67 def get_gui(self, toolkey, display=True, toolkit=None, **kwargs):
68 if not TOOLKIT_REGISTRY:
---> 69 raise ImportError(
70 "No toolkit registered. Install hyperspy_gui_ipywidgets or "
71 "hyperspy_gui_traitsui GUI elements."
72 )
73 from hyperspy.defaults_parser import preferences
74 if isinstance(toolkit, str):
ImportError: No toolkit registered. Install **hyperspy_gui_ipywidgets** or **hyperspy_gui_traitsui** GUI elements.
```
#### Expected behavior
A clear and concise description of what you expected to happen.
#### Python environement:
hyperspy 1.8.0.dev0 pypi_0 pypi
hyperspy-base 1.7.5 py311ha68e1ae_1 conda-forge
hyperspy-gui-traitsui 1.5.3 pyhd8ed1ab_0 conda-forge
python 3.11.4 h2628c8c_0_cpython conda-forge
| open | 2023-06-21T20:20:25Z | 2023-07-05T07:22:09Z | https://github.com/hyperspy/hyperspy/issues/3178 | [
"type: bug"
] | OliDG | 1 |
gradio-app/gradio | python | 9,960 | HighlightedText show_label=False doesn't work | ### Describe the bug
show_label=False doesn't work on HighlightedText, though the label can be hidden by setting label="".
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks(theme=gr.themes.Ocean()) as demo:
gr.HighlightedText(
value=[["UK", "+"], ["USA", "-"], ["China", "*"], ["Australia", "*"]],
show_legend=True,
show_label=False,
# label='',
color_map={"*": "yellow", "+": "green", "-": "red"})
gr.HighlightedText(
value=[["UK", "+"], ["USA", "-"], ["China", "*"], ["Australia", "*"]],
show_legend=True,
show_label=False,
label='',
color_map={"*": "yellow", "+": "green", "-": "red"})
if __name__ == "__main__":
demo.launch()
```
### Screenshot

### Logs
```shell
.
```
### System Info
```shell
Gradio Playground
```
### Severity
I can work around it | closed | 2024-11-14T19:36:49Z | 2024-11-20T23:34:41Z | https://github.com/gradio-app/gradio/issues/9960 | [
"bug"
] | sthemeow | 1 |
OFA-Sys/Chinese-CLIP | computer-vision | 378 | When fine-tuning on data with few classes, does the loss need to be modified? After fine-tuning, the computed similarity values are all very small, sometimes even negative | Thanks for your work!
I noticed that in CLIP, the loss is computed by setting the diagonal to 1 and everything else to 0. But when the number of classes is smaller than the batch size, the same text label can appear multiple times within a batch. In that case, should the target for pairs of identical text labels with different images also be set to 1? This seems to turn it into a multi-label optimization problem; should the loss function be changed to BCE loss?
After fine-tuning on my dataset with this setup, the similarity sometimes comes out negative. Is that reasonable? | open | 2025-02-25T15:10:52Z | 2025-02-25T15:10:52Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/378 | [] | 1633232731 | 0 |
recommenders-team/recommenders | deep-learning | 1,295 | [ASK] Can this be installed on Kaggle or Google Notebook environment? | closed | 2021-02-07T00:58:02Z | 2021-12-17T10:31:02Z | https://github.com/recommenders-team/recommenders/issues/1295 | [
"help wanted"
] | Pwang001 | 1 | |
Morizeyao/GPT2-Chinese | nlp | 70 | Possible major bug in how running_loss is computed and reset? | https://github.com/Morizeyao/GPT2-Chinese/blob/130f04b5b6ad40f20ca28c519483eb4ca3e0b13b/train.py#L212-L229
The above is how the original train.py code computes running_loss. The correct processing logic should be:
1. Compute the loss for each batch and update the gradients (but not the model parameters);
2. Accumulate each batch's loss into running_loss;
3. Once the gradient_accumulation requirement is met, call the optimizer to update the model parameters with the accumulated gradients;
4. When a logging step is reached, print the necessary logs.
Following this logic, the current code has these problems:
- Problem 1: the update of running_loss on line 214 should come before line 213.
- Problem 2: the way running_loss is cleared on line 229 is very strange: a) why clear running_loss at all? b) Even if it must be cleared, it should not be cleared inside the logging branch; whether a log is printed should never affect the training logic itself, since logging exists only for observation. With the current approach, whether or not a log is printed clearly affects the training result (whether running_loss is cleared).
- Problem 3: the way the average loss is computed. Normally, the loss we print during training should be _the average loss per example or per batch so far_, but the current computation seems to report only _the loss of the current batch_, which is wrong. Problems 3 and 2 together may have indirectly caused Issue #19.
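The intended bookkeeping can be sketched in framework-free Python (dummy per-batch loss values stand in for the real ones; the names are illustrative, not the project's code):

```python
def log_average_loss(batch_losses, gradient_accumulation=2, log_step=4):
    """Accumulate loss BEFORE the optimizer/logging branches and report the
    running average over all steps so far, so printing a log never changes
    the training state (addresses problems 1-3 above)."""
    logs = []
    running_loss = 0.0
    for step, loss in enumerate(batch_losses, start=1):
        running_loss += loss                      # problem 1: accumulate first
        if step % gradient_accumulation == 0:
            pass                                  # optimizer.step() would go here
        if step % log_step == 0:
            logs.append(running_loss / step)      # problem 3: average so far
            # problem 2: running_loss is intentionally NOT reset here
    return logs

print(log_average_loss([1.0, 2.0, 3.0, 4.0]))  # [2.5]
```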
For fixes to Problems 1 and 2, see https://github.com/huggingface/pytorch-transformers/blob/7c0f2d0a6a8937063bb310fceb56ac57ce53811b/examples/run_lm_finetuning.py#L202-L211 | closed | 2019-09-24T08:49:45Z | 2019-11-11T13:36:06Z | https://github.com/Morizeyao/GPT2-Chinese/issues/70 | [] | xinfeng1i | 3 |
CorentinJ/Real-Time-Voice-Cloning | python | 791 | train using multiple GPUs | I am trying to use multiple GPUs for faster training. I have 4 RTX 2080s, but when I run encoder training it only uses 1 GPU. I tried using " CUDA_VISIBLE_DEVICES=0,1,2,3" but it didn't work | closed | 2021-07-08T00:14:30Z | 2021-08-25T09:49:54Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/791 | [] | tranmanhdat | 2 |
gradio-app/gradio | python | 10,274 | It starts normally, but there is no screen when I open it. | ### Describe the bug

### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
def create_ui():
with gr.Blocks() as demo:
gr.Markdown("# GAN 图像生成器")
with gr.Tabs():
# DCGAN 选项卡
with gr.Tab("DCGAN"):
gr.Markdown("## DCGAN 动漫图像生成器")
with gr.Row():
with gr.Column():
dcgan_button = gr.Button("生成动漫图像")
with gr.Column():
dcgan_output = gr.Image(label="生成的动漫图像")
dcgan_button.click(fn=dcgan_generate, outputs=dcgan_output)
            # StyleGAN tab
with gr.Tab("StyleGAN"):
gr.Markdown("## StyleGAN 图像生成器")
with gr.Row():
with gr.Column():
stylegan_button = gr.Button("生成图像")
with gr.Column():
stylegan_output = gr.Image(label="生成的图像")
stylegan_button.click(fn=generate_multiple_images, outputs=stylegan_output)
            # WGAN tab
with gr.Tab("WGAN"):
gr.Markdown("## WGAN 图像生成器")
with gr.Row():
with gr.Column():
wgan_button = gr.Button("生成图像")
with gr.Column():
wgan_output = gr.Image(label="生成的图像")
wgan_button.click(fn=wgan_generate, outputs=wgan_output)
            # WGANP tab
with gr.Tab("WGANP"):
gr.Markdown("## WGANP 图像生成器")
with gr.Row():
with gr.Column():
wganp_button = gr.Button("生成图像")
with gr.Column():
wganp_output = gr.Image(label="生成的图像")
wganp_button.click(fn=wganp_generate, outputs=wganp_output)
return demo
# Launch the Gradio app
if __name__ == "__main__":
demo = create_ui()
demo.launch( share=True ,debug=True )
### Screenshot

### Logs
```shell
.
```
### System Info
```shell
└─$ gradio environment
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.9.1
gradio_client version: 1.5.2
```
### Severity
I can work around it | closed | 2025-01-01T04:16:35Z | 2025-01-01T05:54:07Z | https://github.com/gradio-app/gradio/issues/10274 | [
"bug"
] | Super1Windcloud | 1 |
xlwings/xlwings | automation | 2,469 | Your xlwings version is different on the client (0.30.12) and server (0.31.7). | #### OS
Windows 10
#### Versions of xlwings, Excel and Python
0.31.7, Office 365, Python 3.12
#### Describe your issue (incl. Traceback!)
When I start the server (`python app/server_fastapi.py`), all custom functions work well.
I have already sideloaded the add-in successfully.
But as soon as I click a ribbon button or a task pane button, I get this error:
`Your xlwings version is different on the client (0.30.12) and server (0.31.7).`
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
I already changed:
```
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>My Task Pane</title>
<!-- Load office.js and xlwings.min.js -->
<script type="text/javascript" src="https://appsforoffice.microsoft.com/lib/1/hosted/office.js"></script>
<script type="text/javascript"
src="https://cdn.jsdelivr.net/gh/xlwings/xlwings@0.31.7/xlwingsjs/dist/xlwings.min.js"></script>
<!-- Bootstrap with the xlwings theme -->
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/gh/xlwings/bootstrap-xlwings@5.2.3-2/dist/bootstrap-xlwings.min.css"
integrity="sha384-TZ8CaOSXLBEEL73Aw1vX6a/2YP7QHdiuilF2C8Put8X81F3FzyRgt9ba77CMKAXq" crossorigin="anonymous">
<!-- Load Custom Functions -->
<script type="text/javascript" src="/xlwings/custom-functions-code"></script>
</head>
```
And requirements-fastapi.txt:
```
fastapi==0.111.0
gunicorn==22.0.0
jinja2==3.1.4
pandas==2.2.2
uvicorn[standard]==0.30.1
xlwings==0.31.7
```
So please help me check this and tell me how to fix it. I need to use the task pane.
| closed | 2024-07-02T16:18:09Z | 2024-07-03T16:39:37Z | https://github.com/xlwings/xlwings/issues/2469 | [] | tuanlv14 | 2 |
plotly/dash | data-science | 2,947 | [Feature Request] | Thanks so much for your interest in Dash! | closed | 2024-08-10T15:23:51Z | 2024-08-12T14:00:27Z | https://github.com/plotly/dash/issues/2947 | [] | edoerpani | 0 |
ExpDev07/coronavirus-tracker-api | fastapi | 217 | Allow "https://coronavirus-tracker-api.herokuapp.com/v2" to fetch an instruction section. | Presently it gives the following result
```
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try
again.</p>
```
A result of the following type would be more helpful!
```
{
"subAPIs": {
"location":"location",
"terms of use":"tou"
}
}
``` | closed | 2020-03-27T15:14:15Z | 2020-03-28T21:50:57Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/217 | [
"question"
] | AbhinavMir | 5 |
voila-dashboards/voila | jupyter | 1,076 | appmode compatibility support | For migrating from Jupyter Notebooks with [appmode ](https://github.com/oschuett/appmode) to JupyterLab 3.0 with voila, compatibility is needed for:
1. existing direct links into standalone web application mode. appmode uses url path `/apps/ ` but voila uses `/voila/render/`
2. setting `jupyter_notebook_url` variable to current notebook url with query parameters. voila supports query parameter so this variable could be injected into the kernel by concatenating os env vars SERVER_NAME, SERVER_PORT, SCRIPT_NAME, QUERY_STRING
Can there be a configuration option for an alias url path in place of /voila/render/? | open | 2022-01-26T16:28:11Z | 2022-07-27T08:05:39Z | https://github.com/voila-dashboards/voila/issues/1076 | [
"enhancement"
] | matthewgasbarro | 1 |
tfranzel/drf-spectacular | rest-api | 976 | Automatically resolve identical serializer names by fully qualified class path | ## Description
What is the intended way how to solve serializer name collisions? I propose to automatically resolve identical serializer names by fully qualified class path for open-api schema generation. #51 already discussed this, but i couldnt find a implementation or how to use it.
## Use Case
The warning we encounter:
```
.../workspace/a/b/c/d/serializers.py:25: Warning [DViewSet > ResponseSerializer]:
Encountered 2 components with identical names "Response" and different classes <class 'a.b.c.d.serializers.ResponseSerializer'> and <class 'a.b.c.e.serializers.ResponseSerializer'>. This will very likely result in an incorrect schema. Try renaming one.
```
In our project we have many identical serializer names, for the following reasons:
* We have separate response and request serializers for every endpoint. Therefore we have many serializers named `ResponseSerializer` and `RequestSerializer`.
* All serializers are independent and are not shared between endpoints. By convention we use the model name for the serializer name (`{Modelname}Serializer`), so we end up with multiple serializers per model.
Renaming serializers is not an option, due to the amount of serializers.
## Proposed Solution
As discussed in #51, one could use `__qualname__` or the fully qualified module path.
For now I am running on a fork where I changed [this line](https://github.com/HospiChef/drf-spectacular/blob/6fdaa499bb36e87dda233d51d7c60a6ac37e1021/drf_spectacular/openapi.py#L1541).
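The idea can be sketched as a small helper that derives a unique component name from the module path (a hypothetical helper for illustration, not drf-spectacular's API):

```python
def qualified_component_name(serializer_class):
    """Derive an OpenAPI component name from the full module path,
    e.g. a.b.c.d.serializers.ResponseSerializer -> ABCDResponse."""
    name = serializer_class.__name__.removesuffix("Serializer")
    # drop the conventional "serializers" module segment, keep the rest as prefix
    parts = [p for p in serializer_class.__module__.split(".") if p != "serializers"]
    prefix = "".join(p.title() for p in parts)
    return f"{prefix}{name}"

class ResponseSerializer:  # stand-in; imagine this lives in a.b.c.d.serializers
    pass

ResponseSerializer.__module__ = "a.b.c.d.serializers"
print(qualified_component_name(ResponseSerializer))  # ABCDResponse
```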
## Related Issues
#606 - recommends `@extend_schema_serializer(component_name='ABC')` to rename serializers, this is unsuitable if there are many serializers that need to be renamed
#90 - recommends the `ref_name`, but this needs to be applied to all serializers
#51 - talks about implementing what i am asking for, but i couldnt find that it ever got implemented?
#27 - suggests to use OpenApiSerializerExtension, but this needs to be applied to all serializers
| closed | 2023-04-17T10:26:33Z | 2023-07-23T21:20:53Z | https://github.com/tfranzel/drf-spectacular/issues/976 | [
"enhancement",
"fix confirmation pending"
] | haerrel | 4 |
DistrictDataLabs/yellowbrick | scikit-learn | 516 | Improvements to Silhouette Visualizer | The following improvements to the Silhouette Visualizer are left over from #91:
*Note to contributors: items in the below checklist don't need to be completed in a single PR; if you see one that catches your eye, feel free to pick it off the list!*
- [x] Improve the documentation describing what Silhouette scores are and how to use the visualizer to qualitatively evaluate a clustering solution.
- [x] Find a real world example rather than just using make_blobs (note: we also have an example using the Iris dataset; ideally we'd having something a bit more unique to YB that we can add to `yellowbrick.datasets` module - perhaps this should be a separate issue?).
- [x] Instead of hard fixing the limits of the X-axis from -1.0 to 1.0; be more flexible so that the visualizer has a better display (or give the user the option of setting the limits).
- [x] Move the cluster identity labels away from the middle and to the y-axis.
- [x] Add ability to define cluster colors and improve color selection methodology.
- [x] Add a legend/annotation that describes the average clustering coefficient (e.g. label the red axvline) | closed | 2018-07-20T15:51:21Z | 2019-11-20T15:00:57Z | https://github.com/DistrictDataLabs/yellowbrick/issues/516 | [
"type: feature",
"priority: medium",
"level: novice",
"type: documentation"
] | bbengfort | 22 |
ageitgey/face_recognition | machine-learning | 1,324 | Encoding issue | * face_recognition version:
* Python version:
* Operating System:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
IMPORTANT: If your issue is related to a specific picture, include it so others can reproduce the issue.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
| closed | 2021-06-10T12:35:03Z | 2021-06-10T12:35:59Z | https://github.com/ageitgey/face_recognition/issues/1324 | [] | Shakirsadiq6 | 0 |
dynaconf/dynaconf | django | 726 | [RFC] Recursive merge should allow unique flag in nested lists | I tried adding an additional config file to a Dynaconf instance with global merge set to `True` and found that a nested list was normally merged when I needed it to be unique or not merged at all as the old value was contained in the new value so the result list had duplicate items.
I thought about using a set instead but the order is important in this case
I'd like to be able to use `dynaconf_merge_unique` inside nested lists
Another alternative could be using the `local_merge` flag to allow disabling the merge for a specific nested value.
Although this alternative is more robust and general, it requires a more complex change than enabling the unique flag in nested lists
Example:
defaults.yaml
```
config_key:
nested_list:
- item_a
- item_b
```
additional_config.yaml
```
config_key:
nested_list:
- item_a
- item_b
- item_c
```
Result config dictionary:
```
{
"config_key":
"nested_list": [item_a, item_b, item_a, item_b, item_c]
}
```
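The desired "unique" merge semantics can be sketched independently of Dynaconf (a hypothetical helper, not Dynaconf's API):

```python
def merge_unique(old, new):
    """Merge two lists, preserving order and dropping duplicates."""
    merged = list(old)
    for item in new:
        if item not in merged:
            merged.append(item)
    return merged

print(merge_unique(["item_a", "item_b"], ["item_a", "item_b", "item_c"]))
# ['item_a', 'item_b', 'item_c']
```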
A possible solution for enabling the unique flag in nested lists is to add the following lines right before running `object_merge` on nested values (for Dynaconf 3.1.7, this lives in `dynaconf/utils/__init__.py:77`).
```
unique = ("dynaconf_merge_unique" in value)
if unique:
value.remove("dynaconf_merge_unique")
``` | closed | 2022-03-15T11:34:26Z | 2022-09-21T16:38:20Z | https://github.com/dynaconf/dynaconf/issues/726 | [
"Not a Bug",
"RFC"
] | m3nadav | 2 |
postmanlabs/httpbin | api | 559 | httpbin /status is answering '408 REQUEST_TIMEOUT' | Hi,
First of all, thanks for the great project httpbin!! Very useful.
I am now having an issue to access /status testin (this has been working so far).
I am trying to do a GET over 'http://httpbin.org/status/100' but I am getting a '408 REQUEST_TIMEOUT' instead of '100 Continue'.
Even from your own web seems to fail when you "Execute" a test of GET and /status.
Any idea?
Thanks in advance,
<img width="1415" alt="Screen Shot 2019-05-28 at 4 07 40 PM" src="https://user-images.githubusercontent.com/1032834/59848012-7f9ea380-933a-11e9-869f-ae5859340ce8.png">
| open | 2019-06-20T12:05:22Z | 2019-06-20T12:05:22Z | https://github.com/postmanlabs/httpbin/issues/559 | [] | marianopeck | 0 |
piccolo-orm/piccolo | fastapi | 397 | Investigate descriptor protocol | One challenge we've had for a while (see https://github.com/piccolo-orm/piccolo/issues/62) is correctly typing ``Table`` classes.
Take this example:
```python
class Band(Table):
name = Varchar()
```
When dealing with ``Band`` as a class, then ``name`` is a ``Varchar``. When dealing with a ``Band`` instance, then ``name`` is a ``str``.
This dual nature is hard to type correctly.
```python
band = Band()
band.name = 'Pythonistas 2' # mypy can see this as an error, because it thinks name can only be a `Varchar`
```
## Solution 1 - dict syntax
I use this workaround, which appeases MyPy:
```python
band['name'] = 'Pythonistas 2'
```
## Solution 2 - override type
Some users decide to do this instead:
```python
class Band(Table):
name: str = Varchar()
```
## Solution 3 - union type
Python doesn't allow this unfortunately (otherwise it would solve our problem):
```python
class Band(Table):
name: t.Union[ClassVar[Varchar] | str] = Varchar()
```
## Solution 4 - descriptor protocol
We haven't explored this yet, hence this new issue.
Python has a [descriptor protocol](https://docs.python.org/3/howto/descriptor.html).
It means we can do something like this for each column type.
```python
class Varchar(Column):
def __get__(self, obj, objtype=None) -> t.Union[Varchar, str]:
return obj.__dict__[self._meta.name] if obj else self
def __set__(self, obj, value: str):
obj.__dict__[self._meta.name] = value
```
We're able to signal to MyPy that the value could be a `str` or `Varchar`, and that you can assign a `str` to it.
More investigation needs to be done, but this could be a good solution.
| closed | 2022-01-20T22:37:00Z | 2022-01-21T18:07:55Z | https://github.com/piccolo-orm/piccolo/issues/397 | [
"enhancement"
] | dantownsend | 1 |
521xueweihan/HelloGitHub | python | 2,371 | [Open-source self-recommendation] Using Java? Try Solon, a lightweight home-grown application development framework | <h1 align="center" style="text-align:center;">
Solon for java
</h1>
<p align="center">
<strong>A more modern, lightweight application development framework</strong>
</p>
<p align="center">
<a href="https://solon.noear.org/">https://solon.noear.org</a>
</p>
<p align="center">
<a target="_blank" href="https://search.maven.org/search?q=org.noear%20solon">
<img src="https://img.shields.io/maven-central/v/org.noear/solon.svg?label=Maven%20Central" alt="Maven" />
</a>
<a target="_blank" href="https://www.apache.org/licenses/LICENSE-2.0.txt">
<img src="https://img.shields.io/:license-Apache2-blue.svg" alt="Apache 2" />
</a>
<a target="_blank" href="https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html">
<img src="https://img.shields.io/badge/JDK-8+-green.svg" alt="jdk-8+" />
</a>
<br />
<a target="_blank" href='https://gitee.com/noear/solon/stargazers'>
<img src='https://gitee.com/noear/solon/badge/star.svg' alt='gitee star'/>
</a>
<a target="_blank" href='https://github.com/noear/solon/stargazers'>
<img src="https://img.shields.io/github/stars/noear/solon.svg?logo=github" alt="github star"/>
</a>
</p>
<br/>
<p align="center">
<a href="https://jq.qq.com/?_wv=1027&k=kjB5JNiC">
<img src="https://img.shields.io/badge/QQ交流群-22200020-orange"/></a>
</p>
<hr />
Starts 5~10x faster; 2~3x higher QPS; uses 1/3 to 1/2 less runtime memory; packages can shrink to 1/2 down to 1/10 of the size
<hr />
## Solon
A more modern application development framework. **Faster, smaller, leaner, freer!**
Supports JDK 8, JDK 11, and JDK 17+; the core framework is 0.1 MB; combine different plugins for different needs; easy to customize; rapid development.
* Restrained, concise, open, ecosystem-driven
* A unified development experience across the three signals Http, WebSocket, and Socket (commonly called "three sources in one")
* Supports both annotation-based and manual modes, giving you control on demand
* Not Servlet; can adapt to any underlying communication framework (hence: a minimal 0.2 MB running RPC architecture)
* Home-built IoC & AOP container, supporting any development scenario: Web, Data, Job, Remoting, Cloud, and more
* Combines the Handler + Context and Listener + Message architectural patterns; emphasizes plugin-style extension; adapts to different application scenarios
* Plugins are extensible and swappable: startup plugins, extension plugins, serialization plugins, data plugins, session-state plugins, view plugins (which can coexist), and more...
* Supports GraalVM native packaging
* Allows hot plugging and unplugging of business plugins
* The experience is close to Spring Boot, so migration costs are low: ["A brief summary of Solon's features — how does it differ from Spring Boot?"](https://my.oschina.net/noear/blog/4863844)
## Solon Cloud
A set of interface standards and configuration conventions for distributed development, comparable to the anti-corruption-layer concept in DDD. It is Solon's development solution for the microservice architecture pattern.
A series of plugins has already been adapted to support this standard: ["The Solon Cloud distributed service development kit list — feel how it differs from Spring Cloud"](https://my.oschina.net/noear/blog/5039169)
Among them, the [Water project](https://gitee.com/noear/water) is a one-stop support platform for the Solon Cloud family of standards.
Its functionality is roughly equivalent to consul + rabbitmq + elk + prometheus + openFaas + quartz + etc., all organically combined. It has grown alongside the Solon project all along.
## Hello world:
```xml
<parent>
<groupId>org.noear</groupId>
<artifactId>solon-parent</artifactId>
<version>1.10.3</version>
</parent>
<dependencies>
<dependency>
<groupId>org.noear</groupId>
<artifactId>solon-web</artifactId>
</dependency>
</dependencies>
```
```java
// Handler mode:
public class App{
public static void main(String[] args){
SolonApp app = Solon.start(App.class,args);
app.get("/",(c)->c.output("Hello world!"));
}
}
// Controller mode (mvc or rest-api):
@Controller
public class App{
public static void main(String[] args){
Solon.start(App.class,args);
}
    // Restrict to the WebSocket method type
@WebSocket
@Mapping("/")
public String hello(String name){
return "Hello " + name;
}
}
// Remoting mode (rpc):
@Mapping("/")
@Remoting
public class App implements HelloService{
public static void main(String[] args){
Solon.start(App.class,args);
}
@Override
public String hello(){
return "Hello world!";
}
}
```
## Core framework and quick-integration development packages:
###### Core framework
| Component | Description |
| --- | --- |
| org.noear:solon-parent | Framework version management |
| org.noear:solon | Core framework |
| org.noear:nami | Companion framework (serves as the client for solon remoting) |
###### Quick-integration development packages and how they relate
| Component | Description |
| --- |-------------------------------------------------------|
| org.noear:solon-lib | Base integration package for rapid development |
| org.noear:solon-api | solon-lib + jlhttp boot; rapid development of API applications |
| org.noear:solon-web | solon-api + freemarker + sessionstate; rapid development of WEB applications |
| org.noear:solon-beetl-web | solon-api + beetl + beetlsql + sessionstate; rapid development of WEB applications |
| org.noear:solon-enjoy-web | solon-api + enjoy + arp + sessionstate; rapid development of WEB applications |
| org.noear:solon-rpc | solon-api + nami; rapid development of RPC applications |
| org.noear:solon-cloud | solon-rpc + consul; rapid development of microservice applications |
## Materials for a quick look at the Solon architecture:
##### ["Notes on Solon's ideas and architecture"](https://my.oschina.net/noear/blog/4980834)
##### ["The Solon ecosystem plugin list"](https://my.oschina.net/noear/blog/5053423)
## Official site and examples:
* Official site: [https://solon.noear.org](https://solon.noear.org)
* Official site demos: [https://gitee.com/noear/solon-examples](https://gitee.com/noear/solon-examples)
* Project unit tests: [_test](./_test/)
* More feature examples: [solon_demo](https://gitee.com/noear/solon_demo), [solon_api_demo](https://gitee.com/noear/solon_api_demo), [solon_rpc_demo](https://gitee.com/noear/solon_rpc_demo), [solon_socketd_demo](https://gitee.com/noear/solon_socketd_demo), [solon_cloud_demo](https://gitee.com/noear/solon_cloud_demo), [solon_auth_demo](https://gitee.com/noear/solon_auth_demo)
| closed | 2022-09-22T03:58:47Z | 2022-09-23T06:38:19Z | https://github.com/521xueweihan/HelloGitHub/issues/2371 | [] | noear | 1 |
MaartenGr/BERTopic | nlp | 1,092 | topic_model.transform() breaks the kernel and I have to restart the whole notebook again | After training the model with the RAPIDS-based UMAP and HDBSCAN and saving it to disk, when I reload the model and use .transform(), it crashes my kernel and I have to run the entire thing again. The strange thing is that this does not happen when I use the model right after it is trained; it only happens after I have trained the model and saved it to the local disk. I initially thought it was a memory issue, but inference on a single document also triggers the crash and ruins all progress.
My system is Ubuntu 20.04
rapids - 22.12, python 3.9.15
bertopic - 0.13
GPU - 3090ti
I am training and inferencing on ~2 million documents created from tweets. If I do not use RAPIDS it works fine, but it breaks when I use RAPIDS.
The code looks like this (I am not showing the embedding creation, as that code is very long; I am using a custom HFTransformerBackend for one of the BERT models optimized for tweets).
```
from cuml.manifold import UMAP
from cuml.cluster import HDBSCAN
from bertopic import BERTopic
umap_model = UMAP(n_components=10, n_neighbors=15, min_dist=0.0)
hdbscan_model = HDBSCAN(min_samples=2, gen_min_span_tree=True, prediction_data=True) #
# this is to create the new countvectorizer to handle the custom naming and topics
from sklearn.feature_extraction.text import CountVectorizer
from nltk.corpus import stopwords
german_stop_words = stopwords.words('german')
vectorizer_model = CountVectorizer(stop_words = german_stop_words, ngram_range = (1,2), max_features=20000)
topic_model = BERTopic(
embedding_model=HFTransformerBackend,
umap_model=umap_model,
hdbscan_model=hdbscan_model,
vectorizer_model=vectorizer_model,
top_n_words=10,
diversity=0.2,
nr_topics=75,
# calculate_probabilities=True
# min_topic_size=int(0.001*len(docs))
)
topics, probs = topic_model.fit_transform(docs)
```
I think the main culprit here is the HDBSCAN model, as this is the step where the GPU maxes out at 100% and then breaks. Please help; I have already wasted a couple of days figuring this out. | closed | 2023-03-14T09:44:14Z | 2023-09-27T09:02:53Z | https://github.com/MaartenGr/BERTopic/issues/1092 | [] | hrishbhdalal | 1 |
vastsa/FileCodeBox | fastapi | 305 | [ bug report ] Erroneous text | While testing the login page I happened to notice that, when there is a URL on the clipboard, pressing ctrl+v produces the following text (I changed the IP to 1.1.1.1; it is actually my machine's IP)
As you can see from the screenshot, I never copied this passage, and it is repeated three times; I don't know why
<html>
<body>
<!--StartFragment--><a href="http://1.1.1.1:12345/#/?code=78645">文件快递柜 - FileCodeBox</a><!--EndFragment-->
</body>
</html><html>
<body>
<!--StartFragment--><a href="http://1.1.1.1:12345/#/?code=78645">文件快递柜 - FileCodeBox</a><!--EndFragment-->
</body>
</html><html>
<body>
<!--StartFragment--><a href="http://1.1.1.1:12345/#/?code=78645">文件快递柜 - FileCodeBox</a><!--EndFragment-->
</body>
</html>
 | open | 2025-03-17T11:34:23Z | 2025-03-17T11:40:26Z | https://github.com/vastsa/FileCodeBox/issues/305 | [] | Marrrrrrrrry | 2 |
arogozhnikov/einops | tensorflow | 185 | Missing einops.layers.....Repeat | I tried to use repeat for in torch and needed a layer, but strangely it was not there.
I know, that I could use `einops.layers.torch.Reduce('...', reduction='repeat', ...)`, but that is confusing to read.
What do you think about adding `einops.layers.....Repeat` functions to einops?
Here is a toy example, where the last line fails because the function counterpart is missing and the first line is difficult to read:
```python
import einops
t = torch.randn((2, 3))
print(einops.layers.torch.Reduce('a b -> a b c', reduction='repeat', c=4)(t).shape) # torch.Size([2, 3, 4])
print(einops.repeat(t, 'a b -> a b c', c=4).shape) # torch.Size([2, 3, 4])
print(einops.layers.torch.Repeat('a b -> a b c', reduction='repeat', c=4)(t).shape) # AttributeError: module 'einops.layers.torch' has no attribute 'Repeat'
```
1. Try to collect **use-cases**
Since `einops.repeat` exists, I think the same use cases would be valid for a layer. I have one case, where I want to use it in pytorch.
2. **Implementation**. (optional) Implementing a sketch of your proposal in a code allows detecting possible conflicts and realize possible caveats.
Something like the following in each layers backend file
```python
class Repeat(Reduce):
def __init__(self, pattern, **axes_lengths):
super().__init__(pattern, 'repeat', **axes_lengths)
```
3. **Integrity** - does it interplay well with existing operations and notation in einops?
I would say yes, it is the counterpart of the function ``einops.repeat``.
4. **Readability**. This is harder to check, but give it a try. A simple but usable test is to write an exercise sheet with several examples of your extension explained, for others meaning of operation should be guessed. Send this test to a couple of your friends to collect the feedback and see how your notion will be understood. This should also help to improve your ideas to make them more digestible.
I think this is obvious; currently I use `einops.layers.torch.Reduce('...', reduction='repeat', ...)` and that is confusing. | open | 2022-04-26T21:25:04Z | 2024-09-16T19:00:59Z | https://github.com/arogozhnikov/einops/issues/185 | [
"good first issue",
"feature suggestion"
] | boeddeker | 5 |
dsdanielpark/Bard-API | nlp | 113 | Max retries exceeded with url (host='bard.google.com', port=443): | requests.exceptions.ProxyError: HTTPSConnectionPool(host='bard.google.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', timeout('_ssl.c:1091: The handshake operation timed out'))) | closed | 2023-07-16T14:26:35Z | 2024-01-18T15:51:42Z | https://github.com/dsdanielpark/Bard-API/issues/113 | [] | Jacky1y | 1 |
pandas-dev/pandas | python | 60,994 | BUG: `iloc` with `Series` as indexer fails for `__getitem__` but works with `__setitem__` | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
# __getitem__
>>> a = pd.Series([0, 1, 2])
>>> a.iloc[pd.Series([True, False, False])]
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[8], line 1
----> 1 a.iloc[pd.Series([True, False, False])]
File ~/.local/share/hatch/env/virtual/lontras/VBVTu9RT/lontras/lib/python3.11/site-packages/pandas/core/indexing.py:1191, in _LocationIndexer.__getitem__(self, key)
1189 maybe_callable = com.apply_if_callable(key, self.obj)
1190 maybe_callable = self._check_deprecated_callable_usage(key, maybe_callable)
-> 1191 return self._getitem_axis(maybe_callable, axis=axis)
File ~/.local/share/hatch/env/virtual/lontras/VBVTu9RT/lontras/lib/python3.11/site-packages/pandas/core/indexing.py:1738, in _iLocIndexer._getitem_axis(self, key, axis)
1735 key = np.asarray(key)
1737 if com.is_bool_indexer(key):
-> 1738 self._validate_key(key, axis)
1739 return self._getbool_axis(key, axis=axis)
1741 # a list of integers
File ~/.local/share/hatch/env/virtual/lontras/VBVTu9RT/lontras/lib/python3.11/site-packages/pandas/core/indexing.py:1578, in _iLocIndexer._validate_key(self, key, axis)
1576 if hasattr(key, "index") and isinstance(key.index, Index):
1577 if key.index.inferred_type == "integer":
-> 1578 raise NotImplementedError(
1579 "iLocation based boolean "
1580 "indexing on an integer type "
1581 "is not available"
1582 )
1583 raise ValueError(
1584 "iLocation based boolean indexing cannot use "
1585 "an indexable as a mask"
1586 )
1587 return
NotImplementedError: iLocation based boolean indexing on an integer type is not available
# __setitem__
>>> a.iloc[pd.Series([True, False, False])] = 10; a
0 10
1 1
2 2
dtype: int64
```
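As a side note, converting the boolean Series to a plain array sidesteps the `__getitem__` restriction (a workaround only — it does not address the inconsistency itself):

```python
import pandas as pd

a = pd.Series([0, 1, 2])
mask = pd.Series([True, False, False])

# iloc accepts a bare boolean array, so dropping the Series index works
print(a.iloc[mask.to_numpy()])
```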
### Issue Description
`iloc` with a `Series` as the indexer behaves inconsistently between `__getitem__` and `__setitem__`.
### Expected Behavior
Either both `__getitem__` and `__setitem__` should work, or both should fail.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.10
python-bits : 64
OS : Linux
OS-release : 6.12.10-arch1-1
Version : #1 SMP PREEMPT_DYNAMIC Sat, 18 Jan 2025 02:26:57 +0000
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : 8.1.3
IPython : 8.31.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| open | 2025-02-23T16:29:36Z | 2025-03-10T20:54:11Z | https://github.com/pandas-dev/pandas/issues/60994 | [
"Bug",
"Indexing"
] | luxedo | 5 |
ludwig-ai/ludwig | data-science | 4,005 | Add llava support in ludwig | **Is your feature request related to a problem? Please describe.**
The feature request is to have multimodal capability inside ludwig using LLaVA-NeXT.
**Describe the use case**
The basic use cases are to add imagery/video on top of text in the LLM workflow for training and inference.
**Describe the solution you'd like**
This PR will add the shell code to embed multimodal functionality into ludwig, leveraging the current code that's already there.
**Describe alternatives you've considered**
N/A
**Additional context**
None
| closed | 2024-05-16T18:05:25Z | 2024-10-21T11:26:43Z | https://github.com/ludwig-ai/ludwig/issues/4005 | [
"lmm"
] | skanjila | 0 |
nltk/nltk | nlp | 2,507 | Porter stemmer returns a capital instead of lowercase | This output is unexpected. The input `In` returns the capitalized `In` from PorterStemmer's output.
```python
>>> from nltk.stem import PorterStemmer
>>> porter = PorterStemmer()
>>> porter.stem('In')
'In'
```
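Lowercasing the input first gives the expected stem (a workaround until the stemmer normalizes case itself):

```python
from nltk.stem import PorterStemmer

porter = PorterStemmer()
print(porter.stem('In'.lower()))  # 'in'
```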
More details on https://stackoverflow.com/q/60387288/610569 | closed | 2020-02-25T06:01:07Z | 2020-10-06T09:15:43Z | https://github.com/nltk/nltk/issues/2507 | [
"good first issue",
"stem/lemma"
] | alvations | 5 |
jmcnamara/XlsxWriter | pandas | 751 | Issue with relative urls in images | There may be an issue with relative path urls when used with images. See this StackOverflow [question](https://stackoverflow.com/questions/63990860/xlsxwriter-relative-image-url/63995592#63995592).
Example program to reproduce:
```python
import xlsxwriter
workbook = xlsxwriter.Workbook('images_url.xlsx')
worksheet = workbook.add_worksheet()
worksheet.insert_image('B4', 'python.png', {'url': r'external:..\..\blue.png'})
workbook.close()
```
| closed | 2020-09-21T15:46:31Z | 2020-09-21T19:57:24Z | https://github.com/jmcnamara/XlsxWriter/issues/751 | [
"bug"
] | jmcnamara | 1 |
mckinsey/vizro | plotly | 318 | Should we simplify form/selector components? | # Background context
So far we have the following possible `SelectorType`, which can be used in `Parameter.selector` or `Filter.selector`:
* `Checklist`, based on `dcc.Checklist`
* `Dropdown`, based on `dcc.Dropdown`
* `RadioItems`, based on `dcc.RadioItems`
* `RangeSlider`, based on `dcc.RangeSlider`
* `Slider`, based on `dcc.Slider`
Maybe, where suitable components are available, these will change to `dbc` components in the future when we are fully bootstrap compatible.
In due course we would like to enable a `Form` model that lets you use these outside the context of a filter/parameter. I've already done this many times for custom apps by using `add_new_type`, but eventually it should be native functionality.
To this end, a `_FormComponentType` exists but is not yet public. This contains:
* all the above `SelectorType`
* `UserInput`, based on `dbc.Input` (private)
* `Textarea`, based on `dbc.Textarea` (private; actually doesn't exist in `_FormComponentType` yet but should do)
* `Button`, based on `dbc.Button` (public, since it's also part of `ComponentType` so that it can be used directly in `Page.components` or `Container.components`, like a `Graph`, `Card`, `Table`)
Remember also that we aim to **keep things simple** rather than supply every possible sort of component out of the box. We don't want to have too many models.
We are now considering adding a `DatePicker` in https://github.com/mckinsey/vizro/pull/309. This raises some questions that **don't need to be resolved now** but should be pondered over time...
# Why two sliders?
From https://github.com/mckinsey/vizro/pull/309#discussion_r1489820719:
_@antonymilne said_:
> I don't know why we have a separate `RangeSlider` and `Slider` at the moment 🤔 As far as I can tell `RangeSlider` with a single element `value = [1]` would function identically to a `Slider`. So we maybe should have simplified this to just a single `vm.Slider` model. Did we consider this before? I can't remember. @maxschulz-COL?
>
> @AnnMarieW do you know if there's any difference in behaviour between `RangeSlider` with a single element `value` vs. `Slider`? Is there a reason there's two separate dcc components for these? Other than it's maybe easier for people to find the right component because people know to search for a slider vs. a range slider. From the vizro side, we generally try to aim for something as simple as possible rather than many possible options so maybe it would make sense for us to just provide a single `Slider` model that handles both. What do you think?
_@AnnMarieW said_:
> Interesting question about using a single value in the Range Slider. It seems to work, but it would be possible to change it to a range slider when updating it with two values in a callback. So there would need to be some type of check to make sure that only a single value is returned. I'm not sure if there would be any other unintended issues.
Removing/renaming `Slider/RangeSlider` would be a breaking change, but we can definitely do it with suitable deprecation warnings etc.
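To make the merged-model idea concrete, a single `Slider` model could dispatch on the shape of `value` (a rough sketch only — not the actual vizro or dash API):

```python
def build_slider(value):
    """Pick the underlying dcc component from the configured value (illustrative)."""
    if isinstance(value, (list, tuple)) and len(value) > 1:
        return {"component": "dcc.RangeSlider", "value": list(value)}
    # A single-element list behaves like a plain slider, per the discussion above.
    scalar = value[0] if isinstance(value, (list, tuple)) else value
    return {"component": "dcc.Slider", "value": scalar}

print(build_slider(3)["component"])       # dcc.Slider
print(build_slider([1, 5])["component"])  # dcc.RangeSlider
```

Whether a single-value `RangeSlider` really behaves identically in all cases (especially under callbacks, per @AnnMarieW's note) would still need checking.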
# Why `UserInput` and `Textarea`?
I didn't look into these properly yet, but at a glance they feel quite similar to me. Should they also be a single model? They're not public so we can merge/rename/whatever as we please.
# How to handle `DatePicker` and `DateRangePicker`?
Will be resolved in https://github.com/mckinsey/vizro/pull/309.
# Names
Our policy so far has been to name our models according to the underlying dash component, except where that seems confusing (hence instead of `Input` we use `UserInput`).
* Dash components already have well-established/considered names, so no need for us to think about it again
* it's more familiar to Vizro users coming from Dash
* it makes it clear what Dash component our model is based on, so easier for a user to find relevant docs on that component
I don't want to bikeshed and spend a lot of time thinking again about names, but maybe we should consider whether this is the right approach. It sort of couples us to a particular Dash component which we might want to change, and makes it tricky in the edge case that a model could return _different_ Dash components like we might do for `Date(Range)Picker`.
e.g. I kind of like the simplicity of naming in https://observablehq.com/documentation/inputs/ where it's just `Date` rather than `DatePicker`, although `DatePicker` is definitely also common terminology and well understood.
Should we go for more "declarative" names like `Select` or `Selector` rather than imperative ones like `Dropdown`? We considered this before and decided it made the UX more rather than less confusing, e.g. people are more likely to say "I want radio buttons here and checkboxes there" rather than "I want a selector that enables me to select multiple items".
# Next steps
* Understand what, if any, difference there is between `RangeSlider` with a single value vs `Slider`
* Do a more thorough review of how other tools handle form components, e.g.
- https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input
- https://observablehq.com/documentation/inputs/overview
- https://dash-bootstrap-components.opensource.faculty.ai/docs/components/form/
- many others
* Do some actual user research on how people think about creating forms. e.g. if most people who use vizro aren't already dash users, there's less (but still some) value in mirroring Dash component names | open | 2024-02-19T10:26:30Z | 2025-01-16T09:15:59Z | https://github.com/mckinsey/vizro/issues/318 | [] | antonymilne | 10 |
miguelgrinberg/python-socketio | asyncio | 190 | Error installing for Python 3.7.0 | Had to switch to Python 3.6 in order to get it working. Here's what happened with 3.7:
```
pip install socketio
Collecting socketio
Using cached https://files.pythonhosted.org/packages/32/fb/8667be5433aa2f54c8111d37f75425789ef7425c1c8aaa65b99c36c460de/socketio-0.1.3.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/user/.pyenv/versions/3.7.0/lib/python3.7/site-packages/setuptools/__init__.py", line 12, in <module>
from setuptools.extension import Extension
File "/home/user/.pyenv/versions/3.7.0/lib/python3.7/site-packages/setuptools/extension.py", line 7, in <module>
from setuptools.dist import _get_unpatched
File "/home/user/.pyenv/versions/3.7.0/lib/python3.7/site-packages/setuptools/dist.py", line 16, in <module>
import pkg_resources
File "/home/user/.pyenv/versions/3.7.0/lib/python3.7/site-packages/pkg_resources.py", line 1479, in <module>
register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider)
AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-na00lkxq/socketio/
```
| closed | 2018-07-15T18:31:31Z | 2024-03-29T09:05:08Z | https://github.com/miguelgrinberg/python-socketio/issues/190 | [] | bitnom | 4 |
Lightning-AI/pytorch-lightning | deep-learning | 20,627 | 15min example not training | ### Bug description
Hello! I’m new to training a model with PyTorch Lightning, and I’ve run into a bit of an issue— it seems like the model’s parameters aren’t updating at all.
To figure out what’s going on, I wrote a **checker function** and tested it with the 15-minute example from the official website, **but it still shows that the parameters aren’t changing.**
Could there be something off with my checker? I’m feeling a bit stuck and unsure of what to try next. Any help would be greatly appreciated!
### What version are you seeing the problem on?
v2.5
### How to reproduce the bug
```python
def compare_model_parameter_dicts(dict1, dict2):
"""
Compare two sets of named parameters from PyTorch nn.Module.
Args:
        dict1: First model's parameters as a dict of {name: param}
        dict2: Second model's parameters as a dict of {name: param}
"""
# Check if parameter names match
if set(dict1.keys()) != set(dict2.keys()):
print("Parameter name mismatch!")
print(f"Params1 names: {set(dict1.keys())}")
print(f"Params2 names: {set(dict2.keys())}")
return
# Compare each parameter
differences_found = False
for name in dict1:
param1 = dict1[name]
param2 = dict2[name]
# Check if shapes match
if param1.shape != param2.shape:
print(f"Shape mismatch for {name}:")
print(f"Param1 shape: {param1.shape}")
print(f"Param2 shape: {param2.shape}")
differences_found = True
continue
# Check if values are identical
if not torch.allclose(param1, param2, rtol=1e-5, atol=1e-8):
differences_found = True
print(f"\nDifferences found in {name}:")
# Find differing elements
diff_mask = ~torch.isclose(param1, param2, rtol=1e-5, atol=1e-8)
diff_indices = torch.nonzero(diff_mask, as_tuple=False)
# Print differing values
for idx in diff_indices:
idx_tuple = tuple(idx.tolist())
val1 = param1[idx_tuple].item()
val2 = param2[idx_tuple].item()
print(f"Index {idx_tuple}:")
print(f" Value 1: {val1}")
print(f" Value 2: {val2}")
if not differences_found:
print("All parameters are identical!")
import os
from torch import optim, nn, utils, Tensor
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
import lightning as L
import torch
from torchhelpers.parameter import compare_model_parameter_dicts
# define any number of nn.Modules (or use your current ones)
encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))
# define the LightningModule
class LitAutoEncoder(L.LightningModule):
def __init__(self, encoder, decoder):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.initial_model_param_dict = 0
def training_step(self, batch, batch_idx):
# training_step defines the train loop.
# it is independent of forward
x, _ = batch
x = x.view(x.size(0), -1)
z = self.encoder(x)
x_hat = self.decoder(z)
loss = nn.functional.mse_loss(x_hat, x)
# Logging to TensorBoard (if installed) by default
self.log("train_loss", loss)
        if self.initial_model_param_dict == 0:
self.initial_model_param_dict = {name: param for name, param in self.named_parameters()}
else:
compare_model_parameter_dicts({name: param for name, param in self.named_parameters()}, self.initial_model_param_dict)
return loss
def configure_optimizers(self):
optimizer = optim.Adam(self.parameters(), lr=1e-3)
return optimizer
# init the autoencoder
autoencoder = LitAutoEncoder(encoder, decoder)
# setup data
dataset = MNIST(os.getcwd(), download=True, transform=ToTensor())
train_loader = utils.data.DataLoader(dataset)
# train the model (hint: here are some helpful Trainer arguments for rapid idea iteration)
trainer = L.Trainer(limit_train_batches=100, max_epochs=1)
trainer.fit(model=autoencoder, train_dataloaders=train_loader)
# load checkpoint
checkpoint = "./lightning_logs/version_0/checkpoints/epoch=0-step=100.ckpt"
autoencoder = LitAutoEncoder.load_from_checkpoint(checkpoint, encoder=encoder, decoder=decoder)
# choose your trained nn.Module
encoder = autoencoder.encoder
encoder.eval()
# embed 4 fake images!
fake_image_batch = torch.rand(4, 28 * 28, device=autoencoder.device)
embeddings = encoder(fake_image_batch)
print("⚡" * 20, "\nPredictions (4 image embeddings):\n", embeddings, "\n", "⚡" * 20)
```
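One thing I started to wonder about afterwards: `named_parameters()` yields the live parameter tensors, so `initial_model_param_dict` and the later dict may both point at the same storage, which would make the comparison always pass. The aliasing effect is easy to reproduce with plain numpy (an analogy only — I have not confirmed this is what happens here):

```python
import numpy as np

p = np.zeros(3)
ref_snapshot = {"p": p}          # keeps a reference to the live array
copy_snapshot = {"p": p.copy()}  # keeps an independent copy

p += 1.0                         # in-place update, like an optimizer step

print((ref_snapshot["p"] == p).all())   # True: the reference tracked the update
print((copy_snapshot["p"] == p).any())  # False: the copy kept the old values
```

If that is the cause, storing `param.detach().clone()` instead should give a real before/after comparison.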
### Error messages and logs
```
Epoch 0: 93%|█████████▎| 93/100 [00:00<00:00, 233.69it/s, v_num=3]All parameters are identical!
Epoch 0: 94%|█████████▍| 94/100 [00:00<00:00, 234.44it/s, v_num=3]All parameters are identical!
Epoch 0: 95%|█████████▌| 95/100 [00:00<00:00, 235.17it/s, v_num=3]All parameters are identical!
Epoch 0: 96%|█████████▌| 96/100 [00:00<00:00, 235.91it/s, v_num=3]All parameters are identical!
Epoch 0: 97%|█████████▋| 97/100 [00:00<00:00, 236.62it/s, v_num=3]All parameters are identical!
Epoch 0: 98%|█████████▊| 98/100 [00:00<00:00, 237.32it/s, v_num=3]All parameters are identical!
Epoch 0: 99%|█████████▉| 99/100 [00:00<00:00, 238.00it/s, v_num=3]All parameters are identical!
Epoch 0: 100%|██████████| 100/100 [00:00<00:00, 234.96it/s, v_num=3]
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.5.0):
#- PyTorch Version (e.g., 2.5):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_ | closed | 2025-03-08T12:36:33Z | 2025-03-08T13:11:48Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20627 | [
"bug",
"needs triage",
"ver: 2.5.x"
] | Risiamu | 0 |
sqlalchemy/alembic | sqlalchemy | 775 | AttributeError: 'Identity' object has no attribute 'arg' when autogenerating migration on Postgres table with identity columns | **Describe the bug**
Trying to autogenerate a migration on a Postgres table with an `Identity` column produces an `AttributeError` from Alembic. It seems to be assuming the `server_default` member of the `Identity` column has an `arg` member that it doesn't have. Not sure whether that should be classified as an Alembic bug or a SQLAlchemy bug.
**Expected behavior**
Alembic should generate a migration without any errors.
**To Reproduce**
Please try to provide a [Minimal, Complete, and Verifiable](http://stackoverflow.com/help/mcve) example, with the migration script and/or the SQLAlchemy tables or models involved.
See also [Reporting Bugs](https://www.sqlalchemy.org/participate.html#bugs) on the website.
On a cursory glance, it appears to require a Postgres table with an `Identity` column, and a matching identity column in an equivalent metadata table. A simple way to reproduce this is as follows:
1. Create a test database:
```pgsql
create database test_id with owner postgres;
```
2. Connect to the database and create a test table with an `Identity` column:
```pgsql
create table table_name
(
id integer generated always as identity
);
```
3. Create a matching schema in SQLAlchemy:
```py
import sqlalchemy
from sqlalchemy.orm import declarative_base
Base = declarative_base()
class TableName(Base):
__tablename__ = 'table_name'
id = sqlalchemy.Column(sqlalchemy.Integer, sqlalchemy.Identity(), primary_key=True)
```
4. Setup environment in Alembic, set `sqlalchemy.url` in your `alembic.ini` to point to Test DB, set `target_metadata = Base.metadata` in your `env.py`.
5. Try to autogenerate a revision:
```shell
alembic revision --autogenerate
```
**Error**
```shell
(venv) C:\Users\jcrot\PycharmProjects\complex_systems_sqlalchemy>alembic revision --autogenerate
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
Traceback (most recent call last):
File "C:\Users\jcrot\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\jcrot\AppData\Local\Programs\Python\Python38\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\jcrot\PycharmProjects\complex_systems_sqlalchemy\venv\Scripts\alembic.exe\__main__.py", line 7, in <module>
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\config.py", line 581, in main
CommandLine(prog=prog).main(argv=argv)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\config.py", line 575, in main
self.run_cmd(cfg, options)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\config.py", line 552, in run_cmd
fn(
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\command.py", line 214, in revision
script_directory.run_env()
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\script\base.py", line 489, in run_env
util.load_python_file(self.dir, "env.py")
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\util\pyfiles.py", line 98, in load_python_file
module = load_module_py(module_id, path)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\util\compat.py", line 184, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "alembic\env.py", line 81, in <module>
run_migrations_online()
File "alembic\env.py", line 75, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\runtime\environment.py", line 846, in run_migrations
self.get_context().run_migrations(**kw)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\runtime\migration.py", line 511, in run_migrations
for step in self._migrations_fn(heads, self):
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\command.py", line 190, in retrieve_migrations
revision_context.run_autogenerate(rev, context)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\autogenerate\api.py", line 442, in run_autogenerate
self._run_environment(rev, migration_context, True)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\autogenerate\api.py", line 481, in _run_environment
compare._populate_migration_script(
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\autogenerate\compare.py", line 25, in _populate_migration_script
_produce_net_changes(autogen_context, upgrade_ops)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\autogenerate\compare.py", line 50, in _produce_net_changes
comparators.dispatch("schema", autogen_context.dialect.name)(
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\util\langhelpers.py", line 303, in go
fn(*arg, **kw)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\autogenerate\compare.py", line 78, in _autogen_for_tables
_compare_tables(
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\autogenerate\compare.py", line 209, in _compare_tables
with _compare_columns(
File "C:\Users\jcrot\AppData\Local\Programs\Python\Python38\lib\contextlib.py", line 113, in __enter__
return next(self.gen)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\autogenerate\compare.py", line 312, in _compare_columns
comparators.dispatch("column")(
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\util\langhelpers.py", line 303, in go
fn(*arg, **kw)
File "c:\users\jcrot\pycharmprojects\complex_systems_sqlalchemy\venv\lib\site-packages\alembic\autogenerate\compare.py", line 1021, in _compare_server_default
conn_col.server_default.arg.text
AttributeError: 'Identity' object has no attribute 'arg'
```
**Versions.**
- OS: Windows 10 Pro 19041.685
- Python: 3.8.6
- Alembic: 1.4.3
- SQLAlchemy: 1.4.0b1
- Database: PostgreSQL 13.0
- DBAPI: psycopg2 2.8.6
**Additional context**
**Have a nice day!**
| closed | 2021-01-03T03:14:07Z | 2023-11-28T12:20:10Z | https://github.com/sqlalchemy/alembic/issues/775 | [
"question",
"autogenerate - defaults"
] | TV4Fun | 7 |
sinaptik-ai/pandas-ai | pandas | 660 | Set the preferred visualization library | ### 🚀 The feature
We need to enhance the flexibility of our application by allowing users to choose their preferred data visualization library. This will not only improve the user experience but also make our application more versatile and appealing to a broader audience.
## Proposed Solution:
- Add a Config: Introduce a new configuration parameter in our application settings that allows users to define their preferred data visualization library (`matplotlib`, `seaborn`, or `plotly`). This setting should be easily accessible and modifiable by the user.
- Update the Prompt: Modify the application's user prompt to include a specific instruction.
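A rough sketch of what the setting could look like (names are purely illustrative — not the actual pandas-ai API):

```python
from dataclasses import dataclass

SUPPORTED_LIBRARIES = {"matplotlib", "seaborn", "plotly"}

@dataclass
class VizConfig:
    """Hypothetical config object holding the preferred plotting library."""
    viz_library: str = "matplotlib"

    def __post_init__(self):
        if self.viz_library not in SUPPORTED_LIBRARIES:
            raise ValueError(f"viz_library must be one of {sorted(SUPPORTED_LIBRARIES)}")

    def prompt_instruction(self) -> str:
        # Appended once to the generated prompt instead of repeated by the user.
        return f"When plotting, use the {self.viz_library} library."

print(VizConfig(viz_library="plotly").prompt_instruction())
```

With something like this in place, the prompt builder could inject the instruction automatically for every query.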
### Motivation, pitch
At the moment the user has to specify in every prompt which library to use.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2023-10-19T20:52:19Z | 2023-10-31T14:26:57Z | https://github.com/sinaptik-ai/pandas-ai/issues/660 | [
"enhancement",
"good first issue"
] | gventuri | 9 |
deezer/spleeter | deep-learning | 626 | [Bug] Spleeter Separate on custom trained model tries to download another model |
## Description
I have trained a custom model and the separate function does not work as intended as it tries to download the model from a nonexisting url.
## Step to reproduce
1. Created custom model training data / specs annotated in the custom_model_config.json
2. Trained model (successfully, apparently) using `!spleeter train -p "custom_model_config.json" -d "PathToCustomDataset"`. I checked that the trained model folder was created.
3. When trying to separate a file using `!spleeter separate -o sep_out -p "custom_model_config.json" "test.mp3"`
The function tries to download the model from a URL which obviously does not exist. Why is this happening?
## Output
```
tcmalloc: large alloc 1694474240 bytes == 0x556df36f0000 @ 0x7f9e527bf1e7 0x7f9e4ef5f631 0x7f9e4efc3cc8 0x7f9e4efc3de3 0x7f9e4f061ed8 0x7f9e4f062734 0x7f9e4f062882 0x556d45843f68 0x7f9e4efaf53d 0x556d45841c47 0x556d45841a50 0x556d458b5453 0x556d458b04ae 0x556d458433ea 0x556d458b232a 0x556d458b04ae 0x556d458433ea 0x556d458b160e 0x556d4584330a 0x556d458b160e 0x556d458b04ae 0x556d458433ea 0x556d458b160e 0x556d458b04ae 0x556d458433ea 0x556d458b232a 0x556d458b04ae 0x556d45782e2c 0x556d458b2bb5 0x556d458b07ad 0x556d45782e2c
INFO:spleeter:Downloading model archive https://github.com/deezer/spleeter/releases/download/v1.4.0/bach10_model_2.tar.gz
Traceback (most recent call last):
File "/usr/local/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/usr/local/lib/python3.7/dist-packages/spleeter/__main__.py", line 256, in entrypoint
spleeter()
  File "/usr/local/lib/python3.7/dist-packages/typer/main.py", line 214, in __call__
    return get_command(self)(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/usr/local/lib/python3.7/dist-packages/spleeter/__main__.py", line 137, in separate
synchronous=False,
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 382, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 325, in separate
return self._separate_librosa(waveform, audio_descriptor)
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 269, in _separate_librosa
sess = self._get_session()
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 241, in _get_session
model_directory: str = provider.get(self._params["model_dir"])
File "/usr/local/lib/python3.7/dist-packages/spleeter/model/provider/__init__.py", line 80, in get
self.download(model_directory.split(sep)[-1], model_directory)
File "/usr/local/lib/python3.7/dist-packages/spleeter/model/provider/github.py", line 141, in download
response.raise_for_status()
File "/usr/local/lib/python3.7/dist-packages/httpx/_models.py", line 1103, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: 404 Client Error: Not Found for url: https://github.com/deezer/spleeter/releases/download/v1.4.0/bach10_model_2.tar.gz
For more information check: https://httpstatuses.com/404
```
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | Linux (Colab environment) |
| Installation type | pip |
| RAM available | 16GB |
| closed | 2021-05-29T16:33:08Z | 2021-05-31T09:36:24Z | https://github.com/deezer/spleeter/issues/626 | [
"bug",
"invalid"
] | andresC98 | 4 |
benbusby/whoogle-search | flask | 222 | [QUESTION] Getting past captcha | I use whoogle on Arch Linux via the whoogle-git AUR package through a VPN, and I am getting the following:
> Our systems have detected unusual traffic from your computer network. This page checks to see if it's really you sending the requests, and not a robot.
If I manually go to Google I can solve the captcha, but it does not resolve the block when going through my whoogle instance, since the captcha itself doesn't appear there. How do I fix this? | closed | 2021-03-15T12:59:22Z | 2021-03-16T14:39:46Z | https://github.com/benbusby/whoogle-search/issues/222 | [
"question"
] | wazlecracker | 2 |
d2l-ai/d2l-en | pytorch | 2,573 | Incorrect Use of torch.no_grad() in fit_epoch Method in d2l/torch.py::Trainer::fit_epoch | Hello,
I noticed a potential issue in the fit_epoch method in https://github.com/d2l-ai/d2l-en/blob/master/d2l/torch.py, where loss.backward() is called within a torch.no_grad() block:
```
self.optim.zero_grad()
with torch.no_grad():
loss.backward()
...
```
This usage likely prevents the calculation of gradients, as loss.backward() should not be inside a torch.no_grad() block. The correct approach would be:
```python
self.optim.zero_grad()
loss.backward()
...
```
Here is the original code:
```python
def fit_epoch(self):
"""Defined in :numref:`sec_linear_scratch`"""
self.model.train()
for batch in self.train_dataloader:
loss = self.model.training_step(self.prepare_batch(batch))
self.optim.zero_grad()
with torch.no_grad():
loss.backward()
if self.gradient_clip_val > 0: # To be discussed later
self.clip_gradients(self.gradient_clip_val, self.model)
self.optim.step()
self.train_batch_idx += 1
if self.val_dataloader is None:
return
self.model.eval()
for batch in self.val_dataloader:
with torch.no_grad():
self.model.validation_step(self.prepare_batch(batch))
self.val_batch_idx += 1
``` | open | 2023-12-19T16:21:48Z | 2024-10-29T17:35:14Z | https://github.com/d2l-ai/d2l-en/issues/2573 | [] | caydenwei | 3 |
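The step ordering the issue proposes (zero the gradients, then backward, then clip, then step) can be sketched without PyTorch. The tiny hand-rolled `Param` below is only a stand-in that mimics autograd's grad *accumulation*; the names `Param`, `backward`, and `sgd_step` are illustrative, not d2l or torch APIs.

```python
# Minimal stand-in for an autograd parameter: backward() *accumulates*
# into .grad, which is why zero_grad() must come first in each step.
class Param:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

def zero_grad(p):
    p.grad = 0.0

def backward(p, x, y):
    # d/dp of the squared error (p*x - y)^2, accumulated like torch does
    p.grad += 2 * (p.value * x - y) * x

def sgd_step(p, lr=0.1, clip=1.0):
    # gradient clipping, then the optimizer update
    g = max(-clip, min(clip, p.grad))
    p.value -= lr * g

p = Param(0.0)
for _ in range(50):
    zero_grad(p)               # self.optim.zero_grad()
    backward(p, x=1.0, y=2.0)  # loss.backward() -- outside any no_grad block
    sgd_step(p)                # clip_gradients(...) then self.optim.step()

print(round(p.value, 2))  # → 2.0
```

With this ordering, each step sees only the gradient of the current batch; the accumulation behavior is exactly why swapping `zero_grad` and `backward` would be a bug.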
scrapy/scrapy | python | 6,056 | Dropping is_botocore wasn't documented | `scrapy.utils.boto.is_botocore()` was dropped in #5719 but we forgot to mention it in 2.8.0 release notes, we should do that now. | closed | 2023-09-20T10:20:20Z | 2023-09-22T08:12:22Z | https://github.com/scrapy/scrapy/issues/6056 | [
"bug",
"good first issue",
"docs"
] | wRAR | 2 |
labmlai/annotated_deep_learning_paper_implementations | pytorch | 161 | StyleGAN2: Why don't you multiply path length penalty by the lazy regularization interval? | In the paper the StyleGAN2 code based on it is mentioned that when using lazy regularization technique, the regularization terms should be multiplied "by k to balance the overall magnitude of its gradients". Although you do that with the gradient penalty, PLP without being multiplied by anything is added to the generator loss. In other implementations of this paper, both GP and PLP are multiplied (but there is also a separate step for regularizations unlike it is in yours). I have not tested if there are any improvements when this feature but there were definitely some when I decreased the lazy path penalty interval from 32 to 4 (when it was 32 latent vector was ignored as it was mentioned in one of the issues)
Is there any reason not to multiply PLP by the interval or is it a bug? | closed | 2023-01-04T10:17:00Z | 2023-03-27T21:00:24Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/161 | [] | yanisnotavocado | 0 |
Farama-Foundation/PettingZoo | api | 488 | MPE change color during step | I'm using the MPE env for a navigation task.
I have a set of custom Landmark classes that need to be visited every N steps. At each step I increase, via a custom attribute, the amount of time each landmark has gone unvisited.
On top of that, I would like to modify each landmark's color to reflect the time elapsed since its last visit.
My custom class is the following:
```python
from colour import Color  # pip install colour
import numpy as np
# Entity comes from PettingZoo's MPE core module

class TimerLandmark(Entity):
"""
Timer landmark class.
Each landmark increases its timer by the 'increase' param per step.
This timer is proportional (timer* penalty) to the penalty agents get at each turn.
The timer resets when an agent lands upon it and the timer starts from zero.
So the longer a landmark stays untouched the worse the penalty gets.
"""
colors = list(Color("green").range_to(Color("red"), 100))
def __init__(self, increase=0.1):
super().__init__()
self.timer = 0
self.increase = increase
self.counter = 0
def reset(self, world, np_random):
self.timer = 0
self.color = np.array([0, 1, 0])
self.state.p_pos = np_random.uniform(-0.9, +0.9, world.dim_p)
self.state.p_vel = np.zeros(world.dim_p)
def step(self):
""" Where the color changes"""
self.counter += 1
self.timer = self.increase * self.counter
c = self.colors[self.counter]
c = c.get_hsl()
self.color = np.array(c)
def reset_timer(self):
self.timer = 0
```
And my step function in a `SimpleEnv` class is
```
def step(self, action):
super(raw_env, self).step(action)
for landmark in self.world.landmarks:
visited = False
# check if one agent has visited the landmark
for agent in self.world.agents:
if self.is_collision(agent, landmark):
visited = True
break
# if visited reset else increase
if visited:
landmark.reset_timer()
else:
landmark.step()
```
Since the colors don't change, I looked around in the render code and saw that the code where the color is instantiated is called just once, at the start of the episode. Is there something I can do to dynamically change the colors? | closed | 2021-09-22T08:13:20Z | 2021-10-07T02:06:38Z | https://github.com/Farama-Foundation/PettingZoo/issues/488 | [] | nicofirst1 | 10 |
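The timer-to-color mapping the landmark class is after can also be done without the `colour` package: linearly interpolate green→red by a normalized, clamped timer. `timer_to_rgb` below is a hypothetical helper (the `max_timer` scale is an assumption); clamping also avoids the index-out-of-range that the 100-entry `colors` list would hit after 100 steps.

```python
def timer_to_rgb(timer, max_timer=10.0):
    """Map a visit timer to an RGB triple from green (fresh) to red (stale)."""
    t = max(0.0, min(1.0, timer / max_timer))  # clamp to [0, 1]
    return (t, 1.0 - t, 0.0)                   # (r, g, b) in [0, 1]

# e.g. inside TimerLandmark.step():
#     self.color = np.array(timer_to_rgb(self.timer))
print(timer_to_rgb(0.0), timer_to_rgb(5.0), timer_to_rgb(999.0))
# → (0.0, 1.0, 0.0) (0.5, 0.5, 0.0) (1.0, 0.0, 0.0)
```

Note this only fixes the value assigned to `self.color`; whether the renderer re-reads that value each frame is the separate question the issue raises.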
waditu/tushare | pandas | 1,717 | Unable to query the Beijing Stock Exchange trading calendar | The pro.trade_cal() method cannot retrieve the Beijing Stock Exchange (北交所) calendar; please add support for it. | open | 2023-08-17T12:15:32Z | 2023-08-17T12:15:32Z | https://github.com/waditu/tushare/issues/1717 | [] | linhuanheng | 0 |
jupyterlab/jupyter-ai | jupyter | 283 | When replacing cell output using the chat UI, add to cell metadata |
### Problem
When using the "replace output" option in the chat UI, the user can change a cell to contain AI-generated output, but the cell metadata does not reflect this.
### Proposed Solution
Amend the cell metadata when a cell contains AI-generated output inserted from the chat provider.
### Additional context
If a cell is created using a magic command that called model provider A, then the chat UI uses model provider B to update its contents, should the cell's metadata refer to both providers? | open | 2023-07-21T00:21:28Z | 2023-07-27T23:53:43Z | https://github.com/jupyterlab/jupyter-ai/issues/283 | [
"enhancement",
"scope:chat-ux"
] | JasonWeill | 0 |
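One way the proposal could look in practice is to append one provider record per generation to the cell's metadata dict; keeping a history also answers the multi-provider question by recording both providers. The `jupyter_ai` metadata key and its schema below are purely hypothetical, not an existing Jupyter AI convention.

```python
def record_ai_provider(cell, provider, model):
    """Append a provenance record to an nbformat-style cell dict."""
    meta = cell.setdefault("metadata", {})
    history = meta.setdefault("jupyter_ai", {}).setdefault("providers", [])
    history.append({"provider": provider, "model": model})
    return cell

cell = {"cell_type": "code", "source": "print('hi')", "metadata": {}}
record_ai_provider(cell, "A", "model-a")   # cell created via a magic command
record_ai_provider(cell, "B", "model-b")   # output later replaced via the chat UI
print(cell["metadata"]["jupyter_ai"]["providers"])
```

Because `setdefault` is used throughout, the helper is safe on cells that have never been touched by AI tooling.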
tqdm/tqdm | jupyter | 1,194 | tqdm.notebook not rendering | I have an issue where tqdm works fine, but tqdm.notebook shows an unformatted (standard tqdm) progress bar that does not update at all. I have tried it in a virtual env with only jupyter and tqdm pip-installed.
Perhaps there is a clash with some other package. I'd be pleased if anyone could tell me how to fix this.
| closed | 2021-06-25T18:02:06Z | 2021-07-29T15:22:53Z | https://github.com/tqdm/tqdm/issues/1194 | [
"question/docs ‽",
"submodule-notebook 📓"
] | simonm3 | 2 |
strawberry-graphql/strawberry | fastapi | 3,641 | From_pydantic fails on union type | ## Describe the Bug
If I do:
```python
QuestionType = Annotated[Union[QuestionMultipleChoiceType, QuestionTextType, QuestionNumberType],
strawberry.union("Question")]
```
Then I'm not able to call:
```python
QuestionType.from_pydantic(question)
```
as `from_pydantic` does not exist on union types.
MWE:
```python
import strawberry
import operator
from pydantic import BaseModel, TypeAdapter
from typing import Union, Literal, Annotated
from strawberry.schema.config import StrawberryConfig
# Pedantic classes
class QuestionMultipleChoice(BaseModel):
# This helps pydantic to distinguish the types
kind: str = "QuestionMultipleChoice"
nbChoices: int
class QuestionText(BaseModel):
kind: str = "QuestionText"
class QuestionNumber(BaseModel):
kind: str = "QuestionNumber"
Question = Union[
QuestionMultipleChoice,
QuestionText,
QuestionNumber
]
# GraphQL types
@strawberry.experimental.pydantic.type(model=QuestionMultipleChoice, all_fields=True)
class QuestionMultipleChoiceType:
pass
@strawberry.experimental.pydantic.type(model=QuestionText, all_fields=True)
class QuestionTextType:
pass
@strawberry.experimental.pydantic.type(model=QuestionNumber, all_fields=True)
class QuestionNumberType:
pass
# Redefining all types in the union seems very verbose and error-prone. Is this really the good approach?
QuestionType = Annotated[Union[QuestionMultipleChoiceType, QuestionTextType, QuestionNumberType],
strawberry.union("Question")]
# Fake a redis database
data = {
"questionA": QuestionMultipleChoice(nbChoices=4).model_dump_json(),
"questionB": QuestionText().model_dump_json(),
"questionC": QuestionNumber().model_dump_json(),
}
print(data)
@strawberry.type
class Query:
@strawberry.field
def get_ident(self, questionID: str) -> QuestionType:
question = TypeAdapter(Question).validate_json(data[questionID])
print("question:", question)
return QuestionType.from_pydantic(question)
config = StrawberryConfig(
default_resolver=operator.getitem
)
schema = strawberry.Schema(query=Query, config=config)
```
## System Information
- Operating system: NixOs
- Strawberry version (if applicable): 0.237.3
"bug"
] | tobiasBora | 0 |
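Pending a fix, the usual way around a union type having no `from_pydantic` is explicit dispatch on the concrete model class, since `Annotated[Union[...], strawberry.union(...)]` is just a type annotation, not an object with methods. The registry below sketches that pattern with plain stand-in classes (all names are illustrative; in real code the values would be the `@strawberry.experimental.pydantic.type` classes, which do have `from_pydantic`).

```python
class QuestionText:            # stand-ins for the pydantic models
    pass

class QuestionNumber:
    pass

class QuestionTextType:        # stand-ins for the strawberry types
    @classmethod
    def from_pydantic(cls, m):
        return cls()

class QuestionNumberType:
    @classmethod
    def from_pydantic(cls, m):
        return cls()

# Dispatch table: concrete pydantic class -> matching GraphQL type
TO_GQL = {QuestionText: QuestionTextType, QuestionNumber: QuestionNumberType}

def to_question_type(question):
    return TO_GQL[type(question)].from_pydantic(question)

print(type(to_question_type(QuestionNumber())).__name__)  # → QuestionNumberType
```

The dispatch table is the same mapping the union annotation already spells out, so it stays verbose but at least fails loudly (`KeyError`) on an unmapped model.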
python-gitlab/python-gitlab | api | 3,081 | CLI: option to auto-detect GITLAB and project-id arguments from current git repo | Please add a CLI option to auto-detect GITLAB (for -g argument) and project ID from the current git repo the gitlab command is run from. I implemented a shell wrapper around gitlab that basically would do the following:
* git remote get-url $(git remote)
* match the host of the above URL with entries in ~/.python-gitlab.cfg to get the GITLAB name
* Use the last part of the remote URL as project ID
There might be a case of multiple remotes on the git repo., I would suggest adding `--git-remote REMOTE` argument for such case | open | 2025-01-11T07:26:56Z | 2025-01-21T05:23:26Z | https://github.com/python-gitlab/python-gitlab/issues/3081 | [
"cli"
] | aelmahmoudy | 1 |
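The wrapper steps described above can be sketched in pure Python. The host/project split below is my reading of the request (strip the leading separator, drop `.git`), not existing python-gitlab behavior, and the config-section matching against `~/.python-gitlab.cfg` is left as a comment.

```python
import re

def parse_git_remote(url):
    """Extract (host, project_path) from an SSH or HTTPS git remote URL."""
    m = re.match(r"(?:ssh://)?git@([^:/]+)[:/](.+?)(?:\.git)?$", url)
    if not m:
        m = re.match(r"https?://([^/]+)/(.+?)(?:\.git)?$", url)
    if not m:
        raise ValueError(f"unrecognized remote: {url}")
    return m.group(1), m.group(2)

# A wrapper would then match `host` against each section's url in
# ~/.python-gitlab.cfg to pick the -g gitlab name, and pass the path
# (or its last segment, as the issue suggests) as the project ID.
print(parse_git_remote("git@gitlab.com:group/project.git"))
# → ('gitlab.com', 'group/project')
```

Returning the full namespaced path keeps the choice open between "last segment" and the URL-encoded full path that the GitLab API also accepts.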
opengeos/leafmap | jupyter | 84 | Add a map title | This feature allows users to add a title to a folium map.
 | closed | 2021-07-13T13:43:16Z | 2021-07-13T19:24:03Z | https://github.com/opengeos/leafmap/issues/84 | [
"Feature Request"
] | giswqs | 1 |
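A common way to get a title onto a folium map is to inject an HTML heading into the map root — to be clear, this is my assumption about how such a feature might be implemented, not leafmap's actual code. The helper below only builds the HTML string; the folium attach call is sketched in a comment.

```python
def title_html(text, font_size="20px"):
    """Build the HTML snippet for a centered map title."""
    return (
        f'<h3 align="center" style="font-size:{font_size}">'
        f"<b>{text}</b></h3>"
    )

# With folium one would then attach it, roughly:
#   m.get_root().html.add_child(folium.Element(title_html("My Map")))
print(title_html("Land cover, 2020"))
```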
horovod/horovod | pytorch | 2,956 | When using the ring-of-rings branch of Horovod, a segmentation fault (MPI) appears when running distributed programs | 1. Framework: TensorFlow v1 (1.15) with Keras 2.2.4
2. OS and version: Ubuntu16.04 LTS
3. Horovod version: Branch ring-of-rings(horovod==0.12.2.dev0)
4. MPI version: OpenMPI 4.0.0
5. CUDA version:10.0
6. NCCL version:2.5.6
7. Python version:3.6.13(conda)
8. GCC version:5.4.0
9. CMake version:3.18.4
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
No
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
Yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
Yes
**Bug report:**
Firstly compile the source code of the ring-of-rings branch,than I enter the conda environment. Finally,when I attempt to train Resnet50 model&Vgg model,some problems about mpi happens:Segmentation fault(mpirun noticed that process rank 0 with PID 0 on node node06 exited on signal 11 (Segmentation fault).
**The instruction to run the distributed program is: mpirun -np 2 python cifar10_resnet50.py**
The specific error information is as follows:
[node06:24453] *** Process received signal ***
[node06:24453] Signal: Segmentation fault (11)
[node06:24453] Signal code: Invalid permissions (2)
[node06:24453] Failing at address: 0x230f589800
[node06:24453] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980)[0x7fed0c311980]
[node06:24453] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x18ec21)[0x7fed0c09cc21]
[node06:24453] [ 2] /usr/local/lib/openmpi/mca_btl_vader.so(+0x2ed0)[0x7fec925a5ed0]
[node06:24453] [ 3] /usr/local/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_send_request_start_prepare+0x51)[0x7fec917803e1]
[node06:24453] [ 4] /usr/local/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_send+0x14e3)[0x7fec9176ddf3]
[node06:24453] [ 5] /usr/local/lib/libmpi.so.40(ompi_coll_base_bcast_intra_split_bintree+0x6ef)[0x7feca20cdc0f]
[node06:24453] [ 6] /usr/local/lib/openmpi/mca_coll_tuned.so(ompi_coll_tuned_bcast_intra_dec_fixed+0x126)[0x7fec90702386]
[node06:24453] [ 7] /usr/local/lib/libmpi.so.40(MPI_Bcast+0x199)[0x7feca208f079]
[node06:24453] [ 8] /home/antl/anaconda3/envs/tf1.15-test/lib/python3.6/site-packages/horovod-0.12.2.dev0-py3.6-linux-x86_64.egg/horovod/common/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0x47650)[0x7feca2605650]
[node06:24453] [ 9] /home/antl/anaconda3/envs/tf1.15-test/lib/python3.6/site-packages/horovod-0.12.2.dev0-py3.6-linux-x86_64.egg/horovod/common/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0x4ff31)[0x7feca260df31]
[node06:24453] [10] /home/antl/anaconda3/envs/tf1.15-test/bin/../lib/libstdc++.so.6(+0xc819d)[0x7fecb833b19d]
[node06:24453] [11] /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7fed0c3066db]
[node06:24453] [12] /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fed0c02f71f]
[node06:24453] *** End of error message ***
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
**What confuses me is that before May 2021 there was no problem running distributed programs on the ring-of-rings branch, but since May, unless MPI is compiled with CUDA-aware support, this MPI segmentation fault appears!**
| open | 2021-06-07T09:53:38Z | 2021-06-07T09:53:38Z | https://github.com/horovod/horovod/issues/2956 | [
"bug"
] | Frank00001 | 0 |
pydantic/pydantic-ai | pydantic | 580 | Parallel function calling broken on Vertex Gemini | If multiple function calls are made in a single response from Gemini, we're replying to it with one `content` per `functionResponse` instead of 1 `part` per `functionResponse`. This generates a 400 from Vertex with:
```json
{
"error": {
"code": 400,
"message": "Please ensure that the number of function response parts should be equal to number of function call parts of the function call turn.",
"status": "INVALID_ARGUMENT"
}
}
```
Here's the only place I could find a [clear usage example](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#rest_1) in the Vertex docs.
Here's an MRE that fails consistently:
```python
# /// script
# dependencies = [
# "pydantic-ai"
# ]
# ///
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.vertexai import VertexAIModel
agent = Agent(
model=VertexAIModel(
model_name='gemini-1.5-flash',
service_account_file='/path/to/service_account_file.json',
)
)
@agent.tool
def fetch_weather(_ctx: RunContext[None], city: str) -> str:
return f'The weather in {city} is sunny.'
def main():
prompt = "What's the weather in Paris, Tokyo, and New York?"
result = agent.run_sync(prompt, model_settings={'temperature': 0.0})
# Expect error here
# pydantic_ai.exceptions.UnexpectedModelBehavior: Unexpected response from gemini 400, body:
# {
# "error": {
# "code": 400,
# "message": "Please ensure that the number of function response parts should be equal to number of function call parts of the function call turn.",
# "status": "INVALID_ARGUMENT"
# }
# }
print('=== Final Result ===')
print(result.data)
if __name__ == '__main__':
main()
```
Happy New Year, love the library! :) | closed | 2024-12-31T22:34:18Z | 2025-01-02T15:36:09Z | https://github.com/pydantic/pydantic-ai/issues/580 | [
"bug"
] | montasaurus | 3 |
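The 400 boils down to message shape: Vertex wants all of a turn's `functionResponse`s as sibling `parts` of one `content`, not one `content` each. The builder below just assembles that JSON shape — field names follow the Vertex REST example linked in the issue, while the helper itself is illustrative, not pydantic-ai code (the `role` used for function responses also varies between the REST docs and SDKs).

```python
def function_response_content(results):
    """Pack N tool results into ONE content with N functionResponse parts."""
    return {
        "role": "user",  # role per the Vertex REST example
        "parts": [
            {"functionResponse": {"name": name, "response": {"content": value}}}
            for name, value in results
        ],
    }

content = function_response_content([
    ("fetch_weather", "The weather in Paris is sunny."),
    ("fetch_weather", "The weather in Tokyo is sunny."),
    ("fetch_weather", "The weather in New York is sunny."),
])
print(len(content["parts"]))  # must equal the number of functionCall parts: 3
```

The invariant the API enforces — part counts match between the call turn and the response turn — is exactly what the error message states.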
hankcs/HanLP | nlp | 606 | Simplified-to-Traditional conversion error: the name "梁鹏" becomes "樑鹏" after conversion to Traditional Chinese | Version: portable-1.3.4
| closed | 2017-08-22T01:36:11Z | 2017-08-27T03:10:07Z | https://github.com/hankcs/HanLP/issues/606 | [
"improvement"
] | lizhengjun1982 | 1 |
unit8co/darts | data-science | 1,989 | [BUG] RuntimeError when Running TransformerModel on Multiple GPUs with darts 0.25.0 | ### **Describe the bug**
When I attempt to run the `TransformerModel` using multiple GPUs, I encounter the following error:
```
RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation.
```
### **To Reproduce**
Below is the code snippet to reproduce the issue:
```python
# Imports added for completeness
import numpy as np
import pandas as pd
import torch
import matplotlib.pyplot as plt
from darts.datasets import AirPassengersDataset
from darts.dataprocessing.transformers import Scaler
from darts.models import TransformerModel
from darts.metrics import mape

if __name__ == '__main__':
torch.multiprocessing.freeze_support()
series = AirPassengersDataset().load().astype(np.float32)
# Create training and validation sets:
train, val = series.split_after(pd.Timestamp("19590101"))
# Normalize the time series (note: we avoid fitting the transformer on the validation set)
scaler = Scaler()
train_scaled = scaler.fit_transform(train)
val_scaled = scaler.transform(val)
series_scaled = scaler.transform(series)
my_model = TransformerModel(
input_chunk_length=12,
output_chunk_length=1,
batch_size=32,
n_epochs=200,
model_name="air_transformer",
nr_epochs_val_period=10,
d_model=16,
nhead=8,
num_encoder_layers=2,
num_decoder_layers=2,
dim_feedforward=128,
dropout=0.1,
activation="relu",
random_state=42,
save_checkpoints=True,
force_reset=True,
pl_trainer_kwargs = {"accelerator": "gpu", "devices":[0,1],"strategy": "ddp" },
)
my_model.fit(series=train_scaled, val_series=val_scaled, verbose=True)
def eval_model(model, n, series, val_series):
pred_series = model.predict(n=n)
plt.figure(figsize=(8, 5))
series.plot(label="actual")
pred_series.plot(label="forecast")
plt.title("MAPE: {:.2f}%".format(mape(pred_series, val_series)))
plt.legend()
eval_model(my_model, 26, series_scaled, val_scaled)
best_model = TransformerModel.load_from_checkpoint(
model_name="air_transformer", best=True
)
eval_model(best_model, 26, series_scaled, val_scaled)
backtest_series = my_model.historical_forecasts(
series=series_scaled,
start=pd.Timestamp("19590101"),
forecast_horizon=6,
retrain=False,
verbose=True,
)
```
### **Expected behavior**
I expect the model to run using multiple GPUs without any issues.
### **System (please complete the following information):**
- Python version: 3.9
- darts version: 0.25.0
### **Additional context**
The model runs correctly when using a single GPU.
| open | 2023-09-12T06:52:00Z | 2023-11-02T11:05:48Z | https://github.com/unit8co/darts/issues/1989 | [
"bug",
"gpu"
] | PANXIONG-CN | 2 |
microsoft/qlib | machine-learning | 1,786 | Questions about 'get_feature_config' function in class HighFreqGeneralHandler | When I run the following code
`python scripts/gen_pickle_data_hsm.py -c scripts/pickle_data_config.yml`
, I print out the generated results of self.get_feature_config() in class HighFreqGeneralHandler. The results are as follows:
```python
['Cut(FFillNan(If(IsNull($open), $close, $open)/DayLast(Ref(FFillNan($close), 480))), 480, None)',
'Cut(FFillNan(If(IsNull($high), $close, $high)/DayLast(Ref(FFillNan($close), 480))), 480, None)',
'Cut(FFillNan(If(IsNull($low), $close, $low)/DayLast(Ref(FFillNan($close), 480))), 480, None)',
'Cut(FFillNan(If(IsNull($close), $close, $close)/DayLast(Ref(FFillNan($close), 480))), 480, None)',
'Cut(FFillNan(Ref(If(IsNull($open), $close, $open), 240)/DayLast(Ref(FFillNan($close), 240))), 480, None)',
'Cut(FFillNan(Ref(If(IsNull($high), $close, $high), 240)/DayLast(Ref(FFillNan($close), 240))), 480, None)',
'Cut(FFillNan(Ref(If(IsNull($low), $close, $low), 240)/DayLast(Ref(FFillNan($close), 240))), 480, None)',
'Cut(FFillNan(Ref(If(IsNull($close), $close, $close), 240)/DayLast(Ref(FFillNan($close), 240))), 480, None)',
'Cut(If(IsNull($volume/Ref(DayLast(Mean($volume, 7200)), 240)), 0, $volume/Ref(DayLast(Mean($volume, 7200)), 240)), 480, None)',
'Cut(If(IsNull(Ref($volume, 240)/Ref(DayLast(Mean($volume, 7200)), 240)), 0, Ref($volume, 240)/Ref(DayLast(Mean($volume, 7200)), 240)), 480, None)'
], ['$open', '$high', '$low', '$close', '$open_1', '$high_1', '$low_1', '$close_1', '$volume', '$volume_1']
```
I have two questions here:
First, I believe the 'day length' variable in class HighFreqGeneralHandler should be divided by the frequency. In my case the frequency is 5 min, so a day has a length of only 48 bars, and Ref($close, 480) here actually refers to the close price from 10 days ago.
Second, the actual usage of the 'get_normalized_price_feature' function is inconsistent with its annotations. It seems that today's open, close, high, and low are divided by the close price from 2 days ago, while yesterday's are divided by yesterday's close price (and then Ref again? I don't know why). I'm a little confused here and am looking forward to a more detailed explanation. | open | 2024-05-10T07:29:07Z | 2024-05-10T07:29:07Z | https://github.com/microsoft/qlib/issues/1786 | [
"bug"
] | caozhenxiang-kouji | 0 |
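The first question is easy to sanity-check with arithmetic: at a 5-minute frequency a trading day is 48 bars, so an offset of 480 bars in `Ref($close, 480)` reaches back 10 trading days, which is what the question argues. The numbers below assume a 240-minute session, as in Chinese A-share markets.

```python
minutes_per_session = 240   # assumed 4-hour trading day
freq_minutes = 5
bars_per_day = minutes_per_session // freq_minutes
ref_bars = 480              # the offset used in Ref($close, 480)
print(bars_per_day, ref_bars // bars_per_day)  # → 48 10
```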