| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
miguelgrinberg/Flask-SocketIO | flask | 751 | Fetch and display real time data on to screen using Flask-SocketIO | Below is my app.py
```
#!/usr/bin/env python
from threading import Lock
from subprocess import Popen, PIPE
import flask
import subprocess
from flask import Flask, render_template, session, request
from flask_socketio import SocketIO, emit, join_room, leave_room, \
    close_room, rooms, disconnect
import time

async_mode = "threading"
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app, async_mode=async_mode)
thread = None
thread_lock = Lock()


@app.route('/')
def index():
    return render_template('index.html', async_mode=socketio.async_mode)


@socketio.on('my_event', namespace='/test')
def test_message(message):
    session['receive_count'] = session.get('receive_count', 0) + 1
    emit('my_response',
         {'data': message['data'], 'count': session['receive_count']})


@socketio.on('my_broadcast_event', namespace='/test')
def test_broadcast_message(message):
    session['receive_count'] = session.get('receive_count', 0) + 1
    emit('my_response',
         {'data': message['data'], 'count': session['receive_count']},
         broadcast=True)


@socketio.on('join', namespace='/test')
def join(message):
    join_room(message['room'])
    session['receive_count'] = session.get('receive_count', 0) + 1
    emit('my_response',
         {'data': 'In rooms: ' + ', '.join(rooms()),
          'count': session['receive_count']})


@socketio.on('leave', namespace='/test')
def leave(message):
    leave_room(message['room'])
    session['receive_count'] = session.get('receive_count', 0) + 1
    emit('my_response',
         {'data': 'In rooms: ' + ', '.join(rooms()),
          'count': session['receive_count']})


@socketio.on('close_room', namespace='/test')
def close(message):
    session['receive_count'] = session.get('receive_count', 0) + 1
    emit('my_response', {'data': 'Room ' + message['room'] + ' is closing.',
                         'count': session['receive_count']},
         room=message['room'])
    close_room(message['room'])


@socketio.on('my_room_event', namespace='/test')
def send_room_message(message):
    session['receive_count'] = session.get('receive_count', 0) + 1
    emit('my_response',
         {'data': message['data'], 'count': session['receive_count']},
         room=message['room'])


@socketio.on('disconnect_request', namespace='/test')
def disconnect_request():
    session['receive_count'] = session.get('receive_count', 0) + 1
    emit('my_response',
         {'data': 'Disconnected!', 'count': session['receive_count']})
    disconnect()


@socketio.on('my_ping', namespace='/test')
def ping_pong():
    emit('my_pong')


@socketio.on('connect', namespace='/test')
def test_connect():
    global thread
    with thread_lock:
        if thread is None:
            # NOTE: background_thread is referenced here but not defined in this snippet
            thread = socketio.start_background_task(target=background_thread)
    # call something with a lot of output so we can see it
    proc = subprocess.Popen(['netstat'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    while proc.poll() is None:
        output = proc.stdout.readline()
        if output == '':
            break
        print(str(output.strip()))
        emit('my_response', {'data': str(output.strip()), 'count': 0})


""" #call something with a lot of output so we can see it
@socketio.on('disconnect', namespace='/test')
def test_disconnect():
    print('Client disconnected', request.sid)"""


if __name__ == '__main__':
    socketio.run(app, debug=True)
```
My index.html file
```
<!DOCTYPE HTML>
<html>
<head>
<script type="text/javascript" src="//code.jquery.com/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/1.3.5/socket.io.min.js"></script>
<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
// Use a "/test" namespace.
// An application can open a connection on multiple namespaces, and
// Socket.IO will multiplex all those connections on a single
// physical channel. If you don't care about multiple channels, you
// can set the namespace to an empty string.
namespace = '/test';
// Connect to the Socket.IO server.
// The connection URL has the following format:
// http[s]://<domain>:<port>[/<namespace>]
var socket = io.connect(location.protocol + '//' + document.domain + ':' + location.port + namespace);
// Event handler for new connections.
// The callback function is invoked when a connection with the
// server is established.
socket.on('connect', function() {
socket.emit('my_event', {data: 'I\'m connected!'});
});
// Event handler for server sent data.
// The callback function is invoked whenever the server emits data
// to the client. The data is then displayed in the "Received"
// section of the page.
/*socket.on('my_response1', function(msg) {
console.log(msg);
$('#log1').append('<br>' + $('<div/>').text('Received #' + msg).html());
});*/
socket.on('my_response', function(msg) {
console.log(msg);
$('#log').append('<br>' + $('<div/>').text('Received #' + msg.data).html());
});
// Interval function that tests message latency by sending a "ping"
// message. The server then responds with a "pong" message and the
// round trip time is measured.
var ping_pong_times = [];
var start_time;
window.setInterval(function() {
start_time = (new Date).getTime();
socket.emit('my_ping');
}, 1000);
// Handler for the "pong" message. When the pong is received, the
// time from the ping is stored, and the average of the last 30
// samples is average and displayed.
socket.on('my_pong', function() {
var latency = (new Date).getTime() - start_time;
ping_pong_times.push(latency);
ping_pong_times = ping_pong_times.slice(-30); // keep last 30 samples
var sum = 0;
for (var i = 0; i < ping_pong_times.length; i++)
sum += ping_pong_times[i];
$('#ping-pong').text(Math.round(10 * sum / ping_pong_times.length) / 10);
});
// Handlers for the different forms in the page.
// These accept data from the user and send it to the server in a
// variety of ways
$('form#emit').submit(function(event) {
socket.emit('my_event', {data: $('#emit_data').val()});
return false;
});
$('form#broadcast').submit(function(event) {
socket.emit('my_broadcast_event', {data: $('#broadcast_data').val()});
return false;
});
$('form#join').submit(function(event) {
socket.emit('join', {room: $('#join_room').val()});
return false;
});
$('form#leave').submit(function(event) {
socket.emit('leave', {room: $('#leave_room').val()});
return false;
});
$('form#send_room').submit(function(event) {
socket.emit('my_room_event', {room: $('#room_name').val(), data: $('#room_data').val()});
return false;
});
$('form#close').submit(function(event) {
socket.emit('close_room', {room: $('#close_room').val()});
return false;
});
$('form#disconnect').submit(function(event) {
socket.emit('disconnect_request');
return false;
});
});
</script>
</head>
<body>
<h1>Flask-SocketIO Test</h1>
<p>Async mode is: <b>{{ async_mode }}</b></p>
<p>Average ping/pong latency: <b><span id="ping-pong"></span>ms</b></p>
<h2>Send:</h2>
<form id="emit" method="POST" action='#'>
<input type="text" name="emit_data" id="emit_data" placeholder="Message">
<input type="submit" value="Echo">
</form>
<form id="broadcast" method="POST" action='#'>
<input type="text" name="broadcast_data" id="broadcast_data" placeholder="Message">
<input type="submit" value="Broadcast">
</form>
<form id="join" method="POST" action='#'>
<input type="text" name="join_room" id="join_room" placeholder="Room Name">
<input type="submit" value="Join Room">
</form>
<form id="leave" method="POST" action='#'>
<input type="text" name="leave_room" id="leave_room" placeholder="Room Name">
<input type="submit" value="Leave Room">
</form>
<form id="send_room" method="POST" action='#'>
<input type="text" name="room_name" id="room_name" placeholder="Room Name">
<input type="text" name="room_data" id="room_data" placeholder="Message">
<input type="submit" value="Send to Room">
</form>
<form id="close" method="POST" action="#">
<input type="text" name="close_room" id="close_room" placeholder="Room Name">
<input type="submit" value="Close Room">
</form>
<form id="disconnect" method="POST" action="#">
<input type="submit" value="Disconnect">
</form>
<h2>Receive:</h2>
<div id="log"></div>
</body>
</html>
```
When I run the Python script and connect as a client from the browser, I want the 'netstat' command output to be displayed as and when I emit it to 'my_response'. But that is not happening: although I emit the response immediately when I receive output from the command, it is displayed in the browser only when the entire command execution is completed. I want it to be real time in the browser as well. print(str(output.strip())) is printing the netstat logs to the command line perfectly, which means I am also emitting data to 'my_response', but I only receive it in the browser after the entire execution. A minimal sketch of what I am after is below.
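For reference, here is the kind of streaming approach I am hoping will work (an untested sketch; I am assuming a background task with `socketio.sleep(0)` after each emit is the right way to let messages flush in threading mode):
```python
# Untested sketch: stream subprocess output from a background task and yield
# with socketio.sleep(0) after each emit so each message is pushed immediately.
def stream_netstat():
    proc = subprocess.Popen(['netstat'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for line in iter(proc.stdout.readline, b''):
        socketio.emit('my_response',
                      {'data': str(line.strip()), 'count': 0},
                      namespace='/test')
        socketio.sleep(0)  # give the server a chance to flush the message

@socketio.on('connect', namespace='/test')
def test_connect():
    socketio.start_background_task(stream_netstat)
```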
Can you help me with this @miguelgrinberg | closed | 2018-07-26T10:02:10Z | 2019-01-18T20:47:15Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/751 | [
"question"
] | pavansai1 | 2 |
JaidedAI/EasyOCR | pytorch | 325 | difference between farsi and arabic in digit 5 | Hi
There is a difference between the Farsi "۵" and the Arabic "٥", and this causes problems in detecting the Farsi digit five.
How can I solve this problem?
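For context, the two characters are distinct codepoints: Arabic-Indic "٥" is U+0665, while Farsi (Extended Arabic-Indic) "۵" is U+06F5. A post-processing sketch like the one below maps one set of digits to the other, although I would prefer a proper fix in detection:
```python
# Sketch: map Arabic-Indic digits (U+0660-U+0669) to Farsi / Extended
# Arabic-Indic digits (U+06F0-U+06F9) in the OCR output text.
ARABIC_TO_FARSI = str.maketrans('٠١٢٣٤٥٦٧٨٩', '۰۱۲۳۴۵۶۷۸۹')

def normalize_digits(text: str) -> str:
    return text.translate(ARABIC_TO_FARSI)

print(normalize_digits('٥'))  # -> '۵'
```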
Thanks | closed | 2020-12-06T13:02:00Z | 2024-05-24T17:11:24Z | https://github.com/JaidedAI/EasyOCR/issues/325 | [] | be42day | 5 |
modAL-python/modAL | scikit-learn | 19 | Replace np.sum(generator) with np.sum(np.fromiter(generator)) in modAL.utils.combination | ```np.sum(generator)``` throws a ```DeprecationWarning```; it should be replaced with ```np.sum(np.fromiter(generator, dtype=float))```. | closed | 2018-08-29T15:15:53Z | 2018-10-18T16:27:11Z | https://github.com/modAL-python/modAL/issues/19 | [] | cosmic-cortex | 0 |
sktime/sktime | data-science | 7,201 | [ENH] Avoid reloading TimesFM each time in expanding window | **Is your feature request related to a problem? Please describe.**
TimesFM is zero-shot, and there is no fitting. However, in the expanding window strategy, when `ExpandingWindowSplitter` is used, the `_fit()` method is used every time. This results in very slow forecasting and huge memory requirements.
Concretely, when using TimesFM package, I got ~16 GB RAM requirements, model loading took ~17s, and forecasts are basically instant. When using `sktime`, memory grows up to 32 GB and I get OOM error.
**Describe the solution you'd like**
Initialization code with loading the model should be called during model initialization in `__init__`. This is that line: https://github.com/sktime/sktime/blob/6958687521ddeed7459a2b74d3314b550fa1dbb5/sktime/forecasting/timesfm_forecaster.py#L214.
**Additional context**
The same applies for other zero-shot models. | closed | 2024-09-30T11:35:11Z | 2024-10-09T11:06:57Z | https://github.com/sktime/sktime/issues/7201 | [
"module:forecasting",
"enhancement"
] | j-adamczyk | 13 |
raphaelvallat/pingouin | pandas | 178 | Generalized Estimating Equations | Hello,
Pingouin is just what statisticians needed in the Python environment, thanks for your great work. I believe the addition of generalized linear models, specifically Generalized Estimating Equations (GEE), could provide considerable added value. GEE can be used for panel, clustered, or repeated-measures data when observations may be correlated within a cluster but uncorrelated across clusters. In addition, it requires none of the rm_anova or mixed ANOVA assumptions. | closed | 2021-06-01T05:52:52Z | 2021-06-24T23:30:00Z | https://github.com/raphaelvallat/pingouin/issues/178 | [
"feature request :construction:"
] | malekpour-mreza | 2 |
JaidedAI/EasyOCR | pytorch | 327 | Error occurs when I install | error: package directory 'libfuturize\tests' does not exist | closed | 2020-12-10T07:09:44Z | 2022-03-02T09:24:10Z | https://github.com/JaidedAI/EasyOCR/issues/327 | [] | AndyJMR | 3 |
gradio-app/gradio | data-science | 10,408 | How to update button status when there's no Button update function call in version 5 | - [ X ] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
Not a problem as such, but an incompatibility with the previous interface
**Describe the solution you'd like**
More docs support
**Additional context**
For some reason, I have to upgrade my gradio from version 3.32.0 to 5.12.0, which is the latest one.
However, I see some of the interfaces have been removed or changed.
Now I have a question: I have many buttons whose status needs to be updated when other events happen, and now
it seems the update function is not supported on the Button component. What should I do in this scenario?
Thanks,
Tommy
++++++++++++++++++++++++++++++++++++++++
Use gr.Button() to update the status directly. Sorry for bothering.
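For anyone who finds this later, a minimal sketch of the new pattern (assuming Gradio 5.x, where event handlers return component instances with the new props instead of calling the removed `.update()`):
```python
import gradio as gr

with gr.Blocks() as demo:
    btn = gr.Button("Run")
    out = gr.Textbox()

    def on_click():
        # Return a new gr.Button(...) carrying the updated props; Gradio
        # applies them to the existing component mapped in `outputs`.
        return gr.Button("Running...", interactive=False), "started"

    btn.click(on_click, outputs=[btn, out])

demo.launch()
```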
| closed | 2025-01-22T08:21:21Z | 2025-01-22T08:55:25Z | https://github.com/gradio-app/gradio/issues/10408 | [] | Yb2S3Man | 0 |
falconry/falcon | api | 2,376 | Falcon 3.1.3 installation breaks on python 3.13 | Hi, not sure if this bug was already known, but I did not find anything in the issue tracker.
It seems that installing `falcon==3.1.3` in a Python 3.13 virtualenv fails with the error `AttributeError: module 'falcon' has no attribute '__version__'`. The `4.0.0rc1` version installs just fine.
Tried on both Linux (under WSL) and Windows.
<details>
<summary>Full error</summary>
```
$ python3.13 -m pip install "falcon<4"
Defaulting to user installation because normal site-packages is not writeable
Collecting falcon<4
Using cached falcon-3.1.3.tar.gz (577 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [73 lines of output]
Traceback (most recent call last):
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/expand.py", line 69, in __getattr__
return next(
ast.literal_eval(value)
for target, value in self._find_assignments()
if isinstance(target, ast.Name) and target.id == attr
)
StopIteration
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/expand.py", line 183, in read_attr
return getattr(StaticModule(module_name, spec), attr_name)
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/expand.py", line 75, in __getattr__
raise AttributeError(f"{self.name} has no attribute {attr}") from e
AttributeError: falcon has no attribute __version__
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
~~~~^^
File "/usr/lib/python3/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/build_meta.py", line 302, in _get_build_requires
self.run_setup()
~~~~~~~~~~~~~~^^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/build_meta.py", line 318, in run_setup
exec(code, locals())
~~~~^^^^^^^^^^^^^^^^
File "<string>", line 197, in <module>
File "<string>", line 174, in run_setup
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/__init__.py", line 117, in setup
return distutils.core.setup(**attrs)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/_distutils/core.py", line 157, in setup
dist.parse_config_files()
~~~~~~~~~~~~~~~~~~~~~~~^^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/dist.py", line 643, in parse_config_files
setupcfg.parse_configuration(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
self, self.command_options, ignore_option_errors=ignore_option_errors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/setupcfg.py", line 193, in parse_configuration
meta.parse()
~~~~~~~~~~^^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/setupcfg.py", line 506, in parse
section_parser_method(section_options)
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/setupcfg.py", line 481, in parse_section
self[name] = value
~~~~^^^^^^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/setupcfg.py", line 299, in __setitem__
parsed = self.parsers.get(option_name, lambda x: x)(value)
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/setupcfg.py", line 598, in _parse_version
return expand.version(self._parse_attr(value, self.package_dir, self.root_dir))
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/setupcfg.py", line 423, in _parse_attr
return expand.read_attr(attr_desc, package_dir, root_dir)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-onbazeod/overlay/local/lib/python3.13/dist-packages/setuptools/config/expand.py", line 187, in read_attr
return getattr(module, attr_name)
AttributeError: module 'falcon' has no attribute '__version__'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
</details> | closed | 2024-10-16T17:47:02Z | 2024-10-18T18:18:20Z | https://github.com/falconry/falcon/issues/2376 | [
"duplicate",
"maintenance",
"question",
"community"
] | DavideCanton | 5 |
vaexio/vaex | data-science | 1,898 | [BUG-REPORT] Too much memory Consumption and not releasing it. | Hey There Vaex Team,
Basically I'm working on a task in which I'm using vaex. I have a dataset with 3 million rows and 7 columns. I wrote a function that does 3-4 groupbys, 2-3 joins, and 2-3 timedelta operations. Whenever I use this function, my memory usage jumps from 6.7 GB to 18 GB and is not released even after the function has fully executed. Even after 5 minutes the memory is not released, so it eventually reaches my 32 GB limit. If I close my Jupyter notebook, the memory is released instantly, but I don't want that because 80% of my code remains unexecuted due to the high memory consumption. Why does vaex use this much RAM? I have even tried gc.collect() but it didn't work for us. Attaching a screenshot of the Python terminal. Any idea how I should solve this?

| closed | 2022-02-09T18:55:24Z | 2022-12-15T06:04:48Z | https://github.com/vaexio/vaex/issues/1898 | [] | ashsharma96 | 17 |
inducer/pudb | pytest | 107 | Ability to sort variables by most recently changed | It would be nice if the variables view could be sorted with the most recently changed variables at the top, rather than alphabetically. It's often quite hard to follow the view because things change in different places, and most of the time you don't care about most of the variables in the list.
Something from https://github.com/inducer/pudb/issues/65 may be a prerequisite of this.
| open | 2014-02-21T16:46:42Z | 2014-06-07T01:44:58Z | https://github.com/inducer/pudb/issues/107 | [
"enhancement"
] | asmeurer | 0 |
ultralytics/yolov5 | machine-learning | 12,964 | Hello, I have some questions about the YOLOv5 code. Could you please help me answer them? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Here are my questions:
In dataloader.py, why does the following occur:
```python
if rect and shuffle:
    LOGGER.warning('WARNING ⚠️ --rect is incompatible with DataLoader shuffle, setting shuffle=False')
    shuffle = False

self.rect = False if image_weights else rect
```
In this code, why must the use of the rect strategy be prohibited when using either the shuffle or image_weights strategies?
In train.py, there are three questions regarding the following code:
```python
if RANK != -1:
    loss *= WORLD_SIZE  # gradient averaged between devices in DDP mode
if opt.quad:
    loss *= 4.
```
It's unclear where it specifies that the losses from all GPUs should be aggregated onto the primary GPU to form the total loss.
What is the significance of loss *= WORLD_SIZE?
Even if opt.quad is true, isn't loss already the total loss? Why multiply it by 4 instead of directly using the total loss for backpropagation?
In val.py, there is this line of code: preds, train_out = model(im) if compute_loss else (model(im, augment=augment), None). Here are my questions:
The model returns two values, but when I look at the return statement in yolo.py (return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)), it seems to return (torch.cat(z, 1), x). I understand that z represents various confidence scores for the bounding boxes, but why do we need torch.cat(z, 1)? Additionally, x is the output from line 53 of yolo.py, which corresponds to the CNN layers. However, this model is not the complete model; why is x considered the training output and used for calculating errors?
### Additional
_No response_ | closed | 2024-04-26T08:07:36Z | 2024-10-20T19:44:52Z | https://github.com/ultralytics/yolov5/issues/12964 | [
"question",
"Stale"
] | enjoynny | 3 |
lukasmasuch/streamlit-pydantic | streamlit | 43 | Publish version compatible with pydantic 2.x | Fixes for problems with pydantic 2.x have been merged into master.
Please publish the pydantic 2.x-compatible version to pypi (after proper testing).
Thanks!
| open | 2023-09-25T08:40:10Z | 2023-10-04T14:28:02Z | https://github.com/lukasmasuch/streamlit-pydantic/issues/43 | [] | szabi | 1 |
huggingface/datasets | tensorflow | 7,472 | Label casting during `map` process is canceled after the `map` process | ### Describe the bug
When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels, as [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and the forward function of models in the transformers package internally uses `BCEWithLogitsLoss`.
However, the casting was canceled after the `.map` process and the label values still use int values, which leads to an error
```
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 1711, in forward
loss = loss_fct(logits, labels)
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/loss.py", line 819, in forward
return F.binary_cross_entropy_with_logits(
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/functional.py", line 3628, in binary_cross_entropy_with_logits
return torch.binary_cross_entropy_with_logits(
RuntimeError: result type Float can't be cast to the desired output type Long
```
This seems to happen only when the original labels are int values (see the examples below)
### Steps to reproduce the bug
If the original dataset uses a list of int labels, it will cancel the int->float casting
```python
from datasets import Dataset

data = {
    'text': ['text1', 'text2', 'text3', 'text4'],
    'labels': [[0, 1, 2], [3], [3, 4], [3]]
}

dataset = Dataset.from_dict(data)
label_set = set([label for labels in data['labels'] for label in labels])
label2idx = {label: idx for idx, label in enumerate(sorted(label_set))}

def multi_labels_to_ids(labels):
    ids = [0.0] * len(label2idx)
    for label in labels:
        ids[label2idx[label]] = 1.0
    return ids

def preprocess(examples):
    result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]}
    print('"labels" are int', examples['labels'])
    result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']]
    print('"labels" were converted to multi-label format with float values', result['labels'])
    return result

preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text'])
print(preprocessed_dataset[0]['labels'])
# Output: "[1, 1, 1, 0, 0]"
# Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]"
```
If the original dataset uses non-int labels, it works as expected.
```python
from datasets import Dataset

data = {
    'text': ['text1', 'text2', 'text3', 'text4'],
    'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']]
}

dataset = Dataset.from_dict(data)
label_set = set([label for labels in data['labels'] for label in labels])
label2idx = {label: idx for idx, label in enumerate(sorted(label_set))}

def multi_labels_to_ids(labels):
    ids = [0.0] * len(label2idx)
    for label in labels:
        ids[label2idx[label]] = 1.0
    return ids

def preprocess(examples):
    result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]}
    print('"labels" are int', examples['labels'])
    result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']]
    print('"labels" were converted to multi-label format with float values', result['labels'])
    return result

preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text'])
print(preprocessed_dataset[0]['labels'])
# Output: "[1.0, 1.0, 1.0, 0.0, 0.0]"
# Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]"
```
Note that the only difference between these two examples is
> 'labels': [[0, 1, 2], [3], [3, 4], [3]]
vs.
> 'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']]
### Expected behavior
Even if the original dataset uses a list of int labels, the int->float casting during `.map` process should not be canceled as shown in the above example
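As a possible workaround (a sketch; arguably the cast should be honored without it), explicitly declaring the output features when calling `.map` pins the float type:
```python
from datasets import Features, Sequence, Value

features = Features({
    'sentence': Sequence(Value('int64')),
    'labels': Sequence(Value('float32')),
})

preprocessed_dataset = dataset.map(
    preprocess,
    batched=True,
    remove_columns=['labels', 'text'],
    features=features,  # pin the schema so the int type is not re-inferred
)
```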
### Environment info
OS Ubuntu 22.04 LTS
Python 3.10.11
datasets v3.4.1 | open | 2025-03-21T07:56:22Z | 2025-03-21T07:58:14Z | https://github.com/huggingface/datasets/issues/7472 | [] | yoshitomo-matsubara | 0 |
aminalaee/sqladmin | fastapi | 715 | After deleting objects, page size is not maintained | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
After deleting entries using the delete action, the page size is not kept as part of the redirect URL
### Steps to reproduce the bug
How to reproduce:
* Have a list page with many entries
* Display more than 10, e.g. select a page size of 100
* Delete all the entries
### Expected behavior
After the entries are deleted, the page size is still 100
### Actual behavior
The page size is reset to 10
### Debugging material
_No response_
### Environment
Python 3.11
SQL Admin 0.16.0
### Additional context
_No response_ | closed | 2024-02-19T15:33:16Z | 2024-05-13T09:33:49Z | https://github.com/aminalaee/sqladmin/issues/715 | [] | lorg | 0 |
gunthercox/ChatterBot | machine-learning | 1,883 | build dependencies error | closed | 2019-12-09T13:32:49Z | 2020-08-22T19:10:28Z | https://github.com/gunthercox/ChatterBot/issues/1883 | [
"invalid"
] | muhammmadrabi72 | 0 | |
davidsandberg/facenet | computer-vision | 937 | Guideline | Hi there, is there any guideline for following up on this project? | open | 2018-12-20T21:13:00Z | 2018-12-20T21:13:00Z | https://github.com/davidsandberg/facenet/issues/937 | [] | cod3r0k | 0 |
polarsource/polar | fastapi | 5,147 | Getting `500 Internal Server Error` while trying to create a customer | ### Description
So, I am trying to create a customer against a sandboxed organisation (`https://sandbox-api.polar.sh/v1/customers/`) with the Polar.sh SDK.
And I have run into very weird behavior: the email I am trying to use to create a customer does not already exist; when I run `customers.list` it does not appear. Yet I am getting a `500 Internal Server Error` when I try `customers.create` with the same email.
### Current Behavior
```ts
export const fromUserEmail = fn(z.string().min(1), async (email) => {
  const customers = await client.customers.list({ email })
  if (customers.result.items.length === 0) {
    return await client.customers.create({ email })
  } else {
    return customers.result.items[0]
  }
})
```
- Customer with this email does not exist...
- Customer with this email **CANNOT** be created, as the server throws a `HTTP 500 error`
> **Please note:** a user with this email had been previously created, and successfully deleted
> And I have not tested this with the production environment...
### Expected Behavior
This piece of code should work, regardless of whether the email has been added earlier and (successfully) deleted
```ts
export const fromUserEmail = fn(z.string().min(1), async (email) => {
  const customers = await client.customers.list({ email })
  if (customers.result.items.length === 0) {
    return await client.customers.create({ email })
  } else {
    return customers.result.items[0]
  }
})
```
### Screenshots


### Environment:
- WSL
- Brave
---
<!-- Thank you for contributing to Polar! We appreciate your help in improving it. -->
<!-- Questions: [Discord Server](https://discord.com/invite/Pnhfz3UThd). --> | closed | 2025-03-03T13:10:31Z | 2025-03-07T13:40:13Z | https://github.com/polarsource/polar/issues/5147 | [
"bug"
] | wanjohiryan | 2 |
frappe/frappe | rest-api | 29,883 | chore: Update CODE_OF_CONDUCT.md file | Fixing Grammar on the CODE_OF_CONDUCT.md file.
| open | 2025-01-22T06:07:44Z | 2025-01-22T06:07:44Z | https://github.com/frappe/frappe/issues/29883 | [
"feature-request"
] | chrisfrancis-dev | 0 |
neuml/txtai | nlp | 394 | What are the API endpoints to use the semantic graph? | Glad to know that txtai brought the semantic graph as its new feature. By the way, how do we actually use it if we have programs in other languages and expect it as an API?
Are there any API endpoints like search and extract? Is anybody using it? Please let me know. It would be better (at least for me) if we got details on how to translate the Python script into a YAML config file to use graphs, categories, and topic modeling.
**The following is my config file content to start the server using uvicorn:**
```
# Index file path
path: ./tmp/index

# Allow indexing of documents
writable: True

# Embeddings index
embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
  content: true

  # I manually added the lines below, up to the extractor part
  functions:
    - name: graph
      function: graph.attribute

  expressions:
    - name: category
      expression: graph(indexid, 'category')
    - name: topic
      expression: graph(indexid, 'topic')
    - name: topicrank
      expression: graph(indexid, 'topicrank')

  graph:
    limit: 15
    minscore: 0.1
    topics:
      categories:
        - Society & Culture
        - Science & Mathematics
        - Health
        - Education & Reference
        - Computers & Internet
        - Sports
        - Business & Finance
        - Entertainment & Music
        - Family & Relationships
        - Politics & Government

extractor:
  path: distilbert-base-cased-distilled-squad

textractor:
  paragraphs: true
  minlength: 100
  join: false
```
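For reference, once the service is up, this is the kind of call I expect to be able to make (host, port, and query are assumptions):
```python
import requests

# Query the running txtai API service with a SQL expression (sketch).
response = requests.get(
    "http://localhost:8000/search",
    params={"query": "select text, topic, category from txtai where similar('machine learning')"},
)
print(response.json())
```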
**Output:**
```
...
ModuleNotFoundError: No module named 'graph'
ERROR: Application startup failed. Exiting
``` | closed | 2022-12-06T07:59:22Z | 2023-01-24T03:17:04Z | https://github.com/neuml/txtai/issues/394 | [] | akset2X | 6 |
Python3WebSpider/ProxyPool | flask | 90 | Question about some of the configurable settings | Hi, could I ask which part of the environment the **APP_ENV** and **APP_DEBUG** configuration options in setting.py actually affect? Whether I modify or delete that part of the code, it has no effect on the program, and the server part is always in the production environment. If these are meant to control the Flask environment, shouldn't they be **FLASK_ENV** and **FLASK_DEBUG**? | closed | 2020-08-28T11:22:07Z | 2020-09-01T10:45:29Z | https://github.com/Python3WebSpider/ProxyPool/issues/90 | [] | Hui4401 | 3 |
sgl-project/sglang | pytorch | 3,890 | [Bug] --dp-size issue with AMD 8xMI300X and Llama 3.1 70B | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
Using --dp-size 4 --tp 2 on 8xMI300X does not work. Is this an MI300X issue or an issue with how I'm passing in sizes?
Error:
```
_TP = init_model_parallel_group(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/srt/distributed/parallel_state.py", line 890, in init_model_parallel_group
return GroupCoordinator(
^^^^^^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/srt/distributed/parallel_state.py", line 241, in __init__
self.ca_comm = CustomAllreduce(
^^^^^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/srt/distributed/device_communicators/custom_all_reduce.py", line 221, in __init__
self.meta_ptrs = self.create_shared_buffer(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/srt/distributed/device_communicators/custom_all_reduce.py", line 279, in create_shared_buffer
lib = CudaRTLibrary()
^^^^^^^^^^^^^^^
File "/sgl-workspace/sglang/python/sglang/srt/distributed/device_communicators/cuda_wrapper.py", line 117, in __init__
assert so_file is not None, "libcudart is not loaded in the current process"
^^^^^^^^^^^^^^^^^^^
AssertionError: libcudart is not loaded in the current process
[rank0]:[W226 12:24:24.546735073 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank0]:[W226 12:24:25.914339960 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank0]:[W226 12:24:25.054759318 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank0]:[W226 12:24:25.153719919 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
```
### Reproduction
```
python3 -m sglang.launch_server --model-path neuralmagic/Meta-Llama-3.1-70B-Instruct-FP8 --port 8000 --host 0.0.0.0 --dp-size 4 --tp 2 --trust-remote-code --mem-fraction-static 0.8 --max-running-requests 128 --disable-cuda-graph
```
on
```
lmsysorg/sglang:v0.4.3.post2-rocm630-srt
```
### Environment
python3 -m sglang.check_env
Python: 3.12.8 (main, Dec 4 2024, 08:54:12) [GCC 11.4.0]
ROCM available: True
GPU 0,1,2,3,4,5,6,7: AMD Instinct MI300X
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.4
ROCM_HOME: /opt/rocm
HIPCC: HIP version: 6.3.42131-fa1d09cbd
ROCM Driver Version: 6.7.0
PyTorch: 2.6.0a0+git8d4926e
sgl_kernel: 0.0.3.post6
flashinfer: Module Not Found
triton: 3.2.0
transformers: 4.48.3
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.11.11
fastapi: 0.115.6
hf_transfer: 0.1.9
huggingface_hub: 0.27.1
interegular: 0.3.3
modelscope: 1.23.0
orjson: 3.10.15
packaging: 24.2
psutil: 6.1.1
pydantic: 2.10.5
multipart: 0.0.20
zmq: 26.2.0
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.6.7.dev2+g113274a0
openai: 1.59.7
anthropic: Module Not Found
litellm: Module Not Found
decord: 0.6.0
AMD Topology:
============================ ROCm System Management Interface ============================
=============================== Link Type between two GPUs ===============================
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7
GPU0 0 XGMI XGMI XGMI XGMI XGMI XGMI XGMI
GPU1 XGMI 0 XGMI XGMI XGMI XGMI XGMI XGMI
GPU2 XGMI XGMI 0 XGMI XGMI XGMI XGMI XGMI
GPU3 XGMI XGMI XGMI 0 XGMI XGMI XGMI XGMI
GPU4 XGMI XGMI XGMI XGMI 0 XGMI XGMI XGMI
GPU5 XGMI XGMI XGMI XGMI XGMI 0 XGMI XGMI
GPU6 XGMI XGMI XGMI XGMI XGMI XGMI 0 XGMI
GPU7 XGMI XGMI XGMI XGMI XGMI XGMI XGMI 0
================================== End of ROCm SMI Log ===================================
ulimit soft: 1048576 | open | 2025-02-26T12:26:17Z | 2025-03-12T10:24:50Z | https://github.com/sgl-project/sglang/issues/3890 | [] | RonanKMcGovern | 9 |
alpacahq/alpaca-trade-api-python | rest-api | 422 | 422 Client Error: Unprocessable Entity for url | Was trying the demo,
```
from alpaca_trade_api.rest import REST
api = REST()
api.get_bars("AAPL", TimeFrame.Hour, "2021-02-08", "2021-02-08", limit=10, adjustment='raw').df
```
got that error with this message at the bottom
```
alpaca_trade_api.rest.APIError: limit must be large enough to compute an aggregate bar
```
Turns out the time span isn't long enough, use this instead
```
api.get_bars("AAPL", TimeFrame.Hour, "2021-02-08", "2021-02-12", limit=10, adjustment='raw').df
```
Can someone please fix the readme? | closed | 2021-04-26T20:10:22Z | 2021-07-02T09:17:21Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/422 | [] | edukaded | 2 |
microsoft/nni | machine-learning | 5,072 | ProxylessNAS example accuracy and loss not updated + strategy.Proxyless() support | **Describe the issue**:
I'm running the ProxylessNAS example. The accuracy and loss haven't been updated after 112 epochs.

What is the expected behavior?
I found out that in version 2.8, strategy.Proxyless() was added in place of the deprecated ProxylessTrainer(). Is there a plan to create a similar ProxylessNAS example? Or, how can I modify it for strategy.Proxyless()?
**Environment**:
- NNI version: 2.8
- Training service (local|remote|pai|aml|etc): local
- Client OS: Linux
- Server OS (for remote mode only):
- Python version: 3.6
- PyTorch/TensorFlow version: 1.7
- Is conda/virtualenv/venv used?: yes
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**: | open | 2022-08-17T23:45:21Z | 2022-08-19T03:36:23Z | https://github.com/microsoft/nni/issues/5072 | [] | mahdihey | 4 |
keras-team/keras | python | 20,320 | Model Accuracy Degradation by 6x when Switching TF_USE_LEGACY_KERAS from "1" (Keras 2) to "0" (Keras 3) | ### Summary
There is a significant degradation in model performance when changing the `TF_USE_LEGACY_KERAS` environment variable between Keras 2 and Keras 3 in an Encoder-Decoder Network for Neural Machine Translation. With `os.environ["TF_USE_LEGACY_KERAS"] = "1"` (Keras 2), the validation set accuracy is much higher (60% vs. 10%) compared to when `os.environ["TF_USE_LEGACY_KERAS"] = "0"` (Keras 3), despite no changes in the model architecture or training procedure.
### System Information:
- **Python version**: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
- **TensorFlow version**: 2.17.0
- **Keras version**: 3.4.1
- **Environment**: Google Colab
### Steps to Reproduce:
1. Set `os.environ["TF_USE_LEGACY_KERAS"] = "1"` to use Keras 2 (before the first TensorFlow import; see the snippet after this list) and [run the Encoder-Decoder model](https://colab.research.google.com/drive/1fBqRrj70V7fOyD7kGAbXg--UhBZDsbZz?usp=sharing).
2. Set `os.environ["TF_USE_LEGACY_KERAS"] = "0"` to use Keras 3 and [run the same model](https://colab.research.google.com/drive/1K8zQlmu7vKvtTmQ-CPrpDcmjZ-cxjnrq?usp=sharing).
3. Compare the validation accuracy between the two setups.
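For reference, the toggle only takes effect if it is applied before the first TensorFlow import, e.g. (a minimal sketch):
```python
import os

# Must be set before tensorflow/keras is imported anywhere in the process,
# otherwise the flag has no effect.
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # "1" -> Keras 2, "0" -> Keras 3

import tensorflow as tf  # noqa: E402
```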
### Expected Results:
The validation accuracy should remain consistent between both runs, or at least be comparable.
Looking for guidance on the cause of this discrepancy and possible ways to resolve this performance issue.
| closed | 2024-10-03T13:08:56Z | 2024-11-06T02:00:48Z | https://github.com/keras-team/keras/issues/20320 | [
"type:support",
"stat:awaiting response from contributor",
"stale"
] | Lw-Cui | 6 |
streamlit/streamlit | data-visualization | 10,863 | Enable "Download as CSV" for large dataframes | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Hi,
"Download as CSV" is not shown for large dataframes. In following example, it's shown for `df1` but not for `df2`.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10863)
```Python
import numpy as np
import pandas as pd
import streamlit as st
df1 = pd.DataFrame(np.random.randint(0, 100, size=(100_000, 4)), columns=list("ABCD"))
st.dataframe(df1)
df2 = pd.DataFrame(np.random.randint(0, 100, size=(1_000_000, 4)), columns=list("ABCD"))
st.dataframe(df2)
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.43.2
- Python version: 3.12
- Operating System: Windows 11
- Browser: Google Chrome
### Additional Information
_No response_ | open | 2025-03-20T10:46:55Z | 2025-03-20T17:11:03Z | https://github.com/streamlit/streamlit/issues/10863 | [
"type:enhancement",
"feature:st.dataframe",
"feature:st.data_editor"
] | ghilesmeddour | 6 |
mlfoundations/open_clip | computer-vision | 861 | Training speed slow | Hi,
I found that training speed slows down when the number of GPUs is more than 2. Is it because more GPUs bring a larger batch size to compute, and all_gather takes up some time?
Best | closed | 2024-04-14T02:41:03Z | 2024-05-09T05:08:56Z | https://github.com/mlfoundations/open_clip/issues/861 | [] | lezhang7 | 1 |
serengil/deepface | deep-learning | 1,058 | Efficiency: multiple overlaying copies of data in face detection | I noticed the process for face detection is quite expensive.
For instance, when we call `DeepFace.extract_faces`, the call stack is
```
DeepFace.extract_faces
detection.extract_faces
DetectorWrapper.detect_faces
[whateverdetector].detect_faces
```
- Face detector returns a `List[FacialAreaRegion]`
- DetectorWrapper.detect_faces gets the results and transforms them into a `List[DetectedFaces]`
- detection.extract_faces grabs the previous results and transforms into a `List[Dict[str, Any]]`
I would advise unifying the return types so that the first results (possibly adjusted by intermediate levels) propagate to the caller with no transformation; a sketch follows.
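For illustration, a sketch of what I mean (the names are assumptions, not deepface's actual API): the detector fills one shared result type, and the wrapper layers just forward it.
```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DetectedFace:
    img: np.ndarray      # cropped face region
    x: int
    y: int
    w: int
    h: int
    confidence: float

def detect_faces(img) -> "list[DetectedFace]":
    ...  # each detector builds DetectedFace directly; callers receive it unchanged
```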
| closed | 2024-03-01T16:06:35Z | 2024-03-01T16:13:46Z | https://github.com/serengil/deepface/issues/1058 | [
"question"
] | AndreaLanfranchi | 3 |
deepset-ai/haystack | machine-learning | 9,017 | Add run_async for `AzureOpenAITextEmbedder` | We should be able to reuse the implementation when it is made for `OpenAITextEmbedder` | open | 2025-03-11T11:07:32Z | 2025-03-21T08:59:04Z | https://github.com/deepset-ai/haystack/issues/9017 | [
"Contributions wanted!",
"P2"
] | sjrl | 0 |
proplot-dev/proplot | matplotlib | 58 | transform=ccrs.PlateCarree() now required when making maps | I think after 77f2b71b4927f9aa8b7024bbca53b87239203f2f, this issue arose.
Previously, one could do something like:
```python
import numpy as np
import proplot as plot
import cartopy.crs as ccrs
data = np.random.rand(180, 360)
lats = np.linspace(-89.5, 89.5, 180)
lons = np.linspace(-179.5, 179.5, 360)
f, ax = plot.subplots(proj='robin')
ax.pcolormesh(lons, lats, data)
```
and you'd get the map. Now it's showing up blank and requires `ax.pcolormesh(lons, lats, data, transform=ccrs.PlateCarree())`. Maybe minor but a nice thing to not have to worry about with `proplot`. | closed | 2019-10-25T21:58:15Z | 2019-10-29T21:36:28Z | https://github.com/proplot-dev/proplot/issues/58 | [
"bug"
] | bradyrx | 2 |
Johnserf-Seed/TikTokDownload | api | 352 | [BUG] | I ran Server.py in the Util directory as described in the README, and the terminal window shows the service has started. Following other threads, I also pressed Enter in the terminal window, but it keeps reporting the same error. Could someone please help take a look?


| closed | 2023-03-16T12:27:59Z | 2023-03-24T06:56:58Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/352 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | huyin324 | 2 |
wkentaro/labelme | deep-learning | 754 | [BUG] about the json to dataset | The labelme version is 4.5.6.
I have encountered the following problem. The first image is annotated with three types of objects, and the second image is annotated with two types of objects. After running json_to_dataset, the same kind of object is given a different color in the two label.png files. Is this a problem with how I am operating it? What should I do? Thank you
| closed | 2020-08-17T11:10:25Z | 2020-08-17T11:20:23Z | https://github.com/wkentaro/labelme/issues/754 | [
"issue::bug"
] | bao258456 | 0 |
dagster-io/dagster | data-science | 27,943 | Failing to materialize a single output of a multi asset fails the whole execution | ### What's the issue?
Hello Dagster!
It seems that when a single output of a multi asset fails during HANDLE_OUTPUT, the whole multi asset is aborted.
It seems that there is no way to continue the execution of the multiasset and materialize the other outputs in case one of them fails.
The obvious api would be
```python
@multi_asset(outs={'a': AssetOut(), 'b': AssetOut()})
def mymultiasset(context):
try:
yield Output(None, output_name='a')
except Exception as e:
context.log.info('except')
context.log.info(repr(e))
finally:
context.log.info('finally')
context.log.info('after')
yield Output(None, output_name='b')
```
but this just logs "finally" and then stops execution.
It seems that the executor is not throwing any errors to the execution function, as the "except" block was not executed.
I have a suspicion that the "finally" block executed only because of the implicit __del__ destructor that generators with "try" or "with" get automatically, not because of dagster orchestration.
In any case, when (and if) the execution gets to the second "yield Output", dagster ignores it completely.
With single-assets, the only usecase for this is resource cleanup, such as
```python
@asset
def myasset(context):
with resource_that_needs_cleanup:
yield Output(None)
```
But here it seems to work well, and the __exit__ function of the context manager is always executed, probably also because of the implicit destructor.
I don't see very much into the internals of dagster, but IMHO the correct behaviour would be to throw the error inside the execution function generator and let it deal with it.
I understand that this is maybe more of a feature request than a bug, but it deviates from what would be naively expected and is undocumented (AFAIK), so I labelled it as a bug. Feel free to relabel it.
### What did you expect to happen?
I would expect that the IOManager exception (bare, or wrapped in some IOManagerHandleOutputError(exc)) will be thrown to the asset execution function.
The execution function generator will be resumed with `f.throw(exc)`.
It could decide to handle the error in its own way, optionally continuing to yield other outputs, or simply perform cleanup.
The failed output would be marked as failed.
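To make the proposal concrete, here is a minimal sketch of the intended semantics using plain Python generators (no dagster involved):
```python
def asset_fn():
    try:
        yield "a"                      # output 'a'; its handle_output fails
    except ValueError as e:
        print("handled:", e)           # the asset code decides what to do
    yield "b"                          # execution continues; 'b' is still produced

gen = asset_fn()
first = next(gen)                                       # runs up to the first yield
second = gen.throw(ValueError("handle_output failed"))  # resume with the error
print(first, second)                                    # -> a b
```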
### How to reproduce?
```python
from dagster import *


class IO(ConfigurableIOManager):
    def handle_output(self, context, obj):
        raise ValueError

    def load_input(self, context):
        ...


@multi_asset(
    can_subset=True,
    outs={
        'a': AssetOut(io_manager_key='io', is_required=False),
        'b': AssetOut(),
    }
)
def mymultiasset(context):
    try:
        yield Output(None, output_name='a')
    except Exception as e:
        context.log.info('except')
        context.log.info(repr(e))
    finally:
        context.log.info('finally')
    context.log.info('after')
    yield Output(None, output_name='b')


@asset(io_manager_key='io')
def myasset(context):
    try:
        yield Output(None)
    finally:
        context.log.info('ok')


job = define_asset_job(
    'job',
    [myasset],
)

multijob = define_asset_job(
    'multijob',
    [mymultiasset],
)

defs = Definitions(
    assets=[myasset, mymultiasset],
    jobs=[job, multijob],
    resources={'io': IO()},
)
```
### Dagster version
dagster, version 1.10.1
### Deployment type
Local
### Deployment details
MacOS arm64
### Additional information
_No response_
### Message from the maintainers
Impacted by this issue? Give it a 👍! We factor engagement into prioritization. | open | 2025-02-20T14:18:43Z | 2025-02-20T14:18:43Z | https://github.com/dagster-io/dagster/issues/27943 | [
"type: bug"
] | vladidobro | 0 |
plotly/dash | data-science | 3,050 | investigate ways to reduce bundle size | The dash bundle is approx. 10 Mbyte, which will be a problem for WASM deploys. We should investigate ways to reduce its size.
cf. https://github.com/plotly/plotly.py/issues/4817 | open | 2024-10-22T13:58:41Z | 2024-11-12T14:37:38Z | https://github.com/plotly/dash/issues/3050 | [
"performance",
"feature",
"P2"
] | gvwilson | 1 |
PokeAPI/pokeapi | api | 292 | Pokemon Sound? | I'm developing a Pokedex Alexa Skill (you can find it as Unofficial Pokedex). It would be so awesome if I could include the sound of each Pokemon, like a 'pika pika' hehehe.
Do the sound audio files for all Pokemon exist anywhere?
| closed | 2017-06-07T21:28:25Z | 2019-09-19T17:12:47Z | https://github.com/PokeAPI/pokeapi/issues/292 | [] | laurenceHR | 3 |
iMerica/dj-rest-auth | rest-api | 133 | How to override LoginView? | Hello! I need to return a different token serializer response based on the type of user (regular and API user). I think I can do the selection of the serializer by overriding the method `get_response_serializer` in the `LoginView`. My question is: how do I override the LoginView in my code? To override the `LoginSerializer` I've modified the proper setting as in the `dj-rest-auth` manual, but where should I put my custom `LoginView`? Could you kindly provide a simple example? Many thanks! | closed | 2020-08-25T19:47:41Z | 2020-08-25T22:33:41Z | https://github.com/iMerica/dj-rest-auth/issues/133 | [] | fessacchiotto | 3 |
sanic-org/sanic | asyncio | 2,559 | Failure to find registered application | **Describe the bug**
Since version 22.9.0 and I believe https://github.com/sanic-org/sanic/pull/2499, I get errors for "app not found" during startup:
```
return cls._app_registry[name]
KeyError: 'api-server'
...
sanic.exceptions.SanicException: Sanic app name "api-server" not found.
```
**Code snippet**
<!-- Relevant source code, make sure to remove what is not necessary. -->
```python
from sanic import HTTPResponse
from sanic import Request
from sanic import Sanic
from sanic import response


class MySanicSubclass(Sanic):
    def __init__(self) -> None:
        super().__init__(name='sanic')
        self.register_endpoints()

    def register_endpoints(self) -> None:
        """This method registers endpoints/blueprints etc; currently registers a single route to make sanic happy"""
        self.get('/')(self.dummy_route)

    async def dummy_route(self, request: Request) -> HTTPResponse:
        return response.html('Hello world')


if __name__ == '__main__':
    app = MySanicSubclass()
    app.run(
        motd=False,
        # legacy=True,  # <- will fix the issue, but will be deprecated after 23.3
        dev=True,
    )
```
**Expected behavior**
No exception is thrown.
**Environment (please complete the following information):**
<!-- Please provide the information below. Instead, you can copy and paste the message that Sanic shows on startup. If you do, please remember to format it with ``` -->
- OS: Linux
- Sanic Version: 22.9.0
**Additional context**
I **believe** this is because I do NOT have a global `app` object like usual. Instead, I subclass `Sanic` and add my own start up code, then instantiate the server separately in a main entry point and therefor your new launcher cannot find a "registered" instance. I'm aware of the existance of `legacy=True` but it will be deprecated in version v23.3.
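For now, the only arrangement I found that works with the new launcher is a sketch like this (module-level instantiation, so that the worker processes that re-import this module can find the registered app):
```python
# Workaround sketch: create the app at import time so the new worker manager,
# which re-imports this module, finds it in the app registry.
app = MySanicSubclass()

if __name__ == '__main__':
    app.run(motd=False, dev=True)
```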
In addition, this behavior will not be supportable once https://peps.python.org/pep-0690/ becomes a permanent part of Python, as your registration behavior relies on code execution during import. | closed | 2022-10-02T15:53:28Z | 2022-10-06T14:49:15Z | https://github.com/sanic-org/sanic/issues/2559 | [] | LiraNuna | 14 |
pytorch/vision | computer-vision | 8,364 | Improved functionality for Oxford IIIT Pet data loader | ### 🚀 The feature
Add the following functionality to the Oxford IIIT Pet data loader
1. Support binary classification of cat vs dog
2. With the segmentation target type, produce trimaps with class/background/don't care regions instead of target/background/don't care when the output is a tensor
3. Support detection as a target type
### Motivation, pitch
The Oxford IIIT Pet dataset is a fun dataset for trying out new things and for new practitioners to use to learn. These capabilities allow users to more easily use this dataset with detection and segmentation target types and to use the existing annotation for animal species (rather than breed) as a simpler problem to get started. I have created these capabilities on my local copy of torchvision, and I'm up for creating a PR if the community likes the enhancements. The individual proposed enhancements can be found in the links:
[Binary cat v dog](https://github.com/matlabninja/contribution_staging/blob/main/torchvision/pet_dataloader_bin/oxford_iiit_pet.py)
[Class labeled segmentation](https://github.com/matlabninja/contribution_staging/blob/main/torchvision/pet_dataloader_seg/oxford_iiit_pet.py)
[Detection target type](https://github.com/matlabninja/contribution_staging/blob/main/torchvision/pet_dataloader_detect/oxford_iiit_pet.py)
[All 3 enhancements](https://github.com/matlabninja/contribution_staging/blob/main/torchvision/petbuild/oxford_iiit_pet.py)
### Alternatives
I thought a lot about the ability to write transforms to use with a dataset loader to accomplish this, but it was unclear to me how I could access some of the class members of the dataset loaders that were necessary.
### Additional context
Demonstration of the new features can be found in the following notebooks:
[Class-labeled segmentation maps and binary species classification training Deeplab V3](https://github.com/matlabninja/contribution_staging/blob/main/notebooks/deeplabv3_resnet_bin.ipynb)
[Detection target type training resnet 50 faster RCNN](https://github.com/matlabninja/contribution_staging/blob/main/notebooks/resnet_detect.ipynb) | open | 2024-04-02T00:35:46Z | 2024-04-19T20:11:21Z | https://github.com/pytorch/vision/issues/8364 | [] | matlabninja | 2 |
exaloop/codon | numpy | 643 | Metaprogramming and AST manipulation from Codon | Is there a way to build an AST directly from Codon and resolve it at compile time (essentially metaprogramming/macros)? | open | 2025-03-21T09:22:47Z | 2025-03-24T19:07:56Z | https://github.com/exaloop/codon/issues/643 | [] | Clonkk | 1 |
huggingface/datasets | numpy | 7,412 | Index Error Invalid Key is out of bounds for size 0 for code-search-net/code_search_net dataset | ### Describe the bug
I am trying to do model pruning on sentence-transformers/all-MiniLM-L6-v2 for the code-search-net/code_search_net dataset using the INCTrainer class.
However, I am getting the error below:
```
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 1840208 is out of bounds for size 0
```
### Steps to reproduce the bug
Model pruning on the above dataset using the guide below:
https://huggingface.co/docs/optimum/en/intel/neural_compressor/optimization#pruning
### Expected behavior
The model should be successfully pruned.
### Environment info
Torch version: 2.4.1
Python version: 3.8.10 | open | 2025-02-18T05:58:33Z | 2025-02-18T06:42:07Z | https://github.com/huggingface/datasets/issues/7412 | [] | harshakhmk | 0 |
matplotlib/matplotlib | data-science | 29,551 | [Bug]: 3D tick label position jitter when rotating the plot view | It seems like there is some rounding going on with 3D tick label positions. When rotating the plot, the positions of the labels move around a bit. This may be intentional to line up pixel values, or it may be an artifact, but the result is "jittering" that looks bad.
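A minimal sketch of the kind of interaction where the jitter shows up (the data is arbitrary; any 3D axes should do):
```python
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 6, 100)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(np.sin(t), np.cos(t), t)

# Rotate the view slowly and watch the tick labels jump by about a
# pixel between frames instead of moving smoothly.
for azim in np.linspace(0, 90, 200):
    ax.view_init(elev=30, azim=azim)
    plt.pause(0.01)
```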
This becomes most noticeable in animations:
https://github.com/scottshambaugh/mpl_stereo/blob/7c20f7123173eed89370ffa4227be85de514b3a3/docs/trefoil_3d_animation.gif
May be related to https://github.com/matplotlib/matplotlib/issues/13044? Though there aren't 2D rotations happening here afaik.
TODO: check if the axis labels are also affected | open | 2025-01-30T17:38:36Z | 2025-02-02T02:00:14Z | https://github.com/matplotlib/matplotlib/issues/29551 | [
"topic: text",
"backend: agg"
] | scottshambaugh | 7 |
wger-project/wger | django | 1,126 | Rework the user preferences | The current user preferences need to be cleaned up somewhat
- Remove obsolete options
- Birthdate shouldn't be a required field
- Better error messages for birthdate
- Possibly: reimplement the settings page in react | closed | 2022-09-24T14:56:18Z | 2022-10-05T11:46:37Z | https://github.com/wger-project/wger/issues/1126 | [] | rolandgeider | 1 |
mljar/mljar-supervised | scikit-learn | 346 | Add support for `RMSLE` eval_metric | - dont need to do logarithm on the target because log transform is in the metric
- target values need to be positive | closed | 2021-03-23T12:28:36Z | 2021-04-27T08:04:36Z | https://github.com/mljar/mljar-supervised/issues/346 | [] | pplonski | 1 |
tflearn/tflearn | tensorflow | 324 | Multiple Input layers? | I would like to feed the metadata of an image to my network.
How should I get started with implementing a network with two input layers?
Is it possible with tflearn?
Let's say I have an input image with shape 3x32x32 and 10 metadata features.
I define two input layers:
```
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.merge_ops import merge

# Building network
network = input_data(shape=[None, 32, 32, 3])
in2 = input_data(shape=[None, 10])
network = conv_2d(network, 64, 3, activation='leaky_relu')
network = conv_2d(network, 64, 3, activation='leaky_relu')
network = max_pool_2d(network, 2, strides=2)
network = dropout(network, 0.5)
network = conv_2d(network, 64, 3, activation='leaky_relu')
network = conv_2d(network, 64, 3, activation='leaky_relu')
network = max_pool_2d(network, 2, strides=2)
network = dropout(network, 0.5)
network = fully_connected(network, 128, activation='leaky_relu')
network = dropout(network, 0.5)
network = merge([network, in2], 'concat')
network = fully_connected(network, 128, activation='leaky_relu')
network = dropout(network, 0.5)
network = fully_connected(network, 8, activation='softmax')
```
Should I also do something else?
Can I just do the following:
`model.fit(Ximg, Xmeta, Y)`
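For context, here is the rest of the pipeline as I imagine it (the `regression` layer and the list-style `fit` call are my guesses, not something I have verified):
```python
import tflearn
from tflearn.layers.estimator import regression

# Guess: finish the graph with a regression layer, then wrap it in DNN
network = regression(network, optimizer='adam',
                     loss='categorical_crossentropy')
model = tflearn.DNN(network)

# Guess: with two input_data layers, the input arrays are passed as a
# list, in the order the layers were defined.
model.fit([Ximg, Xmeta], Y, n_epoch=10, show_metric=True)
```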
Thanks in advance
| open | 2016-09-01T20:53:24Z | 2017-11-10T08:48:41Z | https://github.com/tflearn/tflearn/issues/324 | [] | EliasVansteenkiste | 2 |
Lightning-AI/pytorch-lightning | machine-learning | 20,664 | MLFlowLogger fails to log artifact on Windows | ### Bug description
Error when training with "MLFlowLogger" and with `log_models="all"`, running on Windows:
```
mlflow.exceptions.MlflowException: Invalid artifact path: 'epoch=0-step=43654'. Names may be treated as files in certain cases, and must not resolve to other names when treated as such. This name would resolve to 'epoch=0-step=43654'.
```
I was able to find the reason for this error:
`MLFlowLogger` is calling `MLflowClient.log_artifact(...)` internally -- inside method `MLFlowLogger._scan_and_log_checkpoints(...)` -- passing the artifact path as a `pathlib.Path` object. However, MLFlow expects paths in the POSIX format, whereas `pathlib.Path` will use the current filesystem format.
Now, the actual reason for the error is due to an internal check that `MLFlow` does. It tries to verify that the path is already "normalized", that is, doesn't contain "." or "..". The way it does that is by calling `posixpath.normpath(...)` and checking if the output is the same as the original path. Because the original path is a "pathlib.Path" object, both paths do not match. I am assuming that if the platform was different from Windows, the comparison would match because the internal string representation of `pathlib.Path` would be the same.
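A minimal sketch of the mismatch I am describing (the variable names are mine):
```python
import posixpath
from pathlib import Path

artifact_path = Path("epoch=0-step=43654")      # what MLFlowLogger passes
normalized = posixpath.normpath(artifact_path)  # returns a plain str

print(normalized == artifact_path)       # False: a str never equals a Path
print(normalized == str(artifact_path))  # True once both sides are strings
```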
### What version are you seeing the problem on?
v2.5
### How to reproduce the bug
```python
```
### Error messages and logs
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.5.0): 2.5.1
#- PyTorch Version (e.g., 2.5): 2.6.0
#- Python version (e.g., 3.12): 3.12.9
#- OS (e.g., Linux): Windows
#- CUDA/cuDNN version: 12.6
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_ | open | 2025-03-22T02:00:08Z | 2025-03-24T15:08:05Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20664 | [
"bug",
"needs triage",
"ver: 2.5.x"
] | niander | 1 |
minimaxir/textgenrnn | tensorflow | 144 | TPU Support? | Is there a way to use a tensor processing unit for acceleration? If not, is this feature going to be added in the future? | open | 2019-07-27T21:12:39Z | 2022-07-06T00:19:30Z | https://github.com/minimaxir/textgenrnn/issues/144 | [] | aidanmclaughlin | 2 |
Evil0ctal/Douyin_TikTok_Download_API | api | 14 | tiktok | Parsing domestic (Douyin) links works fine.
For TikTok, not a single link succeeds.
The web UI is completely stuck, 100% of the time.
The API gives no response.
I don't know whether something was updated again... | closed | 2022-04-18T09:28:28Z | 2022-04-23T22:08:04Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/14 | [] | pengneal | 8 |
vitalik/django-ninja | django | 702 | [BUG] GET operations do not respect Pydantic config options | **Describe the bug**
When using a Schema for GET parameters (as documented [here](https://django-ninja.rest-framework.com/guides/input/query-params/#using-schema)), django-ninja ignores the `extra=Extra.forbid` declaration on the schema. The intended purpose of this configuration option is to cause validation to fail if extra parameters are provided. See [here](https://docs.pydantic.dev/usage/model_config/). This works when using POST requests, but not GET.
An example would be something like this:
```python
from ninja import Query, Router, Schema
from pydantic import Extra

router = Router()


class TestSchema(Schema, extra=Extra.forbid):
a: str
@router.get("/")
def get_endpoint(request, params: TestSchema = Query(...)):
# succeeds when passing extra params
return params
@router.post("/")
def post_endpoint(request, body: TestSchema):
# fails when passing extra params
return body
```
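For reference, this is roughly how the difference shows up (the host, port, and mount path are assumptions from my local setup):
```python
import requests

base = "http://localhost:8000/api/"  # wherever the router is mounted

r = requests.get(base, params={"a": "x", "extra": 1})
print(r.status_code)  # 200 -- the extra query param is silently accepted

r = requests.post(base, json={"a": "x", "extra": 1})
print(r.status_code)  # 422 -- validation fails as expected
```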
**Versions (please complete the following information):**
- Python version: [e.g. 3.10]
- Django version: [e.g. 4.1.7]
- Django-Ninja version: [e.g. 0.19.1]
- Pydantic version: [e.g. 1.10.5] | open | 2023-03-14T22:41:35Z | 2023-04-21T22:05:10Z | https://github.com/vitalik/django-ninja/issues/702 | [
"help wanted"
] | scott-8 | 2 |
fbdesignpro/sweetviz | data-visualization | 177 | compare_intra method should limit the names parameter length to 2 | Hi, I was just exploring your package and I found the compare_intra feature pretty interesting. I just thought it would be better if we could limit the `names` tuple size to 2.
my_report = sv.compare_intra(source_df=df, condition_series=df["Type"] == "SUV", names=['SUV', 'Sedan', 'Sports Car', 'Wagon', 'Minivan']) | open | 2024-08-29T10:40:03Z | 2024-08-29T10:40:03Z | https://github.com/fbdesignpro/sweetviz/issues/177 | [] | Ruchita-debug | 0 |
PaddlePaddle/models | nlp | 4,770 | Broken link in the documentation | Documentation page: https://github.com/PaddlePaddle/models/blob/release/1.8/README.md#%E8%AF%AD%E4%B9%89%E8%A1%A8%E7%A4%BA
The broken link:

| open | 2020-07-27T04:24:47Z | 2020-07-28T03:30:56Z | https://github.com/PaddlePaddle/models/issues/4770 | [] | howl-anderson | 0 |
jeffknupp/sandman2 | rest-api | 116 | sqlalchemy.exc.ArgumentError | I am getting an error from the get-go; any thoughts? This is for MSSQL 2012.
```
sandman2ctl mssql+pymssql://USERNAMEHERE:PASSWORDHERE@HOSTHERE/DBNAMEHERE
Traceback (most recent call last):
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/bin/sandman2ctl", line 10, in <module>
sys.exit(main())
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sandman2/scripts/sandman2ctl.py", line 51, in main
app = get_app(args.URI, read_only=args.read_only, schema=args.schema)
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sandman2/app.py", line 60, in get_app
_reflect_all(exclude_tables, admin, read_only, schema=schema)
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sandman2/app.py", line 139, in _reflect_all
register_model(cls, admin)
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sandman2/app.py", line 156, in register_model
cols = list(cls().__table__.primary_key.columns)
File "<string>", line 2, in __init__
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/instrumentation.py", line 373, in _new_state_if_none
state = self._state_constructor(instance, self)
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 855, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/instrumentation.py", line 199, in _state_constructor
self.dispatch.first_init(self, self.class_)
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/event/attr.py", line 297, in __call__
fn(*args, **kw)
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py", line 3341, in _event_on_first_init
configure_mappers()
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py", line 3229, in configure_mappers
mapper._post_configure_properties()
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py", line 1947, in _post_configure_properties
prop.init()
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/interfaces.py", line 196, in init
self.do_init()
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py", line 1864, in do_init
self._generate_backref()
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py", line 2121, in _generate_backref
mapper._configure_property(backref_key, relationship)
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/mapper.py", line 1840, in _configure_property
prop.init()
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/interfaces.py", line 196, in init
self.do_init()
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py", line 1864, in do_init
self._generate_backref()
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py", line 2124, in _generate_backref
self._add_reverse_property(self.back_populates)
File "/Users/rustanacecorpuz/.virtualenvs/sandman2/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py", line 1815, in _add_reverse_property
% (other, self, self.direction)
sqlalchemy.exc.ArgumentError: incident_request_type.incident_request_type and back-reference incident_request_type.incident_request_type_collection are both of the same direction symbol('ONETOMANY'). Did you mean to set remote_side on the many-to-one side ?
``` | open | 2019-07-23T01:15:55Z | 2019-07-29T22:45:03Z | https://github.com/jeffknupp/sandman2/issues/116 | [
"bug",
"question"
] | rustanacexd | 1 |
jmcnamara/XlsxWriter | pandas | 583 | unable to close(); saves a corrupted workbook | ```python
import xlsxwriter
workbook = xlsxwriter.Workbook('hello.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write('A1', 'Hello world')
workbook.close()
```
```
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\DOUGHE~1.ROS\\AppData\\Local\\Temp\\tmpnpp39a_c'
```
I was able to write Excel files yesterday; today my day began with this issue and I have not been able to resolve it. I was hoping someone here may be of assistance.
If I run workbook.close() again, I do not receive the error, but the .xlsx file does not open and says it is corrupted.
I'm not sure what happened. I may have closed/exited the program incorrectly. I've read of the importance of using workbook.close(); did I create a 'write to excel ghost' living in the purgatory of my temp folder? This issue now extends to pandas .to_excel() as well. I am able to write to csv.
I've restarted, reinstalled anaconda +packages, deleted all the tmp files in the temp folder, however I still receive this error. Any help would be greatly appreciated. Thank you.
windows 10
python 2.7, 3.6, 3.7 (I had base install of anaconda py 2.7, with an additional environment of 3.6. Reinstalled anaconda with python 3.7, 64bit. Issue happened with all three) | closed | 2018-11-16T22:27:27Z | 2018-11-26T14:37:14Z | https://github.com/jmcnamara/XlsxWriter/issues/583 | [
"question"
] | jefdough | 20 |
streamlit/streamlit | streamlit | 10,257 | Chart builder | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
It would be nice if the Chart Builder feature could be bumped up the priority list.
### Why?
This feature could be a game changer for data analysts and scientists who are quite new to programming.
### How?
_No response_
### Additional Context
_No response_ | open | 2025-01-27T02:37:34Z | 2025-01-27T11:48:05Z | https://github.com/streamlit/streamlit/issues/10257 | [
"type:enhancement",
"feature:charts"
] | dmslowmo | 1 |
ansible/awx | django | 15,001 | Remove Inventory Source for "Template additional groups and hostvars at runtime" option | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
We should remove the top option from the Source picklist of the Inventory form:

See conversation here: https://redhat-internal.slack.com/archives/C0H0TG8CV/p1709916496872759?thread_ts=1709820007.243199&cid=C0H0TG8CV
### AWX version
latest
### Select the relevant components
- [X] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
Chrome
### Steps to reproduce
1. In the inventory you want to add a source, click the Sources tab.
2. Click the Add button.
This opens the Create Source window.
3. In the **Source** field, choose a source.
### Expected results
The picklist from the **Source** field should not have the **Template additional groups and hostvars at runtime** option.
### Actual results
The picklist from the **Source** field has the **Template additional groups and hostvars at runtime** option.
### Additional information
Initially, we thought it was for constructed inventory but it doesn't make sense to have it there.
See the Slack conversation: https://redhat-internal.slack.com/archives/C0H0TG8CV/p1709820007243199 | open | 2024-03-15T00:01:44Z | 2024-03-15T00:01:58Z | https://github.com/ansible/awx/issues/15001 | [
"type:bug",
"component:api",
"component:ui",
"needs_triage"
] | tvo318 | 0 |
plotly/dash | dash | 2,606 | [BUG] Duplicate callback outputs error when background callbacks or long_callbacks share a cancel input | **Describe your context**
I am currently migrating a Dash application from version 2.5.1 to a newer one, and, as suggested for Dash >2.5.1, I am moving from ```long_callbacks``` to ```dash.callback``` with ```background=True```, along with ```background_callback_manager```.
Environment
```
dash >2.5.1
```
**Describe the bug**
Updating to Dash 2.6.0+ causes previously working ```long_callbacks``` to raise a duplicate callback output error, even though no outputs are duplicated. The error is still present after switching to ```dash.callback``` with ```background=True``` and moving to ```background_callback_manager```. It only appears when multiple background (or long) callbacks share a cancel input.
Here is a [link](https://github.com/C-C-Shen/dash_background_callback_test) to a test repo that shows this, with a Dash 2.5.1 version that works and a Dash 2.6.1 version that does not.
Here is a code snippet from the example repo:
```
@callback(
output=Output("paragraph_id_1", "children"),
inputs=Input("button_id_1", "n_clicks"),
prevent_initial_call=True,
background=True,
running=[
(Output("button_id_1", "disabled"), True, False),
],
cancel=[Input("cancel_button_id", "n_clicks")],
)
def update_clicks(n_clicks):
time.sleep(2.0)
return [f"Clicked Button 1: {n_clicks} times"]
@callback(
output=Output("paragraph_id_2", "children"),
inputs=Input("button_id_2", "n_clicks"),
prevent_initial_call=True,
background=True,
running=[
(Output("button_id_2", "disabled"), True, False),
],
cancel=[Input("cancel_button_id", "n_clicks")],
)
def update_clicks(n_clicks):
time.sleep(2.0)
return [f"Clicked Button 2: {n_clicks} times"]
```
**Expected behavior**
Sharing the same cancel parameter between multiple callbacks should work like it does in Dash 2.5.1. Moving to Dash 2.6.0+ should probably not be causing a duplicate callback output error when no outputs are duplicated.
- if this is expected then it should be mentioned in the changelog
**Screenshots**
This is an example error associated with the ```new.py``` in the example repo linked above:

| closed | 2023-07-31T19:02:22Z | 2023-08-01T11:57:05Z | https://github.com/plotly/dash/issues/2606 | [] | C-C-Shen | 1 |
huggingface/datasets | numpy | 7,208 | Iterable dataset.filter should not override features | ### Describe the bug
When calling filter on an iterable dataset, the features get set to None
### Steps to reproduce the bug
```python
import numpy as np
import time
from datasets import Dataset, Features, Array3D

features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)
ds = dataset.to_iterable_dataset()
orig_column_names = ds.column_names
ds = ds.filter(lambda x: True)
assert ds.column_names == orig_column_names
```
### Expected behavior
Filter should preserve features information
### Environment info
3.0.2 | closed | 2024-10-09T10:23:45Z | 2024-10-09T16:08:46Z | https://github.com/huggingface/datasets/issues/7208 | [] | alex-hh | 1 |
flasgger/flasgger | rest-api | 524 | Load Default APIKey in UI | Hello,
Is there a way to load the authentication by default (e.g. in development) so that we don't have to enter it every time we refresh? This is specifically useful in development, to avoid having to copy-paste the API key all the time.
Thank you
<img width="647" alt="Screen Shot 2022-03-30 at 7 11 41 PM" src="https://user-images.githubusercontent.com/449118/160945661-478a8b1f-acd7-467c-a017-8a1780ef600a.png">
| open | 2022-03-30T23:13:02Z | 2022-10-27T21:49:54Z | https://github.com/flasgger/flasgger/issues/524 | [] | ftheo | 2 |
piccolo-orm/piccolo | fastapi | 764 | Getting TypeError: 'PostgresTransaction' object does not support the context manager protocol | Getting a TypeError while running queries in a transaction using `piccolo.engine`:
```python
from piccolo.engine.finder import engine_finder

engine = engine_finder()
with engine.transaction():
    ...  # run queries here
```
```
E TypeError: 'PostgresTransaction' object does not support the context manager protocol
``` | closed | 2023-02-16T10:37:48Z | 2023-02-16T10:46:16Z | https://github.com/piccolo-orm/piccolo/issues/764 | [] | deserve-shubham | 2 |
Textualize/rich | python | 3,144 | [BUG] default syntax highlighting of yaml is unreadable in light terminals | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
Default YAML rendering is unreadable on a light-mode (black-on-white) terminal. It appears to be using the default text color for some elements, which blends in with the syntax block's background.
MRE:
```bash
printf "---\nfoo: bar" | python -m rich.syntax - -x yaml
```
<img width="428" alt="image" src="https://github.com/Textualize/rich/assets/36862124/63c6987f-32eb-4999-9dbd-1c7ae2e13f80">
Note: I'm able to work around this by adding `style="white"` to the rendering of the syntax block. e.g.:
```python
s = Syntax(txt, type)
self.console.print(s, style="white")
```
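Presumably a light syntax theme would also work around this; a sketch, assuming `Syntax` still accepts the `theme` keyword:
```python
from rich.console import Console
from rich.syntax import Syntax

console = Console()
code = "---\nfoo: bar"
# "ansi_light" maps the colors onto a light-background terminal palette
console.print(Syntax(code, "yaml", theme="ansi_light"))
```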
**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
> macOS 13.6
> Terminal.app, iTerm2
```
python -m rich.diagnose
pip freeze | grep rich
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=103 ColorSystem.EIGHT_BIT> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = '256' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 56 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=103, height=56), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=103, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=56, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=103, height=56) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 103 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭──────── Environment Variables ────────╮
│ { │
│ 'TERM': 'xterm-256color', │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': 'Apple_Terminal', │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰───────────────────────────────────────╯
platform="Darwin"
~ venv38 3.8.17 ╱ 11:17:51
❯ pip freeze | grep rich
rich==13.5.3
richbench==1.0.3
~ venv38 3.8.17 ╱ 11:18:40
```
</details>
| open | 2023-10-06T18:25:12Z | 2023-10-06T18:26:39Z | https://github.com/Textualize/rich/issues/3144 | [
"Needs triage"
] | xton-stripe | 2 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,202 | Make it possible to define language-specific user PublicName |
A user is able to define a public name in their profile.
In a multilingual setup, the value is constant across available languages.
Our users would like to be able to define language-specific values for this property.
| open | 2022-03-18T13:36:53Z | 2022-03-26T07:30:25Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3202 | [] | aetdr | 1 |
csu/quora-api | web-scraping | 23 | Add routes for get_latest_answers | open | 2014-12-24T10:18:38Z | 2014-12-24T10:18:51Z | https://github.com/csu/quora-api/issues/23 | [
"enhancement"
] | csu | 0 | |
pydantic/pydantic-ai | pydantic | 1,037 | Add support for Google's `gemini-2.0-pro-exp-02-05` model | ### Description
It is available from both vertex and google ai studio.
### References

https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models | closed | 2025-03-03T18:05:23Z | 2025-03-04T08:18:18Z | https://github.com/pydantic/pydantic-ai/issues/1037 | [] | barapa | 0 |
jmcarpenter2/swifter | pandas | 106 | Swifter incorrectly comparing results of pandas and dask applies | Wrong comparison of 2 pandas Series
```
swifter==0.302
dask==2.14.0
pandas==1.0.3
```
file **swifter.py:289**
```
self._validate_apply(
tmp_df.equals(meta), error_message="Dask apply sample does not match pandas apply sample."
)
```
It compares 2 Series (in my case):
**meta** = (0, 0.002689316907153639) (1, 0.0020169299881876556) (2, 0.0021525252276455888) (0, 0.0023806118812282305) (1, 0.002263126767581785) (2, 0.0023925398049665803) (0, 0.002505102859306909) (1, 0.0019670994913228703) (2, 0.0020991911781853417) (0, 0.0023227029
**temp_df** = (0, 0.002689316907153639) (0, 0.001975845345081718) (0, 0.002583021140662563) (0, 0.0022234801671238607) (0, 0.0021956561811590993) (0, 0.0023227029344862734) (0, 0.0027618463320199546) (0, 0.0023806118812282305) (0, 0.002505102859306909) (1, 0.00196709949
`tmp_df.equals(meta)` → `False`
while
`tmp_df.sort_index().equals(meta.sort_index())` → `True`
`tmp_df.sort_values().equals(meta.sort_values())` → `True`
Is it correct?
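For clarity, this is the comparison I would expect instead (a sketch; it assumes only the row order differs between the Dask and pandas samples):
```python
import pandas as pd

def samples_equal(dask_sample: pd.Series, pandas_sample: pd.Series) -> bool:
    # Ignore partition-induced row order; compare values aligned by index
    return dask_sample.sort_index().equals(pandas_sample.sort_index())
```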
PS. A suggestion: change the `tmp_df` and `meta` names to more obvious ones, so it is clear which came from which apply (e.g. `dask_smpl_apply_df` and `pd_smpl_apply_df`).
PPS. For some reason, Dask reduces the number of partitions for the dataframe from 16 to 5.
| closed | 2020-04-20T14:29:48Z | 2020-04-28T14:26:02Z | https://github.com/jmcarpenter2/swifter/issues/106 | [] | sann05 | 5 |
Gozargah/Marzban | api | 827 | Entering the subscription link in Hiddify Next | In the new dev update, released just 3 hours ago, entering the subscription link into the Hiddify Next app runs into a problem.
On Android, it crashes straight out of the app.
On Windows, this window is shown:

This problem is not present in the stable (latest) release; it only appeared in the most recent version. My users are having update problems with this app, so I was forced to go back to the latest release. | closed | 2024-02-28T22:35:09Z | 2024-07-01T14:20:14Z | https://github.com/Gozargah/Marzban/issues/827 | [
"Bug"
] | LonUp | 6 |
explosion/spaCy | deep-learning | 13,684 | Memory leak of MorphAnalysis object. | I have encountered a crucial bug, which makes running continuous tokenization using the Japanese tokenizer close to impossible. It is all due to a memory leak of MorphAnalysis.
## How to reproduce the behaviour
```
import spacy
import tracemalloc
tracemalloc.start()
tokenizer = spacy.blank("ja")
tokenizer.add_pipe("sentencizer")
for _ in range(1000):
text = " ".join(["a"] * 1000)
snapshot = tracemalloc.take_snapshot()
with tokenizer.memory_zone():
doc = tokenizer(text)
tokenizer.max_length = len(text) + 10
import gc
gc.collect()
snapshot2 = tracemalloc.take_snapshot()
# Compare the two snapshots
p_stats = snapshot2.compare_to(snapshot, "lineno")
# Pretty print the top 10 differences
print("[ Top 10 ]")
# Stop here with pdb
for stat in p_stats[:10]:
if stat.size_diff > 0:
print(stat)
```
Run this script and observe how memory keeps growing:

It all happens due to this line:
`token.morph = MorphAnalysis(self.vocab, morph)`. I have checked the implementation itself, and there is neither deallocation code implemented, nor does it support the memory_zone.
| open | 2024-11-04T18:18:58Z | 2024-12-28T14:07:29Z | https://github.com/explosion/spaCy/issues/13684 | [] | hynky1999 | 3 |
deepfakes/faceswap | deep-learning | 1,154 | deepfakes/faceswap: detailed face-swap tutorial, Chinese version link | deepfakes/faceswap: detailed face-swap tutorial, [Chinese version link](https://zhuanlan.zhihu.com/p/376853800): https://zhuanlan.zhihu.com/p/376853800 | closed | 2021-06-02T07:13:25Z | 2021-06-30T10:12:25Z | https://github.com/deepfakes/faceswap/issues/1154 | [] | wusaifei | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,619 | Performance issue | I am experiencing a massive delay in establishing a connection and then frequent stalling.
The setup:
- 10 Instances of the server app, only serving websockets using Flask SocketIO
- 3 nginx proxy servers with sticky sessions for all ws traffic
- 1 redis instance (exclusive for pubsub)
The app uses `gevent` in combination with `geventwebsocket`. Monkey patching is correctly applied.
Around 500 users are connected in total, and messages are coming in at around 5k/s (redis ops). CPU/memory etc. are not a problem and are plentifully available.
The behaviour is that it takes around 9 seconds for the initial SocketIO packet to arrive at the client, and then it basically stalls out, closes the connection, and repeats the cycle with an ever-increasing connection "delay". What I have noticed is that it seems to collect a couple hundred messages before sending anything out to the client. This also shows in the memory usage.
The server app basically only relays messages sent by external clients, so I am at a loss as to why this is happening. I have tried horizontally scaling the app, but that seems to make the issue worse.
core.py
```py
import ...
redis_connection: str = 'redis://{}:{}'.format(
redis_settings['host'],
redis_settings['port']
)
# FlaskEx only overrides process_response in order to change some response headers
app = FlaskEx(
Environment.APP_NAME,
static_url_path = '/assets',
static_folder = 'etc/assets'
)
verbosity = ('--debug' in sys.argv and Environment.DEVELOPMENT)
socketio = SocketIO(
app,
async_mode = 'gevent',
manage_session = False,
message_queue = redis_connection,
cors_allowed_origins = '*',
logger = verbosity,
engineio_logger = verbosity,
path = 'connect',
always_connect = True
)
```
socketio.py
```py
import ...
@socketio.on('keepalive')
def command_keepalive():
emit('keepalive')
@socketio.on('connect')
def event_connect():
emit('ack', {
'shard': Environment.SHARD_UUID
})
# I have stripped out some auth logic which is non essential to the issue
@socketio.on('join')
def command_join(payload: dict):
join_room(room = payload['r'], sid = request.sid)
@socketio.on('leave')
def command_leave(payload: dict):
leave_room(room = payload['r'], sid = request.sid)
```
run.py
```py
import gevent.monkey
gevent.monkey.patch_all()
if __name__ == '__main__':
import geventwebsocket
import ...
from core import app
app.socketio.run(
demeter.app.app,
host = Environment.HOST,
port = Environment.PORT
)
```
py-spy dump
```
Total Samples 5100
GIL: 97.00%, Active: 100.00%, Threads: 11
%Own %Total OwnTime TotalTime Function (filename:line)
2.00% 2.00% 1.88s 1.88s url_quote (werkzeug/urls.py:572)
1.00% 1.00% 1.71s 1.71s recv (gevent/_socketcommon.py:657)
4.00% 4.00% 1.59s 1.59s _cookie_quote (werkzeug/_internal.py:412)
3.00% 3.00% 1.31s 1.31s dump_payload (itsdangerous/url_safe.py:44)
0.00% 0.00% 0.840s 0.840s bind_to_environ (werkzeug/routing.py:1689)
4.00% 7.00% 0.740s 1.16s top (werkzeug/local.py:247)
4.00% 4.00% 0.660s 0.660s _cookie_parse_impl (werkzeug/_internal.py:465)
3.00% 3.00% 0.590s 0.590s _cookie_quote (werkzeug/_internal.py:413)
1.00% 1.00% 0.580s 0.580s _cookie_quote (werkzeug/_internal.py:416)
3.00% 3.00% 0.530s 0.530s iterencode (json/encoder.py:257)
1.00% 1.00% 0.510s 0.510s derive_key (itsdangerous/signer.py:130)
2.00% 2.00% 0.470s 0.470s _dt_as_utc (werkzeug/_internal.py:320)
0.00% 0.00% 0.470s 0.490s run (gevent/hub.py:647)
1.00% 1.00% 0.450s 0.450s recv (gevent/_socketcommon.py:663)
0.00% 0.00% 0.430s 0.430s load_payload (itsdangerous/url_safe.py:33)
1.00% 1.00% 0.420s 0.420s acquire (gevent/thread.py:121)
0.00% 14.00% 0.420s 8.27s emit (flask_socketio/__init__.py:825)
1.00% 12.00% 0.380s 7.22s _publish (socketio/redis_manager.py:80)
0.00% 0.00% 0.370s 0.370s format_datetime (email/utils.py:162)
1.00% 1.00% 0.360s 0.520s max_cookie_size (flask/wrappers.py:164)
2.00% 2.00% 0.320s 0.320s __getattr__ (werkzeug/local.py:151)
0.00% 7.00% 0.320s 1.80s dumps (itsdangerous/_json.py:18)
```
| closed | 2021-07-04T09:41:33Z | 2021-07-04T10:23:14Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1619 | [] | pistol-whip | 1 |
huggingface/text-generation-inference | nlp | 2,887 | Unclear Metrics list | ### System Info
Latest docker
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
Run TGI and monitor with Grafana.
### Expected behavior
Can you please provide a more thorough explanation of the exported metrics? For example, which metric is the time to first token? | open | 2025-01-07T14:58:13Z | 2025-01-08T15:11:39Z | https://github.com/huggingface/text-generation-inference/issues/2887 | [] | vitalyshalumov | 1 |
python-visualization/folium | data-visualization | 1,682 | . | closed | 2022-12-22T17:42:23Z | 2022-12-22T21:15:56Z | https://github.com/python-visualization/folium/issues/1682 | [] | Mukund2900 | 0 | |
ultralytics/ultralytics | pytorch | 19,509 | How to generate an ONNX and an NMS file? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
```python
from ultralytics import YOLO

# Load a model
model = YOLO("maskbest-ptbr.pt")  # load an official model

# Export the model
model.export(format="onnx", nms=True)
```
### Additional
I need an NMS model .onnx for boxes and so on. How can I create this file? | open | 2025-03-04T02:33:57Z | 2025-03-04T15:39:46Z | https://github.com/ultralytics/ultralytics/issues/19509 | [
"question",
"exports"
] | xmaxmex | 4 |
zappa/Zappa | django | 1,034 | Add support for python 3.9 | ## Context
AWS [announced](https://aws.amazon.com/jp/blogs/compute/python-3-9-runtime-now-available-in-aws-lambda/) that Python 3.9 is available for the Lambda runtime.
## Expected Behavior
Zappa should support python 3.9
## Actual Behavior
I got this error:
```
Zappa (and AWS Lambda) support the following versions of Python: ['3.6', '3.7', '3.8']
Traceback (most recent call last):
File "/var/lang/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/var/task/zappa_handler.py", line 24, in <module>
from zappa.middleware import ZappaWSGIMiddleware
File "/var/task/zappa/__init__.py", line 14, in <module>
raise RuntimeError(err_msg)
```
## Possible Fix
## Steps to Reproduce
## Your Environment
* Zappa version used:
* Operating System and Python version:
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.json`:
| closed | 2021-09-04T05:12:01Z | 2021-10-28T09:19:01Z | https://github.com/zappa/Zappa/issues/1034 | [] | tsuga | 3 |
plotly/dash | plotly | 2,598 | [BUG] Missing Dash import in Documentation for Upload in Dash Core Components | **Describe your context**
Viewing the Plotly Dash documentation, specifically the documentation of the Upload component.
Result of `pip list | grep dash`:
```
dash 0.42.0
dash-core-components 0.47.0
dash-html-components 0.16.0
dash-renderer 0.23.0
dash-table 3.6.0
```
- OS: macOS
- Browser: Chrome
- Version: 114
**Describe the bug**
When viewing the documentation of [Dash Upload Component](https://dash.plotly.com/dash-core-components/upload), in the first example, there is a missing Dash import which is used in the code.
**Expected behavior**
There should be an import of Dash in the first line.
**Screenshots**

| closed | 2023-07-12T16:40:25Z | 2023-07-14T17:24:06Z | https://github.com/plotly/dash/issues/2598 | [] | manavkush | 2 |
marshmallow-code/flask-smorest | rest-api | 447 | typo in doc: Reponse -> Response | In some comments, the Response object is incorrectly called Reponse. | closed | 2023-01-22T17:24:32Z | 2023-01-22T21:38:17Z | https://github.com/marshmallow-code/flask-smorest/issues/447 | [] | ElDavoo | 1 |
miguelgrinberg/Flask-SocketIO | flask | 1,064 | Flask-SocketIO: requests to upgrade to websocket fail in some cases | I am building a sample web-based calculator where all clients must immediately see updates that other clients make.
Locally it works beautifully; however, when deployed to AWS EC2, client requests to upgrade to websocket fail in some cases, in both Chrome (77.x) and Firefox (60.x). As I understand it, the server blocks while trying to upgrade, and then the client disconnects after a timeout.
Running on the dev server; no third-party web servers are currently used.
**requirements.txt:**
```
alembic==1.1.0
Click==7.0
dnspython==1.16.0
eventlet==0.25.1
Flask==1.1.1
Flask-Migrate==2.5.2
Flask-Script==2.0.6
Flask-SocketIO==4.2.1
Flask-SQLAlchemy==2.4.0
gevent==1.4.0
gevent-websocket==0.10.1
greenlet==0.4.15
itsdangerous==1.1.0
Jinja2==2.10.1
Mako==1.1.0
MarkupSafe==1.1.1
monotonic==1.5
psycopg2==2.8.3
psycopg2-binary==2.8.3
python-dateutil==2.8.0
python-dotenv==0.10.3
python-editor==1.0.4
python-engineio==3.9.3
python-socketio==4.3.1
six==1.12.0
SQLAlchemy==1.3.7
Werkzeug==0.15.5
```
**client = socket.io 2.2.0**
**Server code:**
```
import eventlet
eventlet.monkey_patch()
#...other imports...
socketio = SocketIO(application, logger=True, engineio_logger=True)
#...other code...
socketio.run(application, host='0.0.0.0', debug=True, use_reloader=False)
```
**Client logs:** you can check at http://ec2-54-146-7-208.compute-1.amazonaws.com:5000
**Server logs:**
```
24.118.131.176 - - [19/Sep/2019 16:15:33] "GET /socket.io/?EIO=3&transport=polling&t=MrAJSFH&sid=0784c7202fc64d7fbb6d130bf574c0e5 HTTP/1.1" 400 186 60.001675
89b7a287731d4232b05c5efb59439337: Sending packet OPEN data {'sid': '89b7a287731d4232b05c5efb59439337', 'upgrades': ['websocket'], 'pingTimeout': 60000, 'pingInterval': 25000}
client connected: 9
89b7a287731d4232b05c5efb59439337: Sending packet MESSAGE data 0
24.118.131.176 - - [19/Sep/2019 16:15:34] "GET /socket.io/?EIO=3&transport=polling&t=MrAJh6W HTTP/1.1" 200 349 0.001331
(25584) accepted ('24.118.131.176', 39652)
89b7a287731d4232b05c5efb59439337: Received request to upgrade to websocket
24.118.131.176 - - [19/Sep/2019 16:15:34] "GET /socket.io/?EIO=3&transport=polling&t=MrAJh8g&sid=89b7a287731d4232b05c5efb59439337 HTTP/1.1" 200 183 0.113250
Traceback (most recent call last):
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/wsgi.py", line 566, in handle_one_response
result = self.application(self.environ, start_response)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/flask/app.py", line 2463, in __call__
return self.wsgi_app(environ, start_response)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/flask_socketio/__init__.py", line 46, in __call__
start_response)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/engineio/middleware.py", line 60, in __call__
return self.engineio_app.handle_request(environ, start_response)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/socketio/server.py", line 534, in handle_request
return self.eio.handle_request(environ, start_response)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/engineio/server.py", line 366, in handle_request
environ, start_response)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/engineio/socket.py", line 106, in handle_get_request
start_response)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/engineio/socket.py", line 146, in _upgrade_websocket
return ws(environ, start_response)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/engineio/async_drivers/eventlet.py", line 20, in __call__
return super(WebSocketWSGI, self).__call__(environ, start_response)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/websocket.py", line 130, in __call__
self.handler(ws)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/engineio/socket.py", line 171, in _websocket_handler
pkt = ws.wait()
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/websocket.py", line 788, in wait
for i in self.iterator:
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/websocket.py", line 643, in _iter_frames
message = self._recv_frame(message=fragmented_message)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/websocket.py", line 669, in _recv_frame
header = recv(2)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/websocket.py", line 578, in _get_bytes
d = self.socket.recv(numbytes - len(data))
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/greenio/base.py", line 366, in recv
return self._recv_loop(self.fd.recv, b'', bufsize, flags)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/greenio/base.py", line 360, in _recv_loop
self._read_trampoline()
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/greenio/base.py", line 331, in _read_trampoline
timeout_exc=socket_timeout('timed out'))
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/greenio/base.py", line 210, in _trampoline
mark_as_closed=self._mark_as_closed)
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/hubs/__init__.py", line 159, in trampoline
return hub.switch()
File "/home/ec2-user/venvs/sezzle-calc/lib/python3.7/site-packages/eventlet/hubs/hub.py", line 298, in switch
return self.greenlet.switch()
socket.timeout: timed out
24.118.131.176 - - [19/Sep/2019 16:16:34] "GET /socket.io/?EIO=3&transport=websocket&sid=89b7a287731d4232b05c5efb59439337 HTTP/1.1" 500 0 60.056104
Client disconnected: 9
89b7a287731d4232b05c5efb59439337: Client is gone, closing socket
24.118.131.176 - - [19/Sep/2019 16:16:34] "GET /socket.io/?EIO=3&transport=polling&t=MrAJhBM&sid=89b7a287731d4232b05c5efb59439337 HTTP/1.1" 400 186 60.001529
2736a74a53514ed18127c17d6b5ed51f: Sending packet OPEN data {'sid': '2736a74a53514ed18127c17d6b5ed51f', 'upgrades': ['websocket'], 'pingTimeout': 60000, 'pingInterval': 25000}
client connected: 9
2736a74a53514ed18127c17d6b5ed51f: Sending packet MESSAGE data 0
24.118.131.176 - - [19/Sep/2019 16:16:35] "GET /socket.io/?EIO=3&transport=polling&t=MrAJw3v HTTP/1.1" 200 349 0.001351
(25584) accepted ('24.118.131.176', 39654)
2736a74a53514ed18127c17d6b5ed51f: Received request to upgrade to websocket
24.118.131.176 - - [19/Sep/2019 16:16:35] "GET /socket.io/?EIO=3&transport=polling&t=MrAJw5e&sid=2736a74a53514ed18127c17d6b5ed51f HTTP/1.1" 200 183 0.106838
^Cwsgi exiting
2736a74a53514ed18127c17d6b5ed51f: Failed websocket upgrade, expected UPGRADE packet, received None instead.
24.118.131.176 - - [19/Sep/2019 16:16:49] "GET /socket.io/?EIO=3&transport=websocket&sid=2736a74a53514ed18127c17d6b5ed51f HTTP/1.1" 200 0 13.647335
```
Thanks. Pls let me know if additional info is needed | closed | 2019-09-19T16:19:43Z | 2019-12-19T16:59:29Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1064 | [
"question"
] | DataDrivenEngineer | 3 |
keras-team/keras | tensorflow | 20,548 | ValueError: No model config found in the file at C:\Users\gnanar\.deepface\weights\age_model_weights.h5. | I am trying to convert age_model_weights.h5 to ONNX, but it gives me the following error:
ValueError: No model config found in the file at C:\Users\gnanar\.deepface\weights\age_model_weights.h5.
This is my code:
```python
import tensorflow as tf
import tf2onnx
import onnx
from tensorflow.keras.models import load_model

# Load the Keras model (.h5 file)
model = load_model("age_model_weights.h5")

# Convert the Keras model to ONNX format
onnx_model, _ = tf2onnx.convert.from_keras(model)
onnx.save(onnx_model, 'age_model.onnx')

print("Model successfully converted to ONNX format!")
```
| closed | 2024-11-26T05:30:25Z | 2025-01-01T02:06:54Z | https://github.com/keras-team/keras/issues/20548 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | Gnanapriya2000 | 6 |
sloria/TextBlob | nlp | 416 | Is TextBlob usable for text classification? | open | 2023-01-30T08:41:05Z | 2023-01-31T05:55:35Z | https://github.com/sloria/TextBlob/issues/416 | [] | swatijibhkatesj | 1 |
open-mmlab/mmdetection | pytorch | 11,431 | coco dataset not respecting backend_args | I am using:
- "mmengine==0.10.2"
- "mmcv==2.1.0"
- "mmdet==3.3.0"
and I am trying to run the rtmdet_tiny_8xb32-300e-coco example.
It works fine with the default settings, so I tried writing a custom backend to work with AWS S3 instead.
It can read the images fine but I get a file not found error loading the annotations.
The issue is with this line: https://github.com/open-mmlab/mmdetection/blob/44ebd17b145c2372c4b700bfb9cb20dbd28ab64a/mmdet/datasets/coco.py#L65
When `self.COCOAPI` is called, it calls this: https://github.com/open-mmlab/mmdetection/blob/44ebd17b145c2372c4b700bfb9cb20dbd28ab64a/mmdet/datasets/api_wrappers/coco_api.py#L25
Which then triggers this: https://github.com/cocodataset/cocoapi/blob/8c9bcc3cf640524c4c20a9c40e89cb6a2f2fa0e9/PythonAPI/pycocotools/coco.py#L84
From what I can see, it looks like the custom backend isn't getting used by `pycocotools`, and hence it is unable to load the annotation file.
If I manually set the `ann_file` configuration to point to a local disk folder with the annotation JSON, but leave the images themselves on AWS S3 with my custom `backend_args`, then it is fine; that suggests my custom backend is working. It is just that the mmdet `coco` class isn't able to use the backend to load the annotation file and pass it to the pycocotools API.
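For reference, a sketch of the split setup that works for me (key names abbreviated; the bucket path and backend name are placeholders for my actual values):
```python
train_dataloader = dict(
    dataset=dict(
        type='CocoDataset',
        # annotations on local disk, so pycocotools can open them directly
        ann_file='/local/annotations/instances_train.json',
        # images still fetched through the custom backend
        data_prefix=dict(img='s3://my-bucket/images/train/'),
        backend_args=dict(backend='my_s3_backend'),  # my registered backend
    ),
)
```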
Am I missing something or is there a fix?
| open | 2024-01-25T14:19:02Z | 2024-01-25T14:19:19Z | https://github.com/open-mmlab/mmdetection/issues/11431 | [] | Data-drone | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,985 | 【solved】How to do can solve Fatal server error: (EE) Server is already active for display 1 | Fatal server error:
(EE) Server is already active for display 1
If this server is no longer running, remove /tmp/.X1-lock
and start again.
(EE)
<img width="1709" alt="image" src="https://github.com/user-attachments/assets/331762aa-1d0a-4e15-96bc-4995028f9668">
| open | 2024-08-12T11:56:32Z | 2024-08-12T13:20:56Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1985 | [] | mouyong | 9 |
amdegroot/ssd.pytorch | computer-vision | 420 | box_utils.py decode | ```
def decode(loc, priors, variances):
boxes = torch.cat((
priors[:, :2] + loc[:, :2] * variances[0] * priors[:, 2:],
priors[:, 2:] * torch.exp(loc[:, 2:] * variances[1])), 1)
boxes[:, :2] -= boxes[:, 2:] / 2
boxes[:, 2:] += boxes[:, :2]
return boxes
```
Why `boxes[:, :2] -= boxes[:, 2:] / 2`? | closed | 2019-10-17T11:38:36Z | 2019-10-18T03:44:59Z | https://github.com/amdegroot/ssd.pytorch/issues/420 | [] | sdu2011 | 1 |
huggingface/text-generation-inference | nlp | 2,113 | how to launch a service using downloaded model weights? | ### System Info
I have downloaded the model weights of the BGE models, and I want to launch a model service using TGI. The command is:
```
model=/storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5
revision=refs/pr/5
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all \
-p 3001:3001 -v $volume:/data text-embeddings-inference:1.2 \
--model-id $model --port 3001 --revision $revision
```
but I got the following error:
```
2024-06-25T03:13:34.201754Z INFO text_embeddings_router: router/src/main.rs:140: Args { model_id: "BAA*/***-*****-**-v1.5", revision: Some("refs/pr/5"), tokenization_workers: None, dtype: None, pooling: None, max_concurrent_requests: 512, max_batch_tokens: 16384, max_batch_requests: None, max_client_batch_size: 32, auto_truncate: false, hf_api_token: None, hostname: "54903bb17567", port: 3001, uds_path: "/tmp/text-embeddings-inference-server", huggingface_hub_cache: Some("/data"), payload_limit: 2000000, api_key: None, json_output: false, otlp_endpoint: None, cors_allow_origin: None }
2024-06-25T03:13:34.201950Z INFO hf_hub: /root/.cargo/git/checkouts/hf-hub-1aadb4c6e2cbe1ba/b167f69/src/lib.rs:55: Token file not found "/root/.cache/huggingface/token"
2024-06-25T03:13:36.546198Z INFO download_artifacts: text_embeddings_core::download: core/src/download.rs:20: Starting download
Error: Could not download model artifacts
Caused by:
0: request error: error sending request for url (https://huggingface.co/BAAI/bge-large-zh-v1.5/resolve/refs%2Fpr%2F5/config.json): error trying to connect: Connection reset by peer (os error 104)
1: error sending request for url (https://huggingface.co/BAAI/bge-large-zh-v1.5/resolve/refs%2Fpr%2F5/config.json): error trying to connect: Connection reset by peer (os error 104)
2: error trying to connect: Connection reset by peer (os error 104)
3: Connection reset by peer (os error 104)
4: Connection reset by peer (os error 104)
```
It seems to download the model from Hugging Face, but I want to use my private model weights.
My private weights:
```
>> ls /storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5
1_Pooling model.safetensors README.md tokenizer_config.json
config.json modules.json sentence_bert_config.json tokenizer.json
config_sentence_transformers.json pytorch_model.bin special_tokens_map.json vocab.txt
```
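Is something like the following the intended way? (A sketch; the mount point inside the container is my guess.)
```bash
model_dir=/storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5
docker run --gpus all -p 3001:3001 \
    -v $model_dir:/data/bge-small-zh-v1.5 \
    text-embeddings-inference:1.2 \
    --model-id /data/bge-small-zh-v1.5 --port 3001
```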
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
docker run --gpus all \
-p 3001:3001 -v $volume:/data text-embeddings-inference:1.2 \
--model-id $model --port 3001 --revision $revision
### Expected behavior
launch the service successfully | closed | 2024-06-25T03:18:14Z | 2024-06-28T03:50:10Z | https://github.com/huggingface/text-generation-inference/issues/2113 | [] | chenchunhui97 | 2 |
holoviz/panel | jupyter | 7,708 | IPywidgets do not work with fastapi server | Hi,
Thanks for adding support for panel with fastapi, I have an application which also renders IPywidgets and I see that they do not work with fast api server while the same work with panel serve. Is this expected?
Reproducer code
```
import panel as pn
from fastapi import FastAPI
import ipywidgets as widgets
from panel.io.fastapi import add_application
app = FastAPI()
@app.get("/")
async def read_root():
return {"Hello": "World"}
@add_application('/panel', app=app, title='My Panel App')
def create_panel_app():
slider = widgets.IntSlider(
value=3,
min=0,
max=10,
step=1,
description='Slider:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
return slider
```
Can we add support for IPywidgets? | open | 2025-02-13T08:41:56Z | 2025-02-13T11:28:56Z | https://github.com/holoviz/panel/issues/7708 | [
"component: ipywidget"
] | singharpit94 | 0 |
stanfordnlp/stanza | nlp | 1,300 | [QUESTION] | Trying to run a gunicorn/Flask Stanza app in a Docker container.
Tried with different worker types and numbers but see each worker being killed.
Current ending of Dockerfile is:
```
EXPOSE 20900
ENTRYPOINT poetry run python -m gunicorn --worker-tmp-dir /dev/shm -workers=2 --threads=4 --worker-class=sync 'isagog_api.nlp_api:app' -b 0.0.0.0:20900
```
docker compose up with a docker-compose.yml as follows:
```
version: "3.8"
services:
isagog-nlp-gu:
build: .
image: isagog-nlp-gu:v0.1
container_name: isagog-nlp-gu
restart: unless-stopped
ports:
- "20900:20900/tcp"
```
yields repeating restarts of workers with the pytorch model being always stopped around 83-87%:
```
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Using device: cpu
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: tokenize
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Done loading processors!
isagog-nlp-gu | 2023-10-18 16:36:55 WARNING: Language it package default expects mwt, which has been added
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading these models for language: it (Italian):
isagog-nlp-gu | =================================
isagog-nlp-gu | | Processor | Package |
isagog-nlp-gu | ---------------------------------
isagog-nlp-gu | | tokenize | combined |
isagog-nlp-gu | | mwt | combined |
isagog-nlp-gu | | pos | combined_charlm |
isagog-nlp-gu | | lemma | combined_nocharlm |
isagog-nlp-gu | | depparse | combined_charlm |
isagog-nlp-gu | =================================
isagog-nlp-gu |
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Using device: cpu
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: tokenize
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: mwt
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: mwt
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: pos
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: pos
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: lemma
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: lemma
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: depparse
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Loading: depparse
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Done loading processors!
isagog-nlp-gu | 2023-10-18 16:36:55 INFO: Done loading processors!
(…)s-summarization/resolve/main/config.json: 100%|██████████| 909/909 [00:00<00:00, 6.71MB/s]
pytorch_model.bin: 83%|████████▎ | 2.59G/3.13G [00:25<00:05, 104MB/s][2023-10-18 16:37:22 +0200] [7] [CRITICAL] WORKER TIMEOUT (pid:23)
isagog-nlp-gu | [2023-10-18 16:37:22 +0200] [7] [CRITICAL] WORKER TIMEOUT (pid:24)
isagog-nlp-gu | [2023-10-18 16:37:22 +0200] [23] [INFO] Worker exiting (pid: 23)
isagog-nlp-gu | [2023-10-18 16:37:22 +0200] [24] [INFO] Worker exiting (pid: 24)
pytorch_model.bin: 83%|████████▎ | 2.59G/3.13G [00:25<00:05, 101MB/s]
isagog-nlp-gu | [2023-10-18 16:37:22 +0200] [7] [ERROR] Worker (pid:23) exited with code 1
isagog-nlp-gu | [2023-10-18 16:37:22 +0200] [7] [ERROR] Worker (pid:23) exited with code 1.
isagog-nlp-gu | [2023-10-18 16:37:22 +0200] [102] [INFO] Booting worker with pid: 102
isagog-nlp-gu | [2023-10-18 16:37:22 +0200] [7] [ERROR] Worker (pid:24) exited with code 1
isagog-nlp-gu | [2023-10-18 16:37:22 +0200] [7] [ERROR] Worker (pid:24) exited with code 1.
isagog-nlp-gu | [2023-10-18 16:37:22 +0200] [103] [INFO] Booting worker with pid: 103
isagog-nlp-gu | 2023-10-18 16:37:24 INFO: Loading these models for language: it (Italian):
isagog-nlp-gu | ========================
isagog-nlp-gu | | Processor | Package |
isagog-nlp-gu | ------------------------
isagog-nlp-gu | | tokenize | combined |
isagog-nlp-gu | | mwt | combined |
isagog-nlp-gu | | ner | fbk |
isagog-nlp-gu | ========================
isagog-nlp-gu |
```
| closed | 2023-10-18T14:40:05Z | 2024-02-25T00:11:31Z | https://github.com/stanfordnlp/stanza/issues/1300 | [
"question"
] | rjalexa | 2 |
manbearwiz/youtube-dl-server | rest-api | 61 | output folder | Hi, is there a way to specify the output folder? | closed | 2020-05-01T09:10:18Z | 2020-12-05T03:39:22Z | https://github.com/manbearwiz/youtube-dl-server/issues/61 | [] | JokerShades | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 1,064 | list of numeric categories of length 2 is considered as Real or Integer dimensions | The skopt.Optimizer [docs](https://scikit-optimize.github.io/stable/modules/generated/skopt.Optimizer.html) say:
> - a (lower_bound, upper_bound) **tuple** (for Real or Integer dimensions) ...
> - as a **list** of categories (for Categorical dimensions) ...
However, a list of length 2 is treated the same as a tuple.
A reproducible example:
```
from skopt import BayesSearchCV
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
X, y = load_iris(return_X_y=True)
searchcv = BayesSearchCV(
RandomForestClassifier(),
search_spaces={'n_estimators': [250, 500], 'max_features': (0.3, 0.95, 'uniform')},
n_iter=10,
cv=5,
scoring='f1_macro',
random_state=42
)
searchcv.fit(X, y)
print(searchcv.cv_results_['param_n_estimators'])
```
This shows that the optimizer tries many values between 250 and 500, not just those two.
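A workaround sketch (my assumption of the intended behavior, declaring the dimension explicitly with `skopt.space.Categorical`; this continues the snippet above):
```python
from skopt.space import Categorical

# Declaring the dimension explicitly forces categorical treatment,
# so only 250 and 500 are ever tried.
searchcv = BayesSearchCV(
    RandomForestClassifier(),
    search_spaces={
        'n_estimators': Categorical([250, 500]),
        'max_features': (0.3, 0.95, 'uniform'),
    },
    n_iter=10,
    cv=5,
    scoring='f1_macro',
    random_state=42,
)
```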
| open | 2021-09-29T23:19:51Z | 2021-10-20T12:31:34Z | https://github.com/scikit-optimize/scikit-optimize/issues/1064 | [] | Abdelgha-4 | 1 |
babysor/MockingBird | deep-learning | 19 | Python | closed | 2021-08-18T14:38:21Z | 2021-08-18T14:38:37Z | https://github.com/babysor/MockingBird/issues/19 | [] | twochengxu | 0 | |
pbugnion/gmaps | jupyter | 146 | rgba regex errors when the alpha channel is 1.0 | I think rgba should allow an alpha of up to 1.0, but the regex throws errors if the alpha value doesn't start with `0.`
error message:
```Element of the 'gradient' trait of a WeightedHeatmap instance must be an HTML color recognized by Google maps or a tuple or a tuple, but a value of 'rgba(72,209,204,1.0)' <type 'str'> was specified.```
I think the error comes from line 62 in geotraitlets.py:
`_rgba_re = re.compile(r'rgba\([0-9]{1,3},[0-9]{1,3},[0-9]{1,3},0?\.[0-9]*\)')`
The issue is the required leading zero in `0?\.[0-9]*\)`, which can never match an alpha written as `1.0`.
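A sketch of one possible fix (my guess, untested against the library): extend the alpha alternation so `1`, `1.0`, and `0` also match:
```python
import re

# Accepts alphas written as "0.5", ".5", "0", "1" or "1.0"; checking that
# the value is actually in [0, 1] would still be a separate validation.
_rgba_re = re.compile(
    r'rgba\([0-9]{1,3},[0-9]{1,3},[0-9]{1,3},'
    r'(?:0?\.[0-9]+|1(?:\.0*)?|0)\)'
)

assert _rgba_re.match('rgba(72,209,204,1.0)')  # the case that currently fails
assert _rgba_re.match('rgba(72,209,204,0.5)')
```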
| closed | 2017-06-12T20:50:52Z | 2017-06-25T10:29:10Z | https://github.com/pbugnion/gmaps/issues/146 | [] | mlwohls | 7 |
axnsan12/drf-yasg | django | 303 | Add ArrayField Solution to Docs | Hi,
I was looking for how to implement `ArrayField(models.CharField(choices=TYPES, max_length=3))`.
The `serializers.MultipleChoiceField()` solution from #102 worked for me; a sketch is below.
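Roughly what I ended up with (a sketch on my side; the model and choices are made up, only the `MultipleChoiceField` usage is the point):
```python
from django.contrib.postgres.fields import ArrayField
from django.db import models
from rest_framework import serializers

TYPES = [("FOO", "Foo"), ("BAR", "Bar")]

class Thing(models.Model):
    # Placeholder model with an array of choice values.
    types = ArrayField(models.CharField(choices=TYPES, max_length=3))

class ThingSerializer(serializers.ModelSerializer):
    # Rendered by drf-yasg as an array of enum values in the schema.
    # Note: MultipleChoiceField deserializes to a set, so it may need
    # coercion to a list before saving into the ArrayField.
    types = serializers.MultipleChoiceField(choices=TYPES)

    class Meta:
        model = Thing
        fields = ["types"]
```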
Could this be added to the docs in a section like "Tips 'n Tricks"? | closed | 2019-01-29T21:31:19Z | 2019-01-29T21:40:36Z | https://github.com/axnsan12/drf-yasg/issues/303 | [] | chgad | 1 |
horovod/horovod | pytorch | 3,886 | [Question] How to generate a horovod timeline while avoiding preprocessing? | I have a training task that does a lot of preprocessing before the actual training starts.
I want to avoid recording the preprocessing part and record only the later part.
How can I achieve this?
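The closest I have found (assuming a recent Horovod that exposes the dynamic timeline API, `hvd.start_timeline()` / `hvd.stop_timeline()`; the preprocessing and training functions below are placeholders for my own code):
```python
import horovod.torch as hvd

hvd.init()

heavy_preprocessing()  # placeholder: the long preprocessing phase

# Start recording only once training begins, instead of using
# HOROVOD_TIMELINE / --timeline-filename, which records from startup.
hvd.start_timeline("timeline.json", mark_cycles=False)
train()                # placeholder: the actual training loop
hvd.stop_timeline()    # flush and close the timeline file cleanly
```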
BTW, if a long-running task is started with `--timeline-filename xxx.json` and stopped later with Ctrl+C, the timeline file always seems to be broken.
Is it possible to get a timeline from a long-running task before it naturally exits? | closed | 2023-04-14T03:37:29Z | 2023-04-19T11:42:54Z | https://github.com/horovod/horovod/issues/3886 | [] | Nov11 | 1 |
rio-labs/rio | data-visualization | 60 | NumberInput Field with Decimals != 0 Changes Input When Numbers Are Entered "Slowly" | ### Describe the bug
When using a NumberInput field with decimals set to a value other than 0, the input behaves unexpectedly if numbers are entered slowly. Specifically, when decimals is set to 2, entering the number "20" results in "2.000", which then immediately changes to "2.00".
### Steps to Reproduce
1. Create a NumberInput field and set decimals to a value other than 0 (e.g., 2).
2. Begin entering a number, such as "20", into the field.
3. Enter the numbers at a "slow" pace.
### Screenshots/Videos
_No response_
### Operating System
_No response_
### What browsers are you seeing the problem on?
Chrome, Edge
### Browser version
_No response_
### What device are you using?
_No response_
### Additional context
_No response_ | closed | 2024-06-12T14:43:11Z | 2024-06-14T05:03:15Z | https://github.com/rio-labs/rio/issues/60 | [
"bug"
] | Sn3llius | 1 |
cvat-ai/cvat | computer-vision | 8,454 | Generalized 'Oriented Rectangle' with polygon for OpenCV:TrackerMIL | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
Currently, polygons can only be tracked by interpolation between key frames; they lack the semi-automatic tracking capability that 'Oriented Rectangle' gets when clicking the OpenCV icon in the left sidebar and selecting Tracking: TrackerMIL. This makes labeling for segmentation much more difficult.
The OpenCV tracker should allow users to select any number of points to form a polygon (or polyline) instead of being restricted to an (oriented) rectangle. An (oriented) rectangle is just the special case of 4 points forming a rectangle; relaxing that restriction and letting the user manually click a number of points to form a polygon would be perfect, because segmentation requires the higher precision of polygons.
### Describe the solution you'd like
When clicking the OpenCV icon in the left sidebar and selecting Tracking: TrackerMIL, OpenCV should allow users to select a number of points to form a polygon (or polyline), instead of restricting input to an (oriented) rectangle, which is just a special case of a polygon.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | open | 2024-09-18T18:03:16Z | 2024-10-09T04:57:11Z | https://github.com/cvat-ai/cvat/issues/8454 | [
"enhancement"
] | JuliusLin | 2 |
vvbbnn00/WARP-Clash-API | flask | 118 | [Bug] unsupported proxy type | 
| closed | 2024-03-01T05:28:56Z | 2024-03-01T05:30:06Z | https://github.com/vvbbnn00/WARP-Clash-API/issues/118 | [] | D3ngX1n | 0 |
dask/dask | numpy | 11,133 | I'm not sure what 'b_dict' is, I couldn't find any relevant content | - https://docs.dask.org/en/stable/_modules/dask/bag/core.html#to_textfiles
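My current guess (an assumption on my part, since the docs never define it): `b_dict` in the snippet quoted below is just a `dask.bag.Bag` whose elements are dictionaries, e.g.:
```python
import json
import dask.bag as db

# A bag of plain dicts; presumably what the docs call "b_dict".
b_dict = db.from_sequence([{"id": 1, "x": 0.5}, {"id": 2, "x": 1.5}])

# Serialize each dict to a JSON string and write one file per partition.
b_dict.map(json.dumps).to_textfiles("/tmp/data/*.json")
```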
```
b_dict.map(json.dumps).to_textfiles("/path/to/data/*.json")
``` | closed | 2024-05-20T13:20:17Z | 2024-05-21T06:16:58Z | https://github.com/dask/dask/issues/11133 | [
"needs triage"
] | simplew2011 | 1 |
MycroftAI/mycroft-core | nlp | 3,164 | ./dev_setup.sh problem | My problem is that after downloading mycroft, even though I have already run ./dev_setup.sh and installed it, it asks me to run ./dev_setup.sh again when I try to start mycroft.
I'm using Linux (penguin) on a Chromebook.
Here is the terminal output:
```
fatihgztk@penguin:~/mycroft-core$ ./start-mycroft.sh all
Please update dependencies by running ./dev_setup.sh again.
fatihgztk@penguin:~/mycroft-core$ ./dev_setup.sh
Welcome to Mycroft!
This script is designed to make working with Mycroft easy. During this
first run of dev_setup we will ask you a few questions to help setup
your environment.
Do you want to run on 'master' or against a dev branch? Unless you are
a developer modifying mycroft-core itself, you should run on the
'master' branch. It is updated bi-weekly with a stable release.
Y)es, run on the stable 'master' branch
N)o, I want to run unstable branches
Choice [Y/N]: tee: /var/log/mycroft/setup.log: Permission denied
Y - using 'master' branch
fatihgztk@penguin:~/mycroft-core$ ./start-mycroft.sh all
Please update dependencies by running ./dev_setup.sh again.
fatihgztk@penguin:~/mycroft-core$
```
| closed | 2024-07-20T06:04:48Z | 2024-09-08T08:14:42Z | https://github.com/MycroftAI/mycroft-core/issues/3164 | [
"bug"
] | Krsth | 2 |
django-import-export/django-import-export | django | 1,275 | import_fields attribute | I did not find a way to specify `import_fields`. This would be helpful for exporting all columns (e.g. for analytics) while importing only a subset of fields.
It might work to override `get_import_fields()` to check whether an `import_fields` attribute is set, use it if so, and otherwise fall back to `get_fields()` (see the sketch below).
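A minimal sketch of the idea (assuming a hypothetical `BookResource`; the attribute handling is illustrative of the proposal, not the library's current behavior):
```python
from import_export import resources
from myapp.models import Book  # placeholder model

class BookResource(resources.ModelResource):
    # Proposed: import only a subset of the exported columns.
    import_fields = ("id", "name")

    class Meta:
        model = Book
        fields = ("id", "name", "price")  # full set, used for export

    def get_import_fields(self):
        # Use the subset if declared, otherwise fall back to all fields.
        if getattr(self, "import_fields", None):
            return [f for f in self.get_fields()
                    if f.attribute in self.import_fields]
        return self.get_fields()
```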
If that's something you'd like, I can provide a PR.
| closed | 2021-04-20T08:32:02Z | 2021-11-24T06:23:53Z | https://github.com/django-import-export/django-import-export/issues/1275 | [
"question"
] | jensneuhaus | 2 |
widgetti/solara | flask | 626 | `solara.Meta` adds an empty span to DOM | This can have unexpected consequences for spacing when, for instance, using flex-boxes with a set gap, like the ones that are automatically added if an element has more than one direct child.
An example of the issue can be found in the linked PR. | open | 2024-05-01T11:57:35Z | 2024-07-04T09:54:38Z | https://github.com/widgetti/solara/issues/626 | [] | iisakkirotko | 1 |
numba/numba | numpy | 9,084 | CUDA kernels should not make const copies of global and closure variables | Noted whilst investigating https://numba.discourse.group/t/avoid-multiple-copies-of-large-numpy-array-in-closure/2017
In many cases Numba will copy global and closure variables as constant arrays inside jitted functions and kernels (and will always attempt to do this for kernels). This is a problem because const memory is extremely limited in CUDA. The following simple example:
```python
from numba import cuda
import numpy as np
N = 100000
closed_array = np.ones(N)
@cuda.jit(cache=True)
def kernel(r, x):
r[0] = closed_array[x]
r = np.zeros(1)
kernel[1, 1](r, 2)
print(r[0])
```
will fail with the error:
```
numba.cuda.cudadrv.driver.LinkerError: [218] Call to cuLinkAddData results in CUDA_ERROR_INVALID_PTX
ptxas error : File uses too much global constant data (0xc3500 bytes, 0x10000 max)
```
Since we have [a mechanism for users to specify when a const array should be created](https://numba.readthedocs.io/en/stable/cuda/memory.html#constant-memory), constant arrays should never be implicitly created in CUDA kernels, and they should always be opted-in to. Making this change will not be a breaking change, because Numba makes no guarantee about whether a copy is made or not: https://numba.readthedocs.io/en/stable/reference/pysemantics.html#global-and-closure-variables
> Numba may or may not copy global variables referenced inside a compiled function.
The following change provides a proof-of-concept that demonstrates creating references to arrays instead of const copies:
```diff
diff --git a/numba/core/base.py b/numba/core/base.py
index 1c0a5ee5a..0728a731e 100644
--- a/numba/core/base.py
+++ b/numba/core/base.py
@@ -1086,7 +1086,7 @@ class BaseContext(object):
llvoidptr = self.get_value_type(types.voidptr)
addr = self.get_constant(types.uintp, intaddr).inttoptr(llvoidptr)
# Use a unique name by embedding the address value
- symname = 'numba.dynamic.globals.{:x}'.format(intaddr)
+ symname = 'numba_dynamic_globals_{:x}'.format(intaddr)
gv = cgutils.add_global_variable(mod, llvoidptr, symname)
# Use linkonce linkage to allow merging with other GV of the same name.
# And, avoid optimization from assuming its value.
diff --git a/numba/cuda/target.py b/numba/cuda/target.py
index 492f375f6..7b44da626 100644
--- a/numba/cuda/target.py
+++ b/numba/cuda/target.py
@@ -69,6 +69,7 @@ VALID_CHARS = re.compile(r'[^a-z0-9]', re.I)
class CUDATargetContext(BaseContext):
implement_powi_as_math_call = True
strict_alignment = True
+ allow_dynamic_globals = True
def __init__(self, typingctx, target='cuda'):
super().__init__(typingctx, target)
@@ -292,43 +293,20 @@ class CUDATargetContext(BaseContext):
Unlike the parent version. This returns a a pointer in the constant
addrspace.
"""
-
- lmod = builder.module
-
- constvals = [
- self.get_constant(types.byte, i)
- for i in iter(arr.tobytes(order='A'))
- ]
- constaryty = ir.ArrayType(ir.IntType(8), len(constvals))
- constary = ir.Constant(constaryty, constvals)
-
- addrspace = nvvm.ADDRSPACE_CONSTANT
- gv = cgutils.add_global_variable(lmod, constary.type, "_cudapy_cmem",
- addrspace=addrspace)
- gv.linkage = 'internal'
- gv.global_constant = True
- gv.initializer = constary
-
- # Preserve the underlying alignment
- lldtype = self.get_data_type(aryty.dtype)
- align = self.get_abi_sizeof(lldtype)
- gv.align = 2 ** (align - 1).bit_length()
-
- # Convert to generic address-space
- ptrty = ir.PointerType(ir.IntType(8))
- genptr = builder.addrspacecast(gv, ptrty, 'generic')
-
# Create array object
- ary = self.make_array(aryty)(self, builder)
+ dataptr = arr.device_ctypes_pointer.value
+ data = self.add_dynamic_addr(builder, dataptr, info=str(type(dataptr)))
+ rt_addr = self.add_dynamic_addr(builder, id(arr), info=str(type(arr)))
kshape = [self.get_constant(types.intp, s) for s in arr.shape]
kstrides = [self.get_constant(types.intp, s) for s in arr.strides]
- self.populate_array(ary, data=builder.bitcast(genptr, ary.data.type),
+ cary = self.make_array(aryty)(self, builder)
+ self.populate_array(cary, data=builder.bitcast(data, cary.data.type),
shape=kshape,
strides=kstrides,
- itemsize=ary.itemsize, parent=ary.parent,
+ itemsize=arr.dtype.itemsize, parent=rt_addr,
meminfo=None)
- return ary._getvalue()
+ return cary._getvalue()
def insert_const_string(self, mod, string):
"""
```
This does require the example to be modified so that the data is already on the device:
```python
from numba import cuda
import numpy as np
N = 100000
closed_array = cuda.to_device(np.ones(N))
@cuda.jit(cache=True)
def kernel(r, x):
r[0] = closed_array[x]
r = np.zeros(1)
kernel[1, 1](r, 2)
print(r[0])
```
And now works as expected:
```
$ python repro.py
1.0
```
To complete the implementation, the following considerations need to be addressed:
- Caching needs to be disabled for kernels with references to globals and closure variables. The docstring for `BaseContext.add_dynamic_addr()` suggests that addition of a dynamic address will disable caching, but this does not seem to be the case for CUDA at least.
- Making copies of data to / from the device at launch time for global and closure variables on the host needs to be considered.
- CUDA documentation should clarify that const copies are never automatically made, and indicate the const array constructor for their explicit construction.
- Tests need to be added for both global and closure variable cases
- Any other memory management considerations as required (e.g. across multiple devices, avoidance of leaks when making implicit copies from the host, etc.) | open | 2023-07-21T16:00:24Z | 2024-10-01T14:49:39Z | https://github.com/numba/numba/issues/9084 | [
"CUDA",
"bug - incorrect behavior"
] | gmarkall | 9 |
sunscrapers/djoser | rest-api | 802 | "Authentication credentials were not provided" on log out TokenAuthentication | Hi,
I keep getting the following error when I log out with `TokenAuthentication`: `Authentication credentials were not provided.`.
This is the request I'm making:
`curl -X POST http://localhost:8000/auth/token/logout/ --data 'f4c30aed2268e8d952a742e82fe24b012766fe5f' -H 'Authorization: Token f4c30aed2268e8d952a742e82fe24b012766fe5f'`
Any idea why this is happening?
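One thing I've been checking (just a guess on my part, not a confirmed cause): whether `TokenAuthentication` is actually enabled in the DRF settings, e.g.:
```python
# settings.py (sketch)
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework.authentication.TokenAuthentication",
    ),
}
```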
| closed | 2024-03-11T04:49:42Z | 2024-03-31T04:35:46Z | https://github.com/sunscrapers/djoser/issues/802 | [] | engin-can | 1 |
inventree/InvenTree | django | 8,410 | Environmental variable formatting (e.g. INVENTREE_TRUSTED_ORIGINS). | ### Describe the bug*
This issue has been reported due to my misconception about the proper formatting of env variables.
InvenTree's documentation refers to the Django documentation for the `INVENTREE_TRUSTED_ORIGINS` variable.
For Django the variable format is a Python list, while in the .env file it should be just a comma-separated list of URLs.
I incorrectly assumed that this was the proper format:
`INVENTREE_TRUSTED_ORIGINS=['https://inventree.example.com:8443','https://stock.example.com:8443']`
While it should look like this:
`INVENTREE_TRUSTED_ORIGINS='https://inventree.example.com:8443,https://stock.example.com:8443'`
### Deployment Method
- [x] Docker
- [ ] Package
- [ ] Bare metal
- [ ] Other - added info in Steps to Reproduce
### Version Information
InvenTree-Version: 0.17.0 dev
Django Version: 4.2.16
Commit Hash: cc6a2f4
Commit Date: 2024-11-02
Database: postgresql
Debug-Mode: False
Deployed using Docker: True
Platform: Linux-6.1.0-25-amd64-x86_64-with
Installer: DOC
Active plugins: [{'name': 'InvenTreeBarcode', 'slug': 'inventreebarcode', 'version': '2.1.0'}, {'name': 'InvenTreeCoreNotificationsPlugin', 'slug': 'inventreecorenotificationsplugin', 'version': '1.0.0'}, {'name': 'InvenTreeCurrencyExchange', 'slug': 'inventreecurrencyexchange', 'version': '1.0.0'}, {'name': 'InvenTreeLabel', 'slug': 'inventreelabel', 'version': '1.1.0'}, {'name': 'InvenTreeLabelMachine', 'slug': 'inventreelabelmachine', 'version': '1.0.0'}, {'name': 'InvenTreeLabelSheet', 'slug': 'inventreelabelsheet', 'version': '1.0.0'}, {'name': 'DigiKeyPlugin', 'slug': 'digikeyplugin', 'version': '1.0.0'}, {'name': 'LCSCPlugin', 'slug': 'lcscplugin', 'version': '1.0.0'}, {'name': 'MouserPlugin', 'slug': 'mouserplugin', 'version': '1.0.0'}, {'name': 'TMEPlugin', 'slug': 'tmeplugin', 'version': '1.0.0'}, {'name': 'Brother Labels', 'slug': 'brother', 'version': '1.0.0'}]
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
_No response_ | closed | 2024-11-02T12:36:21Z | 2024-11-16T05:15:30Z | https://github.com/inventree/InvenTree/issues/8410 | [
"question",
"setup",
"documentation"
] | mjktfw | 3 |
deezer/spleeter | tensorflow | 860 | [Discussion] How to use spleeter with multi gpus? | <!-- Please respect the title [Discussion] tag. -->
I want to use spleeter to process a large amount of video data, and I want to use multiple GPUs to speed up the processing. Are multiple GPUs currently supported?
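In case it helps, the workaround I'm currently considering (my own assumption, not a documented spleeter feature) is to shard the files across one CLI process per GPU, pinned via `CUDA_VISIBLE_DEVICES`:
```python
import os
import subprocess

files = ["a.mp4", "b.mp4", "c.mp4", "d.mp4"]  # placeholder inputs
num_gpus = 2

procs = []
for gpu in range(num_gpus):
    shard = files[gpu::num_gpus]  # round-robin shard for this GPU
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    procs.append(subprocess.Popen(
        ["spleeter", "separate", "-p", "spleeter:2stems", "-o", "out", *shard],
        env=env,
    ))

for p in procs:
    p.wait()  # wait for all shards to finish
```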
Thank you very much. | open | 2023-07-06T09:38:36Z | 2023-07-21T22:42:07Z | https://github.com/deezer/spleeter/issues/860 | [
"question"
] | Zth9730 | 3 |
mljar/mljar-supervised | scikit-learn | 550 | explain mode with metric_type=accuracy results seems abnormal? | Hi @pplonski
I used mljar AutoML on a medical dataset (task: binary_classification).
Mode selected=explain mode
metric_type=accuracy
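Roughly this setup (a sketch; my actual data loading is omitted):
```python
from supervised.automl import AutoML

automl = AutoML(mode="Explain", eval_metric="accuracy")
automl.fit(X_train, y_train)  # X_train / y_train: my medical dataset
```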
The results seem abnormal, as shown in the figure below:

All the metric_value entries were the same and didn't match the real values listed in each algorithm's folder (example shown below):

I also found that the metric_value 0.825506 appears to be taken from the file "learner_fold_0_training.log":

Could you help?
Thanks!
| open | 2022-06-23T05:34:49Z | 2022-11-21T14:44:22Z | https://github.com/mljar/mljar-supervised/issues/550 | [] | Tonywhitemin | 7 |
ivy-llc/ivy | numpy | 28,044 | Wrong keyword argument `name` in `ivy.remainder()` function call | In the following line, a `name` keyword argument is passed:
https://github.com/unifyai/ivy/blob/bec4752711c314f01298abc3845f02c24a99acab/ivy/functional/frontends/tensorflow/variable.py#L191
However, the actual function definition accepts no such argument:
https://github.com/unifyai/ivy/blob/8ff497a8c592b75f010160b313dc431218c2b475/ivy/functional/ivy/elementwise.py#L5415-L5422 | closed | 2024-01-25T14:03:42Z | 2024-01-25T14:51:02Z | https://github.com/ivy-llc/ivy/issues/28044 | [] | Sai-Suraj-27 | 0 |