| repo_name (string, 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976) | body (string, 0-254k) | state (string, 2 classes) | created_at (string, 20) | updated_at (string, 20) | url (string, 38-105) | labels (list, 0-9) | user_login (string, 1-39) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
scrapy/scrapy | web-scraping | 6,271 | Add an extra-deps job for pypy | After https://github.com/scrapy/scrapy/pull/6269 pypy gets brotlicffi support. However, it currently runs in all pypy jobs in CI. It would be best to have a separate CI job for pypy+extra-deps, same as we have for other scenarios, so that we know that Scrapy works well in pypy without installing brotlicffi or any other extra. | closed | 2024-03-06T13:24:33Z | 2025-01-23T08:22:19Z | https://github.com/scrapy/scrapy/issues/6271 | [
"CI",
"cleanup"
] | Gallaecio | 0 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,466 | Passwords containing special Danish letters make passwords for the recovery key impossible to retrieve | ### What version of GlobaLeaks are you using?
4.11.5
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Windows
### Describe the issue
We have a Danish admin user who has created a password containing the Danish letter "Å". The password works for login, but it does not work for creating an Account Recovery Key - login fails and the user is returned to the password screen.
When omitting the special letter, login works and the Account Recovery Key can be retrieved.
### Proposed solution
It seems like the verification of the passwords is using two different schemes | closed | 2023-06-01T06:49:09Z | 2023-12-13T18:51:57Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3466 | [
"T: Bug",
"C: Backend"
] | schris-dk | 40 |
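A plausible mechanism for the "two different schemes" hypothesis above is inconsistent Unicode normalization: "Å" can be encoded as one precomposed code point (NFC) or as "A" plus a combining ring (NFD), and the two forms hash differently. The sketch below is illustrative Python, not GlobaLeaks code, and the password is a made-up example:

```python
import hashlib
import unicodedata

# Hypothetical password containing the Danish letter "Å" (illustrative only)
password = "adg\u00c5ng"

nfc = unicodedata.normalize("NFC", password)  # precomposed U+00C5
nfd = unicodedata.normalize("NFD", password)  # "A" + U+030A combining ring

# Visually identical strings, but different code-point sequences and bytes:
h_nfc = hashlib.sha256(nfc.encode("utf-8")).hexdigest()
h_nfd = hashlib.sha256(nfd.encode("utf-8")).hexdigest()
print(nfc == nfd)      # False
print(h_nfc == h_nfd)  # False
```

If the login path normalizes one way and the recovery-key path the other, the same typed password would verify in one place and fail in the other, which matches the reported behavior.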
ultralytics/yolov5 | pytorch | 12,453 | when the epoch increased to a certain value, the loss increased rapidly | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
When training the YOLOv5 detection model, I ran into a strange problem: when the epoch count reached a certain value, the loss increased rapidly.
The training command is as follows; the rest are the default parameters:
`python train.py --img 640 --batch 128 --epoch 100 --data data/mydata.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt --device '0,1,2,3,4,5,6,7'`
There are about 12,000 samples in the dataset.

### Additional
_No response_ | closed | 2023-12-01T09:11:33Z | 2024-10-20T19:33:05Z | https://github.com/ultralytics/yolov5/issues/12453 | [
"question"
] | C-hongfei | 4 |
home-assistant/core | asyncio | 141,243 | tuya integration, the double digital meter device | ### The problem
The device in the screenshot below shows everything working in the Smart Life application:



However, the integration says "This device has no objects." Is it possible to add the objects of this device to the Tuya integration?
### What version of Home Assistant Core has the issue?
2025.3.2
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-23T20:05:05Z | 2025-03-23T21:01:33Z | https://github.com/home-assistant/core/issues/141243 | [
"integration: tuya",
"feature-request"
] | sergt1978 | 2 |
Esri/arcgis-python-api | jupyter | 1,710 | Help Installing Neural Network Package | I tried to install a neural network package for generating images, but during installation I get the error shown in the screenshots below. What is the problem?


| closed | 2023-11-02T21:19:56Z | 2023-11-08T12:57:29Z | https://github.com/Esri/arcgis-python-api/issues/1710 | [
"question"
] | Gitminorkes | 2 |
deepset-ai/haystack | nlp | 8,862 | Improve Type Validation in Pipelines: Configurable Strictness and Errors vs. Warnings | **Is your feature request related to a problem? Please describe.**
Currently, Haystack enforces strict type checking for pipeline connection validation, meaning users cannot run a pipeline if their type annotations do not align exactly with the expected types. While this validation is intended to help users catch potential issues early, it can be overly restrictive—especially for advanced users—leading to unintuitive errors and forcing workarounds like bypassing the pipeline run method. Additionally, the current implementation does not allow users to configure the strictness level, and it is unclear how best to align with best practices from other Python libraries like Pydantic, FastAPI, or Typer.
**Describe the solution you'd like**
Introduce configurable options for type validation in pipeline connections:
1. **Strict vs. lax type comparison** – Allow users to choose whether type checking should be strict (e.g., `Optional[str] → str` fails) or more permissive (e.g., `Optional[str] → str` passes).
2. **Error vs. warning vs. disable option** – Give users the ability to configure whether type validation should raise an error, issue a warning, or be disabled entirely.
3. **Alignment with broader ecosystem** – Investigate how established Python libraries handle similar type validation scenarios and determine if there are best practices or patterns that Haystack should adopt.
**Additional context**
Looser type validation (e.g., allowing `Optional[str]` to be passed where `str` is expected) can make Haystack more user-friendly while still providing helpful validation for common mistakes. Making type checking configurable ensures flexibility for different use cases, from beginner-friendly strict validation to more advanced, customizable behavior.
Also related to these issues raised by the community
- https://github.com/deepset-ai/haystack/issues/8524
- https://github.com/deepset-ai/haystack/issues/8494
cc @mathislucka who made a more permissive version of the type checker in haystack-experimental when creating SuperComponents | closed | 2025-02-14T13:33:49Z | 2025-03-03T15:11:44Z | https://github.com/deepset-ai/haystack/issues/8862 | [
"P1"
] | sjrl | 0 |
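As a rough illustration of point 1, a lax comparison could unwrap `Optional`/`Union` on the sender side before comparing. The sketch below is not Haystack's actual validator, just a minimal model of the proposed behavior:

```python
from typing import Optional, Union, get_args, get_origin

def types_compatible(sender, receiver, strict: bool = True) -> bool:
    """Return True if a connection sender -> receiver should be allowed."""
    if sender == receiver:
        return True
    if not strict and get_origin(sender) is Union:
        # Lax mode: Optional[str] -> str passes if any non-None member matches.
        return any(
            types_compatible(arg, receiver, strict=False)
            for arg in get_args(sender)
            if arg is not type(None)
        )
    return False

print(types_compatible(Optional[str], str, strict=True))   # False
print(types_compatible(Optional[str], str, strict=False))  # True
```

Point 2 (error vs. warning vs. disabled) could then be a thin wrapper around this check that chooses between `raise`, `warnings.warn`, or a no-op based on a configuration flag.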
keras-team/keras | machine-learning | 21,068 | "PyDataset: init does not call super().init(**kwargs) causing workers, use_multiprocessing, and max_queue_size parameters to be ignored." | I'm encountering this warning when using `ImageDataGenerator.flow_from_dataframe`:
```
/usr/local/lib/python3.11/dist-packages/keras/src/trainers/data_adapters/py_dataset_adapter.py:121: UserWarning: Your `PyDataset` class should call `super().__init__(**kwargs)` in its constructor. `**kwargs` can include `workers`, `use_multiprocessing`, `max_queue_size`. Do not pass these arguments to `fit()`, as they will be ignored
```
I'm not subclassing PyDataset directly.
Proposed Fix:
Update the constructor of the relevant PyDataset subclass to call `super().__init__(**kwargs)`, so that keyword arguments (including workers, use_multiprocessing, and max_queue_size) are initialized.
Environment:
Platform: Google Colab T4 GPU High-Ram
Operating System: Ubuntu 22.04.4 LTS
Python version: 3.11.11
numpy version: 2.0.2
pandas version: 2.2.2
tensorflow version: 2.18.0
keras 3.8.0
keras-hub 0.18.1
keras-nlp 0.18.1
tf_keras 2.18.0
Thanks, Keras team, for your time on this! | open | 2025-03-19T16:44:36Z | 2025-03-19T17:00:50Z | https://github.com/keras-team/keras/issues/21068 | [
"type:Bug"
] | slspencer | 0 |
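For reference, the pattern the warning asks for looks like this. The base class below is a simplified stand-in for `keras.utils.PyDataset`, so the attribute handling is illustrative rather than Keras's real implementation:

```python
class PyDataset:  # simplified stand-in for keras.utils.PyDataset
    def __init__(self, workers=1, use_multiprocessing=False, max_queue_size=10):
        self.workers = workers
        self.use_multiprocessing = use_multiprocessing
        self.max_queue_size = max_queue_size

class MyDataset(PyDataset):
    def __init__(self, data, **kwargs):
        # Forward the kwargs so workers/use_multiprocessing/max_queue_size
        # actually reach the base class instead of being silently ignored.
        super().__init__(**kwargs)
        self.data = data

ds = MyDataset([1, 2, 3], workers=4, use_multiprocessing=True)
print(ds.workers, ds.use_multiprocessing)  # 4 True
```

Any internal Keras class that subclasses `PyDataset` without the `super().__init__(**kwargs)` call would trigger the warning in the same way.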
Lightning-AI/pytorch-lightning | pytorch | 20,173 | loss spikes in validation step when the model has multiple losses applied | ### Bug description
I have a model that has multiple losses applied. see the code following:
```
# [B, H, W, 128]
x_bin_feature = F.softmax(x_bin_pred, dim=3)
# [B, H, W, 128]
y_bin_feature = F.softmax(y_bin_pred, dim=3)
# [B, H, W, 128]
z_bin_feature = F.softmax(z_bin_pred, dim=3)

# [B, H, W, 384]
original_size_feature_map = torch.cat([x_bin_feature, y_bin_feature, z_bin_feature], dim=3)
# [B, 384, H, W]
original_size_feature_map = torch.permute(
    original_size_feature_map, (0, 3, 1, 2)
)

pnp_feature = self._pnp_head(original_size_feature_map)
pnp_feature = pnp_feature.reshape(pnp_feature.shape[0], -1)
pnp_feature = self._pnp_neck(pnp_feature)
outputs["trans"] = self._pnp_trans_head(pnp_feature)
outputs["rot"] = self._pnp_rot_head(pnp_feature)

coordinate_x_loss = self._coordinate_x_loss(
    x_bin_pred.view(-1, 128),
    x_bin_gt.long().view(-1),
)
coordinate_y_loss = self._coordinate_y_loss(
    y_bin_pred.view(-1, 128),
    y_bin_gt.long().view(-1),
)
coordinate_z_loss = self._coordinate_z_loss(
    z_bin_pred.view(-1, 128),
    z_bin_gt.long().view(-1),
)
coordinate_loss = coordinate_x_loss + coordinate_y_loss + coordinate_z_loss

trans_label, rot_label = stack_pose_labels(pose_labels)
pm_loss = self._pm_loss(
    rotation_matrix_preds,
    rotation_matrix_gt,
    all_points,
    outputs["trans"],
    trans_label,
)
```
I used the same validation set as the training set and trained the model as usual. The losses drop consistently during training steps, but every time a validation step runs, the loss goes extremely high. After the validation step, the training loss keeps dropping, unaffected by the validation step:

When I remove one of the losses from my code, the model trains normally.
Is there something wrong with how I use multiple losses? Why is the training loss so different from the validation loss?
### What version are you seeing the problem on?
v1.x
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_ | open | 2024-08-06T23:54:14Z | 2024-08-07T15:09:55Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20173 | [
"question"
] | RainRoboforce | 1 |
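One way to narrow this down is to log each loss term separately so you can see which component spikes at validation time. The sketch below is framework-free plain Python rather than Lightning's `self.log`, and the numbers are made up for illustration:

```python
def combine_losses(coordinate_loss, pm_loss, weights=(1.0, 1.0)):
    """Weighted sum of the two loss terms, returning the parts for logging."""
    parts = {"coordinate": coordinate_loss, "pm": pm_loss}
    total = sum(w * v for w, v in zip(weights, parts.values()))
    return total, parts

# Illustrative values: the PM loss dominates unless it is down-weighted.
total, parts = combine_losses(0.8, 12.5, weights=(1.0, 0.1))
print(parts)  # {'coordinate': 0.8, 'pm': 12.5}
```

If one component only spikes in validation, a common culprit is anything whose behavior differs between train and eval mode, such as BatchNorm running statistics; logging the parts separately makes that visible.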
mljar/mljar-supervised | scikit-learn | 615 | improving versions of numba | The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details. | closed | 2023-05-05T07:22:54Z | 2023-06-26T11:17:27Z | https://github.com/mljar/mljar-supervised/issues/615 | [] | adrianblazeusz | 1 |
graphql-python/graphene | graphql | 714 | Ability to have one class per query instead of one method per query | ```
[x] Feature request
[x] Or - Docs request
```
```
graphene==2.1.0
```
Is it possible to have a pattern for queries similar to mutations?
This is to avoid long classes with multiple resolvers.
It would be great if this were possible.
```python
# mutation -- working example
class JwtRefreshMutation(graphene.Mutation):
    class Arguments:
        token = graphene.String(required=True)

    token = graphene.String()

    def mutate(self, info, **args):
        token = refresh_token(args['token'])
        return JwtRefreshMutation(token=token)


class Mutations(graphene.ObjectType):
    jwt_refresh = JwtRefreshMutation.Field()
```
```python
# query -- desired example
class JwtRefreshQuery(graphene.Query):
    class Arguments:
        token = graphene.String(required=True)

    token = graphene.String()

    def resolve(self, info, **args):
        token = refresh_token(args['token'])
        return JwtRefreshQuery(token=token)


class Query(graphene.ObjectType):
    jwt_refresh = JwtRefreshQuery.Field()
``` | closed | 2018-04-16T04:07:53Z | 2019-08-05T22:17:38Z | https://github.com/graphql-python/graphene/issues/714 | [
"wontfix"
] | un33k | 8 |
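Graphene has no built-in `graphene.Query` base class, but the desired shape can be approximated by collecting one class per query and attaching each `resolve` method onto a single `Query` type. The sketch below is framework-agnostic plain Python to show the wiring idea, not graphene's actual API:

```python
class JwtRefreshQuery:
    field_name = "jwt_refresh"

    @staticmethod
    def resolve(info, **args):
        return "refreshed-" + args["token"]  # stand-in for refresh_token()

def build_query(*query_classes):
    """Assemble a Query class with one resolve_<field> per query class."""
    attrs = {}
    for qc in query_classes:
        attrs["resolve_" + qc.field_name] = staticmethod(qc.resolve)
    return type("Query", (), attrs)

Query = build_query(JwtRefreshQuery)
print(Query.resolve_jwt_refresh(None, token="abc"))  # refreshed-abc
```

In real graphene code the field declaration (`jwt_refresh = graphene.String(...)`) would also need to be generated; the sketch only covers the resolver side.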
koxudaxi/datamodel-code-generator | pydantic | 1,904 | How to change the endpoint naming template? | I run the command:
```
datamodel-codegen --url http://localhost:8000/openapi.json --output src/server/models/language_server_rpc.py --openapi-scopes schemas paths tags parameters --output-model-type pydantic.BaseModel
```
And endpoint params are named like this: `ApiV2WorkspacesWorkspaceIdMapDependenciesPostResponse`
Is it possible to change that naming template? | open | 2024-04-07T17:27:43Z | 2024-04-07T17:27:43Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1904 | [] | funnydman | 0 |
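I'm not aware of a template option for this in the CLI, but one hedged workaround is to post-process the generated module and shorten the path-derived class names. The regex and the `shorten` heuristic below are illustrative assumptions, not a datamodel-code-generator feature, and a full solution would also have to rewrite references to the renamed classes:

```python
import re

generated = (
    "class ApiV2WorkspacesWorkspaceIdMapDependenciesPostResponse(BaseModel):\n"
    "    pass\n"
)

def shorten(name: str) -> str:
    # Keep only the last three CamelCase segments of the long endpoint name.
    parts = re.findall(r"[A-Z][a-z0-9]*", name)
    return "".join(parts[-3:])

def rename_classes(source: str) -> str:
    return re.sub(
        r"class ([A-Za-z0-9]+)\(",
        lambda m: "class {}(".format(shorten(m.group(1))),
        source,
    )

print(rename_classes(generated).splitlines()[0])
# class DependenciesPostResponse(BaseModel):
```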
explosion/spacy-streamlit | streamlit | 17 | ValueError E030 sentence boundaries unset | I am following the guidance on the spaCy website using the following code, but I get the error below:
```
models = ["modelout\model-best"]
default_text = "this sentence is a test"
spacy_streamlit.visualize(models, default_text)
```
ValueError: [E030] Sentence boundaries unset. You can add the 'sentencizer' component to the pipeline with: `nlp.add_pipe('sentencizer')`. Alternatively, add the dependency parser or sentence recognizer, or set sentence boundaries by setting `doc[i].is_sent_start`. | closed | 2021-04-10T19:33:51Z | 2021-04-10T19:44:26Z | https://github.com/explosion/spacy-streamlit/issues/17 | [] | natescrape | 1 |
huggingface/datasets | tensorflow | 7,276 | Accessing audio dataset value throws Format not recognised error | ### Describe the bug
Accessing an audio dataset value throws a `Format not recognised` error.
### Steps to reproduce the bug
**code:**
```py
from datasets import load_dataset
dataset = load_dataset("fawazahmed0/bug-audio")
for data in dataset["train"]:
print(data)
```
**output:**
```bash
(mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py
[C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed!
{'audio': {'path': 'C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037135.mp3', 'array': array([ 0.00000000e+00, -2.86519935e-22, -2.56504911e-21, ...,
-1.94239747e-02, -2.42924765e-02, -2.99104657e-02]), 'sampling_rate': 22050}, 'reciter': 'Ghamadi', 'transcription': 'الا عجوز ا في الغبرين', 'line': 3923, 'chapter': 37, 'verse': 135, 'text': 'إِلَّا عَجُوزࣰ ا فِي ٱلۡغَٰبِرِينَ'}
Traceback (most recent call last):
File "C:\Users\Nawaz-Server\Documents\ml\myest.py", line 5, in <module>
for data in dataset["train"]:
~~~~~~~^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\arrow_dataset.py", line 2372, in __iter__
formatted_output = format_table(
^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 639, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 403, in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 444, in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 222, in decode_row
return self.features.decode_example(row) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 2042, in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 1403, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\audio.py", line 184, in decode_example
array, sampling_rate = sf.read(f)
^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 285, in read
with SoundFile(file, 'r', samplerate, channels,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 658, in __init__
self._file = self._open(file, mode_int, closefd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 1216, in _open
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BufferedReader name='C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3'>: Format not recognised.
```
### Expected behavior
Everything should work fine, as loading the problematic audio file directly with the soundfile package works fine.
**code:**
```
import soundfile as sf
print(sf.read('C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3'))
```
**output:**
```bash
(mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py
[C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed!
(array([ 0.00000000e+00, -8.43723821e-22, -2.45370628e-22, ...,
-7.71464454e-03, -6.90496899e-03, -8.63333419e-03]), 22050)
```
### Environment info
- `datasets` version: 3.0.2
- Platform: Windows-11-10.0.22621-SP0
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.10.0
- soundfile: 0.12.1 | open | 2024-11-04T05:59:13Z | 2024-11-09T18:51:52Z | https://github.com/huggingface/datasets/issues/7276 | [] | fawazahmed0 | 3 |
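Until the underlying mp3 decoding issue is fixed, a generic workaround is to iterate by index and skip rows whose decode raises. The sketch below uses a fake dataset object so it stays independent of the `datasets` library; with a real dataset you would pass `dataset["train"]` instead:

```python
def iter_rows_skipping_errors(dataset):
    """Yield dataset rows, logging and skipping rows that fail to decode."""
    for i in range(len(dataset)):
        try:
            yield dataset[i]
        except Exception as exc:  # e.g. soundfile.LibsndfileError
            print(f"skipping row {i}: {exc}")

class FakeDataset:  # stand-in: row 1 raises like the broken mp3 above
    def __len__(self):
        return 3
    def __getitem__(self, i):
        if i == 1:
            raise RuntimeError("Format not recognised.")
        return {"row": i}

print([r["row"] for r in iter_rows_skipping_errors(FakeDataset())])  # [0, 2]
```

This trades completeness for robustness: the bad row is dropped rather than fixed, so it is only a stopgap while the decoder issue is investigated.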
onnx/onnxmltools | scikit-learn | 332 | Invalid model ONNX v4 and v5 | I trained the following model in Keras:
```python
model = Sequential([
    InputLayer(input_shape=[3, image_height, image_width]),
    Conv2D(filters=32, kernel_size=5, strides=1, padding='same'),
    BatchNormalization(),
    Activation('relu'),
    MaxPool2D(pool_size=8, padding='same'),
    Conv2D(filters=48, kernel_size=3, strides=1, padding='same'),
    BatchNormalization(),
    Activation('relu'),
    MaxPool2D(pool_size=5, padding='same'),
    Conv2D(filters=64, kernel_size=3, strides=1, padding='same'),
    BatchNormalization(),
    Activation('relu'),
    MaxPool2D(pool_size=3, padding='same'),
    Conv2D(filters=32, kernel_size=5, strides=1, padding='same'),
    Flatten(),
    Dense(100, activation='relu'),
    Dropout(0.1),
    Dense(12, activation='softmax')
])
```
Once training is finished, I convert it to ONNX v5 format with onnxmltools:
`onnxmltools.utils.save_model()`
When I now try to load this model with
```python
import onnxruntime as rt
import onnx
print(onnx.__version__)
print(rt.__version__)
sess = rt.InferenceSession("./driver.onnx")
```
I get the following output:
```
1.5.0
0.5.0
Traceback (most recent call last):
File "c:\Users\m\.vscode\extensions\ms-python.python-2019.6.24221\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "c:\Users\m\.vscode\extensions\ms-python.python-2019.6.24221\pythonFiles\lib\python\ptvsd\__main__.py", line 434, in main
run()
File "c:\Users\m\.vscode\extensions\ms-python.python-2019.6.24221\pythonFiles\lib\python\ptvsd\__main__.py", line 312, in run_file
runpy.run_path(target, run_name='__main__')
File "C:\Python36\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Python36\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\Users\m\Desktop\Self-driving-car\autcar\test.py", line 7, in <module>
sess = rt.InferenceSession("./driver_keras.onnx")
File "C:\Python36\lib\site-packages\onnxruntime\capi\session.py", line 29, in __init__
self._sess.load_model(path_or_bytes)
RuntimeError: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from ./driver.onnx failed:This is an invalid model. Error in Node:TFNodes_flatten_1_strided_slice : Node (TFNodes_flatten_1_strided_slice) has input size 1 not in range [min=3, max=5].
```
You can download driver.onnx [here](https://1drv.ms/u/s!AlibP17lhWs9g7UVlwLhwKr6mZPAig?e=V5GHMR).
Now, when you create the model with ONNX v4 it suddenly works! Take [this model](https://1drv.ms/u/s!AlibP17lhWs9g7UWCXflk1kqldAfMA?e=q3U0j4) for example. It has the exact same model definition, the only difference is the format (ONNX v4 instead of ONNX v5) | closed | 2019-08-17T09:05:45Z | 2019-08-26T22:43:42Z | https://github.com/onnx/onnxmltools/issues/332 | [] | christian-vorhemus | 2 |
whitphx/streamlit-webrtc | streamlit | 1,618 | Why do I get the error "AttributeError: 'NoneType' object has no attribute 'call_exception_handler'"? | You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://192.168.1.3:8501
Exception in callback Transaction.__retry()
handle: <TimerHandle when=1014459.718 Transaction.__retry()>
Traceback (most recent call last):
File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 1054, in sendto
self._sock.sendto(data, addr)
AttributeError: 'NoneType' object has no attribute 'sendto'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "C:\Users\*****\miniconda3\envs\conda39\lib\site-packages\aioice\stun.py", line 312, in __retry
self.__protocol.send_stun(self.__request, self.__addr)
File "C:\Users\*****\miniconda3\envs\conda39\lib\site-packages\aioice\ice.py", line 266, in send_stun
self.transport.sendto(bytes(message), addr)
File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 1064, in sendto
self._fatal_error(
File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 711, in _fatal_error
self._loop.call_exception_handler({
AttributeError: 'NoneType' object has no attribute 'call_exception_handler'
Exception in callback Transaction.__retry()
handle: <TimerHandle when=1014461.453 Transaction.__retry()>
Traceback (most recent call last):
File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 1054, in sendto
self._sock.sendto(data, addr)
AttributeError: 'NoneType' object has no attribute 'sendto'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "C:\Users\*****\miniconda3\envs\conda39\lib\site-packages\aioice\stun.py", line 312, in __retry
self.__protocol.send_stun(self.__request, self.__addr)
File "C:\Users\*****\miniconda3\envs\conda39\lib\site-packages\aioice\ice.py", line 266, in send_stun
self.transport.sendto(bytes(message), addr)
File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 1064, in sendto
self._fatal_error(
File "C:\Users\*****\miniconda3\envs\conda39\lib\asyncio\selector_events.py", line 711, in _fatal_error
self._loop.call_exception_handler({
AttributeError: 'NoneType' object has no attribute 'call_exception_handler'
My code:
```
import threading
from typing import Union
import av
import cv2
import numpy as np
import streamlit as st
import tensorflow as tf
import os
import argparse
import sys
import time
import importlib.util
```
```
from streamlit_webrtc import (
    VideoProcessorBase,
    VideoTransformerBase,
    WebRtcMode,
    webrtc_streamer,
    RTCConfiguration,
)

RTC_CONFIGURATION = RTCConfiguration(
    {"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]}
)

PATH_TO_MODEL = "./models/detectObject/model.tflite"
PATH_TO_LABELS = "./models/detectObject/labels.txt"
```
```
@st.cache_resource
def load_tf_lite_model():
    try:
        interpreter = tf.lite.Interpreter(model_path=PATH_TO_MODEL)
        interpreter.allocate_tensors()
        return interpreter
    except ValueError as ve:
        print("Error loading the TensorFlow Lite model:", ve)
        exit()


@st.cache_resource
def load_labels():
    with open(PATH_TO_LABELS, "r") as f:
        labels = [line.strip() for line in f.readlines()]
    return labels


def detect_capture(image):
    interpreter = load_tf_lite_model()
    labels = load_labels()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    height = input_details[0]["shape"][1]
    width = input_details[0]["shape"][2]
    float_input = input_details[0]["dtype"] == np.float32

    image_resized = cv2.resize(image, (width, height))
    input_data = np.expand_dims(image_resized, axis=0)
    input_mean = 127.5
    input_std = 127.5

    # Normalize pixel values
    if float_input:
        input_data = (np.float32(input_data) - input_mean) / input_std

    # Perform the actual detection
    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()

    # Retrieve detection results
    boxes = interpreter.get_tensor(output_details[1]["index"])[0]
    classes = interpreter.get_tensor(output_details[3]["index"])[0]
    scores = interpreter.get_tensor(output_details[0]["index"])[0]

    st.write("Kelas : {} \n"
             "Score : {} \n"
             "Boxes : {}".format(classes, scores, boxes))

    imH, imW, _ = image.shape
    for i in range(len(scores)):
        if (scores[i] > 0.05) and (scores[i] <= 1.0):
            ymin = int(max(1, (boxes[i][0] * imH)))
            xmin = int(max(1, (boxes[i][1] * imW)))
            ymax = int(min(imH, (boxes[i][2] * imH)))
            xmax = int(min(imW, (boxes[i][3] * imW)))

            cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)

            # Draw label
            object_name = labels[int(classes[i])]
            label = "%s: %d%%" % (
                object_name,
                int(scores[i] * 100),
            )
            labelSize, baseLine = cv2.getTextSize(
                label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2
            )
            label_ymin = max(ymin, labelSize[1] + 10)
            cv2.rectangle(
                image,
                (xmin, label_ymin - labelSize[1] - 10),
                (xmin + labelSize[0], label_ymin + baseLine - 10),
                (255, 255, 255),
                cv2.FILLED,
            )
            cv2.putText(
                image,
                label,
                (xmin, label_ymin - 7),
                cv2.FONT_HERSHEY_SIMPLEX,
                0.7,
                (0, 0, 0),
                2,
            )

    print("Kelas : {} \n"
          "Score : {} \n"
          "Boxes : {}".format(classes, scores, boxes))

    return image
```
```
class VideoTransformer(VideoTransformerBase):
    frame_lock: threading.Lock
    out_image: Union[np.ndarray, None]

    def __init__(self) -> None:
        self.frame_lock = threading.Lock()
        self.out_image = None

    def transform(self, frame: av.VideoFrame) -> np.ndarray:
        interpreter = load_tf_lite_model()
        labels = load_labels()
        input_mean = 127.5
        input_std = 127.5

        # get model details
        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()
        height = input_details[0]["shape"][1]
        width = input_details[0]["shape"][2]
        float_input = input_details[0]["dtype"] == np.float32

        out_image = frame.to_ndarray(format="bgr24")
        imH, imW, _ = out_image.shape
        image_resized = cv2.resize(out_image, (width, height))
        input_data = np.expand_dims(image_resized, axis=0)

        # Normalize pixel values
        if float_input:
            input_data = (np.float32(input_data) - input_mean) / input_std

        # Perform the actual detection
        interpreter.set_tensor(input_details[0]["index"], input_data)
        interpreter.invoke()

        # Retrieve detection results
        boxes = interpreter.get_tensor(output_details[1]["index"])[0]
        classes = interpreter.get_tensor(output_details[3]["index"])[0]
        scores = interpreter.get_tensor(output_details[0]["index"])[0]

        for i in range(len(scores)):
            if (scores[i] > 0.05) and (scores[i] <= 1.0):
                ymin = int(max(1, (boxes[i][0] * imH)))
                xmin = int(max(1, (boxes[i][1] * imW)))
                ymax = int(min(imH, (boxes[i][2] * imH)))
                xmax = int(min(imW, (boxes[i][3] * imW)))

                cv2.rectangle(out_image, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)

                # Draw label
                object_name = labels[int(classes[i])]
                label = "%s: %d%%" % (
                    object_name,
                    int(scores[i] * 100),
                )
                labelSize, baseLine = cv2.getTextSize(
                    label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2
                )
                label_ymin = max(ymin, labelSize[1] + 10)
                cv2.rectangle(
                    out_image,
                    (xmin, label_ymin - labelSize[1] - 10),
                    (xmin + labelSize[0], label_ymin + baseLine - 10),
                    (255, 255, 255),
                    cv2.FILLED,
                )
                cv2.putText(
                    out_image,
                    label,
                    (xmin, label_ymin - 7),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.7,
                    (0, 0, 0),
                    2,
                )

        with self.frame_lock:
            self.out_image = out_image

        print("Shape : {}".format(out_image.shape))
        return out_image
```
```
def realtime_video_detection():
    info = st.empty()
    info.markdown("First, click on :blue['START'] to use webcam")

    ctx = webrtc_streamer(
        key="object_detection",
        mode=WebRtcMode.SENDRECV,
        rtc_configuration=RTC_CONFIGURATION,
        video_processor_factory=VideoTransformer,
        media_stream_constraints={"video": True, "audio": False},
        async_processing=True,
    )

    if ctx.video_transformer:
        info.markdown("Click on :blue['SNAPSHOT'] to take a picture")
        snap = st.button("SNAPSHOT")
        if snap:
            if ctx.video_transformer.out_image is not None:
                with ctx.video_transformer.frame_lock:
                    out_image = ctx.video_transformer.out_image.copy()
                st.write("Sebelum:")
                st.image(out_image, channels="BGR")
                image = detect_capture(out_image)
                st.write("Sesudah:")
                st.image(image, channels="BGR")


if __name__ == "__main__":
    realtime_video_detection()
```
| closed | 2024-05-13T11:40:56Z | 2024-11-15T12:40:32Z | https://github.com/whitphx/streamlit-webrtc/issues/1618 | [] | Yudhass | 12 |
gradio-app/gradio | python | 10,034 | Support examples for gr.Gallery | ### Describe the bug
Hi Gradio Development Team,
I suspect there may be an issue with the `Examples` mechanism when using the `gr.Gallery` component. The same `Examples` implementation works perfectly with the `gr.Image` component. Here's a detailed explanation of the issue:
Recently, I updated my Gradio application by replacing the `gr.Image` component with `gr.Gallery`. However, this resulted in a `PermissionError: [Errno 13] Permission denied: 'C:\\my\\path'`.
Upon investigation, it appears that the issue may be related to the `component.as_example(ex)` function in `gradio\components\dataset.py`.
To debug, I added a print statement in the `__init__` method of `dataset.py`. Below are the console logs for comparison:
**When using `gr.Image`, the console log shows:**
<details>
component:<gradio.components.image.Image object at 0x00000215AB195E40>
ex:power.jpg
component.as_example(ex):path='power.jpg' url=None size=None orig_name='power.jpg' mime_type=None is_stream=False meta={'_type': 'gradio.FileData'}
</details>
**When using `gr.Gallery`, the console log shows:**
<details>
component:<gradio.components.gallery.Gallery object at 0x000001CEE1667070>
ex:power.jpg
component.as_example(ex):root=[GalleryImage(image=FileData(path='p', url=None, size=None, orig_name='p', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='o', url=None, size=None, orig_name='o', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='w', url=None, size=None, orig_name='w', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='e', url=None, size=None, orig_name='e', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='r', url=None, size=None, orig_name='r', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='.', url=None, size=None, orig_name='', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='j', url=None, size=None, orig_name='j', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='p', url=None, size=None, orig_name='p', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='g', url=None, size=None, orig_name='g', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None)]
Traceback (most recent call last):
File "C:\my\path\app.py", line 469, in <module>
main()
File "C:\my\path\app.py", line 449, in main
gr.Examples(
File "C:\my\path\venv\lib\site-packages\gradio\helpers.py", line 56, in create_examples
examples_obj = Examples(
File "C:\my\path\venv\lib\site-packages\gradio\helpers.py", line 264, in __init__
self.dataset = components.Dataset(
File "C:\my\path\venv\lib\site-packages\gradio\component_meta.py", line 179, in wrapper
return fn(self, **kwargs)
File "C:\my\path\venv\lib\site-packages\gradio\components\dataset.py", line 117, in __init__
processing_utils.move_files_to_cache(
File "C:\my\path\venv\lib\site-packages\gradio\processing_utils.py", line 516, in move_files_to_cache
return client_utils.traverse(
File "C:\my\path\venv\lib\site-packages\gradio_client\utils.py", line 1009, in traverse
new_obj.append(traverse(item, func, is_root))
File "C:\my\path\venv\lib\site-packages\gradio_client\utils.py", line 1004, in traverse
new_obj[key] = traverse(value, func, is_root)
File "C:\my\path\venv\lib\site-packages\gradio_client\utils.py", line 1000, in traverse
return func(json_obj)
File "C:\my\path\venv\lib\site-packages\gradio\processing_utils.py", line 490, in _move_to_cache
temp_file_path = block.move_resource_to_block_cache(payload.path)
File "C:\my\path\venv\lib\site-packages\gradio\blocks.py", line 347, in move_resource_to_block_cache
temp_file_path = processing_utils.save_file_to_cache(
File "C:\my\path\venv\lib\site-packages\gradio\processing_utils.py", line 277, in save_file_to_cache
temp_dir = hash_file(file_path)
File "C:\my\path\venv\lib\site-packages\gradio\processing_utils.py", line 206, in hash_file
with open(file_path, "rb") as f:
PermissionError: [Errno 13] Permission denied: 'C:\\my\\path'
</details>
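The per-character `GalleryImage` entries in the log above look like the example string is being iterated element by element. A minimal, framework-free sketch (the helper name is hypothetical, not Gradio's actual code) reproduces the symptom:

```python
def as_example_like(value):
    # A gallery-style component expects a list of images, so a naive
    # implementation wraps each element of its input in its own entry.
    return [{"path": item} for item in value]

# A list of filenames behaves as intended: one entry per file.
print(as_example_like(["power.jpg"]))

# A bare string is iterated character by character, mirroring the
# nine single-character FileData paths seen in the Gallery log.
print(as_example_like("power.jpg"))
```

If that is indeed the root cause, normalizing a string example into a one-element list before it reaches `Gallery.as_example` would avoid the per-character split; this is speculation on my side.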
Could you please help investigate and confirm this behavior? Thank you!
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def main():
with gr.Blocks() as demo:
with gr.Column():
#image = gr.Image(type="pil", image_mode="RGBA", label="Input")
gallery = gr.Gallery(columns=5, rows=5, show_share_button=False, interactive=True, height="500px", label="Input")
gr.Examples(
[["power.jpg"]],
inputs=[
gallery,
],
)
demo.queue(max_size=10)
demo.launch(inbrowser=True)
if __name__ == "__main__":
main()
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
The testing environment is Windows 10 with Python 3.10.9 and Gradio 5.6.0.
```
### Severity
Blocking usage of gradio | open | 2024-11-25T08:33:59Z | 2024-12-01T06:03:16Z | https://github.com/gradio-app/gradio/issues/10034 | [
"bug",
"enhancement"
] | avan06 | 2 |
healthchecks/healthchecks | django | 1,120 | Custom Email Alerts | Any chance of being able to customise the email alerts in the UI?
We'd like to dynamically format the alert messages per project | closed | 2025-01-30T03:36:12Z | 2025-02-14T12:55:16Z | https://github.com/healthchecks/healthchecks/issues/1120 | [] | comnam90 | 1 |
huggingface/datasets | pytorch | 6,545 | `image` column not automatically inferred if image dataset only contains 1 image | ### Describe the bug
By default, the standard image dataset loader maps `file_name` to `image` when loading an image dataset.
However, if the dataset contains only 1 image, this mapping does not take place.
### Steps to reproduce the bug
Input
(dataset with one image `multimodalart/repro_1_image`)
```py
from datasets import load_dataset
dataset = load_dataset("multimodalart/repro_1_image")
dataset
```
Output:
```py
DatasetDict({
train: Dataset({
features: ['file_name', 'prompt'],
num_rows: 1
})
})
```
Input
(dataset with 2+ images `multimodalart/repro_2_image`)
```py
from datasets import load_dataset
dataset = load_dataset("multimodalart/repro_2_image")
dataset
```
Output:
```py
DatasetDict({
train: Dataset({
features: ['image', 'prompt'],
num_rows: 2
})
})
```
### Expected behavior
Expected to map `file_name` → `image` for all dataset sizes, including 1.
### Environment info
Both latest main and 2.16.0 | closed | 2023-12-30T16:17:29Z | 2024-01-09T13:06:31Z | https://github.com/huggingface/datasets/issues/6545 | [] | apolinario | 0 |
gee-community/geemap | jupyter | 1,190 | geemap.ee_vector_style() bug | I had an error with geemap.ee_vector_style().
It does seem that
```
12770 if isinstance(color, str):
12771 color = [color] * size
```
should be assigned before the length check that raises the error:
```
12767 if size != len(color):
12768 raise ValueError("labels and color must be the same length.")
```
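To illustrate, here is a minimal, Earth-Engine-free sketch of the suggested reordering (`broadcast_colors` is an illustrative name, not geemap's actual function):

```python
def broadcast_colors(labels, color):
    """Reordered logic from the issue: broadcast a scalar color string
    across all labels *before* the length check runs."""
    size = len(labels)
    if isinstance(color, str):
        color = [color] * size
    if size != len(color):
        raise ValueError("labels and color must be the same length.")
    return color

# A single color string no longer trips the length check:
print(broadcast_colors(["a", "b"], "#008837"))
# A genuinely mismatched list still raises, as intended:
try:
    broadcast_colors(["a", "b"], ["#008837"])
except ValueError as e:
    print(e)
```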
Cheers,
Daniel
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [14], in <cell line: 13>()
5 colors = [
6 "#008837",
7 "#7FC31C",
8 "#13007C",
9 "#B28653",
10 ]
12 fillColor = [c + "A8" for c in colors]
---> 13 styled_cea = geemap.ee_vector_style(
14 cea, column='CEA'
15 )
File H:\Dropbox\1. ACADEMIC\MODELS\DayCentAU\Miniconda\envs\dca\lib\site-packages\geemap\common.py:12768, in ee_vector_style(collection, column, labels, color, pointSize, pointShape, width, fillColor, lineType, neighborhood, return_fc)
12766 size = len(labels)
12767 if size != len(color):
> 12768 raise ValueError("labels and color must be the same length.")
12770 if isinstance(color, str):
12771 color = [color] * size
ValueError: labels and color must be the same length.
``` | closed | 2022-08-09T05:02:36Z | 2022-08-09T12:52:45Z | https://github.com/gee-community/geemap/issues/1190 | [
"bug"
] | Daniel-Trung-Nguyen | 2 |
huggingface/transformers | deep-learning | 36,294 | multi_modality error | ### System Info
traceback (most recent call last):
File "/home/ashim/miniconda3/envs/captioning/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1092, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/home/ashim/miniconda3/envs/captioning/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 794, in __getitem__
raise KeyError(key)
KeyError: 'multi_modality'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ashim/projects/caption_images/main.py", line 116, in <module>
"decoder": AutoModelForCausalLM.from_pretrained(
File "/home/ashim/miniconda3/envs/captioning/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 526, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/home/ashim/miniconda3/envs/captioning/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1094, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
(captioning) ashim@usm-research-vm:~/projects/caption_images$ python
Python 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
>>> transformers.__version__
'4.50.0.dev0'
>>>
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, Qwen2ForCausalLM

# Fragment from a larger list of model configs:
models = [
    # Qwen-VL - Use original model
    {
        "processor": "Ertugrul/Qwen2-VL-7B-Captioner-Relaxed",
        "decoder": Qwen2ForCausalLM.from_pretrained(
            "Ertugrul/Qwen2-VL-7B-Captioner-Relaxed",
            trust_remote_code=True,
        ),
    },
    # DeepSeek Janus - Use as causal LM
    {
        "processor": "deepseek-ai/Janus-Pro-7B",
        "decoder": AutoModelForCausalLM.from_pretrained(
            "deepseek-ai/Janus-Pro-7B",
            trust_remote_code=True,
        ),
    },
]
```
### Expected behavior
just load the two models as part of the dictionary and move on | closed | 2025-02-20T05:17:23Z | 2025-02-20T16:26:14Z | https://github.com/huggingface/transformers/issues/36294 | [
"bug"
] | ashimdahal | 3 |
pydata/xarray | pandas | 9,665 | open_datatree(group='some_subgroup') returning parent nodes | ### What is your issue?
@aladinor Noticed this during a demo a few meetings back but I don't think we followed up on this.
If you have a `DataTree` of this shape.
```
<xarray.DataTree>
Group: /
│ Dimensions: (lat: 1, lon: 2)
│ Dimensions without coordinates: lat, lon
│ Data variables:
│ root_variable (lat, lon) float64 16B ...
└── Group: /Group1
│ Dimensions: (lat: 1, lon: 2)
│ Dimensions without coordinates: lat, lon
│ Data variables:
│ group_1_var (lat, lon) float64 16B ...
└── Group: /Group1/subgroup1
Dimensions: (lat: 1, lon: 2)
Dimensions without coordinates: lat, lon
Data variables:
subgroup1_var (lat, lon) float64 16B ...
```
When you specify a path with `group=`, you still get a nested tree, but with empty groups for the groups that were not specified.
```
In [1]: open_datatree('filename.nc', engine='netcdf4', group='/Group1/subgroup1')
Out [1]:
<xarray.DataTree>
Group: /
└── Group: /Group1
└── Group: /Group1/subgroup1
Dimensions: (lat: 1, lon: 2)
Dimensions without coordinates: lat, lon
Data variables:
subgroup1_var (lat, lon) float64 16B ...
```
I thought the expected result would be to only return the specified group with all of its child nodes (if it has any), something like:
```
<xarray.DataTree>
Group: /Group1/subgroup1
Dimensions: (lat: 1, lon: 2)
Dimensions without coordinates: lat, lon
Data variables:
subgroup1_var (lat, lon) float64 16B ...
```
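For illustration only (plain nested dicts, not xarray's actual DataTree machinery), the two behaviors differ like this:

```python
def open_group_current(tree, path):
    """Observed behavior: keep the (emptied) ancestor chain down to the group."""
    keys = [k for k in path.split("/") if k]
    node, out = tree, {}
    root = out
    for k in keys[:-1]:
        node = node[k]
        out[k] = {}
        out = out[k]
    out[keys[-1]] = node[keys[-1]]
    return root

def open_group_expected(tree, path):
    """Expected behavior: return the requested subtree itself."""
    node = tree
    for k in [k for k in path.split("/") if k]:
        node = node[k]
    return node

tree = {"Group1": {"subgroup1": {"subgroup1_var": "..."}}}
print(open_group_current(tree, "/Group1/subgroup1"))   # wrapper chain kept
print(open_group_expected(tree, "/Group1/subgroup1"))  # just the subtree
```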
CCing the usual squad @shoyer, @keewis, @TomNicholas, @owenlittlejohns, and @flamingbear | closed | 2024-10-23T17:33:20Z | 2024-10-24T21:00:35Z | https://github.com/pydata/xarray/issues/9665 | [
"bug",
"topic-backends",
"topic-DataTree"
] | eni-awowale | 5 |
ultralytics/ultralytics | pytorch | 19,389 | How to disable auto augument option in training yolov11 classification model? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
hello!
Among the automatic data augmentation options, **centercrop** is applied.
Please tell me how to explicitly remove this option.
I want to use only the letterbox option.
### Additional
_No response_ | closed | 2025-02-24T02:20:06Z | 2025-03-06T15:13:48Z | https://github.com/ultralytics/ultralytics/issues/19389 | [
"question",
"classify"
] | jyrainer | 12 |
pinry/pinry | django | 380 | Project Fork? | Has anyone forked this project? This is one my favorite self hosted apps, but I'm assuming it's dead. It'd be awesome to see it live on though. | open | 2024-04-13T14:18:37Z | 2024-12-03T16:29:07Z | https://github.com/pinry/pinry/issues/380 | [] | beboprocky | 10 |
glumpy/glumpy | numpy | 142 | GL_DEPTH_TEST not functioning | TLDR gl_depth_test is not properly enabled in the gloo-picking example. Would like to see how to make it work :smiley:
I was checking out the gloo-picking example, and realized GL_DEPTH_TEST is actually not working.
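To see why a missing depth test matters, here is a tiny software depth test in plain Python (purely illustrative, unrelated to glumpy's actual GL state): without a depth comparison, whichever point is drawn last wins, regardless of distance.

```python
def rasterize(points, depth_test):
    """points: (pixel, depth, label) tuples; smaller depth = closer."""
    color, zbuf = {}, {}
    for pixel, depth, label in points:
        if depth_test and pixel in zbuf and zbuf[pixel] <= depth:
            continue  # a nearer point already owns this pixel: occluded
        color[pixel] = label
        zbuf[pixel] = depth
    return color

# A near point is drawn first, then a far point lands on the same pixel.
pts = [(0, 0.2, "near"), (0, 0.9, "far")]
print(rasterize(pts, depth_test=False))  # {0: 'far'}: the far point wrongly wins
print(rasterize(pts, depth_test=True))   # {0: 'near'}: the near point occludes
```

In the actual example, the fix would presumably involve enabling `GL_DEPTH_TEST` (and making sure a depth buffer is requested and cleared) before drawing; the snippet only shows why its absence produces the symptom.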
This is not easily observable, but think about it carefully: since you are rotating a point cloud about its center, say your mouse is moving upward; the points in the half of the cloud that is closer to you should move in the same direction as your mouse, and __should occlude ALL points that are moving in the opposite direction__, which simply isn't the case right now. | closed | 2018-02-24T08:44:20Z | 2018-02-25T06:04:48Z | https://github.com/glumpy/glumpy/issues/142 | [] | tobyclh | 8 |
RobertCraigie/prisma-client-py | asyncio | 116 | Feature Parity with the TypeScript Client | - [ ] #10
- [x] #19
- [ ] #25
- [ ] #26
- [ ] #27
- [x] #28
- [ ] #31
- [x] #39
- [x] #42
- [x] #52
- [x] #53
- [x] #54
- [ ] #64
- [ ] #76
- [ ] #103
- [x] #106
- [ ] #107
- [x] #134
- [ ] #314
- [x] #434
- [ ] #676
- [ ] #714
- [x] #719
- [ ] #816
- [x] #994
- [ ] #127 | open | 2021-11-13T10:49:39Z | 2024-08-27T18:27:11Z | https://github.com/RobertCraigie/prisma-client-py/issues/116 | [
"kind/epic",
"level/advanced",
"priority/high"
] | RobertCraigie | 3 |
ionelmc/pytest-benchmark | pytest | 15 | Add benchmark.weave | Add a `weave` attribute as a shorthand for the `benchmark_weave` fixture.
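A minimal sketch of what the shorthand could look like; the class and the keyword argument here are assumptions for illustration, not pytest-benchmark's real implementation:

```python
class BenchmarkFixture:
    """Sketch: expose weaving as an attribute of the benchmark fixture."""

    def __init__(self, weave_impl):
        # weave_impl stands in for whatever the separate
        # benchmark_weave fixture provides today.
        self._weave_impl = weave_impl

    def weave(self, target, **kwargs):
        # Shorthand: benchmark.weave(...) instead of a second fixture.
        return self._weave_impl(target, **kwargs)

bench = BenchmarkFixture(lambda target, **kw: ("patched", target, kw))
print(bench.weave("time.sleep", lazy=True))
# ('patched', 'time.sleep', {'lazy': True})
```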
| closed | 2015-08-03T22:55:18Z | 2015-08-10T00:32:05Z | https://github.com/ionelmc/pytest-benchmark/issues/15 | [] | ionelmc | 1 |
Miserlou/Zappa | flask | 2,113 | Forbidden message | ## Context
After successfully deploying my Flask application, I am having trouble accessing it. When I make a request using Postman, for example, I receive "Forbidden" as a message.
## Expected Behavior
After finishing the deployment, I received a link for my app and tried accessing the routes I programmed in Flask. I expected that, by calling it through Postman with the api_key added for authentication, I would be able to access my API.
## Actual Behavior
Instead, I am getting "Forbidden" as the response to my call.
## Possible Fix
After doing some digging here, I suspected that using the api_key feature could be the problem. Someone suggested that I should change the configuration in **AWS API Gateway > Custom Domain Names**, but there are no APIs registered there, which raised some red flags for me. Maybe there is a problem with my AWS permissions that is not allowing it to create the project on API Gateway? I don't know.

After suspecting that the api_key_required field could be the problem, I removed it. But instead, I got another error: "TypeError: 'NoneType' object is not callable". Now I am even more lost.
## Steps to Reproduce
1. zappa init
2. zappa deploy dev
## Your Environment
* Zappa version used:
* Operating System and Python version: MacOS 10.14 / Python 3.7
* The output of `pip freeze`:
```
argcomplete==1.11.1
blis==0.4.1
boto3==1.14.0
botocore==1.17.0
CacheControl==0.12.6
cachetools==4.1.0
catalogue==1.0.0
certifi==2020.4.5.2
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
cymem==2.0.3
docutils==0.15.2
durationpy==0.5
fasttext==0.9.2
firebase-admin==4.3.0
Flask==0.12.2
future==0.18.2
google-api-core==1.20.0
google-api-python-client==1.9.2
google-auth==1.16.1
google-auth-httplib2==0.0.3
google-cloud-core==1.3.0
google-cloud-firestore==1.7.0
google-cloud-storage==1.28.1
google-resumable-media==0.5.1
googleapis-common-protos==1.52.0
grpcio==1.29.0
gunicorn==20.0.4
hjson==3.0.1
httplib2==0.18.1
idna==2.9
importlib-metadata==1.6.1
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
kappa==0.6.0
MarkupSafe==1.1.1
msgpack==1.0.0
murmurhash==1.0.2
numpy==1.18.5
pandas==1.0.4
pip-tools==5.2.1
plac==1.1.3
placebo==0.9.0
preshed==3.0.2
protobuf==3.12.2
pt-core-news-sm==2.2.5
pyasn1==0.4.8
pyasn1-modules==0.2.8
pybind11==2.5.0
python-dateutil==2.6.1
python-slugify==4.0.0
pytz==2020.1
PyYAML==5.3.1
requests==2.23.0
rsa==4.0
s3transfer==0.3.3
six==1.15.0
spacy==2.2.4
srsly==1.0.2
text-unidecode==1.3
thinc==7.4.0
toml==0.10.1
tqdm==4.46.1
troposphere==2.6.1
uritemplate==3.0.1
urllib3==1.25.9
wasabi==0.6.0
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.51.0
zipp==3.1.0
```
* Your `zappa_settings.json`:
```
{
"dev": {
"app_function": "app.app",
"profile_name": "default",
"project_name": "default",
"runtime": "python3.7",
"s3_bucket": "XXXXX",
"aws_region": "sa-east-1",
"slim_handler": true,
"api_key_required": true
}
}
``` | open | 2020-06-11T05:42:00Z | 2020-11-24T12:22:27Z | https://github.com/Miserlou/Zappa/issues/2113 | [] | pexpert | 8 |
explosion/spaCy | data-science | 11,986 | spaCy 3.4 depends on numpy 1.15.0 which has known vulnerabilities | I ran the [new Google OSV scanner tool](https://github.com/google/osv-scanner) on a local copy of the spaCy repo, and it reports several vulnerabilities in the minimum numpy version:
```shell
➜ ~/go/bin/osv-scanner -r spaCy/
Scanning dir spaCy/
Scanning /home/ilab-user/spaCy/ at commit c9d9d6847f9685c21eeec01f4b8cd053cadf8bf5
Scanned /home/ilab-user/spaCy/requirements.txt file and found 36 packages
Scanned /home/ilab-user/spaCy/website/package-lock.json file and found 2295 packages
Scanned /home/ilab-user/spaCy/website/setup/requirements.txt file and found 2 packages
╭─────────────────────────────────┬───────────┬───────────────────────────┬─────────┬───────────────────────────────────────────────────╮
│ SOURCE │ ECOSYSTEM │ AFFECTED PACKAGE │ VERSION │ OSV URL (ID IN BOLD) │
├─────────────────────────────────┼───────────┼───────────────────────────┼─────────┼───────────────────────────────────────────────────┤
│ spaCy/requirements.txt │ PyPI │ numpy │ 1.15.0 │ https://osv.dev/vulnerability/GHSA-5545-2q6w-2gh6 │
│ │ │ │ │ https://osv.dev/vulnerability/PYSEC-2021-856 │
│ spaCy/requirements.txt │ PyPI │ numpy │ 1.15.0 │ https://osv.dev/vulnerability/GHSA-6p56-wp2h-9hxr │
│ spaCy/requirements.txt │ PyPI │ numpy │ 1.15.0 │ https://osv.dev/vulnerability/GHSA-f7c7-j99h-c22f │
│ │ │ │ │ https://osv.dev/vulnerability/PYSEC-2021-857 │
│ spaCy/requirements.txt │ PyPI │ numpy │ 1.15.0 │ https://osv.dev/vulnerability/GHSA-fpfv-jqm9-f5jm │
│ spaCy/requirements.txt │ PyPI │ numpy │ 1.15.0 │ https://osv.dev/vulnerability/PYSEC-2019-108 │
```
To eliminate all these vulnerabilities, a future release of spaCy should update this dependency to `numpy>=1.22.0`. | closed | 2022-12-16T16:07:05Z | 2023-02-09T00:02:12Z | https://github.com/explosion/spaCy/issues/11986 | [
"third-party"
] | dwvisser | 4 |
jschneier/django-storages | django | 822 | Cannot configure CDN for Azure | Hello there!
I want to create a CDN connection to cache data from Azure Storage for my website.
For that I want to use a SAS (Shared Access Signature) and not the account key.
The following parameters are being used:
AZURE_ACCOUNT_NAME = <name of the account>
AZURE_TOKEN_CREDENTIAL = <?sv=2020... > # The SAS key
AZURE_CUSTOM_DOMAIN = <https://_**mycdnname**_.azureedge.net>
AZURE_CACHE_CONTROL = "public,max-age=31536000,immutable"
DEFAULT_FILE_STORAGE = <file storage name>
STATICFILES_STORAGE = <name of the static folder>
These settings produce the following error:
` self.account_key should not be None.`
As already mentioned I want to use it **without** the AZURE_ACCOUNT_KEY and instead use SAS.
I also tried pasting the AZURE_TOKEN_CREDENTIAL value into AZURE_ACCOUNT_KEY, which results in 403 errors.
Thank you in advance!
| closed | 2020-01-30T15:47:29Z | 2023-09-04T21:20:10Z | https://github.com/jschneier/django-storages/issues/822 | [] | rhewid | 1 |
ryfeus/lambda-packs | numpy | 52 | Query | Hi. I have deployed your code:
lambda-packs/Selenium_Chromium/
on AWS lambda. It works well!!
But when I edit it to runa new website which contains a user login, it is not working. Can you help? | open | 2021-06-15T09:36:35Z | 2021-06-15T09:36:35Z | https://github.com/ryfeus/lambda-packs/issues/52 | [] | rajatanand86 | 0 |
ultralytics/ultralytics | pytorch | 19,339 | Latest version of OpenCV is not compatible with rtsp cameras | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Other
### Bug
This is not an `ultralytics` package bug.
This is an `opencv` bug I already reported [here](https://github.com/opencv/opencv-python/issues/1085).
`opencv-python==4.11.0.86` is unable to handle RTSP cameras.
It looks like versions 4.10.0.84 and lower are still able to process RTSP cameras.
Suggestion: It would be convenient to set `opencv-python!=4.11.0.86` in your requirements.
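For illustration, the exclusion could look like this in a pip requirements file (a sketch; the actual pin in ultralytics' requirements may differ):

```text
# keep the existing minimum, but skip the release that breaks RTSP capture
opencv-python>=4.6.0,!=4.11.0.86
```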
### Environment
```
yolo checks
Ultralytics 8.3.78 🚀 Python-3.11.0rc1 torch-2.6.0+cu124 CUDA:0 (NVIDIA GeForce RTX 4060 Laptop GPU, 7940MiB)
Setup complete ✅ (16 CPUs, 14.9 GB RAM, 722.6/937.3 GB disk)
OS Linux-6.5.0-45-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.11.0rc1
Install pip
RAM 14.86 GB
Disk 722.6/937.3 GB
CPU AMD Ryzen 7 7735HS with Radeon Graphics
CPU count 16
GPU NVIDIA GeForce RTX 4060 Laptop GPU, 7940MiB
GPU count 1
CUDA 12.4
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0>=1.8.0
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 7.0.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
```
yolo predict detect source=rtsp://user:pass@my_dns_of_the_camera:port/h264/ch1/main/av_stream
Ultralytics 8.3.78 🚀 Python-3.11.0rc1 torch-2.6.0+cu124 CUDA:0 (NVIDIA GeForce RTX 4060 Laptop GPU, 7940MiB)
YOLO11n summary (fused): 100 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs
[ WARN:0@31.632] global cap_ffmpeg_impl.hpp:453 _opencv_ffmpeg_interrupt_callback Stream timeout triggered after 30078.888534 ms
[ WARN:0@61.677] global cap_ffmpeg_impl.hpp:453 _opencv_ffmpeg_interrupt_callback Stream timeout triggered after 30037.034724 ms
Traceback (most recent call last):
File "/home/henry/.virtualenvs/cv2/bin/yolo", line 8, in <module>
sys.exit(entrypoint())
^^^^^^^^^^^^
File "/home/henry/.virtualenvs/cv2/lib/python3.11/site-packages/ultralytics/cfg/__init__.py", line 986, in entrypoint
getattr(model, mode)(**overrides) # default args from model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/henry/.virtualenvs/cv2/lib/python3.11/site-packages/ultralytics/engine/model.py", line 560, in predict
return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/henry/.virtualenvs/cv2/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 190, in predict_cli
for _ in gen: # sourcery skip: remove-empty-nested-block, noqa
File "/home/henry/.virtualenvs/cv2/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 36, in generator_context
response = gen.send(None)
^^^^^^^^^^^^^^
File "/home/henry/.virtualenvs/cv2/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 233, in stream_inference
self.setup_source(source if source is not None else self.args.source)
File "/home/henry/.virtualenvs/cv2/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 205, in setup_source
self.dataset = load_inference_source(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/henry/.virtualenvs/cv2/lib/python3.11/site-packages/ultralytics/data/build.py", line 208, in load_inference_source
dataset = LoadStreams(source, vid_stride=vid_stride, buffer=buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/henry/.virtualenvs/cv2/lib/python3.11/site-packages/ultralytics/data/loaders.py", line 135, in __init__
raise ConnectionError(f"{st}Failed to read images from {s}")
ConnectionError: 1/1: rtsp://user:pass@my_dns_of_the_camera:port/h264/ch1/main/av_stream... Failed to read images from rtsp://user:pass@my_dns_of_the_camera:port/h264/ch1/main/av_stream
```
### Additional
Test with any rtsp camera, results are the same.
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-20T15:45:58Z | 2025-02-21T04:08:43Z | https://github.com/ultralytics/ultralytics/issues/19339 | [
"dependencies",
"detect"
] | hdnh2006 | 2 |
Johnserf-Seed/TikTokDownload | api | 365 | What is the cookie format in the configuration file? [BUG] | I tried for a long time but still get this error:
UnboundLocalError: local variable 'response' referenced before assignment | open | 2023-03-23T09:14:48Z | 2023-03-27T02:10:38Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/365 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | wangshuaisss | 10 |
deepset-ai/haystack | machine-learning | 8,163 | feat: enhance documentation for DocumentCleaner | **Is your feature request related to a problem? Please describe.**
This PR #8103 adds two new parameters to DocumentCleaner. Hence, the documentation needs to be updated.
**Describe the solution you'd like**
Explanation of new params and order of execution in docs.
| closed | 2024-08-05T10:12:58Z | 2024-08-07T12:10:39Z | https://github.com/deepset-ai/haystack/issues/8163 | [
"P1"
] | Amnah199 | 4 |
slackapi/python-slack-sdk | asyncio | 1,364 | Add retry handler for 500s to WebClient by default | Include custom retry handler for Slack 500s by default similar to how `ConnectionErrorRetryHandler` is included by default in the `WebClient`. I think this would be a good addition because as a developer using the slack sdk I'd expect this to be the default behavior to prevent transient Slack server errors from raising a `SlackApiError` 😄.
### Category (place an `x` in each of the `[ ]`)
- [x] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [ ] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2023-05-04T20:50:11Z | 2023-05-17T04:04:51Z | https://github.com/slackapi/python-slack-sdk/issues/1364 | [
"enhancement",
"web-client",
"Version: 3x",
"good first issue"
] | digitalnomd | 3 |
Miserlou/Zappa | flask | 1,901 | Error when Zappa Deploy | ## Actual Behavior
Traceback (most recent call last):
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/zappa/core.py", line 924, in upload_to_s3
self.s3_client.head_bucket(Bucket=bucket_name)
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadBucket operation: Forbidden
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/zappa/cli.py", line 2779, in handle
sys.exit(cli.handle())
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/zappa/cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/zappa/cli.py", line 546, in dispatch_command
self.deploy(self.vargs['zip'])
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/zappa/cli.py", line 723, in deploy
self.zip_path, self.s3_bucket_name, disable_progress=self.disable_progress)
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/zappa/core.py", line 936, in upload_to_s3
CreateBucketConfiguration={'LocationConstraint': self.aws_region},
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/gimtaegsu/Documents/Django-Zappa/.env/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the CreateBucket operation: Access Denied
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1.
2.
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: lasted version
* Operating System and Python version: MAC OS, Python3.7
* Your `zappa_settings.py`:
```
{
    "dev": {
        "django_settings": "myproject.settings",
        "profile_name": "default",
        "aws_region": "ap-northeast-1",
        "manage_roles": false,
        "role_name": "my_arn_role_name",
        "role_arn": "my_role_arn_address",
        "project_name": "aws-django-board",
        "runtime": "python3.7",
        "s3_bucket": "lambda"
    }
}
```
| open | 2019-07-17T09:09:09Z | 2019-07-17T21:39:58Z | https://github.com/Miserlou/Zappa/issues/1901 | [] | taegsu | 1 |
HumanSignal/labelImg | deep-learning | 542 | What if all my test pics are the same size | Hi,
all the files in my test folder are the same size, and I wish to annotate the whole picture for every file.
How can I accomplish this? It feels very tedious to draw the rectangle every time. | open | 2020-01-13T18:10:23Z | 2020-01-13T18:10:23Z | https://github.com/HumanSignal/labelImg/issues/542 | [] | Mikkochu | 0 |
scikit-tda/kepler-mapper | data-visualization | 22 | Add Travis-CI to project. | By integrating CI, Travis will automatically run the test suite before any pull request.
I've added the .travis.yml settings file and tested everything so it works on my fork. From what I can tell, only the owner of the repo can integrate travis into the repo.
@MLWave, when you have a second, could you turn this on? It took me <3 minutes on my fork, just follow the first few steps [here](https://github.com/dwyl/learn-travis).
Once this is done, we can
- [x] Add the [build passing badge](https://docs.travis-ci.com/user/status-images/) to the README
- [x] Incorporate other tools to the repo:
- [pep8speaks](https://github.com/OrkoHunter/pep8speaks)
- [x] Add codecov | closed | 2017-11-16T05:25:20Z | 2017-12-20T23:43:37Z | https://github.com/scikit-tda/kepler-mapper/issues/22 | [] | sauln | 14 |
PokeAPI/pokeapi | graphql | 1036 | Provide PMD Portraits and Sprites | Given that Pokemon Conquest data and Pokemon Showdown sprites are already provided, an additional sprite variant that has no reason not to exist, given that spinoff and unofficial sprites are there, would be PMD portraits.
[PMDCollab/SpriteCollab](https://github.com/PMDCollab/SpriteCollab) is already doing much, but has no public API.
One requirement for using unofficial sprites here, though, is to credit the sprite creators. | open | 2024-02-08T07:11:25Z | 2024-02-22T17:38:14Z | https://github.com/PokeAPI/pokeapi/issues/1036 | [] | GreatNovaDragon | 4 |
modelscope/data-juicer | streamlit | 577 | Can data-juicer be understood as providing Ray Data with multimodal data processing capabilities? | ### Before Asking
- [x] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully.
- [x] I have pulled the latest code of the main branch to run again and the problem still exists.
### Search before asking
- [x] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions.
### Question
The documentation says that most of the operators provided by data-juicer can process data loaded as a `ray.data.Dataset`, after converting it to a Ray dataset. Which operators are these, specifically? And can a native `ray.data` Dataset be fed directly into data-juicer's operators?
### Additional
_No response_ | closed | 2025-02-16T12:25:01Z | 2025-02-21T08:57:41Z | https://github.com/modelscope/data-juicer/issues/577 | [
"question"
] | nihaoqingtuan | 1 |
coqui-ai/TTS | python | 3,783 | nvm it work fine just dll fixed work fine | nvm | closed | 2024-06-09T14:04:56Z | 2024-06-10T19:27:11Z | https://github.com/coqui-ai/TTS/issues/3783 | [
"feature request"
] | xalteropsx | 0 |
aminalaee/sqladmin | sqlalchemy | 614 | Admin does not detect nullable fields for SQLAlchemy v2 | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
I have nullable fields in my models, but when I add a new object through the admin, it complains and asks me to fill in all fields.
### Steps to reproduce the bug
```
class Country(Base):
__tablename__ = "core_country"
id: Mapped[intpk]
name: Mapped[notnullstr]
code3: Mapped[notnullstr] = mapped_column(String(3))
flag: Mapped[filetype]
currency: Mapped[notnullstr] = mapped_column(String(1))
lat: Mapped[str]
long: Mapped[str]
description: Mapped[str]
is_active: Mapped[bool]
def __str__(self):
return self.name
```
And I have:
```
intpk = Annotated[int, mapped_column(primary_key=True, index=True, unique=True)]
notnullstr = Annotated[str, mapped_column(nullable=False)]
```
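For context, SQLAlchemy 2.0's typed mappings derive nullability from the annotation: `Mapped[Optional[str]]` produces a nullable column, while plain `Mapped[str]` maps to NOT NULL, so declaring `lat: Mapped[Optional[str]]` may be the intended fix here (my reading of the SQLAlchemy 2.0 typing behavior, not confirmed against sqladmin's internals). The underlying check can be sketched with stdlib typing introspection:

```python
from typing import Optional, Union, get_args, get_origin

def is_nullable(annotation) -> bool:
    # Optional[X] is Union[X, None]; a column is treated as nullable
    # when NoneType appears among the union's arguments.
    return get_origin(annotation) is Union and type(None) in get_args(annotation)

print(is_nullable(Optional[str]))  # True
print(is_nullable(str))            # False
```

Under that rule, `lat: Mapped[str]` would legitimately be treated as required, which matches what the admin form is doing.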
### Expected behavior
The long, lat, and description fields should be optional and can be left blank.
### Actual behavior
It forces me to fill those fields:

### Debugging material
_No response_
### Environment
OS: Debian 11
Python: 3.11
SQLAdmin: 0.14.1
SQLAlchemy: 2.0.20
### Additional context
_No response_ | closed | 2023-09-07T17:25:03Z | 2023-09-08T09:31:56Z | https://github.com/aminalaee/sqladmin/issues/614 | [] | mmzeynalli | 2 |
nvbn/thefuck | python | 1,036 | Slow rule eval on WSL | The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
```sh
% thefuck --version
The Fuck 3.29 using Python 3.8.1 and ZSH 5.4.2
```
Your system (Debian 7, ArchLinux, Windows, etc.):
```sh
% uname -r
4.19.84-microsoft-standard
```

How to reproduce the bug:
```sh
# linuxbrew using latest.
% brew install -v thefuck
% echo 'eval $(thefuck --alias fuck)' >> ~/.zshrc
% source ~/.zshrc
# some repo
% gitstatus
zsh: command not found: gitstatus
# this takes like 30 seconds.
% fuck
```
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
```sh
% fuck --debug
DEBUG: Run with settings: {'alter_history': True,
'debug': True,
'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},
'exclude_rules': [],
'history_limit': None,
'instant_mode': True,
'no_colors': False,
'num_close_matches': 3,
'priority': {},
'repeat': False,
'require_confirmation': True,
'rules': [<const: All rules enabled>],
'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],
'user_dir': PosixPath('/c/Users/MikeLloyd/.config/thefuck'),
'wait_command': 3,
'wait_slow_command': 15}
[WARN] PS1 doesn't contain user command mark, please ensure that PS1 is not changed after The Fuck alias initialization
DEBUG: Importing rule: adb_unknown_command; took: 0:00:00.000184
DEBUG: Importing rule: ag_literal; took: 0:00:00.000366
DEBUG: Importing rule: apt_get; took: 0:00:00.001400
DEBUG: Importing rule: apt_get_search; took: 0:00:00.000299
DEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000714
DEBUG: Importing rule: apt_list_upgradable; took: 0:00:00.000370
DEBUG: Importing rule: apt_upgrade; took: 0:00:00.000391
DEBUG: Importing rule: aws_cli; took: 0:00:00.000239
DEBUG: Importing rule: az_cli; took: 0:00:00.000561
DEBUG: Importing rule: brew_cask_dependency; took: 0:00:00.000493
DEBUG: Importing rule: brew_install; took: 0:00:00.000110
DEBUG: Importing rule: brew_link; took: 0:00:00.000261
DEBUG: Importing rule: brew_reinstall; took: 0:00:00.000484
DEBUG: Importing rule: brew_uninstall; took: 0:00:00.000239
DEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000131
DEBUG: Importing rule: brew_update_formula; took: 0:00:00.000234
DEBUG: Importing rule: cargo; took: 0:00:00.000087
DEBUG: Importing rule: cargo_no_command; took: 0:00:00.000237
DEBUG: Importing rule: cat_dir; took: 0:00:00.000262
DEBUG: Importing rule: cd_correction; took: 0:00:00.001040
DEBUG: Importing rule: cd_mkdir; took: 0:00:00.000368
DEBUG: Importing rule: cd_parent; took: 0:00:00.000089
DEBUG: Importing rule: chmod_x; took: 0:00:00.000093
DEBUG: Importing rule: composer_not_command; took: 0:00:00.000252
DEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000365
DEBUG: Importing rule: cpp11; took: 0:00:00.000296
DEBUG: Importing rule: dirty_untar; took: 0:00:00.000876
DEBUG: Importing rule: dirty_unzip; took: 0:00:00.001614
DEBUG: Importing rule: django_south_ghost; took: 0:00:00.000222
DEBUG: Importing rule: django_south_merge; took: 0:00:00.000233
DEBUG: Importing rule: dnf_no_such_command; took: 0:00:00.085171
DEBUG: Importing rule: docker_login; took: 0:00:00.000305
DEBUG: Importing rule: docker_not_command; took: 0:00:00.083238
DEBUG: Importing rule: dry; took: 0:00:00.000133
DEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000454
DEBUG: Importing rule: fix_alt_space; took: 0:00:00.000304
DEBUG: Importing rule: fix_file; took: 0:00:00.001708
DEBUG: Importing rule: gem_unknown_command; took: 0:00:00.013975
DEBUG: Importing rule: git_add; took: 0:00:00.000560
DEBUG: Importing rule: git_add_force; took: 0:00:00.000243
DEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000315
DEBUG: Importing rule: git_branch_delete; took: 0:00:00.000273
DEBUG: Importing rule: git_branch_exists; took: 0:00:00.000307
DEBUG: Importing rule: git_branch_list; took: 0:00:00.000250
DEBUG: Importing rule: git_checkout; took: 0:00:00.000249
DEBUG: Importing rule: git_commit_amend; took: 0:00:00.000231
DEBUG: Importing rule: git_commit_reset; took: 0:00:00.000262
DEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000296
DEBUG: Importing rule: git_diff_staged; took: 0:00:00.000367
DEBUG: Importing rule: git_fix_stash; took: 0:00:00.000320
DEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000329
DEBUG: Importing rule: git_help_aliased; took: 0:00:00.000309
DEBUG: Importing rule: git_merge; took: 0:00:00.000267
DEBUG: Importing rule: git_merge_unrelated; took: 0:00:00.000251
DEBUG: Importing rule: git_not_command; took: 0:00:00.000336
DEBUG: Importing rule: git_pull; took: 0:00:00.000260
DEBUG: Importing rule: git_pull_clone; took: 0:00:00.000250
DEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000264
DEBUG: Importing rule: git_push; took: 0:00:00.000240
DEBUG: Importing rule: git_push_different_branch_names; took: 0:00:00.000227
DEBUG: Importing rule: git_push_force; took: 0:00:00.000231
DEBUG: Importing rule: git_push_pull; took: 0:00:00.000270
DEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000288
DEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000249
DEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000221
DEBUG: Importing rule: git_remote_delete; took: 0:00:00.000247
DEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000184
DEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000264
DEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000302
DEBUG: Importing rule: git_rm_staged; took: 0:00:00.000233
DEBUG: Importing rule: git_stash; took: 0:00:00.000233
DEBUG: Importing rule: git_stash_pop; took: 0:00:00.000260
DEBUG: Importing rule: git_tag_force; took: 0:00:00.000247
DEBUG: Importing rule: git_two_dashes; took: 0:00:00.000287
DEBUG: Importing rule: go_run; took: 0:00:00.000273
DEBUG: Importing rule: gradle_no_task; took: 0:00:00.001183
DEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000379
DEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000346
DEBUG: Importing rule: grep_recursive; took: 0:00:00.000372
DEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000639
DEBUG: Importing rule: gulp_not_task; took: 0:00:00.000453
DEBUG: Importing rule: has_exists_script; took: 0:00:00.000285
DEBUG: Importing rule: heroku_multiple_apps; took: 0:00:00.000331
DEBUG: Importing rule: heroku_not_command; took: 0:00:00.000280
DEBUG: Importing rule: history; took: 0:00:00.000137
DEBUG: Importing rule: hostscli; took: 0:00:00.000420
DEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000410
DEBUG: Importing rule: java; took: 0:00:00.000240
DEBUG: Importing rule: javac; took: 0:00:00.000303
DEBUG: Importing rule: lein_not_task; took: 0:00:00.000436
DEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000293
DEBUG: Importing rule: ln_s_order; took: 0:00:00.000463
DEBUG: Importing rule: long_form_help; took: 0:00:00.000143
DEBUG: Importing rule: ls_all; took: 0:00:00.000348
DEBUG: Importing rule: ls_lah; took: 0:00:00.000425
DEBUG: Importing rule: man; took: 0:00:00.000327
DEBUG: Importing rule: man_no_space; took: 0:00:00.000124
DEBUG: Importing rule: mercurial; took: 0:00:00.000345
DEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000105
DEBUG: Importing rule: mkdir_p; took: 0:00:00.000289
DEBUG: Importing rule: mvn_no_command; took: 0:00:00.000323
DEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000367
DEBUG: Importing rule: no_command; took: 0:00:00.000398
DEBUG: Importing rule: no_such_file; took: 0:00:00.000180
DEBUG: Importing rule: npm_missing_script; took: 0:00:00.063259
DEBUG: Importing rule: npm_run_script; took: 0:00:00.000315
DEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000378
DEBUG: Importing rule: open; took: 0:00:00.000335
DEBUG: Importing rule: pacman; took: 0:00:00.231445
DEBUG: Importing rule: pacman_not_found; took: 0:00:00.000160
DEBUG: Importing rule: path_from_history; took: 0:00:00.000129
DEBUG: Importing rule: php_s; took: 0:00:00.000312
DEBUG: Importing rule: pip_install; took: 0:00:00.000308
DEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000300
DEBUG: Importing rule: port_already_in_use; took: 0:00:00.000678
DEBUG: Importing rule: prove_recursively; took: 0:00:00.000411
DEBUG: Importing rule: pyenv_no_such_command; took: 0:00:00.080183
DEBUG: Importing rule: python_command; took: 0:00:00.000285
DEBUG: Importing rule: python_execute; took: 0:00:00.000244
DEBUG: Importing rule: quotation_marks; took: 0:00:00.000087
DEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000309
DEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000127
DEBUG: Importing rule: rm_dir; took: 0:00:00.000255
DEBUG: Importing rule: rm_root; took: 0:00:00.000272
DEBUG: Importing rule: scm_correction; took: 0:00:00.000289
DEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000260
DEBUG: Importing rule: sl_ls; took: 0:00:00.000130
DEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000262
DEBUG: Importing rule: sudo; took: 0:00:00.000120
DEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000261
DEBUG: Importing rule: switch_lang; took: 0:00:00.000167
DEBUG: Importing rule: systemctl; took: 0:00:00.000388
DEBUG: Importing rule: test.py; took: 0:00:00.000100
DEBUG: Importing rule: tmux; took: 0:00:00.000286
DEBUG: Importing rule: touch; took: 0:00:00.000509
DEBUG: Importing rule: tsuru_login; took: 0:00:00.000273
DEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000245
DEBUG: Importing rule: unknown_command; took: 0:00:00.000114
DEBUG: Importing rule: unsudo; took: 0:00:00.000108
DEBUG: Importing rule: vagrant_up; took: 0:00:00.000251
DEBUG: Importing rule: whois; took: 0:00:00.000368
DEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000312
DEBUG: Importing rule: yarn_alias; took: 0:00:00.000248
DEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.086066
DEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000458
DEBUG: Importing rule: yarn_help; took: 0:00:00.000263
DEBUG: Trying rule: dirty_unzip; took: 0:00:00.000111
No fucks given
DEBUG: Total took: 0:00:27.193200
```
If the bug only appears with a specific application, the output of that application and its version:
It seems to be slow regardless.
Anything else you think is relevant:
I'm not really sure what would cause it to evaluate slowly; each rule import and eval only takes a couple of milliseconds, so it's not the eval. It feels like there's a 20-second wait for it to spin up before it starts evaluating. | closed | 2020-01-19T17:46:49Z | 2024-04-20T06:29:43Z | https://github.com/nvbn/thefuck/issues/1036 | [
"help wanted",
"windows",
"HackIllinois"
] | siennathesane | 31 |
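One frequent culprit for delays like the one reported above on WSL (a hypothesis; the debug log times imports, not command matching) is that finding close command matches enumerates every executable on PATH, and WSL's PATH typically includes slow Windows mounts under /mnt/c. A quick stdlib check of whether PATH enumeration alone accounts for the delay:

```python
import os
import time

def time_path_scan():
    # Diagnostic: how long does it take merely to list every PATH directory?
    # On WSL, Windows mounts under /mnt/c on PATH can dominate this time.
    start = time.perf_counter()
    entries = 0
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        try:
            entries += len(os.listdir(directory))
        except OSError:
            continue
    return time.perf_counter() - start, entries

elapsed, entries = time_path_scan()
print(f"scanned {entries} PATH entries in {elapsed:.3f}s")
```

If this alone takes many seconds under WSL, trimming Windows directories from the Linux-side PATH is worth trying.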
ipython/ipython | jupyter | 13,948 | How do I redirect jupyter' input function to a file | <!-- This is the repository for IPython command line, if you can try to make sure this question/bug/feature belong here and not on one of the Jupyter repositories.
If it's a generic Python/Jupyter question, try other forums or discourse.jupyter.org.
If you are unsure, it's ok to post here, though, there are few maintainer so you might not get a fast response.
-->
```
import sys
sys.stdin = open("a.out","r")
x = input()
print(x)
```
I want to redirect input() to read from a file in a Jupyter notebook, but I found that it does not work on the Jupyter platform; the test code is shown above.
```
print(input)
<bound method Kernel.raw_input of <ipykernel.ipkernel.IPythonKernel object at 0x0000014888C61DE0>>
```
Then I printed input and found that Jupyter's input is the kernel's raw_input.
So how can I make input() read from the file rather than using sys.stdin.read()? (It runs correctly in a local environment.) | open | 2023-02-15T10:45:08Z | 2023-02-15T10:45:08Z | https://github.com/ipython/ipython/issues/13948 | [] | cloudann | 0 |
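One workaround that sidesteps the kernel's raw_input binding is to replace builtins.input itself rather than sys.stdin (a sketch; whether notebook cells pick this up depends on how ipykernel resolves the built-in, so treat it as an assumption to verify):

```python
import builtins
import io

# Workaround sketch: ipykernel binds input() to Kernel.raw_input and ignores
# sys.stdin, so replace the input built-in itself with a file-backed reader.
fake_stdin = io.StringIO("hello\nworld\n")
original_input = builtins.input
builtins.input = lambda prompt="": fake_stdin.readline().rstrip("\n")
try:
    first = input()
    second = input()
finally:
    builtins.input = original_input  # always restore the real input()

print(first, second)  # hello world
```

In a real notebook you would open the file instead of using io.StringIO, and restoring the original input afterwards is important so interactive prompts keep working.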
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 123 | Azure Example | Hi there, love the idea of the package! I'm looking to use my Azure instance to run this and I can't seem to find any examples of how to do it in the docs and haven't been able to get something working.
This is the starting point of my Azure-based LLM usage. How can I go from this to using this package with one of the models I have hosted there?
Thanks in advance.
```
from openai import AzureOpenAI
client = AzureOpenAI(
azure_endpoint=my_endpoint,
api_key=my_api_key,
api_version=my_api_version
)
message_text = [{"role":"system","content":"You are an AI assistant that helps people find information."}]
completion = client.chat.completions.create(
model="model_name",
messages=message_text,
...
)
response_str = completion.choices[0].message.content.strip()
``` | closed | 2024-05-01T14:14:06Z | 2024-05-03T14:30:17Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/123 | [
"enhancement",
"help wanted"
] | kmaurinjones | 3 |
davidteather/TikTok-Api | api | 717 | Is there an API to like some posts | I have gone through all the documentation but I am not able to find a like post API. If anyone has any idea about this, please let me know.
Also, since these APIs are not available in the official TikTok developer account, are they safe to use?
P.S. Wasn't sure where to post this issue other than the Issues tab. | closed | 2021-09-27T11:15:09Z | 2021-09-28T05:10:29Z | https://github.com/davidteather/TikTok-Api/issues/717 | [] | yashb042 | 3 |
Neoteroi/BlackSheep | asyncio | 14 | Client example code doesn't work, results in ImportError, and then TypeError | When running the client like this:
```
import asyncio
from blacksheep.client import ClientSession
async def client_example():
async with ClientSession() as client:
response = await client.get('https://docs.python.org/3/')
text = await response.text()
print(text)
loop = asyncio.get_event_loop()
loop.run_until_complete(client_example())
```
There is an ImportError:
```
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/server/application.py", line 22, in <module>
from blacksheep.middlewares import get_middlewares_chain
ImportError: cannot import name 'get_middlewares_chain' from 'blacksheep.middlewares' (/home/david/venv/api/lib/python3.7/site-packages/blacksheep/middlewares.py)
```
I don't understand why Python is giving this error, since the server example code runs fine, and I see the code for `get_middlewares_chain` in `middlewares.py` actually exists. But even trying to import it in an empty python console gets the same error.
I temporarily copied the code from `get_middlewares_chain` into `application.py` to try again, and there's no import error this time, but now there is a TypeError:
```
Traceback (most recent call last):
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/pool.py", line 58, in get_connection
return self._get_connection()
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/pool.py", line 41, in _get_connection
connection = self._idle_connections.get_nowait() # type: ClientConnection
File "/usr/lib/python3.7/asyncio/queues.py", line 182, in get_nowait
raise QueueEmpty
asyncio.queues.QueueEmpty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/data/src/ochous-api/asdf/await.py", line 13, in <module>
loop.run_until_complete(client_example())
File "uvloop/loop.pyx", line 1451, in uvloop.loop.Loop.run_until_complete
File "/mnt/data/src/ochous-api/asdf/await.py", line 7, in client_example
response = await client.get('https://docs.python.org/3/')
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/session.py", line 324, in get
None))
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/session.py", line 281, in send
return await self._handler(request)
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/middlewares.py", line 6, in middleware_wrapper
return await handler(request, next_handler)
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/cookies.py", line 285, in cookies_middleware
response = await next_handler(request)
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/session.py", line 130, in root_handler
return await self._send_core(request)
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/session.py", line 289, in _send_core
response = await self._send_using_connection(request)
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/session.py", line 302, in _send_using_connection
connection = await self.get_connection(request.url)
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/session.py", line 257, in get_connection
loop=self.loop)
File "/usr/lib/python3.7/asyncio/tasks.py", line 416, in wait_for
return fut.result()
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/pool.py", line 60, in get_connection
return await self.create_connection()
File "/venv/webapp/lib/python3.7/site-packages/blacksheep/client/pool.py", line 68, in create_connection
ssl=self.ssl)
File "uvloop/loop.pyx", line 1782, in create_connection
File "uvloop/sslproto.pyx", line 237, in uvloop.loop.SSLProtocol.__init__
TypeError: Expected unicode, got bytes
``` | closed | 2019-06-23T07:13:51Z | 2019-06-23T18:01:03Z | https://github.com/Neoteroi/BlackSheep/issues/14 | [] | dvtan | 1 |
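The final TypeError originates in uvloop's SSL protocol and usually means a bytes value was passed where a str hostname is required (my reading of the traceback, not a confirmed BlackSheep diagnosis). A tiny normalizer illustrating the kind of fix the client would need before calling create_connection:

```python
def ensure_str_hostname(host):
    # The traceback ends in uvloop's SSLProtocol with
    # "TypeError: Expected unicode, got bytes": the TLS server hostname
    # must be str, so normalize bytes hosts before opening the connection.
    return host.decode("ascii") if isinstance(host, bytes) else host

print(ensure_str_hostname(b"docs.python.org"))  # docs.python.org
print(ensure_str_hostname("docs.python.org"))   # docs.python.org
```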
nolar/kopf | asyncio | 252 | [PR] Peerings not found when zalando.org/v1 group is absent | > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2019-11-21 12:07:44+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/252
> Merged by [nolar](https://github.com/nolar) at _2019-11-21 12:53:25+00:00_
> Issue : #251
## Description
While testing a hotfix in #250 for #249, even without the KopfPeering CRDs defined, it happened that the group-version did exist, since the KopfExample CRD of the same `zalando.org/v1` group-version was defined.
When none of the `zalando.org/v1` CRDs exist, the operator fails with 404 on startup unless in `--standalone` mode. — **This affects the first user experience and a quick-start guide scenario.**
This PR fixes this issue: 404 (and 403) in the resource discovery, i.e. absence of the group-version, are treated same as the absence of the individual resources while the group-version exists.
In addition, this PR adds some tests that were missing in #250 due to urgency.
## Types of Changes
- Bug fix (non-breaking change which fixes an issue)
## Review
_List of tasks the reviewer must do to review the PR_
- [ ] Tests
| closed | 2020-08-18T20:01:53Z | 2020-08-23T20:52:46Z | https://github.com/nolar/kopf/issues/252 | [
"bug",
"archive"
] | kopf-archiver[bot] | 0 |
pywinauto/pywinauto | automation | 872 | Restore binding WindowSpecification to Application instead of process | It's broken in branch `atspi` (PR #848).
Unit tests must be written to make sure it will not be broken in the future. | closed | 2020-01-01T17:00:15Z | 2020-05-23T22:00:16Z | https://github.com/pywinauto/pywinauto/issues/872 | [
"bug",
"Priority-Critical",
"refactoring_critical",
"atspi"
] | vasily-v-ryabov | 0 |
plotly/plotly.py | plotly | 5,037 | Bug Report: Template updatemenus and annotations not transferred to figures | **Bug Report: Template updatemenus and annotations not transferred to figures**
**Description**
When defining `updatemenus` and `annotations` in a Plotly template, they are not properly transferred to figures that use the template. While other template properties (like colors, fonts, etc.) are transferred correctly, `updatemenus` and `annotations` are created as empty lists.
**Expected Behavior**
All layout properties defined in a template, including `updatemenus` and `annotations`, should be transferred to figures that use the template.
**Actual Behavior**
- `fig.layout.updatemenus` and `fig.layout.annotations` are created (not `None`)
- However, they are empty lists, losing all content defined in the template
- Other template properties (like `paper_bgcolor`) are transferred correctly
**Minimal Reproducible Example**
```python
import plotly.graph_objects as go
import plotly.io as pio
import numpy as np
# Create template with updatemenus and annotations
my_template = go.layout.Template(
layout=dict(
updatemenus=[
dict(
buttons=[
dict(
label="View 1",
method="relayout",
args=[{"scene.camera.eye": dict(x=1, y=1, z=1)}],
),
],
direction="down",
showactive=True,
x=0.1,
y=1.1,
),
],
annotations=[
dict(
text="Test annotation",
x=0.5,
y=1.1,
xref="paper",
yref="paper",
showarrow=False,
),
],
paper_bgcolor="rgb(240,240,240)", # This works correctly
)
)
# Register template
pio.templates["my_template"] = my_template
# Create figure with template
fig = go.Figure(
data=[go.Scatter3d(
x=np.random.normal(0, 1, 100),
y=np.random.normal(0, 1, 100),
z=np.random.normal(0, 1, 100),
mode="markers",
)]
)
fig.update_layout(template="my_template")
# Check results
print("Template updatemenus length:", len(my_template.layout.updatemenus)) # 1
print("Figure updatemenus length:", len(fig.layout.updatemenus)) # 0
print("Template annotations length:", len(my_template.layout.annotations)) # 1
print("Figure annotations length:", len(fig.layout.annotations)) # 0
```
**Current Workaround**
The only workaround is to manually apply the `updatemenus` and `annotations` after applying the template:
```python
fig.update_layout(
updatemenus=my_template.layout.updatemenus,
annotations=my_template.layout.annotations,
)
```
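As plain dicts, the manual workaround amounts to the following merge step, which arguably is what template application should do for these tuple-typed properties (illustration only, not Plotly's actual merge logic):

```python
def apply_template_ui(figure_layout: dict, template_layout: dict) -> dict:
    # Plain-dict illustration (not Plotly internals) of what the manual
    # workaround does: copy the tuple-typed UI properties from the template
    # whenever the figure has none of its own.
    merged = dict(figure_layout)
    for key in ("updatemenus", "annotations"):
        if not merged.get(key) and template_layout.get(key):
            merged[key] = list(template_layout[key])
    return merged

template = {"updatemenus": [{"x": 0.1}], "annotations": [{"text": "Test annotation"}]}
figure = {"updatemenus": [], "annotations": []}
print(apply_template_ui(figure, template))
```

Note that properties the figure already sets are left untouched, mirroring how scalar template properties (like `paper_bgcolor`) behave.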
**Environment**
- Python: 3.11.7
- Plotly: 5.24.1
- OS: Linux
**Additional Context**
This issue affects any template that tries to define interactive UI elements like buttons or annotations. The workaround requires additional code and makes templates less useful for defining reusable UI components.
| open | 2025-02-14T10:59:32Z | 2025-02-14T10:59:32Z | https://github.com/plotly/plotly.py/issues/5037 | [] | Boomer91 | 0 |
samuelcolvin/watchfiles | asyncio | 145 | Support `compare_contents` | See [here](https://github.com/notify-rs/notify/blob/480de9dc8f227fc9c7d6561837ad38583efe09d3/src/poll.rs#L56-L62), might be good to give this option in a future release, I'm sure someone will want it.
```rs
/// Optional feature that will evaluate the contents of changed files to determine if
/// they have indeed changed using a fast hashing algorithm. This is especially important
/// for pseudo filesystems like those on Linux under /sys and /proc which are not obligated
/// to respect any other filesystem norms such as modification timestamps, file sizes, etc.
/// By enabling this feature, performance will be significantly impacted as all files will
/// need to be read and hashed at each `poll_interval`.
pub compare_contents: bool,
``` | closed | 2022-05-17T16:10:46Z | 2022-09-11T14:22:47Z | https://github.com/samuelcolvin/watchfiles/issues/145 | [
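The content-comparison idea in the quoted notify docs (detect changes by hashing file contents rather than trusting timestamps or sizes) can be sketched in plain Python; this is only an illustration of the semantics, not a proposal for watchfiles' API:

```python
import hashlib
import os
import tempfile

def file_digest(path: str) -> str:
    # Hash the file's contents so a change is detected even when the
    # filesystem (e.g. /proc, /sys) reports unchanged mtimes and sizes.
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

# Demo: same length, different contents -> different digest.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"aaaa")
before = file_digest(path)
with open(path, "wb") as f:
    f.write(b"bbbb")
after = file_digest(path)
os.remove(path)
print(before != after)  # True
```

As the notify docs warn, doing this at every poll interval reads and hashes every watched file, so the performance cost is significant.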
"enhancement"
] | samuelcolvin | 5 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 648 | Random output generated by synthesizer | Hi, I am trying to train the synthesizer for another language on a short dataset of around 20 hours. After 100k steps, the loss is around 0.19; it has not been decreasing but oscillating since 20k steps. Strangely, when using this synthesizer, it randomly generates output different from the text provided for synthesis. Please suggest a solution to this problem.
Also, considering the dataset limitation, please suggest how the model's results can be improved. | closed | 2021-02-03T07:26:02Z | 2021-02-17T20:36:45Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/648 | [] | porsen123 | 3 |
slackapi/bolt-python | fastapi | 980 | conversations.history does not have attachments | (Describe your issue and goal here)
As seen in #978, conversations.history is supposed to have an 'attachments' property that lets me know whether a file has been uploaded in a message.
I can see such a property on a parent message when the message is an uploaded file.
```python
{
"ok": True,
"oldest": "1698814783.168829",
"messages": [
{
"type": "message",
"text": "",
"files": [
{
"id": "F064DM2H656",
"created": 1698813725,
"timestamp": 1698813725,
"name": "myfile",
"title": "myfile",
"mimetype": "application/octet-stream",
"filetype": "binary",
"pretty_type": "Binary",
"user": "U03CVKELZU6",
"user_team": "EDBV1AA7J",
"editable": False,
"size": 215205,
"mode": "hosted",
"is_external": False,
"external_type": "",
"is_public": False,
"public_url_shared": False,
"display_as_bot": False,
"username": "",
"url_private": "https://files.slack.com/files-pri/TDBV1AA7J-F064DM2H656/myfile",
"url_private_download": "https://files.slack.com/files-pri/myfile",
"media_display_type": "unknown",
"permalink": "https://example.enterprise.slack.com/files/myfile",
"permalink_public": "https://slack-files.com/myfile",
"is_starred": False,
"has_rich_preview": False,
"file_access": "visible",
}
],
"upload": False,
"user": "U03CVKELZU6",
"display_as_bot": False,
"ts": "1698814783.168829",
"client_msg_id": "696807b1-414c-4589-a0e2-daf8609379f8",
"reactions": [{"name": "bob", "users": ["U03CVKELZU6"], "count": 1}],
}
],
"has_more": False,
"pin_count": 0,
"channel_actions_ts": None,
"channel_actions_count": 0,
}
```
However, if a file is uploaded in a given thread, conversations.history does not include the attachments property.
```python
{
"ok": True,
"oldest": "1698815126.385179",
"messages": [],
"has_more": False,
"pin_count": 0,
"channel_actions_ts": None,
"channel_actions_count": 0,
}
```
2023-10-31 22:05:30 event is
```python
{
"type": "reaction_added",
"user": "U03CVKELZU6",
"reaction": "bob",
"item": {"type": "message", "channel": "C060030E22W", "ts": "1698815126.385179"},
"item_user": "U03CVKELZU6",
"event_ts": "1698815129.006300",
}
```
In order to get conversations.history I am using:
```python
result = client.conversations_history(channel=channel_id, inclusive='true', limit=1, oldest=ts)
```
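A likely explanation (an assumption on my part, not verified against this workspace) is that the second message lives inside a thread: conversations.history only returns parent-level channel messages, so thread replies, including file uploads posted in a thread, come back empty there and are fetched with `client.conversations_replies(channel=..., ts=parent_thread_ts)` instead. A minimal routing sketch (hypothetical helper, not part of slack_sdk):

```python
from typing import Optional

def fetch_method(message_ts: str, thread_ts: Optional[str] = None) -> str:
    # Hypothetical helper: conversations.history returns only channel-level
    # (parent) messages, so a reply inside a thread must be fetched with
    # conversations.replies using the parent message's thread_ts.
    if thread_ts and thread_ts != message_ts:
        return "conversations.replies"
    return "conversations.history"

print(fetch_method("1698814783.168829"))                       # conversations.history
print(fetch_method("1698815126.385179", "1698815000.000100"))  # conversations.replies
```

The reaction_added event only carries the message's `ts`, so the parent `thread_ts` has to be looked up (for example from the message itself) before deciding which endpoint to call.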
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
slack-bolt==1.18.0
slack-sdk==3.23.0
(Paste the output of `pip freeze | grep slack`)
#### Python runtime version
(Paste the output of `python --version`)
Python 3.10.0
#### OS info
(Paste the output of `sw_vers && uname -v` on macOS/Linux or `ver` on Windows OS)
ProductName: macOS
ProductVersion: 14.1
BuildVersion: 23B74
Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:27 PDT 2023; root:xnu-10002.41.9~6/RELEASE_X86_64
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
1.
2.
3.
### Expected result:
(Tell what you expected to happen)
### Actual result:
(Tell what actually happened with logs, screenshots)
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2023-11-01T05:11:15Z | 2023-11-02T23:09:53Z | https://github.com/slackapi/bolt-python/issues/980 | [
"question",
"need info"
] | WhyIsItSoHardToPickAUsername | 3 |
Esri/arcgis-python-api | jupyter | 1,569 | Large file uploads to AGOL crash | **Describe the bug**
When using the Python module to upload a 100GB Tile Package to AGOL (using a Windows machine with 8GB RAM), after a while the script crashes with `MemoryError`
**To Reproduce**
Run the script
```python
item_properties = {'title': os.path.splitext(os.path.basename(file))[0]}
if file.lower().endswith("tpkx"):
item_properties['type'] = "Compact Tile Package"
_log.info("##[section] Starting upload of file {0}".format(file))
start = time.time()
file_item = gis.content.add(item_properties=item_properties, data=file)
```
error:
```python
Traceback (most recent call last):
File "C:\Temp\agol_file_upload.py", line 88, in main
file_item = gis.content.add(item_properties=item_properties, data=file)
File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\arcgis\gis\__init__.py", line 5975, in add
status = self._add_by_part(
File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\arcgis\gis\__init__.py", line 5589, in _add_by_part
futures = {
File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\arcgis\gis\__init__.py", line 5589, in <dictcomp>
futures = {
File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\arcgis\gis\__init__.py", line 5550, in chunk_by_file_size
data = bio.write(reader.read(size))
MemoryError
```
**Expected behavior**
The file should upload correctly
**Platform (please complete the following information):**
- OS: Windows 2019
- Python API Version 2.1.0.2
**Additional context**
In a previous Python API version (something like 1.8?), uploading files used to create temporary file chunks on disk. That already had issues with chunks not being cleaned up, filling the disk when the drive holding the Temp folder was too small for the uploaded file. In this scenario it looks like the (large) file is read into memory in one go, too rapidly. A better use of ThreadPoolExecutor could help: uploaded bytes should be released from memory immediately. Maybe some form of lazy loading of file parts, with something like `imap()`, could help refactor the approach?
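The lazy-loading idea suggested above can be sketched as a generator that hands the upload code one part at a time, bounding memory to a single chunk regardless of file size (illustrative only, not the arcgis implementation):

```python
import os
import tempfile

def iter_file_parts(path, part_size=8 * 1024 * 1024):
    # Yield one upload part at a time so memory stays bounded by a single
    # chunk, instead of reading the whole file up front.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(part_size)
            if not chunk:
                return
            yield chunk

# Demo with a tiny file and a 4-byte part size.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"x" * 10)
parts = [len(p) for p in iter_file_parts(path, part_size=4)]
os.remove(path)
print(parts)  # [4, 4, 2]
```

Paired with a bounded ThreadPoolExecutor queue, each part could be uploaded and released before the next one is read.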
| closed | 2023-05-16T07:39:24Z | 2024-05-13T10:23:45Z | https://github.com/Esri/arcgis-python-api/issues/1569 | [
"bug"
] | spiskulaos | 1 |
JaidedAI/EasyOCR | deep-learning | 1362 | EasyOCR gives no output and exits before printing anything, and there is no error on the console!!!! | When I run the code, it just exits halfway without giving any output. Please help me.
my system config:
- Python 3.12.8
- windows 10
- i5 3gen
- 8gb ram
- 730 NVidia gpu
- 256 SSD
project structure:

input images:


code:
```python
import torch
import easyocr
import cv2
print(f"PyTorch version: {torch.__version__}")
print(f"EasyOCR version: {easyocr.__version__}")
print(f"OpenCV version: {cv2.__version__}")
try:
reader = easyocr.Reader(['en'])
result = reader.readtext('input/d.png')
print(result)
result = reader.readtext('input/image.jpg')
print(f"OCR Result: {result}")
except Exception as e:
print(f"An error occurred: {e}")
```
output:
```bash
PS C:\Users\user\Desktop\my-projects\random-stuff\python-image-translator> python .\testocr.py
PyTorch version: 2.5.1+cpu
EasyOCR version: 1.7.2
OpenCV version: 4.10.0
Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
PS C:\Users\user\Desktop\my-projects\random-stuff\python-image-translator>
PS C:\Users\user\Desktop\my-projects\random-stuff\python-image-translator> python .\testocr.py
PyTorch version: 2.5.1+cpu
EasyOCR version: 1.7.2
OpenCV version: 4.10.0
Neither CUDA nor MPS are available - defaulting to CPU. Note: This module is much faster with a GPU.
PS C:\Users\user\Desktop\my-projects\random-stuff\python-image-translator>
``` | open | 2025-01-09T16:16:33Z | 2025-02-28T01:42:55Z | https://github.com/JaidedAI/EasyOCR/issues/1362 | [] | Ctmax-ui | 10 |
keras-team/keras | data-science | 20,998 | Unable to see any speedup from multi-GPU training using JAX backend for LSTM / GRU | Using the JAX backend for LSTM / GRU models, I'm unable to see any speed-up when training with 2 Nvidia 3090 vs using a single Nvidia 3090 (using keras-nightly and JAX 0.5.2). The distributed training across 2 GPUs seems to work fine, but it is just not faster and maybe even slower. See attached file for a modified version of the Keras timeseries weather forecasting example that showcases the problem.
I also can't seem to find any "official" Keras / Keras-IO example showing distributed training with a measurement of the training time. Shouldn't there be such an "official" example to showcase the gain by multi-device training?
[timeseries_weather_forecasting_LC.zip](https://github.com/user-attachments/files/19179280/timeseries_weather_forecasting_LC.zip) | open | 2025-03-06T21:58:36Z | 2025-03-11T09:02:42Z | https://github.com/keras-team/keras/issues/20998 | [
"backend:jax"
] | larschristensen | 1 |
quantmind/pulsar | asyncio | 245 | Is there a simple interface to Actors | Other actor frameworks make it simple to implement an actor, either in an event-driven style or (preferred) a call-based style:
```python
class Greeter(pykka.ThreadingActor):
def __init__(self, greeting='Hi there!'):
super(Greeter, self).__init__()
self.greeting = greeting
def on_receive(self, message):
print(self.greeting)
```
Does pulsar have any _simple_ equivalent to this? Realistically, I want to subclass Actor, implement a `run` method, and call `receive`.
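For comparison, the call-based shape being asked for (subclass, override `run`, pull messages with a blocking `receive`) can be sketched with nothing but a thread and a queue. This is a plain-Python illustration of the pattern, not pulsar's API:

```python
import queue
import threading

class Actor:
    """Toy actor: one thread draining a private mailbox."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self.run, daemon=True)
        self._thread.start()

    def tell(self, message):   # called by other threads/actors
        self._mailbox.put(message)

    def receive(self):         # called inside run(); blocks until mail arrives
        return self._mailbox.get()

    def run(self):             # subclasses override this
        pass

class Greeter(Actor):
    def __init__(self, greeting="Hi there!"):
        self.greeting = greeting
        self.greeted = queue.Queue()  # observable side effect for the demo
        super().__init__()

    def run(self):
        while True:
            self.receive()
            self.greeted.put(self.greeting)

g = Greeter()
g.tell("hello")
print(g.greeted.get(timeout=2))  # Hi there!
```

The point is the shape, not the implementation: the framework owns the mailbox and the loop, and user code only writes `run`/`receive`.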
| closed | 2016-09-14T15:01:42Z | 2016-09-15T11:14:33Z | https://github.com/quantmind/pulsar/issues/245 | [
"question"
] | jkbbwr | 1 |
flaskbb/flaskbb | flask | 35 | Pagination error - Is this an issue with Flask-WhooshAlchemy? | Could it be that Flask-WhooshAlchemy overwrites the `query_class`?
I'm not able to use my TopicQuery class anymore.
```
Traceback (most recent call last):
File "/home/USER/.virtualenvs/flaskbb/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/home/USER/.virtualenvs/flaskbb/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/home/USER/.virtualenvs/flaskbb/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/USER/.virtualenvs/flaskbb/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/home/USER/.virtualenvs/flaskbb/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/USER/.virtualenvs/flaskbb/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/USER/.virtualenvs/flaskbb/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/home/USER/.virtualenvs/flaskbb/lib/python2.7/site-packages/flask_debugtoolbar/__init__.py", line 125, in dispatch_request
return view_func(**req.view_args)
File "/mnt/Daten/Documents/Workspace/flaskbb/flaskbb/forum/views.py", line 151, in view_forum
paginate(page, current_app.config['TOPICS_PER_PAGE'], True, True)
TypeError: paginate() takes at most 4 arguments (5 given)
```
| closed | 2014-04-14T20:49:18Z | 2018-04-15T07:47:31Z | https://github.com/flaskbb/flaskbb/issues/35 | [
"bug"
] | sh4nks | 4 |
iperov/DeepFaceLab | deep-learning | 5,294 | SAEHD Training - Root Errors | When trying to run the SAEHD training on my GTX 980 I get these code lines
It says something about exhausting a resource called OOM (out of memory).
I really don't know what this is about.
Does anyone know how to fix it?
Code:
`Running trainer.
[new] No saved models found. Enter a name of a new model :
new
Model first run.
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce GTX 980
[0] Which GPU indexes to choose? :
0
[0] Autobackup every N hour ( 0..24 ?:help ) :
0
[n] Write preview history ( y/n ?:help ) :
n
[200000] Target iteration :
200000
[y] Flip faces randomly ( y/n ?:help ) :
y
[8] Batch_size ( ?:help ) :
8
[128] Resolution ( 64-640 ?:help ) :
128
[wf] Face type ( h/mf/f/wf/head ?:help ) :
wf
[liae-ud] AE architecture ( ?:help ) :
liae-ud
[256] AutoEncoder dimensions ( 32-1024 ?:help ) :
256
[64] Encoder dimensions ( 16-256 ?:help ) :
64
[64] Decoder dimensions ( 16-256 ?:help ) :
64
[32] Decoder mask dimensions ( 16-256 ?:help ) :
32
[y] Masked training ( y/n ?:help ) :
y
[n] Eyes and mouth priority ( y/n ?:help ) :
n
[n] Uniform yaw distribution of samples ( y/n ?:help ) :
n
[y] Place models and optimizer on GPU ( y/n ?:help ) :
y
[y] Use AdaBelief optimizer? ( y/n ?:help ) :
y
[n] Use learning rate dropout ( n/y/cpu ?:help ) :
n
[y] Enable random warp of samples ( y/n ?:help ) :
y
[0.0] GAN power ( 0.0 .. 1.0 ?:help ) :
0.0
[0.0] Face style power ( 0.0..100.0 ?:help ) :
0.0
[0.0] Background style power ( 0.0..100.0 ?:help ) :
0.0
[rct] Color transfer for src faceset ( none/rct/lct/mkl/idt/sot ?:help ) :
rct
[n] Enable gradient clipping ( y/n ?:help ) :
n
[n] Enable pretraining mode ( y/n ?:help ) :
n
Initializing models: 100%|###########################################| 5/5 [00:38<00:00, 7.72s/it]
Loading samples: 100%|######################################| 18627/18627 [03:00<00:00, 103.18it/s]
Loading samples: 100%|########################################| 1381/1381 [00:09<00:00, 143.55it/s]
============== Model Summary ===============
== ==
== Model name: new_SAEHD ==
== ==
== Current iteration: 0 ==
== ==
==------------ Model Options -------------==
== ==
== resolution: 128 ==
== face_type: wf ==
== models_opt_on_gpu: True ==
== archi: liae-ud ==
== ae_dims: 256 ==
== e_dims: 64 ==
== d_dims: 64 ==
== d_mask_dims: 32 ==
== masked_training: True ==
== eyes_mouth_prio: False ==
== uniform_yaw: False ==
== adabelief: True ==
== lr_dropout: n ==
== random_warp: True ==
== true_face_power: 0.0 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: rct ==
== clipgrad: False ==
== pretrain: False ==
== autobackup_hour: 0 ==
== write_preview_history: False ==
== target_iter: 200000 ==
== random_flip: True ==
== batch_size: 8 ==
== gan_power: 0.0 ==
== gan_patch_size: 24 ==
== gan_dims: 16 ==
== ==
==-------------- Running On --------------==
== ==
== Device index: 0 ==
== Name: GeForce GTX 980 ==
== VRAM: 4.00GB ==
== ==
============================================
Starting. Target iteration: 200000. Press "Enter" to stop training and save model.
Trying to do the first iteration. If an error occurs, reduce the model parameters.
!!!
Windows 10 users IMPORTANT notice. You should set this setting in order to work correctly.
https://i.imgur.com/B7cmDCB.jpg
!!!
You are training the model from scratch. It is strongly recommended to use a pretrained model to speed up the training and improve the quality.
Error: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[8,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node LeakyRelu_18 (defined at D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:69) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[concat_39/concat/_231]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[8,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node LeakyRelu_18 (defined at D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:69) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node LeakyRelu_18:
Add_29 (defined at D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:105)
Input Source operations connected to node LeakyRelu_18:
Add_29 (defined at D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:105)
Original stack trace for 'LeakyRelu_18':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug,
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 381, in on_initialize
gpu_pred_src_src, gpu_pred_src_srcm = self.decoder(gpu_src_code)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 158, in forward
x = self.res2(x)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 69, in forward
x = tf.nn.leaky_relu(x, 0.2)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 3502, in leaky_relu
return gen_nn_ops.leaky_relu(features, alpha=alpha, name=name)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 5104, in leaky_relu
"LeakyRelu", features=features, alpha=alpha, name=name)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()
Traceback (most recent call last):
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
return fn(*args)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
target_list, run_metadata)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[8,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node LeakyRelu_18}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[concat_39/concat/_231]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[8,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node LeakyRelu_18}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 130, in trainerThread
iter, iter_time = model.train_one_iter()
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 462, in train_one_iter
losses = self.onTrainOneIter()
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 678, in onTrainOneIter
src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 538, in src_dst_train
self.target_dstm_em:target_dstm_em,
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
run_metadata_ptr)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
run_metadata)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[8,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node LeakyRelu_18 (defined at D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:69) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[concat_39/concat/_231]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[8,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node LeakyRelu_18 (defined at D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:69) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node LeakyRelu_18:
Add_29 (defined at D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:105)
Input Source operations connected to node LeakyRelu_18:
Add_29 (defined at D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:105)
Original stack trace for 'LeakyRelu_18':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug,
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 381, in on_initialize
gpu_pred_src_src, gpu_pred_src_srcm = self.decoder(gpu_src_code)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 158, in forward
x = self.res2(x)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 69, in forward
x = tf.nn.leaky_relu(x, 0.2)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 3502, in leaky_relu
return gen_nn_ops.leaky_relu(features, alpha=alpha, name=name)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 5104, in leaky_relu
"LeakyRelu", features=features, alpha=alpha, name=name)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\.Arquivos\Programs\DFL\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()` | closed | 2021-03-23T23:04:40Z | 2021-03-29T15:46:32Z | https://github.com/iperov/DeepFaceLab/issues/5294 | [] | JapaTuts | 0 |
kennethreitz/responder | flask | 373 | Documentation Broken (https://python-responder.org/) | Hi, I am unable to access https://python-responder.org/ for the documentation | closed | 2019-07-18T09:25:56Z | 2019-07-18T15:43:56Z | https://github.com/kennethreitz/responder/issues/373 | [] | sandeepbgsk | 1 |
piccolo-orm/piccolo | fastapi | 745 | ForeignKey missing 1 required positional argument `references` | I have written the following, but receive the following error.
`region = ForeignKey(references=Region, on_delete=OnDelete.no_action, on_update=OnUpdate.no_action)`
Please check whether the line above is correct.
I am receiving the error on this line.
The error is as follows:
```
region = ForeignKey(references=Region, on_delete=OnDelete.no_action, on_update=OnUpdate.no_action)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/PycharmProjects/asphalt/venv/lib/python3.11/site-packages/piccolo/columns/column_types.py", line 1871, in __init__
references._meta.primary_key.__class__(
TypeError: ForeignKey.__init__() missing 1 required positional argument: 'references'
``` | open | 2023-01-05T11:41:41Z | 2023-01-05T13:14:19Z | https://github.com/piccolo-orm/piccolo/issues/745 | [] | sumitsharansatsangi | 8 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 103 | RuntimeError: CUDA out of memory. happened in train.py#cal_loss# | There is a memory problem in train.py's `cal_loss` when batch_size is 128.
code: `one_hot = one_hot * (1 - eps) + (1 - one_hot) * eps / (n_class - 1)`
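That line is the label-smoothing step: the one-hot target keeps probability `1 - eps` on the true class and spreads `eps` uniformly over the remaining `n_class - 1` classes. Each `*` and `+` on a full `[batch, n_class]` tensor materializes a temporary of the same size, which is one place the extra memory can go at batch size 128. A plain-Python sketch of the same transform (illustrative only):

```python
def smooth(one_hot, eps):
    """Label smoothing: keep 1 - eps on the hot class, spread eps elsewhere."""
    n_class = len(one_hot)
    return [v * (1 - eps) + (1 - v) * eps / (n_class - 1) for v in one_hot]

row = smooth([0, 1, 0, 0, 0], eps=0.1)
print(row)                          # [0.025, 0.9, 0.025, 0.025, 0.025]
print(abs(sum(row) - 1.0) < 1e-9)   # True: still a probability distribution
```

In PyTorch, fewer temporaries can be had with in-place ops (e.g. `one_hot.mul_(1 - eps)` followed by an add) or by building the smoothed target directly with `full_` and `scatter_`.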
please help me,thanks. | closed | 2019-04-24T05:27:18Z | 2019-12-08T10:20:59Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/103 | [] | huangxianliang | 2 |
httpie/cli | rest-api | 523 | enable quic protocol | Will HTTPie support the [QUIC](https://github.com/google/proto-quic) protocol?
| closed | 2016-09-29T06:59:16Z | 2024-11-28T08:19:51Z | https://github.com/httpie/cli/issues/523 | [] | sunisdown | 5 |
ultralytics/ultralytics | pytorch | 19,648 | Cannot run TensorRT, but get error: module 'tensorrt' has no attribute '__version__' | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Install
### Bug
I downloaded the right CUDA, cuDNN, torch, and torchvision versions, and then I downloaded TensorRT 8.5 GA on my Windows machine.
When I run this demo code:
```
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('./yolo11n.pt')
    model.export(
        format='engine',
        imgsz=640,
        keras=False,
        optimize=False,
        half=False,
        int8=False,
        dynamic=False,
        simplify=True,
        opset=None,
        workspace=5.0,
        nms=False,
        batch=1,
        device='0',
    )
```
and it errors out as below:
```
(yolo11) E:\yolo11>python tensorrt.py
Ultralytics 8.3.87 🚀 Python-3.9.21 torch-2.2.2+cu118 CUDA:0 (NVIDIA GeForce RTX 3060 Laptop GPU, 6144MiB)
ONNX: starting export with onnx 1.17.0 opset 17...
ONNX: slimming with onnxslim 0.1.48...
ONNX: export success ✅ 3.5s, saved as 'yolo11n.onnx' (10.2 MB)
TensorRT: export failure ❌ 3.5s: module 'tensorrt' has no attribute '__version__'
Traceback (most recent call last):
File "E:\yolo11\tensorrt.py", line 9, in <module>
model.export(
File "E:\yolo11\ultralytics\engine\model.py", line 742, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "E:\yolo11\ultralytics\engine\exporter.py", line 429, in __call__
f[1], _ = self.export_engine(dla=dla)
File "E:\yolo11\ultralytics\engine\exporter.py", line 182, in outer_func
raise e
File "E:\yolo11\ultralytics\engine\exporter.py", line 177, in outer_func
f, model = inner_func(*args, **kwargs)
File "E:\yolo11\ultralytics\engine\exporter.py", line 855, in export_engine
check_version(trt.__version__, ">=7.0.0", hard=True)
AttributeError: module 'tensorrt' has no attribute '__version__'
```
### Environment
Python 3.9.21
torch 2.2 with their vision
tensorrt 8.5
cuda 11.8 cudnn for 11.8
windows 10
```
(yolo11) E:\yolo11>pip list
Package Version
------------------- ------------
certifi 2025.1.31
charset-normalizer 3.4.1
colorama 0.4.6
coloredlogs 15.0.1
contourpy 1.3.0
cycler 0.12.1
filelock 3.17.0
flatbuffers 25.2.10
fonttools 4.56.0
fsspec 2025.3.0
humanfriendly 10.0
idna 3.10
importlib_resources 6.5.2
Jinja2 3.1.6
kiwisolver 1.4.7
MarkupSafe 3.0.2
matplotlib 3.9.4
mpmath 1.3.0
networkx 3.2.1
numpy 1.24.0
onnx 1.17.0
onnxruntime-gpu 1.19.2
onnxslim 0.1.48
opencv-python 4.11.0.86
packaging 24.2
pandas 2.2.3
pillow 11.1.0
pip 25.0
protobuf 6.30.0
psutil 7.0.0
py-cpuinfo 9.0.0
pyparsing 3.2.1
pyreadline3 3.5.4
python-dateutil 2.9.0.post0
pytz 2025.1
PyYAML 6.0.2
requests 2.32.3
scipy 1.13.1
seaborn 0.13.2
setuptools 75.8.0
six 1.17.0
sympy 1.13.1
tensorrt 8.5.1.7
torch 2.2.2+cu118
torchaudio 2.2.2+cu118
torchvision 0.17.2+cu118
tqdm 4.67.1
typing_extensions 4.12.2
tzdata 2025.1
ultralytics 8.3.87
ultralytics-thop 2.0.14
urllib3 2.3.0
wheel 0.45.1
zipp 3.21.0
```
### Minimal Reproducible Example
```
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('./yolo11n.pt')
    model.export(
        format='engine',
        imgsz=640,
        keras=False,
        optimize=False,
        half=False,
        int8=False,
        dynamic=False,
        simplify=True,
        opset=None,
        workspace=5.0,
        nms=False,
        batch=1,
        device='0',
    )
```
### Additional
_No response_
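A likely root cause, given the `E:\yolo11>python tensorrt.py` prompt in the log above: the script itself is named `tensorrt.py`, so `import tensorrt` resolves to the local script rather than the installed TensorRT package, and the script indeed has no `__version__`. This is ordinary Python import behavior, not something Ultralytics-specific; a stdlib-only sketch of the shadowing effect (using a made-up module name `fakert`):

```python
import importlib
import pathlib
import sys
import tempfile

# A directory containing a module that shadows a (made-up) package name.
workdir = pathlib.Path(tempfile.mkdtemp())
(workdir / "fakert.py").write_text("greeting = 'local file wins'\n")

# Imports are resolved by walking sys.path in order; the script's own
# directory is normally first, so a local fakert.py shadows anything installed.
sys.path.insert(0, str(workdir))
fakert = importlib.import_module("fakert")

print(fakert.__file__)                       # a path inside workdir, not site-packages
print(getattr(fakert, "__version__", None))  # None, because the local file defines none
```

If that is the cause here, renaming the script (e.g. to `export_trt.py`, a name I am making up) and removing any stale `__pycache__` entries should let the real package import; checking `tensorrt.__file__` is a quick way to confirm which module actually loaded.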
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | closed | 2025-03-11T18:20:51Z | 2025-03-12T07:18:29Z | https://github.com/ultralytics/ultralytics/issues/19648 | [
"dependencies",
"exports"
] | Hitchliff | 4 |
dsdanielpark/Bard-API | api | 81 | IndexError: list index out of range | This was working great for awhile, but it's suddenly not working. Appears to be an issue in the bardapi/core.py file
```
Traceback (most recent call last):
  File "chatbard.py", line 23, in <module>
    response = Bard().get_answer(input_text)
  File "/usr/local/lib/python3.6/dist-packages/bardapi/core.py", line 113, in get_answer
    resp_dict = json.loads(resp.content.splitlines()[3])[0][2]
IndexError: list index out of range
```
```python
from bardapi import Bard
import os
import socket
import signal

os.environ['_BARD_API_KEY'] = "MYKEY."

s = socket.socket()
port = xxxxx
s.bind(('', port))
s.listen(5)
c, addr = s.accept()

def process_text(text):
    split_text = text.split(".")
    if len(split_text) > 1 and split_text[0].lower().startswith("sure"):
        split_text = split_text[1:]
    processed_text = ".".join(split_text)
    return processed_text

print("Socket Up and running with a connection from", addr)

while True:
    rcvdData = c.recv(1024).decode()
    print("S:", rcvdData)
    input_text = rcvdData
    answer = Bard().get_answer(input_text)['content']
    response = process_text(answer)
    print("S: " + response)
    sendData = (response)
    c.send(str(sendData).encode())
    if (sendData == "Bye" or sendData == "bye"):
        break

c.close()
```
| closed | 2023-06-28T12:46:20Z | 2024-01-18T15:53:56Z | https://github.com/dsdanielpark/Bard-API/issues/81 | [] | JVTEAM | 16 |
ymcui/Chinese-BERT-wwm | nlp | 2 | Can this only be used on Baidu's Paddle platform? | closed | 2019-06-20T07:38:32Z | 2019-06-20T11:05:16Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/2 | [] | mokundong | 1 |
hbldh/bleak | asyncio | 777 | Would you be interested in a FTMS_client as an example? | **Describe the solution you'd like**
The discover and service_explorer.py examples are a good start for creating a client, and I managed to create a FitnessMachineServer client.
**Describe alternatives you've considered**
If you like, I can create sample code (with all application-specific stuff removed) for reference.
**Additional context**
This example can be added to the examples
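For reference, the data side of such an example is mostly notify-handling plus bitfield parsing. The sketch below decodes just the flags and instantaneous speed from an FTMS Indoor Bike Data notification (characteristic 0x2AD2); the field layout follows my reading of the FTMS spec (speed present when flags bit 0, "More Data", is clear), so verify against the spec before relying on it:

```python
import struct

def parse_indoor_bike_data(payload: bytes) -> dict:
    """Decode flags + instantaneous speed from an Indoor Bike Data packet."""
    (flags,) = struct.unpack_from("<H", payload, 0)
    result = {"flags": flags}
    offset = 2
    if not flags & 0x0001:  # "More Data" bit clear -> instantaneous speed present
        (raw_speed,) = struct.unpack_from("<H", payload, offset)
        result["speed_kmh"] = raw_speed * 0.01  # resolution 0.01 km/h
    return result

# A fabricated notification: flags=0x0000, speed=20.00 km/h
packet = struct.pack("<HH", 0x0000, 2000)
print(parse_indoor_bike_data(packet))  # {'flags': 0, 'speed_kmh': 20.0}
```

In a full bleak example, a function like this would be the callback passed to `start_notify` on the Indoor Bike Data characteristic.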
| closed | 2022-03-03T22:06:26Z | 2023-03-20T18:58:21Z | https://github.com/hbldh/bleak/issues/777 | [] | WouterJD | 1 |
dmlc/gluon-cv | computer-vision | 1653 | Make tutorial on how to train your own actions? | Is it possible to have a tutorial on how we can train the model to detect which exercise is being performed, based on actions that we do, and add it to the data?
thank you. | closed | 2021-05-05T06:32:47Z | 2021-05-10T04:18:30Z | https://github.com/dmlc/gluon-cv/issues/1653 | [] | ghost | 6 |
litestar-org/litestar | api | 2,990 | Enhancement: Can't we proceed with class-based DI like SpringBoot? | ### Summary
Can't we do class-based DI, as in Spring Boot?
If this doesn't exist yet, I think it would be a good idea to add it as a feature.
### Basic Example
```python
class TestController(Controller):
path = "/test"
dependencies = {"test_service": Provide(TestService)}
@get(path="/a")
async def hi(self) -> int:
return 1
@get(path="/b", sync_to_thread=False)
async def hi2(self, test_service: TestService) -> dict[str, bool]: #Repetitive code is used when DI is performed like this
return await test_service.injection_test()
```
### Drawbacks and Impact
If the dependency were injected on the class itself and then accessed as an attribute on `self`, wouldn't there be an advantage in terms of productivity, and wouldn't developers coming from other frameworks be able to pick it up more comfortably?
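As a framework-agnostic illustration of the attribute-style injection being requested (plain Python with hypothetical names; this is a sketch of the request, not litestar's actual behavior), resolving the `dependencies` dict once per instance and exposing each entry on `self` removes the repeated handler parameters:

```python
class Provide:
    """Wraps a factory, mirroring the dependencies-dict style above."""
    def __init__(self, factory):
        self.factory = factory

class InjectingController:
    dependencies = {}

    def __init__(self):
        # Resolve each provider once and hang it off self, so handlers can
        # use self.<name> instead of redeclaring a parameter every time.
        for name, provider in self.dependencies.items():
            setattr(self, name, provider.factory())

class TestService:
    def injection_test(self):
        return {"injected": True}

class TestController(InjectingController):
    dependencies = {"test_service": Provide(TestService)}

    def hi2(self):
        return self.test_service.injection_test()

print(TestController().hi2())  # {'injected': True}
```

A real framework would also have to handle per-request scopes and async factories, which is presumably why handler-parameter injection is the current design.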
### Unresolved questions
If it is possible to perform a class-based DI even indirectly, please let me know how. | closed | 2024-01-17T18:09:21Z | 2025-03-20T15:54:20Z | https://github.com/litestar-org/litestar/issues/2990 | [
"Enhancement"
] | sxngt | 5 |
JaidedAI/EasyOCR | machine-learning | 690 | emoji detection using easyocr | Can easyocr be used to detect emojis in images? I looked around and couldn't find if it's supported. | open | 2022-03-21T15:54:36Z | 2024-04-09T04:39:41Z | https://github.com/JaidedAI/EasyOCR/issues/690 | [] | shashi-netra | 1 |
iperov/DeepFaceLab | machine-learning | 5,507 | ... | .. | closed | 2022-04-07T01:40:37Z | 2022-04-07T01:40:51Z | https://github.com/iperov/DeepFaceLab/issues/5507 | [] | Gamuano | 0 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 806 | [Question] Can i use tool from command line? | [Question] Can I use the tool from the command line?
E.g. tool -voice "VoicesToUsageToSpeach/example.mp3" -test "text to speech" -o /save_file.ogg
Or something like that? Text to speech in selected voices, and maybe emotion voices? | closed | 2021-07-23T00:48:12Z | 2021-08-25T09:08:04Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/806 | [] | SaraDark | 2 |
jpadilla/django-rest-framework-jwt | django | 10 | Multiple APIs and SSO using JWT | This isn't really the right point to ask this, I guess, but I can't figure out where to ask elsewhere. So, sorry if this is slightly OT.
I am wondering how I would use the django-rest-framework-jwt when using it for several APIs? Problem description:
- api1 holds user db, JWT auth endpoint, and some other API calls
- api2 holds a different API but no user db (except what's needed for authorization)
- clients untrusted, ie browser apps such as angular-based
Authenticating clients against api1 and using resources of it using the REST framework is pretty straightforward. Now, if api2 comes into the mix, I am a bit at a loss. Ideally, unauthenticated clients accessing api2 would need to obtain an access token at api1 and present this to api2. But how would api2 then verify this token? And how would redirection work?
I am a bit at a loss. Is that even possible with the current form of django-rest-framework-jwt (or JWT in general)? Is there some working example I could check out?
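On the verification question: a JWT is a signed payload, so api2 can verify tokens minted by api1 without any user database, as long as the two services share the HMAC secret (HS256) or api2 holds api1's public key (RS256). A stdlib-only sketch of the shared-secret idea (illustrative; a real deployment should use a vetted JWT library rather than hand-rolled tokens):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-between-api1-and-api2"  # hypothetical key known to both services

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(username: str, ttl: int = 300) -> str:
    """api1 signs a payload; verifying it later needs only the secret."""
    payload = _b64(json.dumps({"sub": username, "exp": time.time() + ttl}).encode())
    sig = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{sig}"

def verify_token(token: str):
    """api2 recomputes the signature; no user database involved."""
    payload, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, payload.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered token or wrong secret
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    return claims if claims["exp"] > time.time() else None

token = issue_token("alice")
print(verify_token(token)["sub"])  # alice
print(verify_token(token + "x"))   # None
```

Redirection is a separate concern (the browser app would fetch a token from api1 itself and attach it to api2 requests); the key point is that verification is stateless.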
| closed | 2014-03-01T13:34:18Z | 2014-10-07T01:54:02Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/10 | [] | arnuschky | 6 |
Anjok07/ultimatevocalremovergui | pytorch | 599 | Doesn't Launch | Hello. I had been using UVR for about a month, and it was all working perfectly. But one day I needed to take the GPU out of the computer, and after that I can't launch UVR. I reinstalled it, but it didn't help. I only see this window

and then the application closes. (My CPU is an i7-8700K.) When I took the GPU out for some time on Windows 10, UVR worked great. But when I did it on Windows 11 (as described above), UVR stopped launching.
P.S. Sorry, my English is bad; I'm not a native speaker.
| open | 2023-06-05T19:07:07Z | 2023-06-05T19:07:07Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/599 | [] | VoyagerSky | 0 |
geex-arts/django-jet | django | 120 | request.user does not exist on admin view | I am using django-guardian + django-jet, and it raises an exception because the request object does not have `request.user` in its context.
https://github.com/geex-arts/django-jet/blob/master/jet/utils.py#L183
I think using the test client's code in the upstream code is not useful; I think PR #115 can solve it.
| closed | 2016-09-21T17:48:26Z | 2016-09-21T20:37:48Z | https://github.com/geex-arts/django-jet/issues/120 | [] | zodman | 3 |
pytorch/vision | computer-vision | 8,338 | MD5 checksum of download file does not match the one on record | ### 🐛 Describe the bug
```python
# gen_celeba.py
from torch.utils.data import DataLoader
import torchvision as tv
def main(path_to_data: str) -> None:
    dataset = tv.datasets.CelebA(root=path_to_data,
                                 split="all",
                                 target_type="attr",
                                 download=True, transform=tv.transforms.ToTensor())
    dataloader = DataLoader(dataset)
    for img, attr in dataloader:
        print(img.shape, attr.shape)
        break
    print("Ok!")

if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument('--p2d', type=str, required=True, help='path to data')
    args = parser.parse_args()
    main(path_to_data=args.p2d)
```
Running:
```python
python gen_celeba.py --p2d data
```
Gives:
```
2422it [00:00, 3014422.64it/s]
env\Lib\site-packages\torchvision\datasets\utils.py:260: UserWarning: We detected some HTML elements in the downloaded file. This most likely means that the download triggered an unhandled API response by GDrive. Please report this to torchvision at https://github.com/pytorch/vision/issues including the response:
<!DOCTYPE html><html><head><title>Google Drive - Virus scan warning</title><meta http-equiv="content-type" content="text/html; charset=utf-8"/><style nonce="idGN89Bm7B0CI5ZK88c5aQ">.goog-link-button{position:relative;color:#15c;text-decoration:underline;cursor:pointer}.goog-link-button-disabled{color:#ccc;text-decoration:none;cursor:default}body{color:#222;font:normal 13px/1.4 arial,sans-serif;margin:0}.grecaptcha-badge{visibility:hidden}.uc-main{padding-top:50px;text-align:center}#uc-dl-icon{display:inline-block;margin-top:16px;padding-right:1em;vertical-align:top}#uc-text{display:inline-block;max-width:68ex;text-align:left}.uc-error-caption,.uc-warning-caption{color:#222;font-size:16px}#uc-download-link{text-decoration:none}.uc-name-size a{color:#15c;text-decoration:none}.uc-name-size a:visited{color:#61c;text-decoration:none}.uc-name-size a:active{color:#d14836;text-decoration:none}.uc-footer{color:#777;font-size:11px;padding-bottom:5ex;padding-top:5ex;text-align:center}.uc-footer a{color:#15c}.uc-footer a:visited{color:#61c}.uc-footer a:active{color:#d14836}.uc-footer-divider{color:#ccc;width:100%}.goog-inline-block{position:relative;display:-moz-inline-box;display:inline-block}* html .goog-inline-block{display:inline}*:first-child+html .goog-inline-block{display:inline}sentinel{}</style><link rel="icon" href="//ssl.gstatic.com/docs/doclist/images/drive_2022q3_32dp.png"/></head><body><div class="uc-main"><div id="uc-dl-icon" class="image-container"><div class="drive-sprite-aux-download-file"></div></div><div id="uc-text"><p class="uc-warning-caption">Google Drive can't scan this file for viruses.</p><p class="uc-warning-subcaption"><span class="uc-name-size"><a href="/open?id=0B7EVK8r0v71pZjFTYXZWM3FlRnM">img_align_celeba.zip</a> (1.3G)</span> is too large for Google to scan for viruses. 
Would you still like to download this file?</p><form id="download-form" action="https://drive.usercontent.google.com/download" method="get"><input type="submit" id="uc-download-link" class="goog-inline-block jfk-button jfk-button-action" value="Download anyway"/><input type="hidden" name="id" value="0B7EVK8r0v71pZjFTYXZWM3FlRnM"><input type="hidden" name="export" value="download"><input type="hidden" name="confirm" value="t"><input type="hidden" name="uuid" value="e6d6f025-8f92-4029-893d-a3eb6374fc53"></form></div></div><div class="uc-footer"><hr class="uc-footer-divider"></div></body></html>
warnings.warn(
Traceback (most recent call last):
File "gen_celeba.py", line 23, in <module>
main(path_to_data=args.p2d)
File "gen_celeba.py", line 7, in main
dataset = tv.datasets.CelebA(root=path_to_data,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env\Lib\site-packages\torchvision\datasets\celeba.py", line 80, in __init__
self.download()
File "env\Lib\site-packages\torchvision\datasets\celeba.py", line 150, in download
download_file_from_google_drive(file_id, os.path.join(self.root, self.base_folder), filename, md5)
File "env\Lib\site-packages\torchvision\datasets\utils.py", line 268, in download_file_from_google_drive
raise RuntimeError(
RuntimeError: The MD5 checksum of the download file data\celeba\img_align_celeba.zip does not match the one on record.Please delete the file and try again. If the issue persists, please report this to torchvision at https://github.com/pytorch/vision/issues.
```
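One workaround people use when the GDrive virus-scan page breaks the automatic download (my suggestion, not an official fix): fetch `img_align_celeba.zip` manually through a browser, verify its checksum yourself against the value recorded in `torchvision.datasets.celeba`, place it under `data/celeba/`, and then construct the dataset with `download=False`. A small stdlib helper for the checksum step:

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """MD5 of a file, read in chunks so large archives don't need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Compare md5_of("data/celeba/img_align_celeba.zip") against the md5 listed in
# torchvision.datasets.celeba's file table before switching to download=False.
```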
### Versions
```
torch 2.1.2+cu118
torchvision 0.16.2+cu118
``` | closed | 2024-03-19T19:31:28Z | 2024-03-19T19:41:26Z | https://github.com/pytorch/vision/issues/8338 | [] | andreasfloros | 1 |
babysor/MockingBird | pytorch | 948 | Model shape mismatch | **Summary [one-sentence description]**
Loading the pretrained model (https://yisiou-my.sharepoint.com/:u:/g/personal/lawrence_cheng_fawenyo_onmicrosoft_com/EWFWDHzee-NNg9TWdKckCc4BC7bK2j9cCbOWn0-_tK0nOg?e=n0gGgC) fails with a shape mismatch.
**Env & To Reproduce [environment and reproduction]**
Model link as above.
On a Mac M1 Pro, Python 3.9 environment.
**Screenshots [if any]**
```
RuntimeError: Error(s) in loading state_dict for Tacotron:
size mismatch for encoder_proj.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([128, 1024]).
size mismatch for decoder.attn_rnn.weight_ih: copying a param with shape torch.Size([384, 768]) from checkpoint, the shape in current model is torch.Size([384, 1280]).
size mismatch for decoder.rnn_input.weight: copying a param with shape torch.Size([1024, 640]) from checkpoint, the shape in current model is torch.Size([1024, 1152]).
size mismatch for decoder.stop_proj.weight: copying a param with shape torch.Size([1, 1536]) from checkpoint, the shape in current model is torch.Size([1, 2048]).
```
This is already after applying the fix from https://github.com/babysor/MockingBird/issues/37; otherwise there would be one more similar error.
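For anyone debugging similar errors, a framework-agnostic sketch for listing exactly which parameters disagree — build the two dicts with `{n: tuple(t.shape) for n, t in state_dict.items()}` from the model and from the checkpoint:

```python
def shape_mismatches(model_shapes, ckpt_shapes):
    """Return (name, checkpoint_shape, model_shape) for every shared key that disagrees."""
    return [
        (name, ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes
        if name in model_shapes and ckpt_shapes[name] != model_shapes[name]
    ]

mismatches = shape_mismatches(
    {"encoder_proj.weight": (128, 1024)},   # current model
    {"encoder_proj.weight": (128, 512)},    # checkpoint
)
print(mismatches)  # -> [('encoder_proj.weight', (128, 512), (128, 1024))]
```

Mismatches in the attention/decoder input sizes like those above typically point at a hyperparameter (e.g. encoder output dimension) differing between the checkpoint's training config and the current synthesizer config.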
| closed | 2023-08-19T19:43:49Z | 2023-08-19T21:20:55Z | https://github.com/babysor/MockingBird/issues/948 | [] | firstprayer | 1 |
autogluon/autogluon | scikit-learn | 4,919 | Support image loading from S3 for MultiModalPredictor | ## Description
A clear and concise description of what the feature is.
`MultiModalPredictor` currently requires images to be stored locally for fitting the model. However, due to increasing size of the training data, customers would like to add the ability to load images directly from a cloud service, i.e. S3 bucket, during training.
- Please indicate which module (`multimodal`, `tabular`, `timeseries`) this proposal refers to.
`multimodal`
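As a rough illustration of what user-side code has to do today (a hypothetical helper, not AutoGluon API): image paths that start with `s3://` need an extra fetch step before PIL/torch can open them, which native support would fold into the predictor's data loading:

```python
from urllib.parse import urlparse

def read_image_bytes(path: str) -> bytes:
    """Raw bytes from a local path or an s3:// URI (S3 branch assumes boto3 + credentials)."""
    parsed = urlparse(path)
    if parsed.scheme == "s3":
        import boto3  # imported lazily so local-only use needs no AWS dependencies
        body = boto3.client("s3").get_object(
            Bucket=parsed.netloc, Key=parsed.path.lstrip("/")
        )["Body"]
        return body.read()
    with open(path, "rb") as f:
        return f.read()
```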
## References
- list pointers to related resources (scientific papers, blogs, documentation)
- list known open-source implementations, if they exist
| open | 2025-02-22T00:57:38Z | 2025-02-24T21:26:20Z | https://github.com/autogluon/autogluon/issues/4919 | [
"enhancement",
"module: multimodal"
] | suzhoum | 1 |
plotly/dash-table | dash | 840 | Bad prop-type definition causing console errors | There's no such thing as https://github.com/plotly/dash-table/blob/dev/src/dash-table/dash/DataTable.js#L999
The proper way of defining this would have been
```
PropTypes.oneOfType([
PropTypes.oneOf([null]),
PropTypes.string,
    // ...
])
``` | closed | 2020-10-19T19:13:17Z | 2020-10-20T15:07:54Z | https://github.com/plotly/dash-table/issues/840 | [
"regression",
"size: 0.5",
"bug"
] | Marc-Andre-Rivet | 0 |
dynaconf/dynaconf | django | 371 | Link from readthedocs to the new websites | Colleagues of mine (not really knowing about dynaconf) were kind of confused when I told them about dynaconf 3.0 and they could just find the docs for 2.2.3 on https://dynaconf.readthedocs.io/
Would it be feasible to add a prominent link pointing to https://www.dynaconf.com/? | closed | 2020-07-14T06:00:23Z | 2020-07-14T21:31:54Z | https://github.com/dynaconf/dynaconf/issues/371 | [
"question"
] | aberres | 0 |
wagtail/wagtail | django | 12,953 | Add Default Tests for Root and Homepage Pages in Wagtail Starter Projects | ### Problem
Currently, Wagtail starter projects generated using `wagtail start` lack default tests for the fundamental Root and Homepage pages. While Wagtail strongly advocates for comprehensive testing, the absence of these basic tests can present an initial hurdle for new users, particularly those who are new to testing within Django and Wagtail environments. Often, introductory tutorials and documentation prioritize feature implementation, and may not sufficiently guide users on establishing testing practices from the outset.
### Proposed Solution
To enhance the developer experience and promote testing best practices from the very beginning, this proposal suggests incorporating default test files into Wagtail starter projects. Specifically, we should add a tests.py file within the home app that includes example tests for:
- Root Page Creation: Verify that the root page (ID=1) is automatically created upon project setup.
- Homepage Creation: Ensure that a default Homepage is created as a child of the root page.
- Basic Homepage Functionality: Include simple tests to check fundamental aspects of the Homepage, such as:
And add this information to the [Your first Wagtail site](https://docs.wagtail.org/en/stable/getting_started/tutorial.html) tutorial, with a note that you can run `manage.py test`.
### Benefits
- Improved Onboarding for New Users: By providing default tests, we lower the barrier for entry into testing for Wagtail beginners. These tests will serve as clear, working examples that users can immediately run and understand, encouraging them to adopt testing early in their Wagtail development journey.
- Enhanced Code Understanding: Examining default test code will help new users grasp the structure of Wagtail projects and the basic operations involved in page management (like creation, retrieval, and rendering). This hands-on approach is invaluable for learning.
- Promotion of Testing Culture: Including tests by default from project initiation underscores the importance of testing as a core component of Wagtail development. This proactive approach can foster a stronger testing culture among Wagtail users.
- Increased Project Robustness: Even these basic default tests can act as an early warning system, catching regressions in core page functionality introduced by Wagtail updates or during project-specific modifications.
### Code
The code could be something like this:
```python
# home/tests.py
from django.test import TestCase
from wagtail.models import Page
from home.models import HomePage
class HomeSetUpTests(TestCase):
"""
Tests steps needed by follow up tests
"""
def test_root_create(self):
Page.objects.get(pk=1)
def test_homepage_create(self):
root_page = Page.objects.get(pk=1)
self.homepage = HomePage(title='Home')
root_page.add_child(instance=self.homepage)
class HomeTests(TestCase):
"""
Class for testing homepage logic
"""
def setUp(self):
"""
Set up the testing environment.
"""
root_page = Page.objects.get(pk=1)
self.homepage = HomePage(title='Home')
root_page.add_child(instance=self.homepage)
def test_your_test(self):
"""
Tests something..
"""
raise NotImplementedError("The tests are not implemented yet.")
``` | open | 2025-03-08T11:49:47Z | 2025-03-15T12:56:32Z | https://github.com/wagtail/wagtail/issues/12953 | [
"type:Enhancement"
] | Kedrigern | 1 |
facebookresearch/fairseq | pytorch | 4,834 | AttributeError: 'dict' object has no attribute '_get_node_flag' | root/abc/venv/lib/python3.8/site-packages/omegaconf/omegaconf.py", line 670, in open_dict
prev_state = config._get_node_flag("struct")
AttributeError: 'dict' object has no attribute '_get_node_flag'
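This traceback usually means a plain Python dict reached a code path (`open_dict`) that expects an OmegaConf `DictConfig` — often because a config loaded from an old checkpoint was deserialized as a dict. A hedged guard sketch (where exactly to apply it inside fairseq is an assumption on my part):

```python
def as_config(cfg):
    """Wrap a plain dict in an OmegaConf DictConfig when omegaconf is available."""
    if isinstance(cfg, dict):
        try:
            from omegaconf import OmegaConf
            return OmegaConf.create(cfg)
        except ImportError:
            return cfg  # omegaconf missing; caller keeps the plain dict
    return cfg

cfg = as_config({"task": {"data": "/path/to/data"}})
print(cfg["task"]["data"])
```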
| closed | 2022-10-31T17:53:24Z | 2023-03-21T14:07:29Z | https://github.com/facebookresearch/fairseq/issues/4834 | [
"question",
"needs triage"
] | MLDeep16 | 1 |
faif/python-patterns | python | 90 | Calling business logic on a different physical machine | In 3-tier.py the UI class makes a normal call to the business logic class as if they both run on the same machine; rather, it should cater for calling business logic that resides on a centralized, separate machine. I hope I'm not missing the point.
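For illustration (the class names and endpoint here are hypothetical, not from the repo): the usual answer is to keep the business-logic interface identical and swap in a proxy that talks to the remote machine, so the UI tier never knows the difference:

```python
import json
from urllib.request import urlopen  # used only by the remote variant

class ProductLogic:
    """Local business logic, in the spirit of the repo's 3-tier example."""
    def __init__(self, products):
        self._products = list(products)

    def product_list(self):
        return list(self._products)

class RemoteProductLogic:
    """Same interface, backed by a service on another machine (hypothetical endpoint)."""
    def __init__(self, base_url):
        self._base_url = base_url

    def product_list(self):
        with urlopen(self._base_url + "/products") as resp:
            return json.load(resp)

# The UI tier depends only on the interface, so either object can be injected:
backend = ProductLogic(["milk", "eggs"])
print(backend.product_list())  # -> ['milk', 'eggs']
```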
| closed | 2015-07-09T02:49:14Z | 2015-07-19T02:59:22Z | https://github.com/faif/python-patterns/issues/90 | [] | heshamfm | 2 |
deepspeedai/DeepSpeed | deep-learning | 5,778 | test | test | closed | 2024-07-17T12:01:05Z | 2024-07-17T12:10:25Z | https://github.com/deepspeedai/DeepSpeed/issues/5778 | [
"bug",
"training"
] | qwerfdsadad | 0 |
psf/black | python | 4,408 | Cannot parse nested f-strings if the same quote char is used inside and outside. | **Black https://github.com/psf/black/commit/9c1fd4**
[Playground link](https://black.vercel.app/?version=main&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4ACMAF5dAD2IimZxl1N_WlkB94M-fIXjTqeVxKhorS-NcPomlJhymnRzTOyZGaPOuTE-6SaihjFVbcxokbFNOC2pppRZ-kjchpSYZ4t6eufxPQLgrxNI4B-QfBbpivuH7FoSxsAAAAAxdCy8nlhxSgABeo0BAAAAPOkmmLHEZ_sCAAAAAARZWg==)
## Options
`--line-length=88`
`--safe`
## Input
```python
f"#{f"Version {__version__}":^48}#"
```
## Output
```python
cannot use --safe with this file; failed to parse source file AST: f-string: expecting '}' (<unknown>, line 1)
This could be caused by running Black with an older Python version that does not support new syntax used in your source file.
```
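Note for readers hitting this: reusing the same quote character inside an f-string is PEP 701 syntax (Python 3.12+), which the Black version used here fails to parse. Until that is supported, a workaround (my suggestion, not from the report) is to alternate quote types, which also parses on older Pythons:

```python
__version__ = "1.2.3"  # stand-in value
banner = f"#{f'Version {__version__}':^48}#"  # inner f-string uses single quotes
print(banner)  # 50 characters wide, 'Version 1.2.3' centered between '#'
```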
## Expected | closed | 2024-07-20T21:05:04Z | 2024-07-20T21:08:41Z | https://github.com/psf/black/issues/4408 | [] | maflAT | 2 |
plotly/dash | data-visualization | 3,141 | component typing id as dict and datetime for Date pickers in Dash 3.0rc1 | Hi, I tried to update to dash 3.0rc1, and what I noticed is that some typehints are too strict. What I found:
- `id`s are typed as `typing.Optional[str]`, omitting dict ids
- dates in datepickers are typed as `typing.Optional[str]`, but we use `datetime.date`s without problem, not sure if `date`s working is intended or just a coincidence though
There are possibly others, this is what I found in our codebase. | open | 2025-01-29T15:46:02Z | 2025-03-18T13:41:54Z | https://github.com/plotly/dash/issues/3141 | [
"P1",
"dash-3.0"
] | tlauli | 5 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 972 | SQLAlchemy.get_app() should prefer using self.app instead of Flask.current_app | I have a scenario where we are writing multiple flask applications which are being hosted together with a `DispatcherMiddleware`.
The issue I am facing is that if I use a common model between both applications - I get the error:
```python
File "/home/abdealijk/app/app.py", line 29, in second
item = MyModel.query.first()
File "/home/abdealijk/app/venv/lib/python3.9/site-packages/flask_sqlalchemy/__init__.py", line 552, in __get__
return type.query_class(mapper, session=self.sa.session())
File "/home/abdealijk/app/venv/lib/python3.9/site-packages/sqlalchemy/orm/scoping.py", line 78, in __call__
return self.registry()
File "/home/abdealijk/app/venv/lib/python3.9/site-packages/sqlalchemy/util/_collections.py", line 1022, in __call__
return self.registry.setdefault(key, self.createfunc())
File "/home/abdealijk/app/venv/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 3308, in __call__
return self.class_(**local_kw)
File "/home/abdealijk/app/venv/lib/python3.9/site-packages/flask_sqlalchemy/__init__.py", line 175, in __init__
track_modifications = app.config['SQLALCHEMY_TRACK_MODIFICATIONS']
KeyError: 'SQLALCHEMY_TRACK_MODIFICATIONS'
```
I went through the code in this library and found that this is most likely the culprit:
https://github.com/pallets/flask-sqlalchemy/blob/2.5.1/flask_sqlalchemy/__init__.py#L1036
Here, I see that the priority order in `SQLAlchemy.get_app()` is:
1. reference_app provided in args
2. The Flask.current_app
3. The self.app provided during initialization
This seemed odd - because I assume that if I provide an explicit app - it should have used that instead of trying to get it from Flask.
I tried changing this order to prefer using `self.app` instead of `Flask.current_app` - and things worked fine!
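The reordering described above can be modeled as a small pure function — a sketch of the proposed priority only, not Flask-SQLAlchemy's actual code:

```python
def resolve_app(reference_app=None, bound_app=None, current_app=None):
    """Proposed lookup order: explicit argument, then the app bound at init, then current_app."""
    for candidate in (reference_app, bound_app, current_app):
        if candidate is not None:
            return candidate
    raise RuntimeError("No application found: not bound at init and no app context active.")

# With DispatcherMiddleware, current_app may be app2 while db was initialized with app1;
# preferring the bound app keeps the extension's own configuration authoritative:
print(resolve_app(bound_app="app1", current_app="app2"))  # -> app1
```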
Here is a minimal example:
```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import Column, Integer
from werkzeug.middleware.dispatcher import DispatcherMiddleware
app1 = Flask('app1')
app1.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
app1.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app1)
class MyModel(db.Model):
id = Column(Integer, primary_key=True, autoincrement=True)
app2 = Flask('app2')
@app1.route('/first')
def first():
item = MyModel.query.first()
item_id = item.id if item else None
print('Got item=', item_id)
return {'item': item_id}
@app2.route('/second')
def second():
item = MyModel.query.first()
item_id = item.id if item else None
print('Got item=', item_id)
return {'item': item_id}
if __name__ == '__main__':
app1.wsgi_app = DispatcherMiddleware(app1.wsgi_app, {'/app2': app2.wsgi_app})
app1.run(port=9999)
```
Now try calling:
```bash
$ curl --request GET http://127.0.0.1:9999/app2/second --header 'Content-Type: application/json'
```
Environment:
- Python version: 3.9.1
- Flask version: 2.0.0
- Flask-SQLAlchemy version: 2.5.1
- SQLAlchemy version: 1.3.23
| closed | 2021-05-24T12:01:17Z | 2021-06-09T00:06:08Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/972 | [] | AbdealiLoKo | 2 |
huggingface/datasets | pytorch | 6,716 | Non-deterministic `Dataset.builder_name` value | ### Describe the bug
I'm not sure if this is a bug, but `print(ds.builder_name)` in the following code sometimes prints out `rotten_tomatoes` instead of `parquet`:
```python
import datasets
for _ in range(100):
ds = datasets.load_dataset("rotten_tomatoes", split="train")
print(ds.builder_name) # prints out "rotten_tomatoes" sometimes instead of "parquet"
```
Output:
```
...
parquet
parquet
parquet
rotten_tomatoes
parquet
parquet
parquet
...
```
Here's a reproduction using GitHub Actions:
https://github.com/mlflow/mlflow/actions/runs/8153247984/job/22284263613?pr=11329#step:12:241
One of our tests is flaky because `builder_name` is not deterministic.
### Steps to reproduce the bug
1. Run the code above.
### Expected behavior
Always prints out `parquet`?
### Environment info
```
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-1015-azure-x86_64-with-glibc2.34
- Python version: 3.8.18
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0
``` | closed | 2024-03-05T09:23:21Z | 2024-03-19T07:58:14Z | https://github.com/huggingface/datasets/issues/6716 | [] | harupy | 6 |
moshi4/pyCirclize | data-visualization | 33 | Multi-contig gff parsing: Range issue | ### Discussed in https://github.com/moshi4/pyCirclize/discussions/32
<div type='discussions-op-text'>
<sup>Originally posted by **acarafat** August 22, 2023</sup>
I am working with a GFF file that contains multiple contigs, but the `##sequence-region` pragma is only present for contig 1 in the GFF header.
Currently, `.get_seqid2size()` in the GFF parser relies on the `##sequence-region` pragma, so it gets the range wrong for the remaining contigs.
This could be solved by not relying on `##sequence-region`, since many GFF files generated by different programs do not contain this comment.
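A sketch of the fallback suggested above (my own illustration, not pyCirclize code): when `##sequence-region` pragmas are missing, per-contig sizes can be approximated from the maximum feature end coordinate in GFF3 column 5:

```python
from collections import defaultdict

def seqid2size_from_features(gff_lines):
    """Approximate contig sizes from feature end coordinates (GFF3 column 5)."""
    sizes = defaultdict(int)
    for line in gff_lines:
        if not line.strip() or line.startswith("#"):
            continue
        cols = line.rstrip("\n").split("\t")
        if len(cols) < 5:
            continue
        seqid, end = cols[0], int(cols[4])
        sizes[seqid] = max(sizes[seqid], end)
    return dict(sizes)

gff = [
    "##gff-version 3",
    "contig1\tprog\tgene\t1\t900\t.\t+\t.\tID=g1",
    "contig2\tprog\tgene\t10\t450\t.\t-\t.\tID=g2",
    "contig2\tprog\tgene\t500\t1200\t.\t+\t.\tID=g3",
]
print(seqid2size_from_features(gff))  # -> {'contig1': 900, 'contig2': 1200}
```

This slightly underestimates contigs whose last feature ends before the sequence does, but it works for any GFF that lacks the pragma.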
</div> | closed | 2023-08-22T19:47:21Z | 2024-05-03T02:35:30Z | https://github.com/moshi4/pyCirclize/issues/33 | [
"bug"
] | acarafat | 5 |
plotly/dash | plotly | 3,064 | [BUG] Cannot extract virtualRowData from an AG Grid table using pattern matching |
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
async-dash 0.1.0a1
dash 2.18.2
dash_ag_grid 31.2.0
dash-bootstrap-components 1.6.0
dash_canvas 0.1.0
dash-core-components 2.0.0
dash_daq 0.5.0
dash-dndkit 0.0.3
dash-extensions 1.0.15
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-leaflet 1.0.15
dash-loading-spinners 1.0.3
dash-mantine-components 0.12.1
dash-paperdragon 0.1.0
dash-resizable-panels 0.1.0
dash-svg 0.0.12
dash-table 5.0.0
dash_treeview_antd 0.0.1
```
**Describe the bug**
I am creating AG Grid tables dynamically and trying to reference them as Inputs/States in a callback (using pattern matching) to get the `virtualRowData`. However, the pattern match only returns the component IDs.
```
import dash_bootstrap_components as dbc
import dash
from dash import Dash, dcc, html, Input, Output, callback, ALL
import dash_ag_grid as dag
app = Dash(__name__)
def create_grid(opt:int,
data:dict|list = None):
columnDefs = [
{'headerName': 'Label',
'field': 'label'},
{'headerName': 'Name',
'field': 'name',
'flex': 2}
]
grid = dag.AgGrid(
id=f"registration-{opt}-slides",
rowData = [
{'label': 'Test Label 1',
'name': 'Test Name 1'},
{'label': 'Test Label 2',
'name': 'Test Name 2'},
{'label': 'Test Label 3',
'name': 'Test Name 3'}
],
columnDefs=columnDefs,
defaultColDef={'resizable': True},
dashGridOptions={
"rowHeight": 70,
"rowDragManaged": True,
"rowDragEntireRow": True,
"rowDragMultiRow": True, "rowSelection": "multiple",
"suppressMoveWhenRowDragging": True,
},
dangerously_allow_code=True,
)
return grid
def _create_grid_div(opt):
    # defined before `content` so the name exists when the layout below is built
    grid = html.Div(
        id={"type": "dynamic-cards", "index": opt},
        children = [
            html.H4(opt),
            create_grid(opt)
        ],
        className='registration-table-div'
    )
    return grid

content = dcc.Tab(
    label='Dynamic Components',
    children = [
        dbc.Row(
            children = [
                dbc.Col(
                    children = [
                        html.Div(
                            id='registration-grids',
                            children = [
                                html.Div(
                                    id = 'grid-container',
                                    children = [_create_grid_div("Option 1")]),
                            ],
                        )
                    ]
                )
            ]
        )
    ]
)
@callback(
Output('dummy-div', 'children'),
Input({'type': "dynamic-cards", 'index': ALL}, 'id'),
Input({'type': "dynamic-cards", 'index': ALL}, 'virtualRowData'),
Input('registration-Option 1-slides', 'virtualRowData'),
prevent_initial_call=True
)
def _explore_virtual_data(id, vrd, opt1_vrd):
print(f"id: {id}")
print(f"vrd: {vrd}")
print(f"opt1_vrd: {opt1_vrd}")
return dash.no_update
dummy_div = html.Div(
children = [],
id='dummy-div'
)
app.layout = html.Div([
dummy_div,
content],
)
if __name__ == "__main__":
app.run_server(port=8888, jupyter_mode="external", debug=True, dev_tools_ui=True)
```
This results in the following output:
```
id: [{'type': 'dynamic-cards', 'index': 'Option 1'}]
vrd: [None]
opt1_vrd: [{'label': 'Test Label 1', 'name': 'Test Name 1'}, {'label': 'Test Label 2', 'name': 'Test Name 2'}, {'label': 'Test Label 3', 'name': 'Test Name 3'}]
```
**Expected behavior**
I expect that using pattern matching will allow me to get the virtualRowData from an AG Grid table just as I can calling the component directly. Can this be done, or is there another way to attain my goal?
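One thing worth checking (my reading of the snippet, not verified against your full app): the `ALL` pattern targets the wrapping `html.Div` components (ids of type `dynamic-cards`), but `virtualRowData` lives on the `dag.AgGrid` components, whose ids are plain strings like `registration-Option 1-slides` — so the pattern match returns `None` for that property. Giving the grids themselves dict ids should let a single pattern reach them (the `"type"` value below is arbitrary):

```python
def grid_id(opt):
    """Pattern-matching id to put on the AgGrid component itself."""
    return {"type": "registration-grid", "index": opt}

# In create_grid():
#     dag.AgGrid(id=grid_id(opt), ...)
# In the callback, the same 'type' then reaches the grids' own properties:
#     Input({"type": "registration-grid", "index": ALL}, "virtualRowData")
print(grid_id("Option 1"))  # -> {'type': 'registration-grid', 'index': 'Option 1'}
```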
| closed | 2024-11-07T16:15:18Z | 2024-11-08T15:50:11Z | https://github.com/plotly/dash/issues/3064 | [] | akunihiro | 2 |
ultralytics/yolov5 | pytorch | 13,126 | Train in Colab and detect on my computer | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi Glenn, hope you are well. I am trying to improve detection on my dataset, but I have no GPU and my setup needs about 5 minutes per epoch. When I train in Colab, download the best weights, and try to use them on my computer, I get the PosixPath error. I have tried reading the path with as_posix and I use force_reload, but I still get the same error. How can I make a model trained in Colab run locally — maybe I need more parameters? `model = torch.hub.load('ultralytics/yolov5', 'custom', path='best_new.pt', force_reload=True)`. I use about 1000 pictures for training and train for 120 epochs without any other parameters. I will use it for live detection, so the parameters have to be good; on photo datasets it works fine.
I am also training in Roboflow, and that training is very good.
thank you
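If the error is the classic `cannot instantiate 'PosixPath' on your system`, it comes from loading a checkpoint pickled on Linux (Colab) under Windows. A commonly shared workaround (use with care — it monkey-patches `pathlib` for the duration of the load):

```python
import pathlib
import platform
from contextlib import contextmanager

@contextmanager
def posixpath_compat():
    """On Windows, temporarily alias PosixPath to WindowsPath while unpickling a Linux checkpoint."""
    if platform.system() != "Windows":
        yield  # nothing to patch on POSIX systems
        return
    saved = pathlib.PosixPath
    pathlib.PosixPath = pathlib.WindowsPath
    try:
        yield
    finally:
        pathlib.PosixPath = saved

# with posixpath_compat():
#     model = torch.hub.load("ultralytics/yolov5", "custom", path="best_new.pt", force_reload=True)
```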
### Additional
_No response_ | closed | 2024-06-24T12:39:11Z | 2024-06-25T00:36:05Z | https://github.com/ultralytics/yolov5/issues/13126 | [
"question"
] | gchinta1 | 3 |
harry0703/MoneyPrinterTurbo | automation | 378 | Can the playback order be changed? | I converted a PPT into images and wanted them to play in order, but they play in reverse. For example, with 10 slides generated, the playback order is 10, 9, 8, 7. I tried changing both the image creation time and the image modification time, but neither worked. | closed | 2024-05-22T20:31:54Z | 2024-05-24T02:36:52Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/378 | [] | yqh1996 | 2 |
gee-community/geemap | jupyter | 1,693 | Cleanup of file types and mode bits | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
This is at b8261776e3274c73d41376ffbf97d73aea4e48a6
Please run the following code on your computer and share the output with us so that we can better debug your issue:
```shell
find . -type f | grep -v ".git/" | xargs file | egrep 'executable|CRLF'
./docs/usage.md: Python script, ASCII text executable
./tutorials/Template/update_header.py: Python script, ASCII text executable
./geemap/geemap.py: Python script, Unicode text, UTF-8 text executable
./geemap/data/template/legend.txt: Python script, ASCII text executable
./geemap/data/template/ee_api_docs.csv: Algol 68 source, Unicode text, UTF-8 text, with very long lines (2281), with CRLF line terminators
./geemap/data/python/javascript_to_python.py: Python script, ASCII text executable
./geemap/foliumap.py: Python script, Unicode text, UTF-8 text executable, with very long lines (729)
./geemap/basemaps.py: Python script, ASCII text executable
./geemap/report.py: Python script, ASCII text executable
./geemap/__init__.py: Python script, ASCII text executable
./geemap/datasets.py: Python script, ASCII text executable
./geemap/plot.py: Python script, ASCII text executable
./geemap/plotlymap.py: Python script, Unicode text, UTF-8 text executable, with very long lines (323)
./geemap/cartoee.py: Python script, ASCII text executable
./geemap/chart.py: Python script, ASCII text executable
./geemap/toolbar.py: Python script, ASCII text executable
./geemap/deck.py: Python script, ASCII text executable
./geemap/cli.py: Python script, ASCII text executable
./geemap/osm.py: Python script, Unicode text, UTF-8 text executable, with very long lines (725)
./geemap/colormaps.py: Python script, ASCII text executable
./geemap/timelapse.py: Python script, ASCII text executable
./geemap/examples/__init__.py: Python script, ASCII text executable
./geemap/conversion.py: Python script, ASCII text executable, with very long lines (301)
./geemap/common.py: Python script, ASCII text executable
./geemap/ml.py: Python script, ASCII text executable
./geemap/legends.py: Python script, ASCII text executable
./geemap/ee_tile_layers.py: Python script, ASCII text executable
./geemap/map_widgets.py: Python script, ASCII text executable
./geemap/kepler.py: Python script, ASCII text executable
./setup.py: Python script, ASCII text executable
./tests/__init__.py: Python script, ASCII text executable
./tests/test_map_widgets.py: Python script, ASCII text executable
./tests/fake_ee.py: Python script, ASCII text executable
./tests/test_ee_tile_layers.py: Python script, ASCII text executable
./tests/test_basemaps.py: Python script, ASCII text executable
./tests/test_geemap.py: Python script, ASCII text executable
./tests/test_toolbar.py: Python script, ASCII text executable
./tests/fake_map.py: Python script, ASCII text executable
./examples/batch_update.py: Python script, ASCII text executable
./examples/heroku/config_vars.py: Python script, ASCII text executable
./examples/template/template.py: Python script, ASCII text executable, with very long lines (457)
./examples/python/earthengine_js_to_ipynb.py: Python script, ASCII text executable
./examples/python/geemap_and_earthengine.py: Python script, ASCII text executable
./examples/python/earthengine_py_to_ipynb.py: Python script, ASCII text executable
./examples/python/javascript_to_python.py: Python script, ASCII text executable
```
### Description
Many of the executable files do not need to be executable.
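If it helps, the mode-bit half of the cleanup can be scripted; a sketch of the idea (in a real PR a dry-run mode would be wise):

```python
import stat
from pathlib import Path

EXEC_BITS = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH

def clear_exec_bits(root="."):
    """Strip execute bits from regular files outside .git; returns the paths changed."""
    changed = []
    for path in Path(root).rglob("*"):
        if ".git" in path.parts or not path.is_file():
            continue
        mode = path.stat().st_mode
        if mode & EXEC_BITS:
            path.chmod(mode & ~EXEC_BITS)
            changed.append(str(path))
    return sorted(changed)
```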
Also, something is up with geemap.py that makes it hard to work with (I have to do some text replacements for `import ee`) so that they work in a custom environment. Running `dos2unix` changes the entire file. I'm not sure what exactly is going on.
```shell
dos2unix geemap/geemap.py
```
```diff
git diff | head -30
diff --git a/geemap/geemap.py b/geemap/geemap.py
index d0bb18c..a2e1ba8 100644
--- a/geemap/geemap.py
+++ b/geemap/geemap.py
@@ -1,6808 +1,6808 @@
-"""Main module for interactive mapping using Google Earth Engine Python API and ipyleaflet.
-Keep in mind that Earth Engine functions use both camel case and snake case,
-such as setOptions(), setCenter(), centerObject(), addLayer().
-ipyleaflet functions use snake case, such as add_tile_layer(), add_wms_layer(), add_minimap().
-"""
-
-# *******************************************************************************#
-# This module contains core features and extra features of the geemap package. #
-# The Earth Engine team and the geemap community will maintain the core features.#
-# The geemap community will maintain the extra features. #
-# The core features include classes and functions below until the line # ******* #
-# *******************************************************************************#
-
-import os
-import warnings
-
-import ee
-import ipyleaflet
-import ipywidgets as widgets
-
-from box import Box
-from bqplot import pyplot as plt
-
-from IPython.display import display
-from .basemaps import get_xyz_dict, xyz_to_leaflet
```
| closed | 2023-09-06T17:12:43Z | 2023-09-29T01:29:26Z | https://github.com/gee-community/geemap/issues/1693 | [
"bug"
] | schwehr | 7 |
aiortc/aiortc | asyncio | 913 | Only add media bundle if offer SDP contains any media fields, useful for data only connections | Disclaimer: I'm not a WebRTC expert, in fact I only know the basics to get a simple connection up, event then this bug almost made me quit using this library, this is why I'm reporting it.
In the cases where a data only peer connection needs to be established aiortc responds to an SDP offer with an answer containing an empty `BUNDLE` group, this empty group is supposed to be filled with some `m=`s, but it never does since no media is declared and Chrome complains failing to set up the connection, for firefox this works fine.
This is the error that pops up when adding the SDP from aiortc to the browser:
```
Error: Failed to execute 'setRemoteDescription' on 'RTCPeerConnection': Failed to set remote answer sdp: A BUNDLE group contains a MID='' matching no m= section.
```
Here's the SDP produced by the browser:
```json
{
"type": "offer",
"sdp": "v=0\r\no=- 449907951519833443 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=extmap-allow-mixed\r\na=msid-semantic: WMS\r\n"
}
```
and here's the response from aiortc:
```json
{
"type": "answer",
"sdp": "v=0\r\no=- 3900077871 3900077871 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=group:BUNDLE \r\na=msid-semantic:WMS *\r\n"
}
```
My solution was to change the following lines here:
https://github.com/aiortc/aiortc/blob/9f14474c0953b90139c8697a216e4c2cd8ee5504/src/aiortc/rtcpeerconnection.py#L558-L561
to this:
```python
if len(description.media) > 0:
bundle = sdp.GroupDescription(semantic="BUNDLE", items=[])
for media in description.media:
bundle.items.append(media.rtp.muxId)
description.group.append(bundle)
```
This is the answer after the fix:
```json
{
"type":"answer",
"sdp":"v=0\r\no=- 3900078276 3900078276 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=msid-semantic:WMS *\r\n"
}
```
Is this fix correct? Would this pass as a pr? | closed | 2023-08-03T19:06:32Z | 2023-12-16T02:03:59Z | https://github.com/aiortc/aiortc/issues/913 | [
"stale"
] | fakuivan | 1 |
flairNLP/flair | nlp | 2,938 | All predictions are <unk> | I'm running the code from https://medium.com/thecyphy/training-custom-ner-model-using-flair-df1f9ea9c762 for NER on a custom dataset, and I find that no matter how I change the learning rate, every prediction is unk, and the f1-score is 0.0 on every epoch. I'm thinking there must be something wrong with the formatting of my dataset. Here is what my train set would look like, where I replace my actual labels with Text1 to keep my data anonymous.
```
Text1 B-Brand
Text1 O
Text1 B-MPN
Text1 B-Type
Text1 B-Model
Text1 B-Color
Text1 B-Fabric Type
Text1 B-No Tag
Text1 B-Brand
Text1 B-Color
Text1 B-Pattern
Text1 B-Fabric Type
Text1 B-Model
Text1 O
Text1 B-Type
Text1 B-No Tag
Text1 B-Type
Text1 O
Text1 B-No Tag
Text1 B-Type
```
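One hypothesis worth ruling out (mine, not a confirmed cause): several tags here contain spaces (`B-Fabric Type`, `B-No Tag`). Column corpora are commonly split on whitespace, in which case such a tag silently becomes two columns and the label column ends up truncated — easy to check directly:

```python
def split_columns(line):
    """Whitespace splitting, as a column-format reader typically does."""
    return line.rstrip("\n").split()

print(split_columns("Text1 B-Fabric Type"))  # -> ['Text1', 'B-Fabric', 'Type']
# If this matches your reader's behavior, replacing spaces in tags
# (e.g. 'B-Fabric Type' -> 'B-Fabric_Type') or using a tab delimiter may help.
```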
And here is the resulting `loss.tsv`, training with `learning_rate=.001` (I've already tried larger and smaller learning rates):
```
EPOCH TIMESTAMP BAD_EPOCHS LEARNING_RATE TRAIN_LOSS DEV_LOSS DEV_PRECISION DEV_RECALL DEV_F1 DEV_ACCURACY
1 21:55:24 0 0.0010 3.6659961510777896 3.160431146621704 0.0 0.0 0.0 0.0
2 21:55:30 0 0.0010 2.658900432190474 2.093571424484253 0.0 0.0 0.0 0.0
3 21:55:36 0 0.0010 1.5765421452425217 0.9758513569831848 0.0 0.0 0.0 0.0
4 21:55:42 0 0.0010 0.5964466308130153 0.21864087879657745 0.0 0.0 0.0 0.0
5 21:55:48 0 0.0010 0.12082597720927506 0.027130696922540665 0.0 0.0 0.0 0.0
6 21:55:55 0 0.0010 0.015038865753739897 0.0025882211048156023 0.0 0.0 0.0 0.0
7 21:56:02 0 0.0010 0.001861507955604636 0.000609234906733036 0.0 0.0 0.0 0.0
8 21:56:09 0 0.0010 0.0007104066469299261 0.0003203396627213806 0.0 0.0 0.0 0.0
9 21:56:16 0 0.0010 0.0004282736406687817 0.0002125622413586825 0.0 0.0 0.0 0.0
10 21:56:23 0 0.0010 0.0003175982157330431 0.00015547996736131608 0.0 0.0 0.0 0.0
11 21:56:30 0 0.0010 0.00023519093161660838 0.00012211497232783586 0.0 0.0 0.0 0.0
12 21:56:37 0 0.0010 0.00018551815456892758 0.00010058629413833842 0.0 0.0 0.0 0.0
13 21:56:42 0 0.0010 0.00016401175303360117 8.437278302153572e-05 0.0 0.0 0.0 0.0
14 21:56:48 0 0.0010 0.00013860434806521084 7.258114055730402e-05 0.0 0.0 0.0 0.0
15 21:56:54 0 0.0010 0.00012990906794919298 6.315676000667736e-05 0.0 0.0 0.0 0.0
16 21:57:00 0 0.0010 0.00010746981776682954 5.596564369625412e-05 0.0 0.0 0.0 0.0
17 21:57:07 0 0.0010 9.767208015885881e-05 5.0248483603354543e-05 0.0 0.0 0.0 0.0
18 21:57:13 0 0.0010 9.089903361855359e-05 4.502263982431032e-05 0.0 0.0 0.0 0.0
19 21:57:20 0 0.0010 8.164969794247736e-05 4.14940805057995e-05 0.0 0.0 0.0 0.0
20 21:57:27 0 0.0010 7.59508407533057e-05 3.7652862374670804e-05 0.0 0.0 0.0 0.0
```
Notably, the loss significantly decreases, but the f1 score remains the same. If it helps at all, here is also the code I use to run the trainer
```python
# define columns
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import TokenEmbeddings

columns = {0: 'text', 1: 'ner'}
# directory where the data resides
data_folder = './dataset2/'
# initializing the corpus
corpus = ColumnCorpus(
    data_folder,
    columns,
    train_file='train1.txt',
    test_file='test1.txt',
    dev_file='dev1.txt')
# tag to predict
tag_type = 'ner'
# make tag dictionary from the corpus
label_dictionary = corpus.make_label_dictionary(tag_type)

from flair.embeddings import WordEmbeddings, StackedEmbeddings
from typing import List

embedding_types: List[TokenEmbeddings] = [
    WordEmbeddings('glove'), ]
embeddings: StackedEmbeddings = StackedEmbeddings(
    embeddings=embedding_types)
print(embeddings)

from flair.models import SequenceTagger

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=label_dictionary,
                        tag_type=tag_type,
                        use_crf=True)

from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)
print(trainer)
trainer.train('resources/taggers/example-ner',
              learning_rate=.001,
              mini_batch_size=32,
              max_epochs=20)
```
Please let me know if there's anything that might stand out as to why the model is just not learning. Thanks | closed | 2022-09-11T03:02:28Z | 2023-03-30T10:40:49Z | https://github.com/flairNLP/flair/issues/2938 | ["question", "wontfix"] | ModeEric | 4 |
polakowo/vectorbt | data-visualization | 518 | `from_order_func` seems missing some trades, when used with `size_type=SizeType.Value` | I was trying to make a `from_order_func` test, similar to one in the documentation, but slightly closer in structure to what I see in some of my real scenarios:
```python
import unittest

import numpy as np
import pandas as pd
import vectorbt as vbt
from numba import njit
from vectorbt.portfolio import SizeType, nb, FlexOrderContext, Order, Direction, NoOrder
from vectorbt import _typing as tp


class TestFromOrderFunc(unittest.TestCase):
    def test_from_order_func(self):
        _close = pd.Series(index=pd.date_range('2012.01.01', periods=4),
                           data=[12.0, 12.1, 12.2, 12.3])
        pf = vbt.Portfolio.from_order_func(
            _close,
            test_order_func_nb,
            flexible=True,
            init_cash=10_000,
            max_orders=8
        )
        print(f'Orders:\n{pf.orders.records_readable}')
        print(f'Trades:\n{pf.trades.records_readable}')
        self.assertEqual(8, len(pf.orders.records_readable))
        self.assertEqual(4, len(pf.trades.records_readable))


@njit()
def test_order_func_nb(c: FlexOrderContext) -> tp.Tuple[int, Order]:
    _close = c.close[c.i, c.from_col]
    fees = 0.01
    if c.call_idx == 0:
        return c.from_col, nb.order_nb(
            size=10.0,
            direction=Direction.LongOnly,
            price=_close,
            size_type=SizeType.Amount,
            fees=fees
        )
    elif c.call_idx == 1:
        return c.from_col, nb.close_position_nb(
            price=_close,
            fees=fees
        )
    else:
        return -1, NoOrder
```
Basically, this is just 4 ticks; on each tick we enter a long position (or short, doesn't really matter), with fees; and on the same tick exit it. The prices of entry/exit don't really matter in this case.
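For context (my own arithmetic sketch, not vectorbt code): with `SizeType.Amount` the `size=10.0` above means 10 shares, whereas with `SizeType.Value` it means a target order value of $10, so the ordered quantity then depends on the close price:

```python
size = 10.0
close = 12.0  # close price at the first tick

shares_amount = size         # SizeType.Amount: size is a number of shares
shares_value = size / close  # SizeType.Value: size is a monetary order value
print(shares_amount, round(shares_value, 4))
```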
The testcase above works well, but only when `size_type=SizeType.Amount`. If I switch it to `size_type=SizeType.Value`, it fails, generating just 6 orders (3 trades) and completely missing the trades on the first timestamp (2012.01.01). | open | 2022-10-26T13:14:51Z | 2024-03-18T01:47:56Z | https://github.com/polakowo/vectorbt/issues/518 | ["stale"] | amyodov | 1 |
huggingface/text-generation-inference | nlp | 2,581 | Run Locally instructions don't work | The readme states this:

However, the command does not work:

Please fix the instructions; they are obviously incorrect. | closed | 2024-09-27T15:23:14Z | 2024-10-03T22:01:17Z | https://github.com/huggingface/text-generation-inference/issues/2581 | [] | gabriel-peracio | 3 |
ultralytics/yolov5 | machine-learning | 12,976 | resume_evolve BUG!!! | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Evolution
### Bug
```python
with open(ROOT / opt.resume_evolve, errors="ignore") as f:
    evolve_population = yaml.safe_load(f)
print("evolve_population = ", evolve_population)
for value in evolve_population.values():
    print("value = ", value)
    value = np.array([value[k] for k in hyp_GA.keys()])
    initial_values.append(list(value))
```
Is there a problem here? `evolve_population` is loaded from the YAML and each `value` is just a float, so how can `value[k]` possibly work?
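To illustrate the suspicion (a minimal standalone repro of mine, independent of the YOLOv5 code, with a made-up key `lr0`): if `yaml.safe_load` returns a mapping whose values are plain floats, indexing one of them with a hyperparameter key raises a `TypeError`:

```python
# A flat mapping of floats, as a hyperparameter-evolution YAML might deserialize to.
evolve_population = {"gen0": 0.01, "gen1": 0.02}

for value in evolve_population.values():
    try:
        value["lr0"]  # value is a float, not a dict, so subscripting cannot work
    except TypeError as exc:
        print(type(exc).__name__)  # TypeError
```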
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-04-29T10:46:01Z | 2024-10-20T19:45:08Z | https://github.com/ultralytics/yolov5/issues/12976 | ["bug"] | jackttj | 3 |
keras-team/keras | pytorch | 20,070 | Discovered a magic bug in keras, which is caused by incoming data | Here is an example code, where bert4keras3 is my own llm library, which can be installed through pip
```python
import json
config = {
"type_vocab_size": 2,
"use_bias": 0,
"o_bias": 0,
"vocab_size": 64000,
"num_hidden_layers": 1,
"query_head": 32,
"num_attention_heads": 4,
"hidden_size": 4096,
"intermediate_size": 11008,
"attention_head_size": 128,
"dropout_rate": 0.0,
"hidden_act": "silu",
"max_wavelength": 5000000.0
}
with open('config.json', 'w') as f:
json.dump(config, f, indent=4, ensure_ascii=False)
import os
os.environ["KERAS_BACKEND"] = "torch"
os.environ["CUDA_VISIBLE_DEVICES"]="-1"
maxlen= 16
import tensorflow as tf
import tensorflow.experimental.numpy as tnp
import keras
from bert4keras3.models import build_transformer_model
keras.config.set_dtype_policy("bfloat16")
model=build_transformer_model(
'config.json',
model='llama',
sequence_length=maxlen,
)
tokens = tnp.random.randint(100,60000,size=[3,maxlen],dtype='int32')
segments = tnp.ones([3,maxlen],dtype='int32')
print(model.inputs)
#this can successfully running
print(model([tokens,segments]))
#this can successfully running
try:
print(model({'Input-Token':segments ,
'Input-Segment':tokens }))
except:
print('error')
#this can not successfully running
try:
model({'Input-Token':tokens ,
'Input-Segment':segments })
except:
print('error')
```
The reason for the failure can be discovered in the `call` function of the `bert4keras.Layers.Add.Embedding` class. If you print `inputs, self.name`, you will find that the layer where 'Input-Token' should be passed has received 'Input-Segment' instead. However, if you print `self.name, inputs.name` during the execution of the `build_transformer_model` function, you can see that they are matched correctly.
Further, if we do not use a dictionary as input and instead pass a list in the order of `model.inputs`, the code will run smoothly at this point.
Additionally, this bug only exists in Keras 3.4.1 and not in Keras 3.3.3.
| closed | 2024-07-31T13:32:40Z | 2024-12-11T16:31:27Z | https://github.com/keras-team/keras/issues/20070 | [
"type:Bug",
"backend:tensorflow",
"backend:jax"
] | pass-lin | 9 |
deepinsight/insightface | pytorch | 2,396 | paddle版示例中余弦相似度负值问题 | paddle版中,提供的人脸识别示例,insightface-master\recognition\arcface_paddle\tools\test_recognition.py,代码中计算余弦相似度,如果是负值,就取绝对值,这个为什么?负值本来是相似度较小,取正不就变成相似度大了吗?困惑
Code:

Also, I'd like to ask another question: cosine similarity ranges from -1 to 1, so how can it be converted to something like a percentage? Min-max normalization isn't ideal, because it compresses differences in magnitude. | open | 2023-08-04T07:07:48Z | 2023-08-04T10:49:44Z | https://github.com/deepinsight/insightface/issues/2396 | [] | truthsun22 | 1 |
dbfixtures/pytest-postgresql | pytest | 403 | Flask factory testing | ### What action do you want to perform
Run my tests, developed with Flask + pytest, using pytest-postgresql.
Follow the [Flask factory conventions](https://flask.palletsprojects.com/en/1.1.x/tutorial/factory/?highlight=config#the-application-factory).
Use [Flask-SQLAlchemy](https://flask-sqlalchemy.palletsprojects.com/en/2.x/config/) with the configuration variable `SQLALCHEMY_DATABASE_URI`.
### Example
```py
@fixture(scope="session")
def connection(postgresql_proc):
    """Build the database connection URI for the tests."""
    dbname = "test"  # <-- Cannot access a default table for the tests?¿?
    user = postgresql_proc.user
    host = postgresql_proc.host
    port = postgresql_proc.port
    return f'postgresql+psycopg2://{user}:@{host}:{port}/{dbname}'


@fixture(scope="session")
def app(connection):
    """Create application for the tests."""
    return create_app(
        config_object="eosc_perf.settings.TestingConfig",
        SQLALCHEMY_DATABASE_URI=connection)


@fixture
def db(app):
    """Create database for the tests."""
    _db.app = app
    _db.create_all()
    yield _db
    # Explicitly close DB connection
    _db.session.close()
    _db.drop_all()
```
| closed | 2021-04-19T12:37:38Z | 2021-04-21T15:14:34Z | https://github.com/dbfixtures/pytest-postgresql/issues/403 | [] | BorjaEst | 5 |
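The URI construction in the `connection` fixture above can be factored into a plain helper (my own sketch; the hard-coded `test` database name comes from the snippet, and whether that database actually exists depends on your pytest-postgresql setup):

```python
def make_sqlalchemy_uri(user: str, host: str, port: int, dbname: str) -> str:
    """Build the SQLALCHEMY_DATABASE_URI string expected by Flask-SQLAlchemy."""
    return f"postgresql+psycopg2://{user}:@{host}:{port}/{dbname}"


# Example with values a postgresql_proc fixture might expose:
print(make_sqlalchemy_uri("postgres", "127.0.0.1", 5433, "test"))
```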