| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
holoviz/panel | matplotlib | 7,292 | "Debugging in VS Code" Documentation insufficient? | #### ALL software version info
```plaintext
panel==1.5.0
VS Code Version: 1.93.1 (Universal)
```
#### Description of expected behavior and the observed behavior
When adding a VS Code debugging configuration as suggested in the documentation, I expect to see Panel site variables in the debugging pane (even without setting a breakpoint). Unfortunately, the debugging configurations don't work for me... the debugging pane remains empty (see screenshot below).
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn

def example(a):
    b = a + 3
    return b

pn.interact(example, a=2).servable()
```
Top configuration from the [VS Code documentation page](https://panel.holoviz.org/how_to/editor/vscode_configure.html#debugging), bottom configuration from the issue opened by @hoxbro: https://github.com/holoviz/panel/issues/2833
```json
{
    "name": "panel serve",
    "type": "debugpy",
    "request": "launch",
    "program": "-m",
    "args": [
        "panel",
        "serve",
        "${relativeFile}",
        "--index",
        "${fileBasenameNoExtension}",
        "--show"
    ],
    "console": "integratedTerminal",
    "justMyCode": true
},
{
    "name": "Python: Panel",
    "type": "python",
    "request": "launch",
    "module": "panel",
    "args": [
        "serve",
        "${file}"
    ]
}
```
#### Stack traceback and/or browser JavaScript console output
N/A
#### Screenshots or screencasts of the bug in action
<img width="1728" alt="Screenshot 2024-09-18 at 07 03 10" src="https://github.com/user-attachments/assets/b019ddaa-1e4f-4e16-a727-1363b5f7f853">
- [x] I may be interested in making a pull request to address this
| closed | 2024-09-18T05:16:34Z | 2024-09-18T07:25:30Z | https://github.com/holoviz/panel/issues/7292 | [] | michaelweinold | 3 |
kubeflow/katib | scikit-learn | 1,744 | [Proposal] Support JSON format for `file-metrics-collector` | /kind feature
Describe the solution you'd like
[A clear and concise description of what you want to happen.]
## Motivation
Currently, it is difficult for `file-metrics-collector` to parse JSON-format files with a regexp filter, since `file-metrics-collector` is designed around TEXT-format files.
I believe that if `file-metrics-collector` supported JSON-format files, Katib would become more powerful, because JSON metrics files could be used easily, without regexps.
Therefore, I would like to support JSON format in `file-metrics-collector`, newline-delimited as in the following example.
```text
{"foo": “bar", “fiz": “buz"…}
{“foo": “bar", “fiz": “buz"…}
{“foo": “bar", “fiz": “buz"…}
{“foo": “bar", “fiz": “buz"…}
…
```
This JSON format is also used in [cloudml-hypertune](https://github.com/GoogleCloudPlatform/cloudml-hypertune) recommended for use in GCP AI Platform or Vertex AI.
> If you use a custom container for training or if you want to perform hyperparameter tuning with a framework other than TensorFlow, then you must use the cloudml-hypertune Python package to report your hyperparameter metric to AI Platform Training.
https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning#other_machine_learning_frameworks_or_custom_containers
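For illustration, metrics in this newline-delimited JSON format can be consumed without regular expressions. The sketch below is not Katib's implementation; the function and metric names are made up:

```python
import json

def collect_metrics(lines, metric_names):
    """Parse newline-delimited JSON records and keep only the named metrics."""
    collected = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between records
        record = json.loads(line)
        collected.append({k: v for k, v in record.items() if k in metric_names})
    return collected

log = '{"loss": 0.41, "accuracy": 0.88, "epoch": 1}\n{"loss": 0.32, "accuracy": 0.91, "epoch": 2}\n'
print(collect_metrics(log.splitlines(), {"loss", "accuracy"}))
# [{'loss': 0.41, 'accuracy': 0.88}, {'loss': 0.32, 'accuracy': 0.91}]
```

Each training step appends one JSON object per line, so the collector never needs a user-supplied regexp.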
## Design
I'm thinking of the following Kubernetes API and webhook changes. Also, `file-metrics-collector` collects values whose keys match `spec.objective.objectiveMetricName` and `spec.objective.additionalMetricNames` from the metrics file if `FileSystemFileFormat` is set to `Json`.
- [common_types.go](https://github.com/kubeflow/katib/blob/46207a3c10529d1d8ee3b71a5088adc14d1aa32b/pkg/apis/controller/common/v1beta1/common_types.go#L190-L195)
```diff
+ type FileSystemFileFormat string
+
+ const (
+     TextFormat FileSystemFileFormat = "Text"
+     JsonFormat FileSystemFileFormat = "Json"
+ )

  type FileSystemPath struct {
      Path string `json:"path,omitempty"`
      Kind FileSystemKind `json:"kind,omitempty"`
+     FileFormat FileSystemFileFormat `json:"fileFormat,omitempty"`
  }
```
- [validator.go](https://github.com/kubeflow/katib/blob/46207a3c10529d1d8ee3b71a5088adc14d1aa32b/pkg/webhook/v1beta1/experiment/validator/validator.go#L392-L412)
```diff
  func (g *DefaultValidator) validateMetricsCollector(inst *experimentsv1beta1.Experiment) error {
      mcSpec := inst.Spec.MetricsCollectorSpec
      mcKind := mcSpec.Collector.Kind
      ...
      switch mcKind {
      ...
      case commonapiv1beta1.FileCollector:
          ...
+         fileFormat := mcSpec.Source.FileSystemPath.FileFormat
+         if fileFormat == "" {
+             fileFormat = commonapiv1beta1.TextFormat
+         } else if fileFormat != commonapiv1beta1.TextFormat && fileFormat != commonapiv1beta1.JsonFormat {
+             return fmt.Errorf("invalid .spec.metricsCollectorSpec.source.fileSystemPath.fileFormat: %q (must be Text or Json)", fileFormat)
+         }
      ...
```
Does it sound good to you? @kubeflow/wg-automl-leads
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
| closed | 2021-11-25T13:46:43Z | 2022-04-05T15:20:37Z | https://github.com/kubeflow/katib/issues/1744 | [
"kind/feature"
] | tenzen-y | 15 |
PrefectHQ/prefect | automation | 17,129 | `DeploymentScheduleUpdate.active` server api does not match the client api and cause an InternalError on the API | ### Bug summary
With the new Prefect version 3.2.0, existing deployments cannot be updated, because the client-side `DeploymentScheduleUpdate` schema does not match the server-side `DeploymentScheduleUpdate` schema for the `active` field. This causes a Prefect internal server error when applying deployments from the client side that already exist on the server: the server-side `DeploymentScheduleUpdate` ends up with the default value of `None`, which is not allowed by the server-side `DeploymentScheduleCreate`. I am not sure how this was not noticed in the tests. A workaround is to explicitly set the `active` field to `True`, which is also the default.
server api:
https://github.com/PrefectHQ/prefect/blob/d982c69a8bd4fb92cb250bc91dea25d361601260/src/prefect/server/schemas/actions.py#L104-L106
https://github.com/PrefectHQ/prefect/blob/d982c69a8bd4fb92cb250bc91dea25d361601260/src/prefect/server/schemas/actions.py#L133-L135
client api:
https://github.com/PrefectHQ/prefect/blob/d982c69a8bd4fb92cb250bc91dea25d361601260/src/prefect/client/schemas/actions.py#L95-L97
https://github.com/PrefectHQ/prefect/blob/d982c69a8bd4fb92cb250bc91dea25d361601260/src/prefect/client/schemas/actions.py#L172-L174
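Stripped of Prefect internals, the failure reduces to revalidating the client default of `None` against a server field that requires a real boolean. The sketch below is illustrative only; it mirrors the shape of the schemas, not their actual code:

```python
from typing import Optional

def server_schedule_create(active: bool = True) -> bool:
    """Stand-in for the server-side DeploymentScheduleCreate check:
    once a value is passed explicitly, it must be a real boolean."""
    if not isinstance(active, bool):
        raise ValueError("active: Input should be a valid boolean")
    return active

# Client-side DeploymentScheduleUpdate defaults `active` to None...
client_active: Optional[bool] = None

server_schedule_create()  # omitted entirely -> server default True, fine
try:
    # ...but forwarding the client default explicitly fails validation.
    server_schedule_create(client_active)
except ValueError as exc:
    print(exc)  # active: Input should be a valid boolean
```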
The traceback from the server:
```python
2025-02-13 17:39:24.177 16:39:24.174 | ERROR | prefect.server - Encountered exception in request:
2025-02-13 17:39:24.177 Traceback (most recent call last):
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
2025-02-13 17:39:24.177 await self.app(scope, receive, _send)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/gzip.py", line 20, in __call__
2025-02-13 17:39:24.177 await responder(scope, receive, send)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/gzip.py", line 39, in __call__
2025-02-13 17:39:24.177 await self.app(scope, receive, self.send_with_gzip)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
2025-02-13 17:39:24.177 await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2025-02-13 17:39:24.177 raise exc
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
2025-02-13 17:39:24.177 await app(scope, receive, sender)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
2025-02-13 17:39:24.177 await self.middleware_stack(scope, receive, send)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
2025-02-13 17:39:24.177 await route.handle(scope, receive, send)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
2025-02-13 17:39:24.177 await self.app(scope, receive, send)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
2025-02-13 17:39:24.177 await wrap_app_handling_exceptions(app, request)(scope, receive, send)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2025-02-13 17:39:24.177 raise exc
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
2025-02-13 17:39:24.177 await app(scope, receive, sender)
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
2025-02-13 17:39:24.177 response = await f(request)
2025-02-13 17:39:24.177 ^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/prefect/server/utilities/server.py", line 50, in handle_response_scoped_depends
2025-02-13 17:39:24.177 response = await default_handler(request)
2025-02-13 17:39:24.177 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 301, in app
2025-02-13 17:39:24.177 raw_response = await run_endpoint_function(
2025-02-13 17:39:24.177 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
2025-02-13 17:39:24.177 return await dependant.call(**values)
2025-02-13 17:39:24.177 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/prefect/server/api/deployments.py", line 301, in update_deployment
2025-02-13 17:39:24.177 result = await models.deployments.update_deployment(
2025-02-13 17:39:24.177 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/prefect/server/models/deployments.py", line 275, in update_deployment
2025-02-13 17:39:24.177 schemas.actions.DeploymentScheduleCreate(
2025-02-13 17:39:24.177 File "/usr/local/lib/python3.12/site-packages/pydantic/main.py", line 214, in __init__
2025-02-13 17:39:24.177 validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
2025-02-13 17:39:24.177 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.177 pydantic_core._pydantic_core.ValidationError: 1 validation error for DeploymentScheduleCreate
2025-02-13 17:39:24.177 active
2025-02-13 17:39:24.177 Input should be a valid boolean [type=bool_type, input_value=None, input_type=NoneType]
2025-02-13 17:39:24.177 For further information visit https://errors.pydantic.dev/2.10/v/bool_type
2025-02-13 17:39:24.179 ERROR: Exception in ASGI application
2025-02-13 17:39:24.179 Traceback (most recent call last):
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
2025-02-13 17:39:24.179 result = await app( # type: ignore[func-returns-value]
2025-02-13 17:39:24.179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
2025-02-13 17:39:24.179 return await self.app(scope, receive, send)
2025-02-13 17:39:24.179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
2025-02-13 17:39:24.179 await super().__call__(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 112, in __call__
2025-02-13 17:39:24.179 await self.middleware_stack(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
2025-02-13 17:39:24.179 raise exc
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
2025-02-13 17:39:24.179 await self.app(scope, receive, _send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/cors.py", line 85, in __call__
2025-02-13 17:39:24.179 await self.app(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
2025-02-13 17:39:24.179 await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2025-02-13 17:39:24.179 raise exc
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
2025-02-13 17:39:24.179 await app(scope, receive, sender)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
2025-02-13 17:39:24.179 await self.middleware_stack(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
2025-02-13 17:39:24.179 await route.handle(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 460, in handle
2025-02-13 17:39:24.179 await self.app(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
2025-02-13 17:39:24.179 await super().__call__(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 112, in __call__
2025-02-13 17:39:24.179 await self.middleware_stack(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
2025-02-13 17:39:24.179 raise exc
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
2025-02-13 17:39:24.179 await self.app(scope, receive, _send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/gzip.py", line 20, in __call__
2025-02-13 17:39:24.179 await responder(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/gzip.py", line 39, in __call__
2025-02-13 17:39:24.179 await self.app(scope, receive, self.send_with_gzip)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
2025-02-13 17:39:24.179 await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2025-02-13 17:39:24.179 raise exc
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
2025-02-13 17:39:24.179 await app(scope, receive, sender)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
2025-02-13 17:39:24.179 await self.middleware_stack(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
2025-02-13 17:39:24.179 await route.handle(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
2025-02-13 17:39:24.179 await self.app(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
2025-02-13 17:39:24.179 await wrap_app_handling_exceptions(app, request)(scope, receive, send)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2025-02-13 17:39:24.179 raise exc
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
2025-02-13 17:39:24.179 await app(scope, receive, sender)
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
2025-02-13 17:39:24.179 response = await f(request)
2025-02-13 17:39:24.179 ^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/prefect/server/utilities/server.py", line 50, in handle_response_scoped_depends
2025-02-13 17:39:24.179 response = await default_handler(request)
2025-02-13 17:39:24.179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 301, in app
2025-02-13 17:39:24.179 raw_response = await run_endpoint_function(
2025-02-13 17:39:24.179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
2025-02-13 17:39:24.179 return await dependant.call(**values)
2025-02-13 17:39:24.179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/prefect/server/api/deployments.py", line 301, in update_deployment
2025-02-13 17:39:24.179 result = await models.deployments.update_deployment(
2025-02-13 17:39:24.179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/prefect/server/models/deployments.py", line 275, in update_deployment
2025-02-13 17:39:24.179 schemas.actions.DeploymentScheduleCreate(
2025-02-13 17:39:24.179 File "/usr/local/lib/python3.12/site-packages/pydantic/main.py", line 214, in __init__
2025-02-13 17:39:24.179 validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
2025-02-13 17:39:24.179 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-13 17:39:24.179 pydantic_core._pydantic_core.ValidationError: 1 validation error for DeploymentScheduleCreate
2025-02-13 17:39:24.179 active
2025-02-13 17:39:24.179 Input should be a valid boolean [type=bool_type, input_value=None, input_type=NoneType]
2025-02-13 17:39:24.179 For further information visit https://errors.pydantic.dev/2.10/v/bool_type
```
### Version info
```Text
prefect >= 3.2.0
```
### Additional context
_No response_ | closed | 2025-02-13T17:37:26Z | 2025-02-13T17:52:07Z | https://github.com/PrefectHQ/prefect/issues/17129 | [
"bug"
] | marcm-ml | 1 |
neuml/txtai | nlp | 674 | Add support for dynamic vector dimensions | Add support for dynamic vector dimensions. This enables using models trained with [Matryoshka Representation Learning](https://arxiv.org/pdf/2205.13147.pdf). Support for this method was added with Sentence Transformers 2.4.
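The idea behind MRL is that the leading dimensions of a trained embedding remain useful on their own, so a vector can be truncated to a smaller dimension and re-normalized. A rough pure-Python sketch of that operation (not txtai's API):

```python
import math

def truncate_embedding(vector, dim):
    """Keep the first `dim` components of a Matryoshka-style embedding
    and re-normalize the result to unit length."""
    truncated = vector[:dim]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

full = [0.6, 0.8, 0.0, 0.0]          # toy 4-dimensional embedding
small = truncate_embedding(full, 2)  # keep only the first 2 dimensions
print(small)                         # approximately [0.6, 0.8]
```

Supporting this in txtai would presumably mean letting the vectors configuration request a target dimension smaller than the model's native output.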
See [this blog post](https://huggingface.co/blog/matryoshka) for more. | closed | 2024-02-24T11:20:28Z | 2024-02-28T02:22:16Z | https://github.com/neuml/txtai/issues/674 | [] | davidmezzetti | 0 |
proplot-dev/proplot | matplotlib | 137 | Would you add the "readshapefile" method in proplot? | <!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
[Description of the bug or feature.]
### Steps to reproduce
A "[Minimal, Complete and Verifiable Example](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)" will make it much easier for maintainers to help you.
```python
# your code here
# we should be able to copy-paste this into python and exactly reproduce your bug
```
**Expected behavior**: [What you expected to happen]
**Actual behavior**: [What actually happened]
### Equivalent steps in matplotlib
Please make sure this bug is related to a specific proplot feature. If you're not sure, try to replicate it with the [native matplotlib API](https://matplotlib.org/3.1.1/api/index.html). Matplotlib bugs belong on the [matplotlib github page](https://github.com/matplotlib/matplotlib).
```python
# your code here, if applicable
```
### Proplot version
Paste the result of `import proplot; print(proplot.version)` here.
| closed | 2020-04-07T04:14:20Z | 2020-04-22T23:15:45Z | https://github.com/proplot-dev/proplot/issues/137 | [
"feature"
] | sfhua | 2 |
plotly/dash-table | plotly | 829 | Dash DataTable with RadioButton as cell content | I’m trying to create a table containing players, games and a set of configurations that can be toggled on/off.
The ideal would be to use a Dash DataTable with RadioButtons (similar to DropDowns). Is this possible? Any ideas on how to achieve this?
![Screenshot 2020-09-16 at 23.27.41](https://user-images.githubusercontent.com/placeholder)
The example shown is created with “standard” HTML table, but as the list can be very long I need it to be scrollable and flexible similar to a DataTable. | open | 2020-09-18T22:11:20Z | 2020-09-18T22:11:20Z | https://github.com/plotly/dash-table/issues/829 | [] | TomRoger | 0 |
opengeos/leafmap | plotly | 885 | Add support for selecting/editing multiple features | Selecting and highlight multiple features
```python
import os
import json
import requests
import copy
from ipyleaflet import Map, GeoJSON, LayersControl


def create_geojson_map(data, style, hover_style, highlight_style, center=(50.6252978589571, 0.34580993652344), zoom=4):
    """
    Create a GeoJSON map with click highlighting functionality.

    Parameters:
    - data: GeoJSON data
    - style: Default style for the GeoJSON layer
    - hover_style: Style when hovering over features
    - highlight_style: Style to apply when a feature is clicked
    - center: Initial center of the map (default: Europe)
    - zoom: Initial zoom level (default: 4)
    """
    # Create the map
    m = Map(center=center, zoom=zoom, scroll_wheel_zoom=True)

    # Function to highlight the clicked feature
    def highlight_feature(event, feature, **kwargs):
        original_data = copy.deepcopy(geo_json.data)
        for index, f in enumerate(original_data['features']):
            if f['properties']['name'] == feature['properties']['name']:
                if "fillColor" in original_data['features'][index]['properties']['style']:
                    color = original_data['features'][index]['properties']['style']['fillColor']
                    if color == "yellow":
                        original_data['features'][index]['properties']['style'] = style
                    else:
                        original_data['features'][index]['properties']['style'] = highlight_style
                else:
                    original_data['features'][index]['properties']['style'] = highlight_style
                break
        geo_json.data = original_data

    # Create the GeoJSON layer
    geo_json = GeoJSON(
        data=data,
        style=style,
        hover_style=hover_style,
        name='Countries'
    )

    # Add click event to highlight features
    geo_json.on_click(highlight_feature)

    # Add the GeoJSON layer to the map
    m.add_layer(geo_json)

    # Add a layer control
    control = LayersControl(position='topright')
    m.add_control(control)

    return m


# Download the GeoJSON file if it doesn't exist
if not os.path.exists('europe_110.geo.json'):
    url = 'https://raw.githubusercontent.com/jupyter-widgets/ipyleaflet/master/examples/europe_110.geo.json'
    r = requests.get(url)
    with open('europe_110.geo.json', 'w') as f:
        f.write(r.content.decode("utf-8"))

# Load the GeoJSON data
with open('europe_110.geo.json', 'r') as f:
    data = json.load(f)

# Default styles
style = {"color": "#3388ff"}
hover_style = {'color': 'yellow', 'dashArray': '0', 'fillOpacity': 0.3}
highlight_style = {'color': '#3388ff', 'fillColor': 'yellow', 'weight': 3, 'fillOpacity': 0.5}

# Create and display the map
m = create_geojson_map(data, style, hover_style, highlight_style)
m
```

| closed | 2024-09-06T03:23:44Z | 2024-09-11T03:01:50Z | https://github.com/opengeos/leafmap/issues/885 | [
"Feature Request"
] | giswqs | 3 |
paperless-ngx/paperless-ngx | django | 9,141 | [BUG] Concise description of the issue | ### Description
hello,
your script does not work. It says that "docker compose" does not exist. It is not calling "docker-compose", it is calling "docker compose". Tried different ways to get paperless up and running but was not able to get it done. Very frustrating.
Regards
Thomas
### Steps to reproduce
run the script install-paperless-ngx.sh.1 on a fresh debian node
### Webserver logs
```bash
no logs
```
### Browser logs
```bash
```
### Paperless-ngx version
latest?
### Host OS
debian
### Installation method
Docker - official image
### System status
```json
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-02-17T18:19:08Z | 2025-03-20T03:12:52Z | https://github.com/paperless-ngx/paperless-ngx/issues/9141 | [
"not a bug"
] | higgyforever | 8 |
xonsh/xonsh | data-science | 5,071 | Suppress subprocess traceback in case `XONSH_SHOW_TRACEBACK=False` and `$RAISE_SUBPROC_ERROR=True` | Source - https://github.com/xonsh/xonsh/discussions/4708
```xsh
echo @("""
$XONSH_SHOW_TRACEBACK = False
$RAISE_SUBPROC_ERROR = True
print('LINE 1')
cp nonexisting_file.txt other_name.txt
print("LINE 2")
""") > /tmp/1.xsh
xonsh /tmp/1.xsh
#
# CURRENT OUTPUT: full traceback
#
#
# EXPECTED OUTPUT - short traceback with the line number and the exception:
#
# LINE 1
# Traceback (most recent call last):
# File "/tmp/1.xsh", line 6, in <module>
# cp nonexisting_file.txt other_name.txt
# subprocess.CalledProcessError: Command '['cp', 'nonexisting_file.txt', 'other_name.txt']'
# returned non-zero exit status
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| closed | 2023-02-24T07:48:09Z | 2023-03-08T07:12:35Z | https://github.com/xonsh/xonsh/issues/5071 | [
"error",
"priority-medium"
] | anki-code | 6 |
pydantic/FastUI | pydantic | 337 | disable/enable/hide form elements dependent on input in another element | Hi,
Thanks for this great tool.
We often have forms where some questions do not make sense if a question was answered in a specific way before. For example, on a feedback form to an invitation, we ask if the invitee will attend the event (a required `bool` field), and if they attend, we ask how many guests they will bring with them (an optional `int` field):
```python
from typing import Annotated, Optional
from pydantic import BaseModel, Field
from fastui import components as c
class Feedback(BaseModel):
    attendance: Annotated[bool, Field(title='I will attend the ceremony.')]
    attendees_count: Annotated[Optional[int], Field(title='I will be accompanied by (max 5) additional guest(s).')] = None

c.ModelForm(submit_url='./answer', model=Feedback)
```
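FastUI does not seem to document conditional show/hide between form fields. In the meantime, the dependency between the two questions can at least be enforced server-side when the form is submitted; the sketch below uses a plain function for illustration (a Pydantic model validator would play the same role):

```python
def validate_feedback(form: dict) -> dict:
    """Reject attendee counts that contradict the attendance answer."""
    attendance = form.get("attendance")
    count = form.get("attendees_count")
    if not attendance and count is not None:
        raise ValueError("attendees_count only applies when attending")
    if count is not None and not 0 <= count <= 5:
        raise ValueError("attendees_count must be between 0 and 5")
    return form

print(validate_feedback({"attendance": True, "attendees_count": 2}))
# {'attendance': True, 'attendees_count': 2}
```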
Is it possible to disable/enable or maybe even hide/unhide form element `attendees_count` depending on the value of `attendance`? | open | 2024-06-27T19:08:33Z | 2024-06-27T19:08:33Z | https://github.com/pydantic/FastUI/issues/337 | [] | PHvL | 0 |
vitalik/django-ninja | django | 1,180 | Static Method Not Reflecting Instance-Specific Argument in Dynamically Created ModelSchema Class | **Description:**
I encountered an issue with dynamically creating schema classes where a static method within the class does not reflect an instance-specific argument. Below is the code snippet illustrating the problem:
```Python
def get_some_schema(a=False):
    class SomeSchema(ModelSchema):
        class Meta:
            model = SomeModel
            fields = ['some_field']

        @staticmethod
        def resolve_some_field(obj):
            print(f"{a=}")
            return "resolved field"

    return SomeSchema
```
**Steps to Reproduce:**
**1. Instantiate the schema class with different arguments:**
```Python
s1 = get_some_schema()
s2 = get_some_schema(True)
```
**2. Call the resolve_some_field method on both instances:**
```Python
s1.resolve_some_field(None) # Outputs: a=False
s2.resolve_some_field(None) # Outputs: a=False (expected: a=True)
```
**Versions (please complete the following information):**
- Python version: 3.12
- Django version: 4.2.11
- Django-Ninja version: 1.1.0
- Pydantic version: 2.7.2
| open | 2024-05-31T10:46:43Z | 2024-05-31T10:46:43Z | https://github.com/vitalik/django-ninja/issues/1180 | [] | Alex-Sichkar | 0 |
huggingface/datasets | numpy | 7,287 | Support for identifier-based automated split construction | ### Feature request
As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure))
It would seem to be pretty useful to also allow splits to be based on identifiers of individual examples
This could be configured like:
`{"split_name": {"column_name": [column values in split]}}`
(This in turn requires unique 'index' columns, which could be explicitly supported or just assumed to be defined appropriately by the user).
I guess a potential downside would be that shards would end up spanning different splits - is this something that can be handled somehow? Would this only affect streaming from hub?
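The semantics of that identifier-based mapping could look like the following sketch (plain Python over a list of rows; names are illustrative, not a proposed datasets API):

```python
def split_by_identifier(rows, split_spec):
    """Partition rows into named splits by per-example identifier values.

    split_spec follows the proposed shape:
    {"split_name": {"column_name": [values belonging to that split]}}
    """
    splits = {name: [] for name in split_spec}
    for row in rows:
        for name, spec in split_spec.items():
            (column, values), = spec.items()
            if row[column] in set(values):
                splits[name].append(row)
    return splits

rows = [{"id": i, "x": i * i} for i in range(5)]
spec = {"train": {"id": [0, 1, 2]}, "test": {"id": [3, 4]}}
result = split_by_identifier(rows, spec)
print({name: [r["id"] for r in v] for name, v in result.items()})
# {'train': [0, 1, 2], 'test': [3, 4]}
```

Because membership is looked up per row, the same data files could back several alternative split sets, at the cost of splits crossing shard boundaries as noted above.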
### Motivation
The main motivation would be that all data files could be stored in a single directory, and multiple sets of splits could be generated from the same data. This is often useful for large datasets with multiple distinct sets of splits.
This could all be configured via the README.md yaml configs
### Your contribution
May be able to contribute if it seems like a good idea | open | 2024-11-10T07:45:19Z | 2024-11-19T14:37:02Z | https://github.com/huggingface/datasets/issues/7287 | [
"enhancement"
] | alex-hh | 3 |
ultralytics/yolov5 | machine-learning | 13,476 | The relationship between the mAP50 calculation and iou_thres | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
What is the relationship between the mAP50 calculation and `iou_thres`? When running val.py with `iou_thres` changed to 0.5, precision increased while recall dropped slightly. Why is that?

### Additional
_No response_ | open | 2024-12-30T14:19:02Z | 2024-12-31T04:41:46Z | https://github.com/ultralytics/yolov5/issues/13476 | [
"question"
] | lqh964165950 | 3 |
ydataai/ydata-profiling | pandas | 853 | TraitError: n_rows and n_columns must be positive integer | **Missing functionality**
<!--
Is your feature request related to a problem?
TraitError Traceback (most recent call last)
<ipython-input-50-1055f09e4b48> in <module>
----> 1 profile.to_widgets()
~\anaconda3\lib\site-packages\pandas_profiling\profile_report.py in to_widgets(self)
412 from IPython.core.display import display
413
--> 414 display(self.widgets)
415
416 def _repr_html_(self) -> None:
~\anaconda3\lib\site-packages\pandas_profiling\profile_report.py in widgets(self)
195 def widgets(self) -> Renderable:
196 if self._widgets is None:
--> 197 self._widgets = self._render_widgets()
198 return self._widgets
199
~\anaconda3\lib\site-packages\pandas_profiling\profile_report.py in _render_widgets(self)
321 leave=False,
322 ) as pbar:
--> 323 widgets = WidgetReport(copy.deepcopy(report)).render()
324 pbar.update()
325 return widgets
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\root.py in render(self, **kwargs)
7 def render(self, **kwargs) -> widgets.VBox:
8 return widgets.VBox(
----> 9 [self.content["body"].render(), self.content["footer"].render()]
10 )
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in render(self)
104 widget = get_named_list(self.content["items"])
105 elif self.sequence_type in ["tabs", "sections", "select"]:
--> 106 widget = get_tabs(self.content["items"])
107 elif self.sequence_type == "accordion":
108 widget = get_accordion(self.content["items"])
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in get_tabs(items)
18 titles = []
19 for item in items:
---> 20 children.append(item.render())
21 titles.append(get_name(item))
22
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in render(self)
106 widget = get_tabs(self.content["items"])
107 elif self.sequence_type == "accordion":
--> 108 widget = get_accordion(self.content["items"])
109 elif self.sequence_type == "grid":
110 widget = get_row(self.content["items"])
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in get_accordion(items)
87 titles = []
88 for item in items:
---> 89 children.append(item.render())
90 titles.append(get_name(item))
91
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\variable.py in render(self)
8 items = [self.content["top"].render()]
9 if self.content["bottom"] is not None:
---> 10 items.append(self.content["bottom"].render())
11
12 return widgets.VBox(items)
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\collapse.py in render(self)
12
13 toggle = self.content["button"].render()
---> 14 item = self.content["item"].render()
15
16 if collapse == "correlation":
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in render(self)
104 widget = get_named_list(self.content["items"])
105 elif self.sequence_type in ["tabs", "sections", "select"]:
--> 106 widget = get_tabs(self.content["items"])
107 elif self.sequence_type == "accordion":
108 widget = get_accordion(self.content["items"])
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in get_tabs(items)
18 titles = []
19 for item in items:
---> 20 children.append(item.render())
21 titles.append(get_name(item))
22
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in render(self)
108 widget = get_accordion(self.content["items"])
109 elif self.sequence_type == "grid":
--> 110 widget = get_row(self.content["items"])
111 elif self.sequence_type == "batch_grid":
112 widget = get_batch_grid(
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in get_row(items)
55 raise ValueError("Layout undefined for this number of columns")
56
---> 57 return widgets.GridBox([item.render() for item in items], layout=layout)
58
59
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in <listcomp>(.0)
55 raise ValueError("Layout undefined for this number of columns")
56
---> 57 return widgets.GridBox([item.render() for item in items], layout=layout)
58
59
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in render(self)
104 widget = get_named_list(self.content["items"])
105 elif self.sequence_type in ["tabs", "sections", "select"]:
--> 106 widget = get_tabs(self.content["items"])
107 elif self.sequence_type == "accordion":
108 widget = get_accordion(self.content["items"])
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in get_tabs(items)
18 titles = []
19 for item in items:
---> 20 children.append(item.render())
21 titles.append(get_name(item))
22
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in render(self)
102 widget = get_list(self.content["items"])
103 elif self.sequence_type == "named_list":
--> 104 widget = get_named_list(self.content["items"])
105 elif self.sequence_type in ["tabs", "sections", "select"]:
106 widget = get_tabs(self.content["items"])
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in get_named_list(items)
34 def get_named_list(items: List[Renderable]) -> widgets.VBox:
35 return widgets.VBox(
---> 36 [
37 widgets.VBox(
38 [widgets.HTML(f"<strong>{get_name(item)}</strong>"), item.render()]
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\container.py in <listcomp>(.0)
36 [
37 widgets.VBox(
---> 38 [widgets.HTML(f"<strong>{get_name(item)}</strong>"), item.render()]
39 )
40 for item in items
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\frequency_table.py in render(self)
54 )
55
---> 56 return get_table(items)
~\anaconda3\lib\site-packages\pandas_profiling\report\presentation\flavours\widget\frequency_table.py in get_table(items)
9 items: List[Tuple[widgets.Label, widgets.FloatProgress, widgets.Label]]
10 ) -> VBox:
---> 11 table = GridspecLayout(len(items), 3)
12 for row_id, (label, progress, count) in enumerate(items):
13 table[row_id, 0] = label
~\anaconda3\lib\site-packages\ipywidgets\widgets\widget_templates.py in __init__(self, n_rows, n_columns, **kwargs)
280 def __init__(self, n_rows=None, n_columns=None, **kwargs):
281 super(GridspecLayout, self).__init__(**kwargs)
--> 282 self.n_rows = n_rows
283 self.n_columns = n_columns
284 self._grid_template_areas = [['.'] * self.n_columns for i in range(self.n_rows)]
~\anaconda3\lib\site-packages\traitlets\traitlets.py in __set__(self, obj, value)
602 raise TraitError('The "%s" trait is read-only.' % self.name)
603 else:
--> 604 self.set(obj, value)
605
606 def _validate(self, obj, value):
~\anaconda3\lib\site-packages\traitlets\traitlets.py in set(self, obj, value)
576
577 def set(self, obj, value):
--> 578 new_value = self._validate(obj, value)
579 try:
580 old_value = obj._trait_values[self.name]
~\anaconda3\lib\site-packages\traitlets\traitlets.py in _validate(self, obj, value)
610 value = self.validate(obj, value)
611 if obj._cross_validation_lock is False:
--> 612 value = self._cross_validate(obj, value)
613 return value
614
~\anaconda3\lib\site-packages\traitlets\traitlets.py in _cross_validate(self, obj, value)
616 if self.name in obj._trait_validators:
617 proposal = Bunch({'trait': self, 'value': value, 'owner': obj})
--> 618 value = obj._trait_validators[self.name](obj, proposal)
619 elif hasattr(obj, '_%s_validate' % self.name):
620 meth_name = '_%s_validate' % self.name
~\anaconda3\lib\site-packages\traitlets\traitlets.py in __call__(self, *args, **kwargs)
973 """Pass `*args` and `**kwargs` to the handler's function if it exists."""
974 if hasattr(self, 'func'):
--> 975 return self.func(*args, **kwargs)
976 else:
977 return self._init_call(*args, **kwargs)
~\anaconda3\lib\site-packages\ipywidgets\widgets\widget_templates.py in _validate_integer(self, proposal)
293 if proposal['value'] > 0:
294 return proposal['value']
--> 295 raise TraitError('n_rows and n_columns must be positive integer')
296
297 def _get_indices_from_slice(self, row, column):
TraitError: n_rows and n_columns must be positive integer | open | 2021-10-07T19:17:58Z | 2021-11-30T19:23:01Z | https://github.com/ydataai/ydata-profiling/issues/853 | [] | jijasmx | 7 |
huggingface/datasets | pytorch | 6,942 | Import sorting is disabled by flake8 noqa directive after switching to ruff linter | When we switched to `ruff` linter in PR:
- #5519
import sorting was disabled in all files containing the `# flake8: noqa` directive
- https://github.com/astral-sh/ruff/issues/11679
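A sketch of the directive-level change (the rule code `E402` here is an assumption for illustration; the codes actually needed vary per file): replacing the blanket flake8 directive with a targeted Ruff one lets import sorting (`I001`) apply again:

```python
# Before: file-level blanket suppression. Ruff honors this directive and
# skips *every* rule for the file, including import sorting (I001):
# flake8: noqa

# After (sketch): suppress only the specific rules the file relies on,
# e.g. module-level imports not at top of file (E402), so that import
# sorting is enforced again:
# ruff: noqa: E402
```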
We should re-enable import sorting on those files. | closed | 2024-06-02T09:43:34Z | 2024-06-04T09:54:24Z | https://github.com/huggingface/datasets/issues/6942 | [
"maintenance"
] | albertvillanova | 0 |
indico/indico | sqlalchemy | 6,357 | Better UX when uploading files in the editing module | **Is your feature request related to a problem? Please describe.**
The dropzones for file uploads in the editing module are too small. At the same time it is not clear (at least to me, every time I try to upload a file) that the dropzone is actually just the area at the bottom with the dashed border and not the whole card.
Also, the position of the dropzone changes depending on the card contents which looks a bit strange..

**Describe the solution you'd like**
The dropzone should include the whole card.
| open | 2024-05-22T08:42:40Z | 2024-05-22T08:42:40Z | https://github.com/indico/indico/issues/6357 | [
"enhancement"
] | tomasr8 | 0 |
pyjanitor-devs/pyjanitor | pandas | 1,247 | Add `how='outer'` to `conditional_join` | closed | 2023-02-20T08:17:21Z | 2023-05-07T00:05:19Z | https://github.com/pyjanitor-devs/pyjanitor/issues/1247 | [] | samukweku | 0 | |
LAION-AI/Open-Assistant | python | 3,196 | Feature Request: Integration of Music Functionality in Open Assistant Project | As an active user of the Open Assistant project, I believe it would greatly enhance the user experience to include music functionality within the project. Music has become an integral part of our lives and can significantly contribute to a more enjoyable and immersive user interaction. Adding music capabilities would not only make the Open Assistant more versatile but also attract a wider audience.
I suggest implementing the following features to integrate music functionality into the Open Assistant project:
**Music Streaming Services Integration:** Incorporate popular music streaming services such as Spotify, Apple Music, or YouTube Music to allow users to listen to their favorite songs, albums, playlists, and podcasts directly through the Open Assistant.
**Voice Commands for Music Playback**: Enable voice commands to control music playback, allowing users to play, pause, skip tracks, adjust volume, and create custom playlists with natural language commands.
**Personalized Music Recommendations:** Implement an intelligent recommendation system that analyzes user preferences and suggests relevant music based on their listening history, mood, or genre preferences. This can enhance user engagement and provide a tailored music experience.
**Playlist Management:** Enable users to create, edit, and manage their music playlists within the Open Assistant. This functionality would allow users to curate personalized collections and access them easily through voice commands.
**Music Information and Trivia:** Integrate a music information database that provides details about songs, artists, albums, and related trivia. Users can ask questions about their favorite music, explore artist biographies, or learn more about specific genres or eras.
**Multi-room Audio Support:** Provide support for multi-room audio systems, allowing users to synchronize music playback across different devices or rooms in their homes.
**Implementation Considerations:**
Investigate the feasibility of licensing agreements with music streaming platforms to ensure legal and authorized access to music content.
Prioritize platform compatibility to cater to a wide range of users (e.g., web-based, mobile apps, desktop applications, voice assistants).
Design a user-friendly interface that seamlessly integrates music functionality with existing Open Assistant features.
Ensure robust error handling and fallback mechanisms for cases where specific music requests cannot be fulfilled.
This feature addition would not only elevate the Open Assistant project but also enhance its competitiveness with other voice assistant solutions. I believe that integrating music functionality would attract more users and provide an enriched user experience. I'm excited to contribute to the development of this project and would be happy to discuss further implementation details.
**This would also make Open Assistant more distinctive than OpenAI's ChatGPT.**
Thank you for considering this feature request.
| closed | 2023-05-18T17:25:54Z | 2023-06-09T12:01:37Z | https://github.com/LAION-AI/Open-Assistant/issues/3196 | [
"feature",
"needs discussion"
] | CodeQueeninBuissness123 | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 1,102 | Model is fed wrong values by `BayesSearchCV` | Hi there, this is my first time trying `scikit-optimize` and I have followed the [minimal example on your webpage](https://scikit-optimize.github.io/stable/auto_examples/sklearn-gridsearchcv-replacement.html), with the only modification of using `joblib` to distribute jobs to a dask cluster.
```python
from skopt import BayesSearchCV
import xgboost as xgb
model = xgb.XGBRegressor(eval_metric="rmse")
grid_params_bayes_skopt = {
'learning_rate': (0.01, 1.0, 'log-uniform'),
'n_estimators': (100, 1000, 'log-uniform'),
'max_depth': (3,10, 'log-uniform'),
'gamma': (0, 5, 'log-uniform')
}
opt = BayesSearchCV(
model,
grid_params_bayes_skopt,
n_iter=32,
cv=3
)
with joblib.parallel_backend('dask'):
opt.fit(X_train, y_train)
```
```python
---------------------------------------------------------------------------
XGBoostError Traceback (most recent call last)
/tmp/ipykernel_439/257726221.py in <module>
22
23 with joblib.parallel_backend('dask'):
---> 24 opt.fit(X_train, y_train)
/srv/conda/envs/notebook/lib/python3.9/site-packages/skopt/searchcv.py in fit(self, X, y, groups, callback, **fit_params)
464 self.optimizer_kwargs_ = dict(self.optimizer_kwargs)
465
--> 466 super().fit(X=X, y=y, groups=groups, **fit_params)
467
468 # BaseSearchCV never ranked train scores,
/srv/conda/envs/notebook/lib/python3.9/site-packages/sklearn/model_selection/_search.py in fit(self, X, y, groups, **fit_params)
889 return results
890
--> 891 self._run_search(evaluate_candidates)
892
893 # multimetric is determined here because in the case of a callable
/srv/conda/envs/notebook/lib/python3.9/site-packages/skopt/searchcv.py in _run_search(self, evaluate_candidates)
510 n_points_adjusted = min(n_iter, n_points)
511
--> 512 optim_result = self._step(
513 search_space, optimizer,
514 evaluate_candidates, n_points=n_points_adjusted
/srv/conda/envs/notebook/lib/python3.9/site-packages/skopt/searchcv.py in _step(self, search_space, optimizer, evaluate_candidates, n_points)
406 params_dict = [point_asdict(search_space, p) for p in params]
407
--> 408 all_results = evaluate_candidates(params_dict)
409 # Feed the point and objective value back into optimizer
410 # Optimizer minimizes objective, hence provide negative score
/srv/conda/envs/notebook/lib/python3.9/site-packages/sklearn/model_selection/_search.py in evaluate_candidates(candidate_params, cv, more_results)
836 )
837
--> 838 out = parallel(
839 delayed(_fit_and_score)(
840 clone(base_estimator),
/srv/conda/envs/notebook/lib/python3.9/site-packages/joblib/parallel.py in __call__(self, iterable)
1054
1055 with self._backend.retrieval_context():
-> 1056 self.retrieve()
1057 # Make sure that we get a last message telling us we are done
1058 elapsed_time = time.time() - self._start_time
/srv/conda/envs/notebook/lib/python3.9/site-packages/joblib/parallel.py in retrieve(self)
933 try:
934 if getattr(self._backend, 'supports_timeout', False):
--> 935 self._output.extend(job.get(timeout=self.timeout))
936 else:
937 self._output.extend(job.get())
/srv/conda/envs/notebook/lib/python3.9/concurrent/futures/_base.py in result(self, timeout)
443 raise CancelledError()
444 elif self._state == FINISHED:
--> 445 return self.__get_result()
446 else:
447 raise TimeoutError()
/srv/conda/envs/notebook/lib/python3.9/concurrent/futures/_base.py in __get_result(self)
388 if self._exception:
389 try:
--> 390 raise self._exception
391 finally:
392 # Break a reference cycle with the exception in self._exception
/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/worker.py in apply_function_simple()
4382 start = time()
4383 try:
-> 4384 result = function(*args, **kwargs)
4385 except Exception as e:
4386 msg = error_message(e)
/srv/conda/envs/notebook/lib/python3.9/site-packages/distributed/worker.py in execute_task()
4254 if istask(task):
4255 func, args = task[0], task[1:]
-> 4256 return func(*map(execute_task, args))
4257 elif isinstance(task, list):
4258 return list(map(execute_task, task))
/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/utils.py in apply()
35 def apply(func, args, kwargs=None):
36 if kwargs:
---> 37 return func(*args, **kwargs)
38 else:
39 return func(*args)
/srv/conda/envs/notebook/lib/python3.9/site-packages/joblib/_dask.py in __call__()
122 with parallel_backend('dask'):
123 for func, args, kwargs in tasks:
--> 124 results.append(func(*args, **kwargs))
125 return results
126
/srv/conda/envs/notebook/lib/python3.9/site-packages/sklearn/utils/fixes.py in __call__()
209 def __call__(self, *args, **kwargs):
210 with config_context(**self.config):
--> 211 return self.function(*args, **kwargs)
212
213
/srv/conda/envs/notebook/lib/python3.9/site-packages/sklearn/model_selection/_validation.py in _fit_and_score()
679 estimator.fit(X_train, **fit_params)
680 else:
--> 681 estimator.fit(X_train, y_train, **fit_params)
682
683 except Exception:
/srv/conda/envs/notebook/lib/python3.9/site-packages/xgboost/core.py in inner_f()
504 for k, arg in zip(sig.parameters, args):
505 kwargs[k] = arg
--> 506 return f(**kwargs)
507
508 return inner_f
/srv/conda/envs/notebook/lib/python3.9/site-packages/xgboost/sklearn.py in fit()
787
788 model, feval, params = self._configure_fit(xgb_model, eval_metric, params)
--> 789 self._Booster = train(
790 params,
791 train_dmatrix,
/srv/conda/envs/notebook/lib/python3.9/site-packages/xgboost/training.py in train()
186 Booster : a trained booster model
187 """
--> 188 bst = _train_internal(params, dtrain,
189 num_boost_round=num_boost_round,
190 evals=evals,
/srv/conda/envs/notebook/lib/python3.9/site-packages/xgboost/training.py in _train_internal()
79 if callbacks.before_iteration(bst, i, dtrain, evals):
80 break
---> 81 bst.update(dtrain, i, obj)
82 if callbacks.after_iteration(bst, i, dtrain, evals):
83 break
/srv/conda/envs/notebook/lib/python3.9/site-packages/xgboost/core.py in update()
1678
1679 if fobj is None:
-> 1680 _check_call(_LIB.XGBoosterUpdateOneIter(self.handle,
1681 ctypes.c_int(iteration),
1682 dtrain.handle))
/srv/conda/envs/notebook/lib/python3.9/site-packages/xgboost/core.py in _check_call()
216 """
217 if ret != 0:
--> 218 raise XGBoostError(py_str(_LIB.XGBGetLastError()))
219
220
XGBoostError: value -9.22337e+18 for Parameter min_split_loss should be greater equal to 0
min_split_loss: Minimum loss reduction required to make a further partition.
``` | open | 2022-01-20T11:15:57Z | 2022-01-20T11:16:17Z | https://github.com/scikit-optimize/scikit-optimize/issues/1102 | [] | gcaria | 0 |
dunossauro/fastapi-do-zero | pydantic | 121 | `datetime.utcnow` is deprecated in Python 3.12 | Since some people are taking the course with the latest Python version, this call should be changed to a form that is compatible both with 3.11 (the recommended version) and with the newest releases of the language.
One way to keep the same behavior, without making the naive datetime call, is to pass a timezone explicitly:
```python
from zoneinfo import ZoneInfo
def create_access_token(data: dict):
to_encode = data.copy()
expire = datetime.now(tz=ZoneInfo('America/Sao_Paulo')) + timedelta(
minutes=ACCESS_TOKEN_EXPIRE_MINUTES
)
to_encode.update({'exp': expire})
encoded_jwt = encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
return encoded_jwt
```
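As a small standalone illustration (using `timezone.utc` from the standard library instead of the course's `ZoneInfo('America/Sao_Paulo')`, to avoid depending on the tz database), the replacement works identically on 3.11 and 3.12 and yields an aware datetime, whereas the deprecated `utcnow()` yields a naive one:

```python
from datetime import datetime, timedelta, timezone

# datetime.now(tz=...) returns an *aware* object, so the computed
# expiration carries its timezone instead of being naive.
now = datetime.now(tz=timezone.utc)
expire = now + timedelta(minutes=30)

print(expire.tzinfo)  # prints "UTC" (an aware tzinfo, not None)
```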
This should be changed in the text of lessons 06 and 07 and in the code of all later lessons.
This problem was originally mentioned by @azmovi in #112 | closed | 2024-04-01T19:03:42Z | 2024-04-17T08:38:35Z | https://github.com/dunossauro/fastapi-do-zero/issues/121 | [] | dunossauro | 0 |
rougier/scientific-visualization-book | numpy | 47 | Minor typo on page 177 | This sentence "Filled contours with dropshadows is a nice effet that allows" has effect misspelled. | closed | 2022-02-24T17:23:09Z | 2022-08-08T14:48:28Z | https://github.com/rougier/scientific-visualization-book/issues/47 | [] | tjnd89 | 1 |
seleniumbase/SeleniumBase | pytest | 3,227 | driver.uc_gui_click_captcha() Broken | Since chrome version - Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36.
**driver.uc_gui_click_captcha()** is broken. It is not clicking on the cloudflare turnstile captcha when accessing the website.
Tried below it was working earlier, but suddenly after uc driver update it stopped working.
<img width="507" alt="Screenshot 2024-10-27 at 4 00 53 PM" src="https://github.com/user-attachments/assets/940ed40c-ad6c-418c-b9a0-0500453ea874">
| closed | 2024-10-27T10:01:19Z | 2024-10-27T23:14:24Z | https://github.com/seleniumbase/SeleniumBase/issues/3227 | [
"can't reproduce",
"UC Mode / CDP Mode"
] | p-rk | 15 |
huggingface/transformers | pytorch | 36,224 | Incompatibility in flash_attention_2 + Llama + Transformers>=4.43 + Autocast to fp16 | ### System Info
setting: Inference or Training Llama with Automatic Mixed Precision (AMP) autocast from fp32 to fp16 + FlashAttention 2 (FA2).
I observed that in newer versions of the Transformers library (>=4.43), training (and inference) fails with the error `RuntimeError: FlashAttention only supports fp16 and bf16`. This error does not occur with `GPT2` or other parameter combinations. What is happening?
Given:
- FA2 supports only fp16/bf16 and fails when it encounters fp32.
- Autocast does not cast all operations to fp16.
The failure is caused by the fact that in `transformers >= 4.43`, Llama's positional embeddings are precomputed from the hidden_states (fp32) and are also output in fp32. This is done in `LlamaModel.forward()`, before the layers' forward passes, using the following code:
`position_embeddings = self.rotary_emb(hidden_states, position_ids)` [link](https://github.com/huggingface/transformers/blob/782bfffb2e4dfb5bbe7940429215d794f4434172/src/transformers/models/llama/modeling_llama.py#L918)
These embeddings are then passed to the attention mechanism as a parameter.
In the attention class, we have:
`cos, sin = position_embeddings` [link](https://github.com/huggingface/transformers/blob/782bfffb2e4dfb5bbe7940429215d794f4434172/src/transformers/models/llama/modeling_llama.py#L458)
These `fp32` values are then added to the autocasted values of `q` and `k`. Autocast ignores the addition, resulting in `q_embed` being in `fp32` type. This causes FA2 to fail. If SDPA is used, it handles the mixed dtypes without issues.
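This interaction can be reproduced in isolation (a sketch using CPU autocast with bf16 instead of CUDA fp16, which follows the same casting rules): matmul-like ops are autocast to the reduced dtype, while elementwise addition is left to ordinary type promotion, so a reduced-precision tensor combined with an fp32 tensor widens back to fp32:

```python
import torch

x = torch.randn(4, 4)  # fp32
w = torch.randn(4, 4)  # fp32
with torch.autocast("cpu", dtype=torch.bfloat16):
    y = x @ w   # matmul is on the autocast list -> cast to bf16
    z = y + x   # add is not cast; bf16 + fp32 promotes back to fp32

print(y.dtype, z.dtype)  # torch.bfloat16 torch.float32
```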
Why didn't this happen before?:
In transformers<=4.41 or so, Llama positional embeddings are recomputed in each layer (inefficiently) based on `value_states` (`fp16` within autocast) and are also output in `fp16`. Hence, no errors occur.
**Proposed solutions**:
1) cast `cos` and `sin` to q.dtype in `apply_rotary_pos_emb()` [link](https://github.com/huggingface/transformers/blob/298b3f19303294293f7af075609481d64cb13de3/src/transformers/models/llama/modeling_llama.py#L150)
2) cast position_embeddings into the target dtype right after creation [here](https://github.com/huggingface/transformers/blob/298b3f19303294293f7af075609481d64cb13de3/src/transformers/models/llama/modeling_llama.py#L569C9-L569C28)
3) do nothing and let people use `sdpa` when autocasting from `fp32`. This is not that bad, since sdpa is quite fast by now.
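A sketch of proposed solution 1 (following the upstream `apply_rotary_pos_emb` shape conventions; the fix itself is the two `.to(q.dtype)` casts, everything else mirrors the existing function):

```python
import torch

def rotate_half(x):
    # Rotates half the hidden dims of the input (as in modeling_llama.py).
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary_pos_emb(q, k, cos, sin, unsqueeze_dim=1):
    # Proposed fix: cast the fp32 position embeddings down to the
    # (possibly autocast-reduced) dtype of q/k, so q_embed/k_embed keep
    # the fp16/bf16 dtype that FlashAttention 2 requires.
    cos = cos.to(q.dtype).unsqueeze(unsqueeze_dim)
    sin = sin.to(q.dtype).unsqueeze(unsqueeze_dim)
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed
```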
@ArthurZucker please comment | open | 2025-02-17T08:14:24Z | 2025-03-20T08:03:39Z | https://github.com/huggingface/transformers/issues/36224 | [
"bug"
] | poedator | 2 |
deepset-ai/haystack | pytorch | 8,026 | Sentence Window retrieval documentation | Add documentation for the new `SentenceWindowRetrieval` abstraction: https://github.com/deepset-ai/haystack/blob/0411cd938a8b1d4a7153b0c269c6cd11d7da2efd/haystack/components/retrievers/sentence_window_retrieval.py#L13
A suggestion is to add it under the advanced RAG techniques, alongside Hyde.
"type:documentation",
"P1"
] | mrm1001 | 2 |
quantumlib/Cirq | api | 6,329 | `synchronize_terminal_measurements()` misorders measurements with the same key | **Description of the issue**
(not 100% sure this isn't intended behavior)
When two measurements have the same key, reordering them results in a logically different circuit. Transformers like `align_left()` and `align_right()` correctly account for this by refusing to reorder measurements that share a key, but `synchronize_terminal_measurements()` does not (see below).
**How to reproduce the issue**
```python
circuit = cirq.Circuit(
cirq.X(cirq.q(1)),
cirq.measure(cirq.q(0), key="key1"),
cirq.measure(cirq.q(1), key="key1"),
cirq.measure(cirq.q(1), key="key2"),
)
print(circuit)
print(cirq.align_right(circuit)) # no change, as it would require reordering the two measurements with `key="key1"`
print(cirq.synchronize_terminal_measurements(circuit))
```
prints:
```
0: ───M('key1')───────────────────────────
1: ───X───────────M('key1')───M('key2')───
```
```
0: ───M('key1')───────────────────────────
1: ───X───────────M('key1')───M('key2')───
```
```
0: ───────────────────M('key1')───
1: ───X───M('key1')───M('key2')───
```
where the final circuit is not logically equivalent (the expected measurement outcome for `"key1"` is flipped)
**Cirq version**
```
1.3.0.dev20230830191034
```
| open | 2023-10-25T18:26:53Z | 2025-03-22T00:30:02Z | https://github.com/quantumlib/Cirq/issues/6329 | [
"kind/bug-report",
"triage/needs-feasibility",
"triage/needs-more-evidence"
] | richrines1 | 13 |
encode/databases | sqlalchemy | 495 | The documentation does not describe the return values for the database.execute (LAST_INSERT_ID) | ```
async with database.transaction() as transaction:
query = insert(Session).values({"foo": "bar"})
last_insert_id = await database.execute(query)
(uid, ) = await database.fetch_one("SELECT LAST_INSERT_ID() as id")
    assert uid == last_insert_id  # True
```
It is possible to extract last_insert_id from the `database.execute` call, but this is not mentioned in the documentation.
It is also not clear from the documentation what the difference is between `database.fetch_one` and `database.execute`. Why is it that calling `database.execute("SELECT LAST_INSERT_ID() as id")` won't produce the same result?
| open | 2022-06-02T16:05:12Z | 2022-06-02T16:05:12Z | https://github.com/encode/databases/issues/495 | [] | AntonGsv | 0 |
MaartenGr/BERTopic | nlp | 1,574 | Modeling taking a long time after progress bar complete | I was surprised that my model was taking over 8 hours to run, so I set 'verbose = True' to monitor the progress. I was surprised to see that the progress bar completed within an hour, but the cell was left running for over several hours after (and still is). I've added a screenshot of my code and output for reference. I don't believe the problem to be from saving the model, because if I end the cell process and try to use the model it will say it does not exist.
<img width="919" alt="Screenshot 2023-10-11 at 1 37 16 PM" src="https://github.com/MaartenGr/BERTopic/assets/21200604/a832091a-363a-4d8a-a67d-0af80020117d"> | open | 2023-10-11T17:40:14Z | 2023-10-12T13:48:57Z | https://github.com/MaartenGr/BERTopic/issues/1574 | [] | vlawlor | 2 |
statsmodels/statsmodels | data-science | 8,764 | I wrote this code and this error occurs after running it | ```
import pandas as pd
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import TimeSeriesSplit
from sklearn.base import BaseEstimator, RegressorMixin
import optuna
# Load your dataset from an Excel file
data = pd.read_excel(r"C:\Users\Keller\Desktop\captura dados python\dados.xlsx", engine='openpyxl')
# Keep only the first x rows of the dataset
data = data.head(1000)
# Define the objective function for Optuna to minimize
def objective(trial):
order = (trial.suggest_int('p', 1, 4), 1, 0)
seasonal_order = ((trial.suggest_int('P', 1, 4), 0, 0, 12))
model = ARIMA(endog=data['Fechamento'], order=order, seasonal_order=seasonal_order)
tscv = TimeSeriesSplit(n_splits=10)
score = np.mean([sm.tools.eval_measures.rmse(actual, model.fit(train).forecast(len(actual))) for train, actual in tscv.split(data['Fechamento'])])
return score
# Set the number of trials for Optuna to run
n_trials = 10
# Run the Optuna search to find the best hyperparameters
study = optuna.create_study(direction='minimize')
study.optimize(objective, n_trials=n_trials)
# Get the best model from the Optuna search and fit it to the data
params = study.best_params
order = (params['p'], 1, 0)
seasonal_order = ((params['P'], 0, 0, 12))
model = ARIMA(endog=data['Fechamento'], order=order, seasonal_order=seasonal_order)
model.fit(data['Fechamento'])
# Specify the number of days you want to forecast into the future
future_days = 5
# Create a future dataframe with the specified number of days
future_dates = pd.date_range(start=data.index[-1], periods=future_days + 1, freq='D')[1:]
future = pd.DataFrame(index=future_dates)
# Define a function to predict the next n values using MLP regression
def predict_next_n(data, n):
X = np.arange(len(data)).reshape(-1, 1)
y = data.values
model = MLPRegressor(hidden_layer_sizes=(100,), max_iter=1000)
model.fit(X[-10:], y[-10:])
next_n = np.arange(len(data), len(data) + n).reshape(-1, 1)
return model.predict(next_n)
# Generate predictions for the future dates
future['Fechamento'] = model.forecast(future_days)
future['Amplitude'] = pd.Series(predict_next_n(data['Amplitude'], future_days), index=future.index)
future['Media200D'] = pd.Series(predict_next_n(data['Media200D'], future_days), index=future.index)
future['Media200S'] = pd.Series(predict_next_n(data['Media200S'], future_days), index=future.index)
# Save the forecast to an Excel file
future.to_excel(r"C:\Users\Keller\Desktop\captura dados python\futuro.xlsx", engine='openpyxl')
print(f"Forecast for {future_days} days saved to 'futuro.xlsx'")
```
The error is below:
[32m[I 2023-04-01 21:59:50,473][0m A new study created in memory with name: no-name-a965a999-b5ef-4c4f-9d19-5ac2167ef3e0[0m
Warning (from warnings module):
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\tools.py", line 536
y[k-1, i] = (y[k, i] - y[k, k]*y[k, k-i-1]) / (1 - y[k, k]**2)
RuntimeWarning: invalid value encountered in scalar divide
Warning (from warnings module):
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\tools.py", line 538
x = r / ((1 - r**2)**0.5)
RuntimeWarning: divide by zero encountered in divide
Warning (from warnings module):
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\tools.py", line 538
x = r / ((1 - r**2)**0.5)
RuntimeWarning: invalid value encountered in sqrt
Warning (from warnings module):
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\tools.py", line 497
r = unconstrained/((1 + unconstrained**2)**0.5)
RuntimeWarning: invalid value encountered in divide
[33m[W 2023-04-01 21:59:50,699][0m Trial 0 failed with parameters: {'p': 2, 'P': 2} because of the following error: LinAlgError('Schur decomposition solver error.').[0m
Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\site-packages\optuna\study\_optimize.py", line 200, in _run_trial
value_or_values = func(trial)
^^^^^^^^^^^
File "C:/Users/Keller/Desktop/captura dados python/Calculo_Preditivo.py", line 22, in objective
score = np.mean([sm.tools.eval_measures.rmse(actual, model.fit(train).forecast(len(actual))) for train, actual in tscv.split(data['Fechamento'])])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:/Users/Keller/Desktop/captura dados python/Calculo_Preditivo.py", line 22, in <listcomp>
score = np.mean([sm.tools.eval_measures.rmse(actual, model.fit(train).forecast(len(actual))) for train, actual in tscv.split(data['Fechamento'])])
^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\arima\model.py", line 390, in fit
res = super().fit(
^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\mlemodel.py", line 704, in fit
mlefit = super(MLEModel, self).fit(start_params, method=method,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\base\model.py", line 563, in fit
xopt, retvals, optim_settings = optimizer._fit(f, score, start_params,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\base\optimizer.py", line 241, in _fit
xopt, retvals = func(objective, gradient, start_params, fargs, kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\base\optimizer.py", line 651, in _fit_lbfgs
retvals = optimize.fmin_l_bfgs_b(func, start_params, maxiter=maxiter,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_lbfgsb_py.py", line 197, in fmin_l_bfgs_b
res = _minimize_lbfgsb(fun, x0, args=args, jac=jac, bounds=bounds,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_lbfgsb_py.py", line 305, in _minimize_lbfgsb
sf = _prepare_scalar_function(fun, x0, jac=jac, args=args, epsilon=eps,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_optimize.py", line 332, in _prepare_scalar_function
sf = ScalarFunction(fun, x0, args, grad, hess,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_differentiable_functions.py", line 158, in __init__
self._update_fun()
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_differentiable_functions.py", line 251, in _update_fun
self._update_fun_impl()
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_differentiable_functions.py", line 155, in update_fun
self.f = fun_wrapped(self.x)
^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_differentiable_functions.py", line 137, in fun_wrapped
fx = fun(np.copy(x), *args)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\base\model.py", line 531, in f
return -self.loglike(params, *args) / nobs
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\mlemodel.py", line 939, in loglike
loglike = self.ssm.loglike(complex_step=complex_step, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\kalman_filter.py", line 983, in loglike
kfilter = self._filter(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\kalman_filter.py", line 903, in _filter
self._initialize_state(prefix=prefix, complex_step=complex_step)
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\representation.py", line 983, in _initialize_state
self._statespaces[prefix].initialize(self.initialization,
File "statsmodels\tsa\statespace\_representation.pyx", line 1373, in statsmodels.tsa.statespace._representation.dStatespace.initialize
File "statsmodels\tsa\statespace\_representation.pyx", line 1362, in statsmodels.tsa.statespace._representation.dStatespace.initialize
File "statsmodels\tsa\statespace\_initialization.pyx", line 288, in statsmodels.tsa.statespace._initialization.dInitialization.initialize
File "statsmodels\tsa\statespace\_initialization.pyx", line 406, in statsmodels.tsa.statespace._initialization.dInitialization.initialize_stationary_stationary_cov
File "statsmodels\tsa\statespace\_tools.pyx", line 1284, in statsmodels.tsa.statespace._tools._dsolve_discrete_lyapunov
numpy.linalg.LinAlgError: Schur decomposition solver error.
[W 2023-04-01 21:59:50,721] Trial 0 failed with value None.
Traceback (most recent call last):
File "C:/Users/Keller/Desktop/captura dados python/Calculo_Preditivo.py", line 30, in <module>
study.optimize(objective, n_trials=n_trials)
File "C:\Program Files\Python311\Lib\site-packages\optuna\study\study.py", line 425, in optimize
_optimize(
File "C:\Program Files\Python311\Lib\site-packages\optuna\study\_optimize.py", line 66, in _optimize
_optimize_sequential(
File "C:\Program Files\Python311\Lib\site-packages\optuna\study\_optimize.py", line 163, in _optimize_sequential
frozen_trial = _run_trial(study, func, catch)
File "C:\Program Files\Python311\Lib\site-packages\optuna\study\_optimize.py", line 251, in _run_trial
raise func_err
File "C:\Program Files\Python311\Lib\site-packages\optuna\study\_optimize.py", line 200, in _run_trial
value_or_values = func(trial)
File "C:/Users/Keller/Desktop/captura dados python/Calculo_Preditivo.py", line 22, in objective
score = np.mean([sm.tools.eval_measures.rmse(actual, model.fit(train).forecast(len(actual))) for train, actual in tscv.split(data['Fechamento'])])
File "C:/Users/Keller/Desktop/captura dados python/Calculo_Preditivo.py", line 22, in <listcomp>
score = np.mean([sm.tools.eval_measures.rmse(actual, model.fit(train).forecast(len(actual))) for train, actual in tscv.split(data['Fechamento'])])
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\arima\model.py", line 390, in fit
res = super().fit(
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\mlemodel.py", line 704, in fit
mlefit = super(MLEModel, self).fit(start_params, method=method,
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\base\model.py", line 563, in fit
xopt, retvals, optim_settings = optimizer._fit(f, score, start_params,
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\base\optimizer.py", line 241, in _fit
xopt, retvals = func(objective, gradient, start_params, fargs, kwargs,
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\base\optimizer.py", line 651, in _fit_lbfgs
retvals = optimize.fmin_l_bfgs_b(func, start_params, maxiter=maxiter,
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_lbfgsb_py.py", line 197, in fmin_l_bfgs_b
res = _minimize_lbfgsb(fun, x0, args=args, jac=jac, bounds=bounds,
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_lbfgsb_py.py", line 305, in _minimize_lbfgsb
sf = _prepare_scalar_function(fun, x0, jac=jac, args=args, epsilon=eps,
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_optimize.py", line 332, in _prepare_scalar_function
sf = ScalarFunction(fun, x0, args, grad, hess,
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_differentiable_functions.py", line 158, in __init__
self._update_fun()
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_differentiable_functions.py", line 251, in _update_fun
self._update_fun_impl()
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_differentiable_functions.py", line 155, in update_fun
self.f = fun_wrapped(self.x)
File "C:\Program Files\Python311\Lib\site-packages\scipy\optimize\_differentiable_functions.py", line 137, in fun_wrapped
fx = fun(np.copy(x), *args)
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\base\model.py", line 531, in f
return -self.loglike(params, *args) / nobs
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\mlemodel.py", line 939, in loglike
loglike = self.ssm.loglike(complex_step=complex_step, **kwargs)
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\kalman_filter.py", line 983, in loglike
kfilter = self._filter(**kwargs)
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\kalman_filter.py", line 903, in _filter
self._initialize_state(prefix=prefix, complex_step=complex_step)
File "C:\Program Files\Python311\Lib\site-packages\statsmodels\tsa\statespace\representation.py", line 983, in _initialize_state
self._statespaces[prefix].initialize(self.initialization,
File "statsmodels\tsa\statespace\_representation.pyx", line 1373, in statsmodels.tsa.statespace._representation.dStatespace.initialize
File "statsmodels\tsa\statespace\_representation.pyx", line 1362, in statsmodels.tsa.statespace._representation.dStatespace.initialize
File "statsmodels\tsa\statespace\_initialization.pyx", line 288, in statsmodels.tsa.statespace._initialization.dInitialization.initialize
File "statsmodels\tsa\statespace\_initialization.pyx", line 406, in statsmodels.tsa.statespace._initialization.dInitialization.initialize_stationary_stationary_cov
File "statsmodels\tsa\statespace\_tools.pyx", line 1284, in statsmodels.tsa.statespace._tools._dsolve_discrete_lyapunov
numpy.linalg.LinAlgError: Schur decomposition solver error.
| open | 2023-04-02T01:00:27Z | 2023-04-02T01:09:10Z | https://github.com/statsmodels/statsmodels/issues/8764 | [] | akeller1992 | 0 |
Yorko/mlcourse.ai | numpy | 640 | Mistake in assignment 8 | The formula for updating weights using gradient descent is incorrect:

The correct formula is:

The minus sign in the sigmoid's argument is critical. | closed | 2019-10-30T22:11:54Z | 2019-11-04T14:09:57Z | https://github.com/Yorko/mlcourse.ai/issues/640 | [
"enhancement"
] | Ecclesiast | 1 |
KaiyangZhou/deep-person-reid | computer-vision | 252 | Pytorch to ONNX | Hello, I'm interested in your work and I want to make a demo with your ResNet-50 model, but when I follow the introduction I get an error. Could you please tell me the pretrained model's details, or whether it is different from a normal PyTorch model?
Thank you very much!!!
My code is as follows:

```python
import torchvision
import torch
from torch.autograd import Variable
import onnx

input_name = ['input']
output_name = ['output']
input = Variable(torch.randn(1, 3, 256, 128))
model = torch.load("resnet50_fc512_market_xent.pth.tar")
torch.onnx.export(model, input, 'resnet50.onnx', input_names=input_name, output_names=output_name, verbose=True)
```
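One possible cause — offered as a guess, not a confirmed diagnosis — is that `torch.load` on a `.pth.tar` checkpoint returns the saved dictionary rather than an `nn.Module`, which cannot be exported directly. A sketch under that assumption; the tiny `nn.Linear` model, the file names, and the `'state_dict'` key are all hypothetical stand-ins:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the torchreid ResNet-50 and its checkpoint;
# inspect the real .pth.tar to see which keys it actually contains.
model = nn.Linear(4, 2)
torch.save({"state_dict": model.state_dict()}, "ckpt.pth.tar")

loaded = torch.load("ckpt.pth.tar")
# torch.load returns the saved dict, not an nn.Module, so passing it
# straight to torch.onnx.export fails.
assert isinstance(loaded, dict)

model.load_state_dict(loaded["state_dict"])  # restore weights into a module
# Only then export the module itself, e.g.:
#   torch.onnx.export(model, torch.randn(1, 4), "model.onnx")
```

If the loaded object turns out to already be a module, this guess does not apply and the problem lies elsewhere.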
| closed | 2019-11-03T07:28:40Z | 2019-11-04T06:07:58Z | https://github.com/KaiyangZhou/deep-person-reid/issues/252 | [] | zhuyu-cs | 0 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 226 | Is there a TensorFlow 1 counterpart to the TensorFlow 2 content here? E.g. a TensorFlow 1 version of ResNet transfer learning? The differences are too big | **System information**
* Have I written custom code:
* OS Platform(e.g., window10 or Linux Ubuntu 16.04):
* Python version:
* Deep learning framework and version(e.g., Tensorflow2.1 or Pytorch1.3):
* Use GPU or not:
* CUDA/cuDNN version(if you use GPU):
* The network you trained(e.g., Resnet34 network):
**Describe the current behavior**
**Error info / logs**
| closed | 2021-04-15T02:53:02Z | 2021-04-16T10:10:13Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/226 | [] | nanfangyuan | 1 |
home-assistant/core | asyncio | 141,117 | Device Location not working |

Hey there!
I have set up my companion app with Home Assistant, which is running on a Raspberry Pi 5. However, I'm experiencing issues with the "device tracker," as it always shows "away" even when I am at home.
I have checked the [troubleshooting guide](https://companion.home-assistant.io/docs/troubleshooting/faqs/#device-tracker-is-not-updating-in-android-app) and verified all permissions on my Android phone (Samsung S20), and everything seems to be in order. The logs from the companion app's location tracking troubleshooting show the correct location, although there are some "duplicate" locations, the sent ones are accurate.
So, why am I consistently getting an "away" status on the device tracker?
Thanks in advanced
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
device_tracker companion app
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-22T13:58:18Z | 2025-03-22T23:20:45Z | https://github.com/home-assistant/core/issues/141117 | [] | tommdq | 1 |
dunossauro/fastapi-do-zero | pydantic | 193 | FastAPI_Du_Zero |
Link do projeto | Seu @ no git | Comentário (opcional)
-- | -- | --
[FastAPI_Du_Zero](https://github.com/rodten23/FastAPI_Du_Zero) | [@rodten23](https://github.com/rodten23) | Implementação do material do curso sem alterações. Muito Obrigado, Dunossauro!
| closed | 2024-07-07T00:09:55Z | 2024-07-10T01:39:32Z | https://github.com/dunossauro/fastapi-do-zero/issues/193 | [] | rodten23 | 1 |
ultralytics/yolov5 | deep-learning | 12,846 | YOLOv5 interface - predict problem | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Other
### Bug
TypeError Traceback (most recent call last)
[<ipython-input-21-e91aa8dd130a>](https://localhost:8080/#) in <cell line: 7>()
5 # Run batched inference on a list of images
6 source = '/content/gdrive/MyDrive/Data/Vid_and_pictures/20240227_102420.jpg'
----> 7 results = model.predict(source, conf=0.5, imgsz=320, save=True, save_txt = True, save_conf=True) # list of Results objects
8 '''
9 # Process results list
3 frames
[/usr/local/lib/python3.10/dist-packages/ultralytics/nn/autobackend.py](https://localhost:8080/#) in __init__(self, weights, device, dnn, data, fp16, batch, fuse, verbose)
141 if nn_module:
142 model = weights.to(device)
--> 143 model = model.fuse(verbose=verbose) if fuse else model
144 if hasattr(model, "kpt_shape"):
145 kpt_shape = model.kpt_shape # pose-only
TypeError: BaseModel.fuse() got an unexpected keyword argument 'verbose'
### Environment
_No response_
### Minimal Reproducible Example
```python
from ultralytics import YOLO

model = YOLO('/content/gdrive/MyDrive/yolov5_diplomka/yolov5/runs/train/exp3/weights/best.pt')

source = '/content/gdrive/MyDrive/Data/Vid_and_pictures/20240227_102420.jpg'
results = model.predict(source, conf=0.5, imgsz=320, save=True, save_txt=True, save_conf=True)  # list of Results objects

for result in results:
    boxes = result.boxes  # Boxes object for bounding box outputs
    masks = result.masks  # Masks object for segmentation masks outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs  # Probs object for classification outputs
    result.show()  # display to screen
    result.save(filename='result_predict.jpg')  # save to disk
```
Sorry, I am new to neural networks. Can you please help solve my problem? I am trying to run my model on a picture.
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-03-24T15:13:59Z | 2024-10-20T19:42:10Z | https://github.com/ultralytics/yolov5/issues/12846 | [
"bug",
"Stale"
] | paulikoe | 3 |
httpie/cli | api | 1,255 | Warn the user when there is no incoming data after a certain time passed on stdin | E.g
```
$ cat | http POST pie.dev/post 861ms
> no stdin data read in 10.0s (perhaps you want to --ignore-stdin)
> https://httpie.io/docs/cli/best-practices
``` | closed | 2021-12-30T15:06:40Z | 2022-01-12T14:07:34Z | https://github.com/httpie/cli/issues/1255 | [
"enhancement",
"new"
] | isidentical | 0 |
public-apis/public-apis | api | 4,158 | 1 | open | 2025-02-25T02:06:30Z | 2025-02-25T02:06:30Z | https://github.com/public-apis/public-apis/issues/4158 | [] | 2629728088 | 0 | |
CorentinJ/Real-Time-Voice-Cloning | python | 903 | Synthesizer fine-tuning ruined the output | We were trying to fine-tune the existing model on some of our own data. We were faced with this error.

We went into the synthesizer/train.py file and commented out line 192 (which was an if condition).
After that, the model started training, but after training (for 2k steps) we were getting no output at all. Can you please identify what might have caused this? Thank you.
huggingface/transformers | tensorflow | 36,537 | Bug when computing positional IDs from embeddings | ### System Info
https://github.com/huggingface/transformers/blob/c0f8d055ce7a218e041e20a06946bf0baa8a7d6a/src/transformers/models/esm/modeling_esm.py#L243
I think it should be `1, sequence_length + 1, dtype=torch.long, device=inputs_embeds.device`
I don't see how the index of my padding token in the alphabet has anything to do with my positions in the sequence
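A minimal sketch of the reporter's proposed indexing — `seq_len` is a hypothetical value standing in for the real sequence length (note that some models, e.g. RoBERTa-style embeddings, deliberately offset positions by the padding index, so the current behavior may be intentional upstream):

```python
import torch

# Proposed fix: positions run over the sequence length (starting at 1),
# independent of the padding token's index in the vocabulary.
seq_len = 5
position_ids = torch.arange(1, seq_len + 1, dtype=torch.long)
print(position_ids.tolist())  # [1, 2, 3, 4, 5]
```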
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
-
### Expected behavior
- | open | 2025-03-04T14:01:31Z | 2025-03-04T14:27:26Z | https://github.com/huggingface/transformers/issues/36537 | [
"bug"
] | SabrinaRichter | 1 |
vitalik/django-ninja | django | 947 | Management command to output schema json | We're starting to use the openapi client generators. It's not too hard to just `curl` the json file in local dev, but for CI it would be slightly annoying to have to do.
It would be nice if there was a clean `python manage.py ninja-schema` that would output the json schema without having to start up gunicorn or the django dev server. I believe DRF has a similar option. | closed | 2023-11-22T02:17:09Z | 2023-11-22T18:45:01Z | https://github.com/vitalik/django-ninja/issues/947 | [
"documentation"
] | shughes-uk | 3 |
jessevig/bertviz | nlp | 58 | no attribute 'bias' while loading a finetuned BERT from TF | Hi!
I have a fine-tuned BERT model trained in TensorFlow and I would like to visualize its attentions.
I tried to load the model in different ways, but I always get the same error. Any suggestions on how to solve this 'no attribute bias' error?
```
model = BertModel.from_pretrained(model_path, from_tf=True, output_attentions=True)
tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
ModuleAttributeError: 'BertModel' object has no attribute 'bias'
```
```
model = BertForPreTraining.from_pretrained(model_path, from_tf=True, output_attentions=True)
tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
ModuleAttributeError: 'BertForPreTraining' object has no attribute 'bias'
```
```
config = BertConfig.from_json_file(model_path+"config.json")
config.output_attentions = True
model = BertForPreTraining.from_pretrained(model_path+"model.ckpt.index", from_tf=True, config=config)
ModuleAttributeError: 'BertForPreTraining' object has no attribute 'bias'
```
```
model = AutoModelForPreTraining.from_pretrained(model_path, from_tf=True, output_attentions=True)
tokenizer = AutoTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
ModuleAttributeError: 'AutoModelForPreTraining' object has no attribute 'bias'
```
Alternatively, I tried to convert my TF model to PyTorch using `transformers-cli convert`, and I was able to load the model and see the attentions. But they seem very fuzzy. Is this normal? Or did I make a mistake?

Thanks | closed | 2020-10-29T14:47:17Z | 2021-01-31T14:41:48Z | https://github.com/jessevig/bertviz/issues/58 | [] | GorkaUrbizu | 2 |
glumpy/glumpy | numpy | 285 | Installing from pypi on python 3.8+ | The update to handle the deprecated function `time.clock` in Python 3.8, which was pushed on Feb 23, 2020, is not on PyPI, resulting in an unusable installation for Python 3.8+ users.
The workaround is simply to go into app/clock.py and change line 164, which reads:
`_default_time_function = time.clock`
and switch it to:
`_default_time_function = time.perf_counter`
Hope this helps some people! | closed | 2021-05-06T15:03:52Z | 2021-05-17T19:23:49Z | https://github.com/glumpy/glumpy/issues/285 | [] | merny93 | 2 |
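For readers hitting this on Python 3.8+: `time.clock` no longer exists there, so any code path reaching it raises `AttributeError` until the line above is patched. The substitution is safe because `time.perf_counter` provides the same kind of high-resolution timer for measuring elapsed intervals — a minimal illustration (the timed loop is just a placeholder workload):

```python
import time

# time.clock was removed in Python 3.8; time.perf_counter is the
# recommended replacement for measuring elapsed intervals.
start = time.perf_counter()
total = sum(range(100_000))  # placeholder workload to time
elapsed = time.perf_counter() - start

print(total)  # 4999950000
```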
mwaskom/seaborn | data-visualization | 3,192 | TypeError: ufunc 'isfinite' not supported with numpy 1.24.0 | This is the code that I ran
```python
import matplotlib.pyplot as plt
import seaborn as sns
fmri = sns.load_dataset("fmri")
fmri.info()
sns.set(style="darkgrid")
sns.lineplot(data=fmri, x="timepoint", y="signal", hue="region", style="event")
plt.show()
```
This is the error I got
```python
❯ python3 example.py
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1064 entries, 0 to 1063
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 subject 1064 non-null object
1 timepoint 1064 non-null int64
2 event 1064 non-null object
3 region 1064 non-null object
4 signal 1064 non-null float64
dtypes: float64(1), int64(1), object(3)
memory usage: 41.7+ KB
Traceback (most recent call last):
File "/home/rizwan/Downloads/Seaborn-Issue/example.py", line 8, in <module>
sns.lineplot(
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/seaborn/relational.py", line 645, in lineplot
p.plot(ax, kwargs)
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/seaborn/relational.py", line 489, in plot
func(
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/matplotlib/__init__.py", line 1423, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/matplotlib/axes/_axes.py", line 5367, in fill_between
return self._fill_between_x_or_y(
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/matplotlib/axes/_axes.py", line 5272, in _fill_between_x_or_y
ind, dep1, dep2 = map(
File "/home/rizwan/Miniconda3/envs/py39/lib/python3.9/site-packages/numpy/ma/core.py", line 2360, in masked_invalid
return masked_where(~(np.isfinite(getdata(a))), a, copy=copy)
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
Environment
- Python `3.9.15`
- Seaborn `0.12.1`
- Matplotlib `3.6.2`
- Pandas `1.5.2`
- Numpy `1.24.0`
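Incidentally, the reported ufunc error can be reproduced in NumPy alone, independent of seaborn — a minimal sketch, using an object-dtype array as a hypothetical stand-in for whatever `np.ma.masked_invalid` received internally:

```python
import numpy as np

# np.isfinite only accepts dtypes it can safely cast; an object-dtype
# array (hypothetical stand-in for the failing input) raises the same
# "ufunc 'isfinite' not supported" error.
arr = np.array([1, 2, None], dtype=object)
try:
    np.isfinite(arr)
    outcome = "ok"
except TypeError:
    outcome = "TypeError"

print(outcome)  # TypeError
```

As the report notes, pinning NumPy below 1.24.0 avoids the failing path.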
This error only occurs when I use numpy `1.24.0`, version `1.23.5` or lower works as usual. | closed | 2022-12-19T13:57:57Z | 2024-02-28T20:42:27Z | https://github.com/mwaskom/seaborn/issues/3192 | [
"upstream"
] | Rizwan-Hasan | 3 |
open-mmlab/mmdetection | pytorch | 11,556 | Training gets stuck at the "advance dataloader" step when resuming from a resume file | Hello, my server unexpectedly restarted during MMDetection training recently, so I tried to continue training with the resume file and ran into this situation. How can I solve it?
 | open | 2024-03-16T06:56:38Z | 2024-05-30T05:37:33Z | https://github.com/open-mmlab/mmdetection/issues/11556 | [] | Chengnotwang | 13 |
miguelgrinberg/python-socketio | asyncio | 1,292 | All connections stop processing events at the same time | Hi guys!
I'm recently facing an issue that all my connections turn into a strange state **almost at the same time**
So basically, I have a customised "telemetry" packet a little bit under 1k size sending every 5 seconds. After running them for couple hours, all of such "telemetry" events starting timeout. I double checked the server side and I can confirm that none of those events been received (I logged at the first line of my event handler".
The strangest part is: while these kind events keeping timeout, the socketio pingpong packets are still sending as normal. The log looks like this:
```
2023-12-31 03:15:51,394 - INFO - Sending packet MESSAGE data 2/node,1659["report_telemetry","{\"system\": {\"current_config_tag\": \"2afa24ceba44e4ac\", \"cpu_
2023-12-31 03:15:51,469 - INFO - Received packet MESSAGE data 3/node,1659[{"return_code":0,"ts":"2023-12-30T19:15:51.380215","message":"Telemetry resolved"}]
2023-12-31 03:15:56,494 - INFO - Sending packet MESSAGE data 2/node,1660["report_telemetry","{\"system\": {\"current_config_tag\": \"2afa24ceba44e4ac\", \"cpu_
2023-12-31 03:15:56,570 - INFO - Received packet MESSAGE data 3/node,1660[{"return_code":0,"ts":"2023-12-30T19:15:56.480691","message":"Telemetry resolved"}]
2023-12-31 03:15:58,958 - INFO - Received packet PING data
2023-12-31 03:15:58,958 - INFO - Sending packet PONG data
2023-12-31 03:16:01,598 - INFO - Sending packet MESSAGE data 2/node,1661["report_telemetry","{\"system\": {\"current_config_tag\": \"2afa24ceba44e4ac\", \"cpu_
2023-12-31 03:16:01,674 - INFO - Received packet MESSAGE data 3/node,1661[{"return_code":0,"ts":"2023-12-30T19:16:01.584726","message":"Telemetry resolved"}]
2023-12-31 03:16:06,690 - INFO - Sending packet MESSAGE data 2/node,1662["report_telemetry","{\"system\": {\"current_config_tag\": \"2afa24ceba44e4ac\", \"cpu_
2023-12-31 03:16:26,725 - INFO - Sending packet MESSAGE data 2/node,1663["report_telemetry","{\"system\": {\"current_config_tag\": \"2afa24ceba44e4ac\", \"cpu_
2023-12-31 03:16:46,767 - INFO - Sending packet MESSAGE data 2/node,1664["report_telemetry","{\"system\": {\"current_config_tag\": \"2afa24ceba44e4ac\", \"cpu_
2023-12-31 03:16:59,013 - INFO - Received packet PING data
2023-12-31 03:16:59,022 - INFO - Sending packet PONG data
2023-12-31 03:17:06,802 - INFO - Sending packet MESSAGE data 2/node,1665["report_telemetry","{\"system\": {\"current_config_tag\": \"2afa24ceba44e4ac\", \"cpu_
2023-12-31 03:17:26,821 - INFO - Sending packet MESSAGE data 2/node,1666["report_telemetry","{\"system\": {\"current_config_tag\": \"2afa24ceba44e4ac\", \"cpu_
```
As you can see, the server stops sending ACKs from 2023-12-31 03:16:06,690 onwards, and I can confirm that the event handler is not being triggered. Why does this happen?
More information about this issue:
- I have an nginx proxy, but I tried to let the client talk to the server directly. Same result.
- All clients (roughly 10) are supposed to be long-lived. It was working for a really long time (about a month) but started failing recently.
- It happens every day, for all clients, at almost the same time (within a few seconds)
- All clients are forced to use the websocket mode, not polling then upgrading
- I have changed the ping interval to 60s
- I'm actually using Flask-SocketIO, but I think this is not related to Flask-SocketIO itself.
- The backend is served via uWSGI with gevent; the config looks like this:
```
[uwsgi]
strict = true
master = true
enable-threads = true
vacuum = true ; Delete sockets during shutdown
single-interpreter = true
die-on-term = true ; Shutdown when receiving SIGTERM (default is respawn)
need-app = true
wsgi-file = wsgi.py
callable = app
http-websockets = true
gevent = 1024
disable-logging = true
log-4xx = true
log-5xx = true
```
Versions:
flask-socketio==5.2.0
python-socketio[client]==5.9.1
python-engineio==4.7.1 | closed | 2023-12-31T06:55:37Z | 2023-12-31T11:48:25Z | https://github.com/miguelgrinberg/python-socketio/issues/1292 | [] | morland96 | 0 |
lanpa/tensorboardX | numpy | 229 | AssertionError: %30 : Dynamic = onnx::Shape(%8) has empty scope name in add_graph() | I'm writing a PyTorch LSTM model and want to use tensorboardX to visualize the graph.
But when I call ``writer.add_graph(rnn, r)``, I got error:
```bat
C:\Python36\python.exe I:/github_repos/botrainer/train.py
Traceback (most recent call last):
File "I:/github_repos/botrainer/train.py", line 238, in <module>
writer.add_graph(rnn, r, verbose=False)
File "C:\Python36\lib\site-packages\tensorboardX\writer.py", line 520, in add_graph
self.file_writer.add_graph(graph(model, input_to_model, verbose))
File "C:\Python36\lib\site-packages\tensorboardX\pytorch_graph.py", line 104, in graph
list_of_nodes = parse(graph)
File "C:\Python36\lib\site-packages\tensorboardX\pytorch_graph.py", line 24, in parse
assert n.scopeName() != '', '{} has empty scope name'.format(n)
AssertionError: %30 : Dynamic = onnx::Shape(%8)
has empty scope name
Process finished with exit code 1
```
My env is:
```bat
C:\Users\xxx>python3 -V
Python 3.6.6
C:\Users\xxx>pip3 freeze
absl-py==0.4.1
astor==0.7.1
atomicwrites==1.2.1
attrs==18.2.0
colorama==0.3.9
cycler==0.10.0
gast==0.2.0
grpcio==1.15.0
kiwisolver==1.0.1
Markdown==2.6.11
matplotlib==2.2.3
more-itertools==4.3.0
numpy==1.14.5
Pillow==5.2.0
pluggy==0.7.1
protobuf==3.6.1
py==1.6.0
pyparsing==2.2.0
pytest==3.8.0
python-dateutil==2.7.3
pytz==2018.5
six==1.11.0
tensorboard==1.10.0
tensorboardX==1.4
tensorflow==1.10.0
termcolor==1.1.0
torch==0.4.1
torchvision==0.2.1
Werkzeug==0.14.1
```
After some debugging, it seems that the ``scope name`` for the main RNN model is empty. My code is:
https://github.com/hsluoyz/botrainer/blob/39857ff57a5d7a7983cf969bb95f1bf123f88901/train.py#L237-L238
Can you help me solve this issue? Thanks. | closed | 2018-09-19T08:15:04Z | 2018-12-29T17:58:33Z | https://github.com/lanpa/tensorboardX/issues/229 | [
"seems fixed"
] | hsluoyz | 1 |
google/seq2seq | tensorflow | 213 | where to get the English-Chinese data? | Hello, I want to do English-Chinese or Chinese-English translation, but where can I get the English-Chinese data? Can you give some advice?
Thank you in advance!
deepfakes/faceswap | machine-learning | 786 | issue converting with filter | Hi, I'm trying to run convert with a face filter (to process or not process a certain person) and it shows the following error.
(It runs with no problem if I don't include the face filter.)

```
face_filter = dict(detector=self.args.detector.replace("-", "_").lower(),
AttributeError: 'Namespace' object has no attribute 'detector'
```
mwaskom/seaborn | data-visualization | 3,439 | so.plot not working in version 0.12.2 | The following code does not display the intended graph, just a blank white space.
```
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import seaborn.objects as so
dataLi = pd.read_csv("lithium_data.csv")
dataLi
(
so.Plot(dataLi,
x = "CV",
y = "Mean")
)
plt.show()
```
In contrast, the code below does produce a graph. Why does the code above not work?
```
dataLi = pd.read_csv("lithium_data.csv")
g = sns.relplot(data = dataLi,
x = "CV",
y = "Mean")
g.set_axis_labels("%CV", "Mean [Lithium], mmol/L")
plt.show()
plt.close
```
| closed | 2023-08-16T02:17:21Z | 2023-08-16T16:19:59Z | https://github.com/mwaskom/seaborn/issues/3439 | [] | RSelvaratnam | 1 |
MaartenGr/BERTopic | nlp | 1,054 | Length of weights not compatible with specified axis. | ```python
umap_model = UMAP(n_components=10, n_neighbors=50, min_dist=0.0)
print('umap done')
hdbscan_model = HDBSCAN(min_samples=15, gen_min_span_tree=True, prediction_data=True)
print('hdbscan done')
ctfidf_model = ClassTfidfTransformer()
representation_model = MaximalMarginalRelevance(diversity=0.6)
vectorizer_model = CountVectorizer(ngram_range=(2, 2), stop_words="english")
sentence_model = SentenceTransformer("all-MiniLM-L6-v2")
pool = sentence_model.start_multi_process_pool()
embeddings = sentence_model.encode_multi_process(docs, pool)
print('embedding done')
topic_model = BERTopic(embedding_model=sentence_model,
                       vectorizer_model=vectorizer_model,
                       umap_model=umap_model,
                       ctfidf_model=ctfidf_model,
                       hdbscan_model=hdbscan_model,
                       representation_model=representation_model,
                       nr_topics=15,
                       top_n_words=15, n_gram_range=(1, 2), low_memory=True, calculate_probabilities=True)
```

When I try to fit the model as follows, I get the following error:

```python
topics, probs = topic_model.fit_transform(docs)
# or
topics, probs = topic_model.fit_transform(docs, embeddings)
```

```
ValueError: Length of weights not compatible with specified axis.
```

The code works if I do not include `representation_model`. Please advise how to fix this problem.
| closed | 2023-03-01T15:54:19Z | 2023-05-23T09:35:15Z | https://github.com/MaartenGr/BERTopic/issues/1054 | [] | josepius-clemson | 4 |
gunthercox/ChatterBot | machine-learning | 2,213 | can't get the same response if I type same input twice | I'm trying to develop a chatbot for my project, the problem is I cannot get the same response with the same input. Can anyone help to solve this issue?
It looks like this:
https://ibb.co/19b5zxP
Here is the code:
```python
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer
from chatterbot.trainers import ChatterBotCorpusTrainer
import random

ticket_number = random.randint(10000, 19999)

chatbot = ChatBot(
    'IThelpdeskChatbot',
    storage_adapter='chatterbot.storage.SQLStorageAdapter',
    preprocessors=[
        'chatterbot.preprocessors.clean_whitespace',
        'chatterbot.preprocessors.unescape_html',
        'chatterbot.preprocessors.convert_to_ascii'
    ],
    logic_adapters=[
        {
            'import_path': 'chatterbot.logic.SpecificResponseAdapter',
            'input_text': 'bye',
            'output_text': 'Thank you for using IT helpdesk Chatbot! Your ticket number is : {} . <br>Please help us to complete a <a href="http://www.google.com">survey</a> and give us some feedback.'.format(ticket_number)
        },
        {
            'import_path': 'chatterbot.logic.SpecificResponseAdapter',
            'input_text': '1',
            'output_text': 'Go to network tab from bottom right corner, choose inn001 network, click on connect'
        },
        {
            'import_path': 'chatterbot.logic.SpecificResponseAdapter',
            'input_text': '2',
            'output_text': 'Go to URL: <a href="http://www.password.reset.com">www.password.reset.com</a> , Enter you login id, enter password received, reset the password'
        },
        {
            'import_path': 'chatterbot.logic.SpecificResponseAdapter',
            'input_text': '3',
            'output_text': ' Verify that your computer is able to see the Internet and/or other computers to ensure that your computer is not encountering a connection issue, which would be causing your e-mail issue. Ensure that your Internet e-mail server or your Network e-mail server is not encountering issues by contacting either your Internet Service Provider or your Network administrator.'
        },
        {
            'import_path': 'chatterbot.logic.BestMatch',
            # 'threshold': 0.9,
            'maximum_similarity_threshold': 0.9,
            'default_response': 'Sorry, I do not understand. '
        }
    ],
    database_uri='sqlite:///database.sqlite3'
)

trainer_corpus = ChatterBotCorpusTrainer(chatbot)

trainer_corpus.train(
    "chatterbot.corpus.english.greetings",
    "chatterbot.corpus.english.computers",
    "chatterbot.corpus.english.conversations",
    "chatterbot.corpus.custom.qna"
)
```
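One hedged explanation, not verified against ChatterBot's internals: when several stored responses score equally for an input, the library may pick among them at random, so identical inputs can yield different replies. The general remedy in any such system is to make the tie-break deterministic, e.g. by seeding the random source per query or by dropping randomness entirely:

```python
import random

candidates = ["Hello!", "Hi there!", "Hey!"]  # equally-scored replies (hypothetical)

# Deterministic alternative 1: fix the seed per query (seed derived from input text).
random.seed("hello")
a = random.choice(candidates)
random.seed("hello")
b = random.choice(candidates)
print(a == b)  # True

# Deterministic alternative 2: drop randomness entirely and pick a stable winner.
c = sorted(candidates)[0]
print(c)  # Hello!
```

Whether ChatterBot exposes a hook for this is an assumption worth checking; the sketch only shows the tie-breaking idea.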
| open | 2021-11-01T07:46:59Z | 2021-12-01T17:26:09Z | https://github.com/gunthercox/ChatterBot/issues/2213 | [] | samsam3215 | 1 |
huggingface/datasets | nlp | 7,449 | Cannot load data with different schemas from different parquet files | ### Describe the bug
Cannot load samples with optional fields from different files. The schema cannot be correctly derived.
### Steps to reproduce the bug
When I place two samples with an optional field `some_extra_field` within a single parquet file, it can be loaded via `load_dataset`.
```python
import pandas as pd
from datasets import load_dataset
data = [
    {'conversations': {'role': 'user', 'content': 'hello'}},
    {'conversations': {'role': 'user', 'content': 'hi', 'some_extra_field': 'some_value'}}
]
df = pd.DataFrame(data)
df.to_parquet('data.parquet')
dataset = load_dataset('parquet', data_files='data.parquet', split='train')
print(dataset.features)
```
The schema can be derived. `some_extra_field` is set to None for the first row where it is absent.
```
{'conversations': {'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None), 'some_extra_field': Value(dtype='string', id=None)}}
```
However, when I separate the samples into different files, it cannot be loaded.
```python
import pandas as pd
from datasets import load_dataset
data1 = [{'conversations': {'role': 'user', 'content': 'hello'}}]
pd.DataFrame(data1).to_parquet('data1.parquet')
data2 = [{'conversations': {'role': 'user', 'content': 'hi', 'some_extra_field': 'some_value'}}]
pd.DataFrame(data2).to_parquet('data2.parquet')
dataset = load_dataset('parquet', data_files=['data1.parquet', 'data2.parquet'], split='train')
print(dataset.features)
```
Traceback:
```
Traceback (most recent call last):
File "/home/tiger/.local/lib/python3.9/site-packages/datasets/builder.py", line 1854, in _prepare_split_single
for _, table in generator:
File "/home/tiger/.local/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables
yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
File "/home/tiger/.local/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table
pa_table = table_cast(pa_table, self.info.features.arrow_schema)
File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
return cast_table_to_schema(table, schema)
File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2245, in cast_table_to_schema
arrays = [
File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2246, in <listcomp>
cast_array_to_feature(
File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2108, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
TypeError: Couldn't cast array of type
struct<content: string, role: string, some_extra_field: string>
to
{'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)}
```
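For what it's worth, one workaround that sidesteps the cast error (a hedged sketch, not an official `datasets` feature) is to normalize the records before writing the parquet files, so every file carries the same schema: compute the union of the nested keys across all rows and pad the missing ones with `None`. The padding itself is plain Python:

```python
def pad_optional_fields(rows, nested_key="conversations"):
    """Give every row the union of all nested keys, filling gaps with None.

    Writing parquet files from padded rows yields one consistent schema
    across files, so the struct cast cannot fail.
    """
    all_keys = set()
    for row in rows:
        all_keys |= row[nested_key].keys()
    for row in rows:
        for key in all_keys:
            row[nested_key].setdefault(key, None)
    return rows

data1 = [{'conversations': {'role': 'user', 'content': 'hello'}}]
data2 = [{'conversations': {'role': 'user', 'content': 'hi', 'some_extra_field': 'some_value'}}]
padded = pad_optional_fields(data1 + data2)
print(padded[0]['conversations']['some_extra_field'])  # None
```

Padding must run over all files' rows together (or with a precomputed key set) before splitting them back into per-file DataFrames; otherwise the schemas still diverge.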
### Expected behavior
Correctly load data with optional fields from different parquet files.
### Environment info
- `datasets` version: 3.3.2
- Platform: Linux-5.10.135.bsk.4-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- `huggingface_hub` version: 0.28.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | closed | 2025-03-13T08:14:49Z | 2025-03-17T07:27:48Z | https://github.com/huggingface/datasets/issues/7449 | [] | li-plus | 2 |
pytest-dev/pytest-django | pytest | 542 | Catching Django-specific warnings in pytest | I would like to turn the `RemovedInNextVersionWarning` warnings into errors, as described in https://docs.pytest.org/en/latest/warnings.html
Unfortunately, when I add to pytest.ini the following lines:
```
filterwarnings =
error::RemovedInDjango20Warning
```
I end up with (it is looking strictly for Python internal warnings):
```
INTERNALERROR> File "/myfolder/python3.6/warnings.py", line 236, in _getcategory
INTERNALERROR> raise _OptionError("unknown warning category: %r" % (category,))
INTERNALERROR> warnings._OptionError: unknown warning category: 'RemovedInDjango20Warning'
```
Should/could pytest-django as a plugin make pytest aware of more possible warnings to handle?
Bonus question (slightly out of the scope): Even better would be a possibility to filter out the DjangoWarnings coming from the written code to the warnings coming from other libraries :-) | closed | 2017-11-21T16:28:45Z | 2019-02-03T23:21:00Z | https://github.com/pytest-dev/pytest-django/issues/542 | [
"bitesize",
"documentation",
"question"
] | MRigal | 3 |
JoeanAmier/TikTokDownloader | api | 397 | How can I remove the uid from the save path? | Asking the author and everyone here for advice.
I'd like the video file save path to not include the uid: save files directly under root as the base directory, and then in the subfolder matching each entry's mark value.
For example, with
"root": "/Users/admin/Desktop/抖音下载",
{
"mark": "项目1",
"url": "https://www.douyin.com/user/MS4wsjABAAAAjhjh9m4Y2GglyvbImVZlMjhjhzYPJ4?from_tab_name=main&vid=7463746000604171572",
"tab": "post",
"earliest": "2025/1/23",
"latest": "",
"enable": true
},
the files would be saved directly to
/Users/admin/Desktop/抖音下载/项目1/1.mp4
and no longer to
/Users/admin/Desktop/抖音下载/UID1792275955040775_项目1_发布作品/
Thanks for any pointers. | closed | 2025-01-27T13:04:43Z | 2025-01-27T14:29:14Z | https://github.com/JoeanAmier/TikTokDownloader/issues/397 | [] | 9ihbd2DZSMjtsf7vecXjz | 2 |
flasgger/flasgger | flask | 249 | Flasgger 0.9.1 does not reflect changes without restarting app | I'm using the latest flasgger==0.9.1 and creating documentation in YAML files using the `swag_from` decorator. Whenever I change something in a YAML file (major or very minor change, whatever), I need to restart the Python app in order to get the changes reflected in the Flasgger UI. I think this is quite inconvenient; it worked well in flasgger==0.8.3 without reloading.
Here is my init:
```python
def create_swagger(app):
    template = {
        "openapi": '3.0.0',
        "info": {
            "title": "my title",
            "description": "my description",
            "contact": {
                "responsibleOrganization": "my org",
                "email": "my mail",
                "url": "my url"
            },
            "version": "1.0"
        },
        "basePath": "/",
        "schemes": [
            "http",
            "https"
        ],
        "operationId": "getmyData"
    }
    config = {
        "headers": [
        ],
        "specs": [
            {
                "endpoint": 'api',
                "route": '/api.json',
                "rule_filter": lambda rule: True,  # all in
                "model_filter": lambda tag: True,  # all in
            }
        ],
        "static_url_path": "/flasgger_static",
        "swagger_ui": True,
        "specs_route": "/docs"
    }
    return Swagger(app, template=template, config=config)
``` | closed | 2018-10-04T11:46:25Z | 2019-01-16T04:41:19Z | https://github.com/flasgger/flasgger/issues/249 | [
"0.9.2"
] | ab-gissupport | 9 |
google-research/bert | nlp | 959 | How to interprete results | I am a bit confused.
After training, I'm left with the following files in the output folder:
- train.tf_record
- model.ckpt-x.meta
- model.ckpt-x.index
- model.ckpt-x.data-00000of-x
- graph.pbtxt
- checkpoints.txt
- events.out.tfevents.1575989900.MO-HSK-M-TEC057
I know I have to use the model.ckpt files to load the model when testing it, but how do I use the other files to analyse how the training went? Is there any helpful information in them? I set the flag "do evaluate" to true, so the evaluation results should be somewhere, no?
thanks. | closed | 2019-12-10T16:05:48Z | 2020-06-06T16:47:56Z | https://github.com/google-research/bert/issues/959 | [] | raff7 | 1 |
huggingface/text-generation-inference | nlp | 2,832 | use pip install TGI3.0 | ### Feature request
I want to use `pip install tgi`, but only version 2.4.0 is available there,
while I see the GitHub releases already include 3.0.1.
Do I just have to wait before I can use tgi 3.0+ via pip install?
### Motivation
Easier to use tgi 3.0+
### Your contribution
https://pypi.org/project/tgi/ | open | 2024-12-12T13:53:46Z | 2024-12-14T10:09:25Z | https://github.com/huggingface/text-generation-inference/issues/2832 | [] | xiezhipeng-git | 3 |
flasgger/flasgger | rest-api | 285 | __init__() missing 2 required positional arguments: 'schema_name_resolver' and 'spec' | Hello, I'm getting the error below. Am I missing anything?
```
../../../env36/lib64/python3.6/site-packages/flask_base/app.py:1: in <module>
from flasgger import Swagger, LazyString, LazyJSONEncoder
../../../env36/lib64/python3.6/site-packages/flasgger/__init__.py:8: in <module>
from .base import Swagger, Flasgger, NO_SANITIZER, BR_SANITIZER, MK_SANITIZER, LazyJSONEncoder # noqa
../../../env36/lib64/python3.6/site-packages/flasgger/base.py:37: in <module>
from .utils import extract_definitions
../../../env36/lib64/python3.6/site-packages/flasgger/utils.py:22: in <module>
from .marshmallow_apispec import SwaggerView
../../../env36/lib64/python3.6/site-packages/flasgger/marshmallow_apispec.py:13: in <module>
openapi_converter = openapi.OpenAPIConverter(openapi_version='2.0')
E TypeError: __init__() missing 2 required positional arguments: 'schema_name_resolver' and 'spec'
``` | open | 2019-02-10T15:14:05Z | 2019-07-25T05:52:22Z | https://github.com/flasgger/flasgger/issues/285 | [] | wobeng | 12 |
hzwer/ECCV2022-RIFE | computer-vision | 110 | Nothing is generating. | Hello, I'm trying to use RIFE with the demo.mp4 and other videos.
The output shows that 0 frames/FPS are detected and no video is produced when the command finishes:
- python3 inference_video.py --exp=2 --video=sample.mp4 --montage --skip
Fri Feb 19 14:55:27 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.36.06 Driver Version: 450.36.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro M4000 On | 00000000:00:05.0 Off | N/A |
| 46% 24C P8 12W / 120W | 1MiB / 8126MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
sample.mp4, 0.0 frames in total, 0.0FPS to 0.0FPS
Will not merge audio because using png, fps or skip flag!
135it [00:43, 3.19it/s]
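`0.0 frames in total` suggests the frame counter came back as zero, i.e. the video could not actually be decoded, which is often an OpenCV codec/build issue rather than RIFE itself (treat this as a hedged guess). A direct probe, guarded so it degrades when OpenCV is absent:

```python
# Hypothetical diagnostic: check whether OpenCV can open the file at all.
try:
    import cv2
    cap = cv2.VideoCapture("sample.mp4")
    opened = cap.isOpened()
    total_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT) if opened else 0.0
    cap.release()
except ImportError:
    opened, total_frames = None, None  # OpenCV not installed here

print(opened, total_frames)  # False / 0.0 would reproduce the symptom above
```

If `isOpened()` is False for a file that plays fine elsewhere, reinstalling `opencv-python` (or re-encoding the video to a plain H.264 MP4 with ffmpeg) is the usual next step to try.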
My packages:
attrs==19.1.0
backcall==0.1.0
bleach==3.1.0
certifi==2020.12.5
chardet==4.0.0
cloudpickle==1.2.1
cycler==0.10.0
Cython==0.29.13
dataclasses==0.8
decorator==4.4.0
defusedxml==0.6.0
dtrx==8.0.2
entrypoints==0.3
filelock==3.0.12
future==0.17.1
gdown==3.12.2
idna==2.10
imageio==2.9.0
imageio-ffmpeg==0.4.3
ipykernel==5.1.1
ipython==7.7.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
jedi==0.14.1
Jinja2==2.10.1
joblib==0.13.2
json5==0.8.5
jsonschema==3.0.2
jupyter==1.0.0
jupyter-client==5.3.1
jupyter-console==6.0.0
jupyter-core==4.5.0
jupyterlab==1.0.4
jupyterlab-server==1.0.0
kiwisolver==1.1.0
MarkupSafe==1.1.1
matplotlib==3.1.1
mistune==0.8.4
moviepy==1.0.3
nbconvert==5.5.0
nbformat==4.4.0
notebook==6.0.0
numpy==1.17.0
opencv-python==4.5.1.48
pandas==0.25.0
pandocfilters==1.4.2
parso==0.5.1
pexpect==4.7.0
pickleshare==0.7.5
Pillow==6.1.0
proglog==0.1.9
prometheus-client==0.7.1
prompt-toolkit==2.0.9
protobuf==3.9.1
ptyprocess==0.6.0
Pygments==2.4.2
pygobject==3.26.1
pyparsing==2.4.2
pyrsistent==0.15.4
PySocks==1.7.1
python-apt==1.6.4
python-dateutil==2.8.0
python-distutils-extra==2.39
pytz==2019.2
PyYAML==5.1.2
pyzmq==18.0.2
qtconsole==4.5.2
requests==2.25.1
scikit-learn==0.21.3
scipy==1.3.0
Send2Trash==1.5.0
six==1.12.0
sk-video==1.1.10
terminado==0.8.2
testpath==0.4.2
torch==1.7.1
torch-nightly==1.2.0.dev20190805
torchvision==0.4.0a0+d31eafa
tornado==6.0.3
tqdm==4.57.0
traitlets==4.3.2
typing==3.7.4
typing-extensions==3.7.4.3
urllib3==1.26.3
wcwidth==0.1.7
webencodings==0.5.1
widgetsnbextension==3.5.1
My system:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
30 GB ram
Quadro M4000
From the paperspace, using pytorch library package. | closed | 2021-02-19T15:00:46Z | 2021-03-15T15:12:51Z | https://github.com/hzwer/ECCV2022-RIFE/issues/110 | [] | ktakanopy | 7 |
graphistry/pygraphistry | pandas | 385 | [FEA] anonymize graph | **Is your feature request related to a problem? Please describe.**
When sharing graphs with others, especially when going from a private server / private account to the public hub, such as for publicizing or debugging, it'd help to have a way to quickly anonymize a graph.
Sample use cases to make fast:
* show topology-only
* with and without renaming topology identifiers
* with and without renaming all cols
* including/dropping specific columns
* with/without preserving topology (prevent decloaking)
* with/without preserving value distributions
* as needed, opt in/out for particular columns
Perf:
* fast for graphs < 10M nodes, edges
* path to bigger graphs: if pandas, stick to vector ops, ...
**Describe the solution you'd like**
Something declarative and configurable like:
```python
g2 = g.anonymize(
    node_policy={
        'include': ['col1', ...],  # safelist of columns to include
        'preserve': ['col1', ...],  # opt-in columns not to anonymize
        'rename': ['col1', ...] | True,
        'sample_drop': 0.2,  # % nodes to drop; 0 (default) means preserve all
        'sample_add': 0.2  # % nodes to add; 0 (default) means add none
    },
    edge_policy={
        'drop': ['col2', ...]  # switch to opt-out via columns to exclude
    },
    sample_keep=..,
    sample_add=...
)
g2.plot()

g_orig = g2.deanonymize(g2._anon_remapping)
```
Sample transforms:
* rename columns
* remap categoricals, including both range values & distribution, but preserve type
* resample edges, both removing/adding
* ... and shift topology distributions & supernode locations
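For the categorical-remapping transform, a salted hash is the standard trick: it hides raw values while keeping equality structure (and hence value distributions) intact. A hedged stdlib sketch of the idea, not a proposed PyGraphistry API:

```python
import hashlib

def anonymize_column(values, salt="per-run-secret"):
    """Remap values to opaque tokens.

    Same input gives the same token (within one salt), so joins, groupbys,
    and distributions survive; the raw values do not. The salt prevents
    rainbow-table style decloaking across runs.
    """
    def remap(v):
        return hashlib.sha256((salt + str(v)).encode()).hexdigest()[:8]
    return [remap(v) for v in values]

tokens = anonymize_column(["alice", "bob", "alice"])
print(tokens[0] == tokens[2], tokens[0] != tokens[1])  # True True
```

For the <10M-row perf target, the same idea vectorizes by hashing only the unique values and applying a dict-based `map` over the column.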
----
If there is a popular tabular or graph-centric library here that is well maintained, we should consider using it
... but not if it poses maintenance or security risks
**Additional context**
Ultimately it'd be good to push this to the UI via some sort of safe mode: role-specific masking, ...
| open | 2022-07-29T21:01:18Z | 2022-09-09T20:08:45Z | https://github.com/graphistry/pygraphistry/issues/385 | [
"enhancement",
"help wanted",
"good-first-issue"
] | lmeyerov | 3 |
scrapy/scrapy | python | 6,361 | Remove top-level reactor imports from CrawlerProces/CrawlerRunner examples | There are several code examples on https://docs.scrapy.org/en/latest/topics/practices.html that have a top-level `from twisted.internet import reactor`, which is problematic (breaks when the settings specify a non-default reactor) and needs to be fixed. | closed | 2024-05-14T09:52:47Z | 2024-05-27T10:36:35Z | https://github.com/scrapy/scrapy/issues/6361 | [
"bug",
"good first issue",
"docs"
] | wRAR | 5 |
HIT-SCIR/ltp | nlp | 338 | How to compile and generate the ./otcws file in ./tools/train/ | <!-- Chinese template start: If you speak English, please remove the Chinese templates -->
Before asking, please confirm the following:
- [x] Since your question may duplicate a previous one, please confirm that you have searched earlier issues before submitting
## Problem *type*
<!-- e.g.: build failure, memory error, abnormal termination -->
## Error *scenario*
<!-- e.g.: error when analyzing the sentence "xxx", error after running for 4 hours; can it be reproduced? -->
## Code snippet
## How to reproduce this error
<!-- Please be specific as possible. Use dashes (-) or numbers (1.) to create a list of steps -->
## Runtime environment
<!-- Operating system, Python version, etc. -->
## Expected result
<!-- What should have happened? -->
## Other
<!-- end of Chinese template -->
<!-- start of English template: if you asked in Chinese, please remove the English template -->
Please ensure your issue adheres to the following guidelines:
- [x] Search previous issues before making a new one, as yours may be a duplicate.
## *What* is affected by this bug?
<!-- Eg. building failed, memory leak, program terminated. -->
## *When* does this occur?
<!-- Eg. when analyze the sentence "xxx", when the program run for about 4 hours. (Does it possibly occur or occur every time?) -->
## *Where* on the code does it happen?
<!-- Eg. when i call the api xxx and then call xxx the program will crash. (show the process code if needed.) -->
## *How* do we replicate the issue?
<!-- Please be specific as possible. Use dashes (-) or numbers (1.) to create a list of steps -->
## Your environment information
<!-- OS, languages, IDE and it's version, and other related tools, environment variables, the way you insert the code to your project. -->
## Expected behavior (i.e. solution)
<!-- What should have happened? -->
## Other Comments
<!-- end of English template -->
| closed | 2019-05-07T09:29:55Z | 2020-06-25T11:19:51Z | https://github.com/HIT-SCIR/ltp/issues/338 | [] | Fireboyar | 2 |
youfou/wxpy | api | 169 | Can all methods return data in JSON format? | Could all APIs like the one below return data in JSON format?
`bot.friends().search('', sex=2, city='深圳', province='广东')` | open | 2017-08-29T02:32:45Z | 2017-08-29T02:57:52Z | https://github.com/youfou/wxpy/issues/169 | [] | leeyisoft | 1 |
apify/crawlee-python | automation | 1,034 | How to pass arbitrary data to context | Hello, I am looking for a way to inject or pass arbitrary data to the context object, similar to what is possible in FastAPI via `app.state` or via dependencies:
```python
from fastapi import FastAPI
app = FastAPI()
app.state.custom_context = {"a": "b"}
# In separate file
router = Router()
@router.get("/items/{item_id}")
async def read_items(item_id: str, request: Request):
example_context = request.app.state.custom_context
return items[item_id]
```
My use case is that I have the handlers in separate files, and I would like to pass general run information to the handlers without needing database requests. I am aware of `user_data`, but I am not sure it is appropriate here, since it is more tied to the `Request` than to the overall run. Furthermore, I don't see a straightforward way to pass the data from the start_urls. Is it possible to abuse the `pre_navigation_hook` or something similar?
Any guidance would be helpful. Thanks!
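Until there is first-class support, one framework-agnostic pattern (a hedged sketch, not a crawlee API) is to build handlers through a factory that closes over the run-wide state, so every handler sees it without touching the context object or a database:

```python
import asyncio

def make_handler(run_state: dict):
    """Return a handler with `run_state` baked in via closure (hypothetical shape)."""
    async def handler(context):
        # `context` stands in for the framework's per-request context object;
        # the shared run information comes from the enclosing scope instead.
        return run_state["custom_context"]
    return handler

run_state = {"custom_context": {"a": "b"}}
handler = make_handler(run_state)
result = asyncio.run(handler(context=None))
print(result)  # {'a': 'b'}
```

Since the handlers live in separate files, the factory can be imported there and called once at router-registration time, which keeps the handler signatures unchanged.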
| closed | 2025-02-27T20:25:16Z | 2025-03-03T14:15:51Z | https://github.com/apify/crawlee-python/issues/1034 | [
"t-tooling"
] | cirezd | 3 |
gradio-app/gradio | data-visualization | 10,673 | No mouse wheel zoom in `gr.Plot()` after updating to Gradio 5 | ### Describe the bug
Hey there 🙂
I recently upgraded from Gradio 4.44.1 to 5.18.0 (using Python 3.11.11).
I noticed that I can no longer zoom using the mouse wheel on a Plotly-generated map inside `gr.Plot()`. The zoom-in/out buttons provided by Plotly only work occasionally.
This issue occurs both locally and on Huggingface.
Thanks! 🚀
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import plotly.express as px
import pandas as pd
def create_map():
    df = pd.DataFrame({
        "lat": [37.7749, 40.7128, 34.0522],
        "lon": [-122.4194, -74.0060, -118.2437]
    })
    fig = px.scatter_mapbox(df, lat="lat", lon="lon", zoom=3)
    fig.update_layout(mapbox_style="open-street-map", margin={"r":0,"t":0,"l":0,"b":0})
    return fig

with gr.Blocks() as demo:
    plot = gr.Plot(value=create_map())

demo.launch()
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.18.0
gradio_client version: 1.7.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.8
ffmpy: 0.5.0
gradio-client==1.7.2 is not installed.
httpx: 0.28.1
huggingface-hub: 0.29.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.3
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.7
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.29.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 15.0
```
### Severity
Blocking usage of gradio | closed | 2025-02-25T13:47:48Z | 2025-02-25T14:00:27Z | https://github.com/gradio-app/gradio/issues/10673 | [
"bug"
] | marinasie | 1 |
sinaptik-ai/pandas-ai | data-visualization | 1,175 | Agent Method parameter save_charts acting like open_charts, and opens saved charts automatically | ### System Info
Python 3.11.7
PandasAI 2.0.37
Pandas 1.5.3
### 🐛 Describe the bug
``` python
def test():
    os.environ["PANDASAI_API_KEY"] = os.environ.get("PANDASAI_API_KEY")
    llm = OpenAI(api_token=os.environ.get("OPENAI_API_KEY"))
    df = pd.DataFrame({
        "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan",
                    "China"],
        "gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360,
                1607402389504, 1490967855104, 4380756541440, 14631844184064],
        "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]})
    pAI = Agent(df, config={"verbose": True, "llm": llm, "save_charts": True, "enable_cache": True})
    llm_analysis_response = pAI.chat("Create a plot for GDP in relation to happiness")
    return llm_analysis_response
```
The above code should create the chart and save it to the default path (and it does).
However, it also opens the chart automatically, similar to what `open_charts` does, which causes problems when integrating pandasai into applications like Streamlit, for example.
"bug"
] | Emad-Eldin-G | 7 |
graphql-python/graphene-sqlalchemy | graphql | 140 | geoalchemy2 support | ## Help! geoalchemy2 support!
```
class CoordinateMixin():
    location = db.Column('location', Geography(geometry_type='POINT', srid=4326, spatial_index=True, dimension=2), doc='gps coordinate')
```
Exception: Don't know how to convert the SQLAlchemy field user.location (<class 'sqlalchemy.sql.schema.Column'>)
I've already added the following code in the schema file.
```
from geoalchemy2 import Geography
from graphene_sqlalchemy.converter import get_column_doc, convert_sqlalchemy_type
from app.graphql.extesions import CoordinateJSON
@convert_sqlalchemy_type.register(Geography)
def convert_column_to_coordinatejson(type, column, registry=None):
    return graphene.Field(CoordinateJSON, description=get_column_doc(column))
schema = graphene.Schema(query=RootQuery, types=types, mutation=MyMutation)
```
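A hedged observation: `convert_sqlalchemy_type` is a dispatch registry, so a registration only affects conversions that run *after* the `@...register` decorator executes. If the model/schema module converts its columns at import time before the module containing the `Geography` handler is imported, the handler is never consulted. A minimal stdlib illustration of that ordering rule (not graphene-sqlalchemy's actual internals):

```python
from functools import singledispatch

class Geography:  # stand-in for geoalchemy2.Geography
    pass

@singledispatch
def convert(value):
    raise TypeError(f"Don't know how to convert {type(value).__name__}")

# A lookup before registration fails, like the reported exception:
try:
    convert(Geography())
    before = "converted"
except TypeError:
    before = "failed"

@convert.register(Geography)
def _(value):
    return "CoordinateJSON"

after = convert(Geography())
print(before, after)  # failed CoordinateJSON
```

If that is the cause here, importing the converter-registration module before any `SQLAlchemyObjectType` subclasses are defined should change the outcome; treat this as a diagnosis to verify, not a confirmed fix.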
| open | 2018-07-04T12:40:44Z | 2022-01-13T16:26:50Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/140 | [] | wahello | 6 |
huggingface/datasets | nlp | 6,760 | Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0 | ### Describe the bug
This happens with datasets-2.18.0; I downgraded the version to 2.14.6 fixing this temporarily.
```
Traceback (most recent call last):
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory
raise e1 from None
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1831, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
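A hedged observation: `0x8b` at position 1 is the second byte of the gzip magic number (`1f 8b`), so the file being decoded as UTF-8 here is very likely a gzip stream that was never decompressed (e.g. a compressed README fetched without transparent decompression). That signature is easy to verify with the stdlib:

```python
import gzip

payload = gzip.compress(b"DEFAULT_CONFIG_NAME")
print(payload[:2])  # b'\x1f\x8b', the gzip magic number

try:
    payload.decode("utf-8")
    reason = None
except UnicodeDecodeError as exc:
    reason = (exc.start, exc.reason)

print(reason)  # (1, 'invalid start byte'), matching the traceback above
recovered = gzip.decompress(payload).decode("utf-8")
print(recovered)  # DEFAULT_CONFIG_NAME
```

If that diagnosis holds, it would explain why downgrading to 2.14.6 (which presumably takes a different code path for this check) avoids the error.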
### Steps to reproduce the bug
1. Using Python3.10/3.11
2. Install datasets-2.18.0
3. test with
```
from datasets import load_dataset
dataset = load_dataset("codeparrot/apps")
```
### Expected behavior
Normally it should manage to download and load the dataset without such error.
### Environment info
Ubuntu, Python3.10/3.11 | open | 2024-03-28T03:44:26Z | 2024-06-19T07:06:40Z | https://github.com/huggingface/datasets/issues/6760 | [] | yucc-leon | 4 |
graphql-python/graphene-django | django | 995 | ModuleNotFoundError: No module named 'graphene_django' | **Note: for support questions, please use stackoverflow**. This repository's issues are reserved for feature requests and bug reports.
* **What is the current behavior?**
```
Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/Users/raymondkorir/Library/Python/3.7/lib/python/site-packages/django/utils/autoreload.py", line 53, in wrapper
fn(*args, **kwargs)
File "/Users/raymondkorir/Library/Python/3.7/lib/python/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
autoreload.raise_last_exception()
File "/Users/raymondkorir/Library/Python/3.7/lib/python/site-packages/django/utils/autoreload.py", line 76, in raise_last_exception
raise _exception[1]
File "/Users/raymondkorir/Library/Python/3.7/lib/python/site-packages/django/core/management/__init__.py", line 357, in execute
autoreload.check_errors(django.setup)()
File "/Users/raymondkorir/Library/Python/3.7/lib/python/site-packages/django/utils/autoreload.py", line 53, in wrapper
fn(*args, **kwargs)
File "/Users/raymondkorir/Library/Python/3.7/lib/python/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Users/raymondkorir/Library/Python/3.7/lib/python/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/Users/raymondkorir/Library/Python/3.7/lib/python/site-packages/django/apps/config.py", line 90, in create
module = import_module(entry)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'graphene_django'
```
* **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem** via
a github repo, https://repl.it or similar (you can use this template as a starting point: https://repl.it/@jkimbo/Graphene-Django-Example).
* **What is the expected behavior?**
* **What is the motivation / use case for changing the behavior?**
* **Please tell us about your environment:**
- Version:
- Platform:
* **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow)
pip freeze
```
graphene==2.1.8
graphene-django==2.11.0
graphql-core==2.3.2
graphql-relay==2.0.1
gunicorn==20.0.4
```
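A `ModuleNotFoundError` while the package shows up in `pip freeze` usually means the server runs under a different interpreter than the one pip installed into (the traceback above points at a per-user `~/Library/Python/3.7/...` site-packages). A quick, framework-free way to check which interpreter is running and whether it can see a module:

```python
import importlib.util
import sys

def module_location(name: str):
    """Return where `name` would be imported from, or None if not importable."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None) if spec else None

print(sys.executable)           # the interpreter actually running this code
print(module_location("json"))  # a stdlib module: always found
print(module_location("graphene_django_missing_example"))  # None if absent
```

Running this from the same command that starts Django (`python manage.py shell` or similar) and comparing `sys.executable` with the interpreter pip used (`python -m pip install graphene-django` pins them together) is one way to confirm or rule this out.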
| closed | 2020-06-26T16:12:29Z | 2020-06-26T17:50:20Z | https://github.com/graphql-python/graphene-django/issues/995 | [
"🐛bug"
] | raymondfx | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,599 | Cycle_A loss suddenly becomes very high | Hi, @junyanz @taesungp
Has anyone encountered an issue where the cycle_A loss suddenly becomes very high after a random iteration? My loss log shows that in epoch 78 the discriminator loss suddenly decreases to close to zero, and my generator loss goes from roughly 0.2 to almost 1 or 2. The cycle_A loss also suddenly looks very high.
<img width="1262" alt="Screenshot 2023-09-23 at 6 09 43 PM" src="https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/assets/87685037/3ac00157-88a0-4757-a745-3d4f6c82c332">
<img width="1153" alt="Screenshot 2023-09-23 at 6 10 39 PM" src="https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/assets/87685037/70c22be2-f91b-466d-bb14-c64c44d2d7ee">
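Not a confirmed diagnosis, but a discriminator loss collapsing to zero like this is a classic GAN instability; common mitigations include resuming from the last stable checkpoint, lowering the learning rate around that epoch, or clipping gradient norms. The clipping math itself is tiny (a generic sketch, not this repo's code; in PyTorch the equivalent utility is `torch.nn.utils.clip_grad_norm_`):

```python
def clip_by_global_norm(grads, max_norm):
    """Scale a flat list of gradient values so their L2 norm is <= max_norm."""
    total = sum(g * g for g in grads) ** 0.5
    if total > max_norm and total > 0:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads

clipped = clip_by_global_norm([3.0, 4.0], max_norm=1.0)  # norm was 5.0
norm = sum(g * g for g in clipped) ** 0.5
print(norm)  # 1.0, up to float rounding
```

Whether clipping actually stabilizes this particular run is an open question; checkpoint-and-resume is the lower-risk first step.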
The inferenced image is all black now after these severeal losses exploded. | open | 2023-09-23T22:15:30Z | 2023-10-31T17:06:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1599 | [] | JunCS1 | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 302 | License should be GPL | First off, this is a really cool project. Thanks for sharing it and leaving it up!
I noticed, however, that PyQt5 uses the GPL. Since this is a copyleft license, your code must also be GPL. Otherwise, we're violating the terms of the license.
See also:
https://www.riverbankcomputing.com/commercial/license-faq
https://www.gnu.org/licenses/gpl-faq.html | closed | 2020-03-22T16:22:28Z | 2020-05-02T23:47:36Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/302 | [] | rustygentile | 2 |
0b01001001/spectree | pydantic | 235 | Can the email validator dependency be optional? | `EmailStr` is used for the contact info, but the field is optional. It's a waste to install these additional dependencies if I don't plan on supplying a value for this field, and don't plan on using `EmailStr` for my own API's models. Or maybe I do want to supply an email in the contact info, but don't value validation of it enough to justify the additional dependencies.
Is there a nice way to make this dependency optional? Of course, it could just be an extra for spectree, but will that feel awkward or unintuitive for users? | closed | 2022-07-11T22:08:39Z | 2022-07-14T02:51:40Z | https://github.com/0b01001001/spectree/issues/235 | [] | MarkKoz | 6 |
3b1b/manim | python | 1,228 | There are no scenes inside that module |
```
[09/15/20 04:18:10] ERROR    There are no scenes inside that module    __main__.py:83
```
| closed | 2020-09-15T01:22:48Z | 2021-02-16T10:35:03Z | https://github.com/3b1b/manim/issues/1228 | [] | zhangj563 | 5 |
openapi-generators/openapi-python-client | rest-api | 351 | Support non-file fields in Multipart requests | **Is your feature request related to a problem? Please describe.**
Given this schema for a POST request:
```
{
"requestBody": {
"content": {
"multipart/form-data": {
"schema": {
"type": "object",
"properties": {
"file": {
"type": "string",
"format": "binary"
},
"options": {
"type": "string",
"default": "{}"
}
}
}
}
}
}
}
```
the generated code will call `httpx.post(files=dict(file=file, options=options))` which causes a server-side pydantic validation error (`options` should be of type `str`).
**Describe the solution you'd like**
The correct invocation would be `httpx.post(data=dict(options=options), files=dict(file=file))` (see [relevant page from httpx docs](https://www.python-httpx.org/advanced/#multipart-file-encoding))
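For illustration, here is a hedged sketch of the split the generator could perform (the function and the test values are mine, not the generated client's code): properties whose schema has `format: binary` go to httpx's `files` argument, everything else goes to `data`.

```python
# Sketch only: decide, per schema property, whether a value belongs in
# httpx's `files` argument (format: binary) or its `data` argument.
def split_multipart(properties, values):
    data, files = {}, {}
    for name, schema in properties.items():
        if schema.get("format") == "binary":
            files[name] = values[name]
        else:
            data[name] = values[name]
    return data, files

properties = {
    "file": {"type": "string", "format": "binary"},
    "options": {"type": "string", "default": "{}"},
}
values = {"file": b"<bytes>", "options": "{}"}
data, files = split_multipart(properties, values)
# then: httpx.post(url, data=data, files=files)
```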
**Describe alternatives you've considered**
Some sort of server-side workaround, but it might be somewhat ugly.
| closed | 2021-03-19T17:08:08Z | 2021-03-22T20:25:52Z | https://github.com/openapi-generators/openapi-python-client/issues/351 | [
"✨ enhancement"
] | csymeonides-mf | 1 |
piskvorky/gensim | nlp | 2,889 | Document the X2Vec refactoring in the change log. | Document the X2Vec refactoring in the change log.
What changed from the user's perspective?
_Originally posted by @mpenkov in https://github.com/RaRe-Technologies/gensim/pull/2698/review_comment/create_ | closed | 2020-07-19T13:14:09Z | 2020-09-28T12:23:19Z | https://github.com/piskvorky/gensim/issues/2889 | [] | mpenkov | 2 |
AutoViML/AutoViz | scikit-learn | 65 | Read CSV file with different encodings | Hi. I'm trying to use the library with a CSV file that uses "ISO-8859-1" encoding, and the log says:
`pandas ascii encoder does not work for this file. Continuing...`
` pandas utf-8 encoder does not work for this file. Continuing...`
` pandas iso-8859-1 encoder does not work for this file. Continuing...`
After checking the source code, I found that there is a bug in the AutoViz_Utils.py file:
<img width="533" alt="image" src="https://user-images.githubusercontent.com/84574031/160566021-39a96083-76fb-442e-ae6d-0e77781547a9.png">
Here there is a for loop that tries different encodings but, as can be seen, the `encoding` parameter of the `pd.read_csv` call is always set to `None`.
Please, check this, maybe I'm missing something.
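For reference, a hedged sketch of what the fixed loop could look like (this is not AutoViz's actual code; the function name is mine). The point is simply that each candidate encoding must be passed through to `pd.read_csv` instead of hard-coding `encoding=None`:

```python
import pandas as pd

def read_csv_with_fallback(filename, encodings=("ascii", "utf-8", "iso-8859-1")):
    # Try each encoding in turn, actually forwarding it to pandas
    for enc in encodings:
        try:
            return pd.read_csv(filename, encoding=enc)
        except (UnicodeDecodeError, ValueError):
            print(f"pandas {enc} encoder does not work for this file. Continuing...")
    raise ValueError("none of the tried encodings could read the file")
```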
Thanks in advance. | closed | 2022-03-29T08:19:39Z | 2022-04-08T19:00:53Z | https://github.com/AutoViML/AutoViz/issues/65 | [] | gaspar-avit | 1 |
roboflow/supervision | pytorch | 943 | `Detections.empty()` invalidates detections, causes crashes when `Detections.merge()` is called. | ### Search before asking
- [ ] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
This is the underlying reason for #928. I believe it is important enough to deserve a separate issue.
### Bug
`Detectons.empty()` creates a very specific type of `Detections`, which is guaranteed to be incompatible with some models when merging results.
### Example
Suppose we run an instance segmentation model on a video. It finds one detection in frame 1, but no detections in frame 2. When we try calling `Detections.merge` on these detections, it raises an error.
```python
IMAGE_PATH = "cat.png"
image = cv2.imread(IMAGE_PATH)
black_image = np.zeros_like(image, dtype=np.uint8)
model = get_roboflow_model(model_id="yolov8n-640") # Error type 1
# model = get_roboflow_model(model_id="yolov8n-seg-640") # Error type 2
result = model.infer(image)[0]
detections_1 = sv.Detections.from_inference(result)
result = model.infer(black_image)[0]
detections_2 = sv.Detections.from_inference(result)
sv.Detections.merge([detections_1, detections_2])
```
<details>
<summary>Error type 1</summary>
```
ValueError Traceback (most recent call last)
<ipython-input-8-173450170fe7> in <cell line: 18>()
16 detections_2 = sv.Detections.from_inference(result)
17
---> 18 sv.Detections.merge([detections_1, detections_2])
1 frames
/usr/local/lib/python3.10/dist-packages/supervision/detection/core.py in merge(cls, detections_list)
768 tracker_id = stack_or_none("tracker_id")
769
--> 770 data = merge_data([d.data for d in detections_list])
771
772 return cls(
/usr/local/lib/python3.10/dist-packages/supervision/detection/utils.py in merge_data(data_list)
629 all_keys_sets = [set(data.keys()) for data in data_list]
630 if not all(keys_set == all_keys_sets[0] for keys_set in all_keys_sets):
--> 631 raise ValueError("All data dictionaries must have the same keys to merge.")
632
633 for data in data_list:
ValueError: All data dictionaries must have the same keys to merge.
```
</details>
<details>
<summary>Error type 2</summary>
```
ValueError Traceback (most recent call last)
<ipython-input-10-1ac0380e7de6> in <cell line: 20>()
18
19
---> 20 sv.Detections.merge([detections_1, detections_2])
1 frames
/usr/local/lib/python3.10/dist-packages/supervision/detection/core.py in stack_or_none(name)
756 return None
757 if any(d.__getattribute__(name) is None for d in detections_list):
--> 758 raise ValueError(f"All or none of the '{name}' fields must be None")
759 return (
760 np.vstack([d.__getattribute__(name) for d in detections_list])
ValueError: All or none of the 'mask' fields must be None
```
</details>
### Deeper explanation
`Detections` contains these variables:
```python
xyxy: np.ndarray
mask: Optional[np.ndarray] = None
confidence: Optional[np.ndarray] = None
class_id: Optional[np.ndarray] = None
tracker_id: Optional[np.ndarray] = None
data: Dict[str, Union[np.ndarray, List]] = field(default_factory=dict)
```
Suppose we call `Detections.merge([detections_1, detections_2])`. An error is rightfully raised when the same variable is defined differently in the detections. For example - when `mask` is `None` in `detections_1` and `np.array([])` in `detections_2`. It makes sense as we don't want to merge incompatible detections.
However, when calling `Detections.empty()`, frequently used to initialize the no-detections case, it sets a specific subset of the fields - only the `xyxy`, `condifence` and `class_id`.
Many models use other fields as well. Because of this, the set of defined fields in an empty detection might be different to what the model returns. When trying to merge these, an error is raised.
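To make the failure mode concrete without depending on `supervision` itself, here is a minimal stand-alone sketch of the invariant that `merge` enforces (the class and function below are stand-ins I wrote, not the real `Detections`):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FakeDetections:  # stand-in for sv.Detections, fields trimmed down
    xyxy: list
    mask: Optional[list] = None
    confidence: Optional[list] = None

def check_mergeable(detections, fields=("mask", "confidence")):
    # mirrors the "all or none of the field must be None" check in merge
    for name in fields:
        values = [getattr(d, name) for d in detections]
        if any(v is None for v in values) and not all(v is None for v in values):
            raise ValueError(f"All or none of the '{name}' fields must be None")

with_mask = FakeDetections(xyxy=[[0, 0, 1, 1]], mask=[[True]], confidence=[0.9])
empty = FakeDetections(xyxy=[])  # like Detections.empty(): mask stays None
try:
    check_mergeable([with_mask, empty])
except ValueError as e:
    print(e)  # All or none of the 'mask' fields must be None
```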
### Environment
_No response_
### Minimal Reproducible Example
See aforementioned example.
[this Colab](https://colab.research.google.com/drive/1ktj_CIlM9mcmboo8LJUCsYgKlaPN8bJM?usp=sharing) contains examples, list of functions affected.
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
I could help submit a PR when I figure out a solution, but this is a non-trivial bug and to my best estimates, solving it 'correctly' would require a redesign of `Detections`. Quick remediation may mean band-aid code and hope that future features are written defensively when `merge` is involved. | open | 2024-02-25T09:02:56Z | 2024-05-20T07:30:42Z | https://github.com/roboflow/supervision/issues/943 | [
"bug"
] | LinasKo | 3 |
wkentaro/labelme | deep-learning | 709 | How to label an instance segmentation dataset when multiple objects of the same class appear in one picture | When running `labelme_json_to_dataset xxx.json`, no info.yaml is generated.
I am using the latest version.
| closed | 2020-07-01T09:11:43Z | 2020-09-07T10:21:04Z | https://github.com/wkentaro/labelme/issues/709 | [] | deep-practice | 9 |
frappe/frappe | rest-api | 31,849 | A method to disable Prepared Report permanently |
**Is your feature request related to a problem? Please describe.**
We have a script report that is used by all our users multiple times a day. There are defined limits on the amount of data that can be processed by this report and it should always execute in a fraction of a second.
Unfortunately once every few weeks or months the report will take a long time to execute for a single user (we're not sure why but it could be related to network or Frappecloud congestion). Once this happens the report is automatically and permanently changed to a Prepared Report which degrades the user experience for every user. As everyone uses this report multiple times per day it confuses and frustrates our users when this happens.
After this change is triggered the only resolution is for an admin to go the "Role Permission for Page and Report" and manually unset the Prepared Report flag so that the report starts behaving normally again.
In our org an admin is not always available so, in the absence of a better solution, we have had to create a scheduled job that turns off the Prepared Report flag on our important reports repeatedly just so our staff don't have to deal with this.
**Describe the solution you'd like**
A flag to permanently prevent a report being converted to a prepared report. As far as I can tell this was an option in Frappe before but seems to have been removed. We have reports that are used very regularly and should never be converted to a prepared report. If they run slow occasionally that's fine, let them time out but don't permanently change their behaviour for all users because of one bad run.
**Describe alternatives you've considered**
This is a bigger question, but should a report ever be permanently converted to a prepared report against the will of the system designer? I'm not sure what the benefit is here. If a report is running too long, then let it time out and generate an error. If that happens a lot and the report really is processing a lot of data (and not just running slow this one time for some reason), then the system admin can convert it to a prepared report themselves.
**Additional context**
We're using the latest version of Frappe 15 on Frappecloud.
| open | 2025-03-21T11:14:27Z | 2025-03-24T05:57:54Z | https://github.com/frappe/frappe/issues/31849 | [
"feature-request"
] | gscfogrady | 1 |
sherlock-project/sherlock | python | 2,400 | [SPAM] | [SPAM] | closed | 2025-01-29T01:36:40Z | 2025-01-29T02:04:35Z | https://github.com/sherlock-project/sherlock/issues/2400 | [
"spam"
] | Jhonatanbb | 0 |
recommenders-team/recommenders | deep-learning | 1,577 | [BUG] Misleading Example provided in LibffmConverter | The example provided in **LibffmConverter** is wrong
https://github.com/microsoft/recommenders/blob/27709229cdc4aa7d39ab715789f093a2d21d2661/recommenders/datasets/pandas_df_utils.py#L134-L141
The expected output for last column should be:
```python
field4
4:6:1
4:7:1
4:8:1
4:9:1
4:10:1
``` | closed | 2021-12-10T10:29:35Z | 2021-12-17T10:20:49Z | https://github.com/recommenders-team/recommenders/issues/1577 | [] | tim5go | 1 |
vimalloc/flask-jwt-extended | flask | 442 | access token isn't refreshing | Here is my code:
```python
from flask import Flask, url_for, redirect, jsonify, make_response, flash
from flask_mail import Mail
from flask_migrate import Migrate
from flask_restful import Api
from flask_sqlalchemy import SQLAlchemy
from config import config
from flask_login import LoginManager, logout_user, current_user
from datetime import datetime, timezone, timedelta
from flask_jwt_extended import (
JWTManager,
get_jwt,
create_access_token,
create_refresh_token,
get_jwt_identity,
set_access_cookies,
unset_jwt_cookies,
jwt_required,
verify_jwt_in_request
)
import re
from jinja2 import evalcontextfilter, Markup, escape
login_manager = LoginManager()
login_manager.login_view = "auth.login"
mail = Mail()
db = SQLAlchemy()
migrate = Migrate()
def create_app(config_name):
"""App factory"""
app = Flask(__name__)
app.config.from_object(config[config_name])
config[config_name].init_app(app)
with app.app_context():
db.init_app(app)
login_manager.init_app(app)
db.create_all()
mail.init_app(app)
jwt = JWTManager(app)
migrate.init_app(app, db)
@app.after_request
def refresh_expiring_jwts(response):
try:
# verify_jwt_in_request(optional=True)
exp_timestamp = get_jwt()["exp"]
now = datetime.now(timezone.utc)
target_timestamp = datetime.timestamp(now + timedelta(minutes=30))
if target_timestamp > exp_timestamp:
access_token = create_access_token(identity=get_jwt_identity())
set_access_cookies(response, access_token)
return response
except (RuntimeError, KeyError) as e:
print(e)
# Case where there is not a valid JWT. Just return the original response
return response
@app.template_filter()
@evalcontextfilter
def linebreaks(eval_ctx, value):
"""Converts newlines into <p> and <br />s."""
value = re.sub(r"\r\n|\r|\n", "\n", value) # normalize newlines
paras = re.split("\n{2,}", value)
paras = [u"<p>%s</p>" % p.replace("\n", "<br />") for p in paras]
paras = u"\n\n".join(paras)
return Markup(paras)
@jwt.expired_token_loader
def my_expired_token_callback(jwt_header, jwt_payload):
response = make_response(jsonify(auth=False))
if current_user.is_authenticated:
flash("You've been logged out", "error")
logout_user()
unset_jwt_cookies(response)
return response
from .main import main as main_blueprint
app.register_blueprint(main_blueprint)
from .auth import auth as auth_blueprint
app.register_blueprint(auth_blueprint, url_prefix="/auth")
from .api import api_bp as api_blueprint
app.register_blueprint(api_blueprint, url_prefix="/api/v1")
return app
```
The error that's printed is -
```
You must call `@jwt_required()` or `verify_jwt_in_request()` before using this method
```
I have tried adding the commented line `verify_jwt_in_request(optional=True)` and also `jwt_required()` as a decorator on the callback, but neither works.
Thanks and regards,
V | closed | 2021-07-29T21:01:50Z | 2021-07-31T11:25:46Z | https://github.com/vimalloc/flask-jwt-extended/issues/442 | [] | V01D0 | 1 |
OFA-Sys/Chinese-CLIP | nlp | 250 | Chinese scores are very low when matching an image against Chinese and English text at the same time | When computing the similarity between one image and Chinese and English texts separately, the score of the Chinese text is far lower than that of the English text.
```
import torch
from PIL import Image
import requests
import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models
print("Available models:", available_models())
# Available models: ['ViT-B-16', 'ViT-L-14', 'ViT-L-14-336', 'ViT-H-14', 'RN50']
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = load_from_name("ViT-L-14-336", device=device, download_root='checkpoint')
model.eval()
img_path = "examples/dog.jpg"
text_list = ["猫", "狗", "dog"]
image = preprocess(Image.open(str(img_path))).unsqueeze(0).to(device)
text = clip.tokenize(text_list).to(device)
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    logits_per_image, logits_per_text = model.get_similarity(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

for i, prompt in enumerate(text_list):
    print(f"{prompt}: {probs[0][i]}")
```
The results are:
猫: 4.565715789794922e-05
狗: 0.010009765625
dog: 0.98974609375
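One note on reading these numbers (it doesn't explain the gap itself): the probabilities come from a softmax over similarity logits, and CLIP-style models multiply cosine similarities by a learned logit scale (often close to 100) before the softmax, so a small gap in raw similarity turns into a near-one-hot distribution. A stdlib sketch with made-up similarity values:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical cosine similarities for ["猫", "狗", "dog"] — invented for
# illustration, not measured from the model.
sims = [0.18, 0.25, 0.28]
probs = softmax([100.0 * s for s in sims])  # logit scale ~100 amplifies gaps
```

So even if the Chinese texts have only slightly lower raw similarity than the English one, the softmax output will look extremely skewed.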
How can this problem be solved?
| open | 2024-01-22T11:07:26Z | 2024-01-22T11:07:26Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/250 | [] | HiddenMarkovModel | 0 |
sigmavirus24/github3.py | rest-api | 609 | Support for source import API | [Support Import API](https://developer.github.com/v3/migration/source_imports/)
- [ ] Start an import
- [ ] Get import progress
- [ ] Update existing import
- [ ] Get commit authors
- [ ] Map a commit author
- [ ] Set Git LFS preference
- [ ] Get large files
- [ ] Cancel an import
I'll begin with "starting the import".
| open | 2016-05-07T20:09:12Z | 2016-11-15T19:39:34Z | https://github.com/sigmavirus24/github3.py/issues/609 | [] | itsmemattchung | 0 |
waditu/tushare | pandas | 1,004 | Many index constituents and weights returned by the index_weight API have not been updated | For example, the weights fetched today (20190411) for the SME Board index 399005 and the ChiNext index 399006 are still those from 20181228 last year, and the weights for 000807 are still from 20181130. There are quite a few cases like this. | closed | 2019-04-11T06:17:04Z | 2019-04-16T15:17:48Z | https://github.com/waditu/tushare/issues/1004 | [] | deepfuzzy | 2 |
huggingface/datasets | pandas | 7,359 | There are multiple 'mteb/arguana' configurations in the cache: default, corpus, queries with HF_HUB_OFFLINE=1 | ### Describe the bug
Hey folks,
I am trying to run this code -
```python
from datasets import load_dataset, get_dataset_config_names
ds = load_dataset("mteb/arguana")
```
with HF_HUB_OFFLINE=1
But I get the following error -
```python
Using the latest cached version of the dataset since mteb/arguana couldn't be found on the Hugging Face Hub (offline mode is enabled).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 1
----> 1 ds = load_dataset("mteb/arguana")
File ~/env/lib/python3.10/site-packages/datasets/load.py:2129, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2124 verification_mode = VerificationMode(
2125 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
2126 )
2128 # Create a dataset builder
-> 2129 builder_instance = load_dataset_builder(
2130 path=path,
2131 name=name,
2132 data_dir=data_dir,
2133 data_files=data_files,
2134 cache_dir=cache_dir,
2135 features=features,
2136 download_config=download_config,
2137 download_mode=download_mode,
2138 revision=revision,
2139 token=token,
2140 storage_options=storage_options,
2141 trust_remote_code=trust_remote_code,
2142 _require_default_config_name=name is None,
2143 **config_kwargs,
2144 )
2146 # Return iterable dataset in case of streaming
2147 if streaming:
File ~/env/lib/python3.10/site-packages/datasets/load.py:1886, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
1884 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name)
1885 # Instantiate the dataset builder
-> 1886 builder_instance: DatasetBuilder = builder_cls(
1887 cache_dir=cache_dir,
1888 dataset_name=dataset_name,
1889 config_name=config_name,
1890 data_dir=data_dir,
1891 data_files=data_files,
1892 hash=dataset_module.hash,
1893 info=info,
1894 features=features,
1895 token=token,
1896 storage_options=storage_options,
1897 **builder_kwargs,
1898 **config_kwargs,
1899 )
1900 builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
1902 return builder_instance
File ~/env/lib/python3.10/site-packages/datasets/packaged_modules/cache/cache.py:124, in Cache.__init__(self, cache_dir, dataset_name, config_name, version, hash, base_path, info, features, token, repo_id, data_files, data_dir, storage_options, writer_batch_size, **config_kwargs)
122 config_kwargs["data_dir"] = data_dir
123 if hash == "auto" and version == "auto":
--> 124 config_name, version, hash = _find_hash_in_cache(
125 dataset_name=repo_id or dataset_name,
126 config_name=config_name,
127 cache_dir=cache_dir,
128 config_kwargs=config_kwargs,
129 custom_features=features,
130 )
131 elif hash == "auto" or version == "auto":
132 raise NotImplementedError("Pass both hash='auto' and version='auto' instead")
File ~/env/lib/python3.10/site-packages/datasets/packaged_modules/cache/cache.py:84, in _find_hash_in_cache(dataset_name, config_name, cache_dir, config_kwargs, custom_features)
72 other_configs = [
73 Path(_cached_directory_path).parts[-3]
74 for _cached_directory_path in glob.glob(os.path.join(cached_datasets_directory_path_root, "*", version, hash))
(...)
81 )
82 ]
83 if not config_id and len(other_configs) > 1:
---> 84 raise ValueError(
85 f"There are multiple '{dataset_name}' configurations in the cache: {', '.join(other_configs)}"
86 f"\nPlease specify which configuration to reload from the cache, e.g."
87 f"\n\tload_dataset('{dataset_name}', '{other_configs[0]}')"
88 )
89 config_name = cached_directory_path.parts[-3]
90 warning_msg = (
91 f"Found the latest cached dataset configuration '{config_name}' at {cached_directory_path} "
92 f"(last modified on {time.ctime(_get_modification_time(cached_directory_path))})."
93 )
ValueError: There are multiple 'mteb/arguana' configurations in the cache: queries, corpus, default
Please specify which configuration to reload from the cache, e.g.
load_dataset('mteb/arguana', 'queries')
```
It works when I run the same code with HF_HUB_OFFLINE=0, but after the data is downloaded, I turn off the HF hub cache with HF_HUB_OFFLINE=1, and then this error appears.
Are there some files I am missing with hub disabled?
### Steps to reproduce the bug
from datasets import load_dataset, get_dataset_config_names
ds = load_dataset("mteb/arguana")
with HF_HUB_OFFLINE=1
(after already running it with HF_HUB_OFFLINE=0 and populating the datasets cache)
### Expected behavior
Dataset loads successfully with HF_HUB_OFFLINE=1 (from the local cache), just as it does with HF_HUB_OFFLINE=0.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.15.148.2-2.cm2-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.27.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | open | 2025-01-06T17:42:49Z | 2025-01-06T17:43:31Z | https://github.com/huggingface/datasets/issues/7359 | [] | Bhavya6187 | 1 |
cvat-ai/cvat | computer-vision | 8,514 | Track has the incorrect number of interpolated frames equals to the total number of video frames | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
I am not having this error a few days ago, but I don't know why it started happening recently.
Basically, if a video has 10000 frames,
- annotating a rectangle shape would produce 1 annotation (which is correct)
- but adding a track (even with just one frame) would produce 20000 interpolated frames, which is related to the number of frames in the video
I also checked the exported annotation count, it aligns with what's showing in the statistics.
### Expected Behavior
~For a track, the total number of interpolated frames should be the frames in between any key frames.~
nvm, it turns out I didn't switch out of the track, so the track propagated all the way to the last frame.
### Possible Solution
_No response_
### Environment
- Git hash `0b7fc51339fa37e2fea6ead477fdaacbd1049278`
- Docker 27.3.1
- Linux: 6.8.0-45-generic #45~22.04.1-Ubuntu
_No response_ | closed | 2024-10-05T22:07:19Z | 2024-10-06T08:05:53Z | https://github.com/cvat-ai/cvat/issues/8514 | [
"bug"
] | Microos | 1 |
flavors/django-graphql-jwt | graphql | 51 | Raise custom exception with decorators | It would be nice if the decorators would support raising a custom exception (see line in decorators). I would like to implement a `@verification_required` annotation that would extend the `@user_passes_test` function to include checking the custom verification field on the `User` model. However, it currently raises the permission error, rather than my own error for verification.
https://github.com/flavors/django-graphql-jwt/blob/master/graphql_jwt/decorators.py#L40 | closed | 2018-11-14T20:09:25Z | 2018-11-28T16:06:53Z | https://github.com/flavors/django-graphql-jwt/issues/51 | [] | kendallroth | 2 |
python-restx/flask-restx | api | 602 | Unit Test "EmailTest.test_invalid_values_check" is failing | ### Summary
The unit test case "EmailTest.test_invalid_values_check" fails because the `not-found.fr` domain was registered a few weeks ago and no longer raises a `ValueError`. This affects the CI process. ([Tests #2026](https://github.com/python-restx/flask-restx/actions/runs/8105255279))
For test cases, reserved domain names should be used instead, as described in [RFC 2606](https://datatracker.ietf.org/doc/html/rfc2606#section-2).
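A small sketch of the idea (the helper name is mine): RFC 2606 §2 reserves the TLDs `.test`, `.example`, `.invalid`, and `.localhost`, which can never be registered, so negative test addresses built on them stay invalid forever.

```python
# TLDs reserved by RFC 2606 §2 — guaranteed never to resolve.
RESERVED_TLDS = {"test", "example", "invalid", "localhost"}

def uses_reserved_tld(domain: str) -> bool:
    # look only at the last label of the domain
    return domain.rsplit(".", 1)[-1].lower() in RESERVED_TLDS

assert uses_reserved_tld("not-found.invalid")
assert not uses_reserved_tld("not-found.fr")  # registrable, hence flaky in tests
```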
### Code
The `EmailTest.test_invalid_values_check` test case expects all of the domain names to be invalid, although the `not-found.fr` one is actually valid. ([ICANN Lookup](https://lookup.icann.org/en/lookup))
```python
@pytest.mark.parametrize(
    "value",
    [
        "coucou@not-found.fr",
        "me@localhost",
        "me@127.0.0.1",
        "me@127.1.2.3",
        "me@::1",
        "me@200.8.9.10",
        "me@2001:db8:85a3::8a2e:370:7334",
    ],
)
def test_invalid_values_check(self, value):
    email = inputs.email(check=True)
    self.assert_bad_email(email, value)
```
### Expected Behavior
All unit tests should be passing.
### Actual Behavior
The `EmailTest.test_invalid_values_check` testcase fails.
### Error Messages/Stack Trace
Here's a partial result from `tox` command.
```
======================================================= FAILURES =======================================================
_______________________________ EmailTest.test_invalid_values_check[coucou@not-found.fr] _______________________________
self = <tests.test_inputs.EmailTest object at 0x7f5816d65150>, value = 'coucou@not-found.fr'
@pytest.mark.parametrize(
"value",
[
"coucou@not-found.fr",
"me@localhost",
"me@127.0.0.1",
"me@127.1.2.3",
"me@::1",
"me@200.8.9.10",
"me@2001:db8:85a3::8a2e:370:7334",
],
)
def test_invalid_values_check(self, value):
email = inputs.email(check=True)
> self.assert_bad_email(email, value)
tests/test_inputs.py:667:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tests.test_inputs.EmailTest object at 0x7f5816d65150>
validator = <flask_restx.inputs.email object at 0x7f5816d648b0>, value = 'coucou@not-found.fr'
msg = '{0} is not a valid email'
def assert_bad_email(self, validator, value, msg=None):
msg = msg or "{0} is not a valid email"
> with pytest.raises(ValueError) as cm:
E Failed: DID NOT RAISE <class 'ValueError'>
tests/test_inputs.py:605: Failed
```
### Environment
- Python version: py 3.8 - 3.12, pypy 3.8
- Flask version: 2, 3
- Flask-RESTX version: 1.3.0
- Other installed Flask extensions: none
### Additional Context
- This issue will be fixed by https://github.com/python-restx/flask-restx/pull/603 | closed | 2024-04-15T16:01:33Z | 2024-07-24T14:27:37Z | https://github.com/python-restx/flask-restx/issues/602 | [
"bug"
] | StellaContrail | 1 |
jupyter-book/jupyter-book | jupyter | 1,502 | MAINT: Remove mathjax_config migration to mathjax3_config due to sphinx>4 changes | ### Description / Summary
`jupyter-book/config.py` contains a `todo` item to remove some code when we release `jupyter-book>=0.14`
The purpose of this code is to check if someone using `sphinx>=4` has specified `mathjax_config` in:
```yaml
sphinx:
config:
mathjax_config:
```
Sphinx has made this config version specific using `mathjax2_config` and `mathjax3_config`
We have added an automatic migration from `mathjax_config` -> `mathjax3_config` (if the user hasn't requested mathjax version 2 via the `mathjax_path` config) to keep project builds working with `sphinx>=4`, and issue a `jb` warning message asking users to update their `_config.yml` file.
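The check being removed amounts to something like this (illustrative names only, not the actual `jupyter-book/config.py` code):

```python
def migrate_mathjax(sphinx_config):
    # If the user asked for mathjax v2 via mathjax_path, leave things alone;
    # otherwise move the legacy key to its sphinx>=4 equivalent and warn.
    if "mathjax_config" in sphinx_config and "mathjax_path" not in sphinx_config:
        sphinx_config["mathjax3_config"] = sphinx_config.pop("mathjax_config")
        print("warning: 'mathjax_config' is deprecated with sphinx>=4; "
              "use 'mathjax3_config' in _config.yml")
    return sphinx_config
```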
### Value / benefit
Will likely not support `sphinx3` at this stage so can be safely removed
### Implementation details
Simple deletion of code block
### Tasks to complete
_No response_ | open | 2021-10-13T06:40:42Z | 2021-10-13T06:42:59Z | https://github.com/jupyter-book/jupyter-book/issues/1502 | [
"deprecate"
] | mmcky | 0 |
plotly/dash-table | dash | 634 | user friendly option for table to size to parent container | At this time it is a little tricky to get the table to vertically size to align with neighbouring components. This is exhibited while using Design Kit and including the table in a row, block, or card component.
For example: in order to get it to size within a given row
<img width="1278" alt="Screen Shot 2019-10-30 at 8 17 58 PM" src="https://user-images.githubusercontent.com/2789078/67908625-93f97500-fb52-11e9-905a-d48f79d134b4.png">
we had to use `table_style`:
`style_table={'height': '100%','overflowY':'scroll'},`
as well as target the #table via css:
```
#table {
height: 100%;
}
```
([full example code](https://github.com/plotly/dash-customer-success/blob/mckinsey-app/apps/kedro-test/app.py))
cc @wbrgss | open | 2019-10-31T00:22:37Z | 2019-10-31T00:22:37Z | https://github.com/plotly/dash-table/issues/634 | [] | cldougl | 0 |
tensorflow/tensor2tensor | machine-learning | 1,278 | Documentation for batch_size contradicts behaviour on TPU for variable length sequences | ### Description
The focus of this issue the interaction between the hparams batch_size and max_length.
The [comments on the batch_size hparam](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/layers/common_hparams.py#L31) state the following:
```
# If the problem consists of variable-length sequences
# (see problem.batch_size_means_tokens()), then this is the number
# of tokens per batch per GPU or per TPU core. Otherwise, this is
# the number of examples per GPU or per TPU core.
```
The relevant portion is the claim: **for variable-length sequences, the batch_size is the number of tokens per batch per gpu or per tpu core.**
However, the input_fn in [problem.py](https://github.com/medicode/tensor2tensor/blob/master/tensor2tensor/data_generators/problem.py#L793) does not follow this behaviour.
Instead, if a variable length sequence is provided and tpus are enabled, and hparams.max_length is not defined, the following behaviour will occur:
1. padded_shapes will be determined [here](https://github.com/medicode/tensor2tensor/blob/master/tensor2tensor/data_generators/problem.py#L906), and the length to which the strings will be padded will be a function of [max_length()](https://github.com/medicode/tensor2tensor/blob/master/tensor2tensor/data_generators/problem.py#L272) which in turn depends on hparams.batch_size.
2. A padded batch will be returned, as seen [here](https://github.com/medicode/tensor2tensor/blob/master/tensor2tensor/data_generators/problem.py#L922), with batch_size == hparams.batch_size and padded_shapes will have a length of max_length == batch_size, along the previously unspecified sequence length dimension.
Our tokens per batch can be measured as num_examples * tokens_per_example, but as seen in the code above, both num_examples and tokens_per_example are being set to batch_size.
The end result is that the tokens per batch if using batch_size with tpu is batch_size * batch_size.
One way to get around this is to use the hparams.max_length hyperparameter, since it has higher precedence in determining the value of [max_length()](https://github.com/medicode/tensor2tensor/blob/master/tensor2tensor/data_generators/problem.py#L272). But this solution isn't aligned with the documentation.
Summary of the issue:
The number of tokens per batch follows the following behaviour:
```python
if hparams.max_length and hparams.batch_size:
    tokens_per_batch = hparams.batch_size * hparams.max_length
elif batch_size:
    tokens_per_batch = batch_size * batch_size
```
The documentation for batch_size doesn't outline either of these behaviours.
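For concreteness, a stdlib sketch of the precedence as I read the code (dict access stands in for hparams attributes; this is not actual t2t code):

```python
def max_length(hparams):
    # mirrors problem.max_length(): hparams.max_length wins if set,
    # otherwise batch_size is reused as the padded sequence length
    return hparams["max_length"] or hparams["batch_size"]

def tokens_per_batch(hparams):
    # num_examples_per_batch is also batch_size on TPU, hence the product
    return hparams["batch_size"] * max_length(hparams)

assert tokens_per_batch({"batch_size": 64, "max_length": 256}) == 64 * 256
assert tokens_per_batch({"batch_size": 64, "max_length": 0}) == 64 * 64
```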
**Recommended Solutions**
If the goal is to be consistent with the current documentation
1. If hparams.max_length is not defined and thus batch_size is being used for max_length(), then the num_examples_per_batch for the dataset (e.g. [here](https://github.com/medicode/tensor2tensor/blob/master/tensor2tensor/data_generators/problem.py#L922)) should be 1.
2. If hparams.max_length is defined, then num_examples_per_batch should be hparams.batch_size // hparams.max_length
Alternatively, the documentation for batch_size could be changed to reflect this behaviour.
| open | 2018-12-05T16:27:34Z | 2018-12-05T16:27:34Z | https://github.com/tensorflow/tensor2tensor/issues/1278 | [] | etragas-fathom | 0 |
tensorflow/tensor2tensor | machine-learning | 1,586 | How to get perplexity scores for Language model training? | ### Description
In language model training I am using the transformer model and getting a loss value of around 3.5 in the logs after 300k steps. What perplexity score does that correspond to? Is it e^(loss)?
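For what it's worth, if the reported loss is the average per-token cross-entropy in nats (natural log), then perplexity is exp(loss); a quick standard-library check:

```python
import math

def perplexity(cross_entropy_nats):
    # Perplexity is exp of the per-token cross-entropy, assuming the loss
    # is averaged over tokens and measured in nats (natural-log base).
    return math.exp(cross_entropy_nats)

print(perplexity(3.5))  # about 33.1
```

If the framework reports loss in bits (base-2 log) instead, the formula would be 2 ** loss.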
| open | 2019-05-27T09:01:21Z | 2019-05-28T04:29:41Z | https://github.com/tensorflow/tensor2tensor/issues/1586 | [] | ashu5644 | 0 |
codertimo/BERT-pytorch | nlp | 49 | Question about the loss of Masked LM | Thank you very much for this great contribution.
I found that the masked LM loss stops decreasing once it reaches a value around 7. However, in the official TensorFlow implementation, the MLM loss easily decreases to 1. I think something went wrong in your implementation.
In addition, I found the code cannot predict the next sentence correctly. I think the reason is: `self.criterion = nn.NLLLoss(ignore_index=0)`. It cannot be used as the criterion for next-sentence prediction, because the sentence label is 1 or 0. We should remove `ignore_index=0` for sentence prediction.
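The effect can be demonstrated with a minimal standard-library re-implementation (a sketch, not PyTorch's `nn.NLLLoss` itself): with `ignore_index=0`, every pair whose next-sentence label is 0 is silently dropped from the loss.

```python
import math

def nll_loss(log_probs, targets, ignore_index=None):
    # Minimal negative log-likelihood loss mirroring nn.NLLLoss's
    # ignore_index semantics: ignored targets contribute nothing.
    total, count = 0.0, 0
    for lp, t in zip(log_probs, targets):
        if t == ignore_index:
            continue
        total -= lp[t]
        count += 1
    return total / count if count else 0.0

log_probs = [[math.log(0.9), math.log(0.1)],   # label 0 ("not next sentence")
             [math.log(0.2), math.log(0.8)]]   # label 1 ("is next sentence")
labels = [0, 1]
print(nll_loss(log_probs, labels, ignore_index=0))  # 0.223: label-0 pair dropped
print(nll_loss(log_probs, labels))                  # 0.164: both pairs counted
```

The first value only reflects the label-1 example, which is why training never learns the "not next sentence" class.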
I am looking forward to your reply~ | open | 2018-12-07T12:07:29Z | 2023-06-14T09:36:40Z | https://github.com/codertimo/BERT-pytorch/issues/49 | [
"good first issue"
] | zhezhaoa | 5 |
InstaPy/InstaPy | automation | 5,963 | Allow commenting on specific posts that already contain comment from user |
## Expected Behavior
Comment on a specific post that already contains a comment from the user.
## Current Behavior
Commenting twice on the same post is not possible.
## Possible Solution (optional)
## InstaPy configuration
not relevant
| closed | 2020-12-16T00:12:39Z | 2020-12-17T19:57:23Z | https://github.com/InstaPy/InstaPy/issues/5963 | [] | sokratis1988 | 2 |
plotly/dash | data-visualization | 2,294 | replace `libraryTarget` in webpack config | Please use [output.library.type](https://webpack.js.org/configuration/output/#outputlibrarytype) instead of [output.libraryTarget](https://webpack.js.org/configuration/output/#outputlibrarytarget) as they might drop support for `output.libraryTarget` in the future.
https://github.com/plotly/dash/blob/a9eb3434023880c229882dd2aacb7d501e7eb4d2/components/dash-core-components/webpack.config.js#L53 | closed | 2022-10-29T14:22:36Z | 2023-05-15T19:30:16Z | https://github.com/plotly/dash/issues/2294 | [] | archmoj | 0 |
scikit-learn/scikit-learn | machine-learning | 30,811 | Are there any pitfalls by combining `n_jobs` and `random_state`? |
### Discussed in https://github.com/scikit-learn/scikit-learn/discussions/30809
<div type='discussions-op-text'>
<sup>Originally posted by **adosar** February 11, 2025</sup>
In [Controlling randomness](https://scikit-learn.org/stable/common_pitfalls.html#common-pitfalls-and-recommended-practices), the guide discusses how to properly control randomness for an estimator, for CV, or for both. However, there is no mention of whether `random_state` and `n_jobs > 1` interact in any unexpected way.
Let's consider a typical use case where a user cross-validates a `RandomForestClassifier` with `KFold`:
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

estimator = RandomForestClassifier(random_state=np.random.RandomState(1))  # Recommended to pass a RandomState instance.
kfold = KFold(shuffle=True, random_state=42) # Recommended to pass int.
cross_val_score(estimator, n_jobs=-1, ..., cv=kfold)
```
Since `n_jobs=-1`, multiple cores will be used for cross-validation (e.g. one core per fold).
Would the same state be used for the different folds, since during multiprocessing the estimator, and hence the `rng` passed to it, is copied via fork?
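The mechanics behind this worry can be illustrated with the standard library alone (a sketch; scikit-learn's own handling may differ): when an RNG object is shipped to workers by pickling or fork-copying, each worker starts from an identical state.

```python
import pickle
import random

rng = random.Random(1)
rng.random()  # advance the state a little before "dispatch"

# Simulate what happens when the same RNG object is copied to two workers
# (multiprocessing pickles or fork-copies arguments to each worker):
worker_a = pickle.loads(pickle.dumps(rng))
worker_b = pickle.loads(pickle.dumps(rng))

print(worker_a.random() == worker_b.random())  # True: identical draws in both "workers"
```

Whether scikit-learn re-seeds each clone before fitting is exactly the question here.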
</div> | closed | 2025-02-11T15:52:47Z | 2025-02-20T10:39:55Z | https://github.com/scikit-learn/scikit-learn/issues/30811 | [
"Needs Triage"
] | adosar | 4 |
amidaware/tacticalrmm | django | 1,596 | Feature request: Network Load Check - like CPU Load/Memory Check | Please add an Check for Network Load, like that ones for CPU and Memory.
Ive tried it with an task that runs every second and outputs the Averange Bandwith of the Last 10 Seconds to an Custom Field, its OK but it doesnt have an Diagram like the CPU and Memory.
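That workaround can be sketched roughly like this (illustrative only; the class name is hypothetical, and per-second byte-rate samples are assumed to be available from the agent):

```python
from collections import deque

class RollingBandwidth:
    """Average bandwidth over the last N per-second samples."""

    def __init__(self, window=10):
        self.samples = deque(maxlen=window)  # bytes/sec readings

    def add(self, bytes_per_sec):
        self.samples.append(bytes_per_sec)

    def average(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

rb = RollingBandwidth(window=10)
for reading in [100, 200, 300]:
    rb.add(reading)
print(rb.average())  # 200.0
```

The point of the request is that a built-in check could graph this value over time, like the CPU and memory checks already do.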
| open | 2023-08-11T15:02:07Z | 2023-08-16T12:58:08Z | https://github.com/amidaware/tacticalrmm/issues/1596 | [
"enhancement"
] | maieredv-manuel | 1 |
python-restx/flask-restx | flask | 540 | Validate Custom Field in @api.expect() | I want to make a custom field when making an API model and validate that it is either a string or boolean or a list
I have tried
```
class CustomField(fields.Raw):
__schema_type__ = ["String", "Boolean", "List"]
__schema_example__ = "string or boolean or list"
def format(self, value):
if isinstance(value, str) or isinstance(value, bool) or isinstance(value, list):
return value
else:
raise fields.MarshallingError(
"Invalid type. Allowed types: str, bool, or list."
)
```
but get the following error when trying to use it.
```
jsonschema.exceptions.UnknownType: Unknown type 'String' for validator with schema:
{'description': 'Value for the name type of foo',
'example': 'string or boolean or list',
'type': ['String', 'Boolean', 'List']}
```
I have also tried
```
class StringOrBooleanOrList(fields.Raw):
"""
Marshal a value as a string or list.
"""
def validate(self, value):
if isinstance(value, str):
return value
elif isinstance(value, list):
return value
elif isinstance(value, bool):
return value
else:
raise ValidationError(
"Invalid input type. Must be a string or list or boolean"
)
```
With that, strings do not work at all: I get a validation error saying the string value must be valid JSON. With lists and booleans, the value passes through, but then I get an error from the API:
```
{
"errors": {
"foo.0.value": "['bar'] is not of type 'object'"
},
"message": "Input payload validation failed"
}
```
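One hedged observation: JSON Schema type names are lowercase (`string`, `boolean`, `array`; there is no `List` type), which is likely why jsonschema rejects `'String'` in the first attempt. The intended runtime check itself can be sketched in plain Python (the helper name is hypothetical):

```python
def validate_string_bool_or_list(value):
    # Mirrors JSON Schema's {"type": ["string", "boolean", "array"]}:
    # accept str, bool, or list and reject everything else.
    if isinstance(value, (str, bool, list)):
        return value
    raise ValueError("Invalid type. Allowed types: str, bool, or list.")

print(validate_string_bool_or_list("bar"))     # 'bar'
print(validate_string_bool_or_list(True))      # True
print(validate_string_bool_or_list(["a", 1]))  # ['a', 1]
```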
What is the correct way of achieving this in Flask restx? | closed | 2023-05-04T03:55:57Z | 2023-05-09T01:12:12Z | https://github.com/python-restx/flask-restx/issues/540 | [
"question"
] | p-g-p-t | 1 |
ageitgey/face_recognition | python | 902 | OpenCV image-reading error 352; environment: Win10 + PyCharm + Python 3.7 + OpenCV 4.1.0, hoping for a solution |
```python
import cv2 as cv
import numpy as np

print("-------hello python---------")
scr = cv.imread(r"C://Users//ASUS//Pictures//Saved Pictures//d.jpg")
cv.namedWindow("input image", cv.WINDOW_AUTOSIZE)
cv.imshow("input image", scr)
cv.waitKey(0)
cv.destroyAllWindows()
```

Error: cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\highgui\src\window.cpp:352: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
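For reference, this -215 assertion usually means `imread` returned `None` (typically a wrong path), because `cv2.imread` does not raise on failure. A guarded read can be sketched like this (`safe_imread` and the stand-in reader are hypothetical helpers; with OpenCV installed you would pass `cv2.imread` instead):

```python
import os

def safe_imread(path, imread):
    # cv2.imread returns None instead of raising when a file cannot be read;
    # guard before handing the result to cv2.imshow (which asserts size > 0).
    img = imread(path)
    if img is None:
        raise FileNotFoundError(f"imread returned None for {path!r}")
    return img

# Stand-in reader so the sketch runs without OpenCV installed:
fake_imread = lambda p: "image-data" if os.path.exists(p) else None

try:
    safe_imread("no_such_image_12345.jpg", fake_imread)
except FileNotFoundError as e:
    print("caught:", e)
```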
| open | 2019-08-09T06:33:34Z | 2020-05-24T09:46:57Z | https://github.com/ageitgey/face_recognition/issues/902 | [] | RehobothRoselle | 7 |
indico/indico | sqlalchemy | 6,714 | Replace `Y` with `y` when formatting dates w/ babel | The `Y` pattern is an [ISO week date](https://babel.pocoo.org/en/latest/dates.html) which can give unexpected results when the date is
in the last/first week of the year. We should use `y` instead. Luckily, there's not that many instances where we use it. | closed | 2025-01-27T11:43:30Z | 2025-01-28T16:00:07Z | https://github.com/indico/indico/issues/6714 | [
"bug",
"help wanted"
] | tomasr8 | 0 |