| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
JaidedAI/EasyOCR | machine-learning | 586 | Is it possible to get the position of every character? | closed | 2021-11-08T09:02:26Z | 2023-12-06T14:52:08Z | https://github.com/JaidedAI/EasyOCR/issues/586 | [] | AndyZhu1991 | 2 | |
gradio-app/gradio | data-visualization | 10,335 | How to present mathematical formulas? | Firstly, **I tried gr.Markdown**. It doesn't work.
Then, **I tried gr.Markdown and js** like this:
`<script type="text/javascript" async
src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/MathJax.js?config=TeX-MML-AM_CHTML'"
</script>`
`out = gr.HTML(label="Answer", value=mathjax_script + "<div>This is a formula: $y = mx + b$</div>")`
BUT, it still doesn't work.
**So I would like to ask: how can I present mathematical formulas?**
**Thanks!!!!!** | closed | 2025-01-11T11:17:32Z | 2025-01-12T15:40:11Z | https://github.com/gradio-app/gradio/issues/10335 | [] | MrJs133 | 1 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 2,808 | Allow "filter" property for scheduling.userScheduler.plugins to add more scheduler plugins | ### Proposed change
`scheduling.userScheduler.plugins` configures kube-scheduler for better scheduling of singleuser pods. kube-scheduler supports various extension points like queueSort, preFilter, filter, postFilter, preScore, score, reserve, permit, preBind, bind, and multiPoint. However, Z2JH uses and allows only `score`.
Therefore, I suggest allowing other extension points.
### Who would use this feature?
<!-- Describe who would benefit from using this feature. -->
I am currently solving a singleuser pod scheduling problem, related to this [issue](https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/1851). I implemented a custom [kube-scheduler plugin](https://github.com/team-monolith-product/scheduler-plugins/blob/master/pkg/imagelocalityfilter/imagelocalityfilter.go) to filter out nodes without the required images. Users facing the same problem might use this feature.
### (Optional): Suggest a solution
<!-- Describe what you think needs to be done. Doing that is an excellent first step to get the feature implemented. -->
In this [configmap](https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/main/jupyterhub/templates/scheduling/user-scheduler/configmap.yaml), `plugins` are inserted as yaml, so no additional Helm chart work is required. We just need to update schema.yaml.
We can simply allow `additionalProperties` for `scheduling.userScheduler.plugins` or add each extension point. After the implementation details are discussed, I can make a PR for it. | closed | 2022-07-23T05:18:25Z | 2022-07-23T10:03:59Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/2808 | [
"enhancement"
] | a3626a | 4 |
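A sketch of the Helm values this proposal would enable (hypothetical; `ImageLocalityFilter` stands in for a custom filter plugin such as the one linked above, not a built-in kube-scheduler plugin):

```yaml
scheduling:
  userScheduler:
    plugins:
      filter:
        enabled:
          - name: ImageLocalityFilter  # hypothetical custom filter plugin
```

Since the configmap template already serializes `plugins` verbatim, only the schema.yaml validation would need to change to accept this.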
mitmproxy/pdoc | api | 77 | Documenting methods starting with "_" character | I have a class with a few methods starting with "_"; all of those methods are skipped by pdoc when documenting them. For example:
```python
def _get_msg():
    """
    Returns commit message
    """

def _get_password(username):
    """
    return password
    """
```
Consistently, all methods like these across all modules were skipped; all the getter methods are skipped.
Am I missing anything, or is it a bug in the pdoc tool?
| closed | 2015-11-12T22:31:33Z | 2021-08-12T13:09:15Z | https://github.com/mitmproxy/pdoc/issues/77 | [] | gvvka0327 | 4 |
dask/dask | scikit-learn | 10,961 | Support bag.to_dataframe when query planning is enabled | As far as I can tell, dask bags can only be converted to dask dataframes when query planning is disabled. It would be great to support both query planning, and the flexibility of turning bags into dataframes. Currently calling to_dataframe with query planning enabled throws the following error.
> df = bag.to_dataframe(columns=['line'])
File ".../python3.10/site-packages/dask/bag/core.py", line 1624, in to_dataframe
return dd.DataFrame(dsk, dfs.name, meta, divisions)
TypeError: FrameBase.__init__() takes 2 positional arguments but 5 were given
| closed | 2024-02-27T15:54:35Z | 2024-02-28T13:33:43Z | https://github.com/dask/dask/issues/10961 | [
"needs triage"
] | b-phi | 1 |
JaidedAI/EasyOCR | pytorch | 603 | Combining paragraph=TRUE and keeping confidence score | Hi,
Thanks for building this library!
I had a couple of questions.
I am trying to use EasyOCR to extract text from GSV images, but I am having issues with quality and performance (if anyone has suggestions for GSV extraction, that would be great). When I use paragraph=True, it gives me a better representation of the text box I'm extracting, but I lose the confidence score -- is there any chance to keep it?
Also, I am still a little unsure about adding multiple languages into one model. If I want to extract all of the Arabic, Swedish, and English, for instance, how do I know which text box is English or Arabic? Or is it better to run the models individually and use the confidence score to determine whether the text is closer to Arabic or English?
Thank you,
Tom | closed | 2021-11-30T09:00:19Z | 2023-11-14T15:46:26Z | https://github.com/JaidedAI/EasyOCR/issues/603 | [] | TomBenson27 | 4 |
jupyter-incubator/sparkmagic | jupyter | 593 | Unable to run SQL queries | I have an EMR cluster running Livy and I'm trying to work from a local notebook using `sparkmagic`. In my notebook I have the following:
```python
#### (cell separator)
%load_ext sparkmagic.magics
###
%manage_spark # where I add the endpoint and create a session
###
%%spark
df = spark.read.json("s3://mybucket/somedata/dt=2019-05-*")
###
%%spark
df.createOrReplaceTempView("data")
###
%%spark -c sql # as per the example
SHOW TABLES
```
The last cell yields the following error:
```
An error was encountered:
u'java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;'
Traceback (most recent call last):
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/session.py", line 767, in sql
return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 69, in deco
raise AnalysisException(s.split(': ', 1)[1], stackTrace)
AnalysisException: u'java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;'
```
I tried to run pretty much the same cells directly on Zeppelin (which is installed and running on the EMR) and it all worked as expected. | open | 2019-11-27T07:56:02Z | 2020-01-10T06:17:36Z | https://github.com/jupyter-incubator/sparkmagic/issues/593 | [
"awaiting-submitter-response"
] | drorata | 2 |
deepfakes/faceswap | deep-learning | 1,387 | License for commercial use | Hi,
I know your license is GPL, which means I can use this project commercially.
However, I guess the licenses of the code and the models are different.
I am not sure which models you use or whether those models are available for commercial use.
Could you confirm which models you use and whether I can use them commercially?
Thanks. | closed | 2024-05-09T09:18:01Z | 2024-05-09T17:12:31Z | https://github.com/deepfakes/faceswap/issues/1387 | [] | BattleShipPark | 1 |
HIT-SCIR/ltp | nlp | 2 | LTP has set up a Weibo micro-group | http://q.weibo.com/849045 Everyone is welcome to discuss LTP-related questions or to offer suggestions and feedback.
| closed | 2011-06-12T13:28:54Z | 2013-09-01T07:22:21Z | https://github.com/HIT-SCIR/ltp/issues/2 | [] | carfly | 1 |
pyppeteer/pyppeteer | automation | 477 | Getting error when trying to use html2image | 
| open | 2024-06-24T14:54:24Z | 2024-06-24T14:54:24Z | https://github.com/pyppeteer/pyppeteer/issues/477 | [] | DontPanic330 | 0 |
autogluon/autogluon | computer-vision | 4,301 | [tabular] Raise Exception if out-of-disk error occurs in model fit | Related: #3372
Tabular currently fails model fits if out-of-disk, but will continue to attempt to train models despite this. This can lead to very messy logs with all models failing in some form or another due to out-of-disk. We should detect the out-of-disk exception type and stop training entirely if it is encountered, with a detailed error message. | open | 2024-06-27T17:14:53Z | 2024-11-25T22:47:11Z | https://github.com/autogluon/autogluon/issues/4301 | [
"API & Doc",
"enhancement",
"module: tabular"
] | Innixma | 0 |
docarray/docarray | pydantic | 1,828 | DocList raises exception for type object. | ### Initial Checks
- [X] I have read and followed [the docs](https://docs.docarray.org/) and still think this is a bug
### Description
This [commit](https://github.com/docarray/docarray/commit/2f3b85e333446cfa9b8c4877c4ccf9ae49cae660) introduced a check to verify that DocList is not used with an object:
```
if (
isinstance(item, object)
and not is_typevar(item)
and not isinstance(item, str)
and item is not Any
):
raise TypeError('Expecting a type, got object instead')
```
This is quite a broad condition (it breaks things like `DocList[TorchTensor]`, or nested `DocList[DocList[...]]` for me for instance) as:
- Almost everything will be an object so the first line if almost a catch-all.
- `is_typevar` only checks for TypeVar objects.
Shouldn't this also check for `not isinstance(item, type)` instead of `not is_typevar`, to allow classes?
This way, only non-class objects (instances of `object` that are not classes themselves) will raise the `TypeError`.
### Example Code
```Python
from docarray import DocList
from docarray.typing import TorchTensor
test = DocList[TorchTensor]
```
### Python, DocArray & OS Version
```Text
0.39.1
```
### Affected Components
- [ ] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/)
- [X] [Representing](https://docs.docarray.org/user_guide/representing/first_step)
- [ ] [Sending](https://docs.docarray.org/user_guide/sending/first_step/)
- [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/)
- [X] [multi modal data type](https://docs.docarray.org/data_types/first_steps/) | open | 2023-10-30T19:17:54Z | 2023-10-31T09:26:38Z | https://github.com/docarray/docarray/issues/1828 | [] | corentinmarek | 3 |
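A standalone sketch of the reporter's suggested fix (not docarray's actual code): testing with `isinstance(item, type)` accepts classes such as `TorchTensor` or a nested `DocList[...]` while still rejecting plain instances:

```python
from typing import Any, TypeVar


def is_valid_item(item) -> bool:
    # Everything is an instance of `object`, so the original guard
    # caught classes too; checking against `type` (plus the existing
    # TypeVar/str/Any escapes) rejects only non-class instances.
    if item is Any:
        return True
    return isinstance(item, (str, type, TypeVar))
```

With this check, `is_valid_item(object())` is False while `is_valid_item(SomeClass)` is True.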
gradio-app/gradio | python | 10,471 | Things to deprecate for `gradio==6.0` and `gradio_client==2.0` | Starting a list now:
Gradio 6.0
- [ ] `type="tuples"` for `gr.Chatbot` / `gr.ChatInterface`
- [ ] `hf_token` from `load`
- [ ] Consider removing `ruff` as a core dependency and doing extras-install for custom components
- [ ] `ImageEditor` crop_size
- [ ] `DataFrame` row_count and col_count format should be better specified
- [ ] Make `allow_tags=True` the default in `gr.Chatbot`, see https://github.com/gradio-app/gradio/pull/10743
Client 2.0
- [ ] Client.deploy_discord
- [ ] Client.file() | open | 2025-01-30T22:16:14Z | 2025-03-07T21:37:56Z | https://github.com/gradio-app/gradio/issues/10471 | [
"refactor",
"tracking"
] | abidlabs | 0 |
Yorko/mlcourse.ai | pandas | 352 | Topic 7 typo | Agglomerative clustering
`# linkage — is an implementation if agglomerative algorithm`
should be `of` instead of `if`
Assignment 7
`For classification, use the support vector machine – class sklearn.svm.LinearSVC. In this course, we did study this algorithm separately, but it is well-known and you can read about it, for example here.`
it seems that it shoud be `didn't` instead of `did`. | closed | 2018-09-22T12:10:57Z | 2018-10-04T14:12:09Z | https://github.com/Yorko/mlcourse.ai/issues/352 | [
"minor_fix"
] | Vozf | 1 |
pytorch/pytorch | numpy | 149,452 | Support save_cubin (and therefore, support cpp_wrapper use cases) | cc @chauhang @penguinwu | open | 2025-03-18T22:02:34Z | 2025-03-20T19:38:52Z | https://github.com/pytorch/pytorch/issues/149452 | [
"triaged",
"oncall: pt2"
] | jamesjwu | 0 |
mljar/mljar-supervised | scikit-learn | 163 | Make BaseAlgorithm inherit from ABC | Since `BaseAlgorithm` is an abstract class, it should inherit from `ABC`, and the methods that must be implemented in child classes should be decorated with `@abstractmethod`. This could potentially prevent coding errors from flying under the radar, as it enforces child classes to implement those methods.
Thoughts on this @pplonski ?
| open | 2020-09-02T01:42:38Z | 2020-09-14T09:04:50Z | https://github.com/mljar/mljar-supervised/issues/163 | [
"enhancement",
"refactor"
] | diogosilva30 | 2 |
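A minimal sketch of the idea (the `fit`/`predict` names are illustrative, not necessarily mljar-supervised's real interface): with `ABC` and `@abstractmethod`, a child class that forgets an override fails loudly at instantiation instead of silently at call time.

```python
from abc import ABC, abstractmethod


class BaseAlgorithm(ABC):
    @abstractmethod
    def fit(self, X, y):
        """Train the model."""

    @abstractmethod
    def predict(self, X):
        """Return predictions for X."""


class Complete(BaseAlgorithm):
    def fit(self, X, y):
        return self

    def predict(self, X):
        return X


class Incomplete(BaseAlgorithm):
    def fit(self, X, y):
        return self
    # predict() is missing, so Incomplete() raises TypeError
```

Both `BaseAlgorithm()` and `Incomplete()` raise `TypeError` immediately, which is exactly the early failure the proposal is after.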
saulpw/visidata | pandas | 2,043 | [wishlist] Completions for `help-search` without strict word order | I made a config `TableSheet.bindkey(ALT + "h", "help-search")` and I use it very frequently. However, I really miss the completions feature from `exec-longname`.
I'm asking for completions to be added to this command in a form that allows entering words one by one, in any order, while filtering the completion suggestions for the remaining words.
<img width="603" alt="image" src="https://github.com/saulpw/visidata/assets/4896754/4bef4be3-5877-4eb6-8e0e-d37926f1caaa">
As I see it, it should suggest values from a module column and split words from a longname column in the following way:
When I call `help-search`, the input field appears. I enter "all" and the next suggested words would be: "threads", "cancel", "cmdlog".
If I choose "threads" after "all", then only the completion for "cancel" would be available.
Please consider developing such a feature. | closed | 2023-10-06T04:01:33Z | 2023-10-06T04:39:11Z | https://github.com/saulpw/visidata/issues/2043 | [
"wishlist"
] | maxim-uvarov | 2 |
autogluon/autogluon | scikit-learn | 4,334 | Add autogluon.eda support for python 3.11 | I noticed that autogluon.eda is not available for python 3.11 whereas we are in need to develop in 3.11. It would be great if this can be done, will be pretty beneficial. | open | 2024-07-22T20:59:23Z | 2024-07-23T00:34:06Z | https://github.com/autogluon/autogluon/issues/4334 | [
"module: eda",
"dependency"
] | aishsrini | 0 |
jina-ai/serve | fastapi | 5,238 | Proposal feature: Add Secrets object concept to Jina | **Problem**
Right now we have a problem with Jina that users need to pass in code or YAML sensitive information as tokens, passwords etc in `Jina YAMLs` or `Jina Python scripts`.
This exposes our users to security risks.
**Proposal**
As an MLOps framework, we can create our own Secret concept, analogous to the Kubernetes Secret concept. The main difference is that Jina core will not store or manage these secrets; it will only expose the logic for how these secrets are passed to and read from the `Executors (microservices)`.
I suggest to create `secret` objects in Jina that can be attached to Flow (Gateway) or Executors both in YAML and in Python. Something similar to this (THESE DETAILS COULD CHANGE):
```yaml
jtype: Flow
secrets:
- name: secret_gateway
type: env
key: JINA_SECRET_GATEWAY
executors:
- name: preprocessor
secrets:
- name: secret_executor
type: env
key: JINA_SECRET1
uses_with:
confidential: ${{ SECRET.JINA_SECRET_EXECUTOR }}
```
```python
from jina import secrets
f = Flow(secrets=[{'name': 'secret_gateway', 'type': 'env','key': 'JINA_SECRET_GATEWAY'}]).add(secrets=[{'name': 'secret_executor', 'type': 'env','key': 'JINA_SECRET1'}], uses=PreProcessor, uses_with={'admin_emails': 'secrets.secret_executor'})
```
**User journey**
I believe the user journey can make sense across all the ways Jina can be used. The same way `Kubernetes` separates the concerns of `creating` SECRETS from reading or accessing them in Pods, we can do the same in Jina.
- Reading and accessing SECRETS in code is handled by these objects and the YAML python representation/interface
- Creating the SECRETS would remain the user's responsibility in the following ways:
- When running Flows locally, simply set environment variables or store information in file system
  - When running with Kubernetes, make sure to create K8s secrets; `f.to_kubernetes_yaml` will make a best effort to map `secrets` objects to K8s Secret objects
- JCloud would eventually offer a `secrets` API to manage these objects before a Flow is created in JCloud
**Example**
With this feature, this PR #5221 could work like this:
```YAML
jtype: Flow
secrets:
- name: secret_external_deployment
type: env
key: JINA_SECRET_CAS
executors:
- name: clip-as-service
external: True
grpc_metadata:
- authentication_token: ${{SECRETS.JINA_SECRET_CAS}}
```
This would allow the user to store this `Flow.yml` without storing confidential or security-sensitive information
**Extra helpers**
Maybe we can provide extra helpers in Executors to read automatically from secrets in the future
**Implementation**
- [ ] Create Secrets objects and define their key-value structure
- [ ] Make them a Deployment argument in parser and make sure they are handled by JAMLCompatible
- [ ] Make sure in the local Flow YAML, the envs are properly passed to the new processed and that the ${{SECRETS.XXX}} or any agreed syntax is properly implemented to access the `secrets` INSIDE the `Executor` or `Gateway` code
- [ ] Handle in `to_kubernetes_yaml` with best effort the transformation to the `readFromEnv` and `readFromValue` for `Deployments`
- [ ] JCloud offer an API to expose that easily in the K8s cluster | closed | 2022-10-03T15:24:37Z | 2022-12-29T16:04:07Z | https://github.com/jina-ai/serve/issues/5238 | [] | JoanFM | 8 |
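For illustration only (not an agreed API): resolving an `env`-type entry from the proposed `secrets` list inside the Executor runtime could be as small as:

```python
import os


def resolve_secret(spec: dict) -> str:
    # spec mirrors one entry of the proposed ``secrets`` list, e.g.
    # {"name": "secret_executor", "type": "env", "key": "JINA_SECRET1"}.
    if spec.get("type") == "env":
        try:
            return os.environ[spec["key"]]
        except KeyError:
            raise KeyError(
                f"secret {spec.get('name')!r} references unset env var {spec['key']!r}"
            ) from None
    raise ValueError(f"unsupported secret type: {spec.get('type')!r}")
```

Failing fast on an unset variable keeps the separation of concerns described above: Jina only reads secrets, while creating them stays with the user or platform.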
ml-tooling/opyrator | pydantic | 8 | Finalize auto-generation of python client | **Feature description:**
Finalize auto-generation of python client for Opyrator.
Every deployed Opyrator provides a Python client library via an endpoint method which can be installed with pip:
```bash
pip install http://my-opyrator:8080/client
```
And used in your code, as shown below:
```python
from my_opyrator import Client, Input
opyrator_client = Client("http://my-opyrator:8080")
result = opyrator_client.call(Input(text="hello", wait=1))
```
| closed | 2021-04-19T16:50:35Z | 2021-11-02T02:12:11Z | https://github.com/ml-tooling/opyrator/issues/8 | [
"feature",
"stale"
] | lukasmasuch | 2 |
JoeanAmier/TikTokDownloader | api | 217 | Running 5.4 with Python: entering a post link to collect raises "network error: 0, message=''" | Version 5.3 can download videos normally. I tried copying the 5.3 cookie into 5.4's settings.json, but the error still occurs.
Opening the link shows "blocked". | open | 2024-05-16T13:17:29Z | 2024-05-16T13:17:29Z | https://github.com/JoeanAmier/TikTokDownloader/issues/217 | [] | Simba1999 | 0 |
mirumee/ariadne-codegen | graphql | 263 | Using Fragment on an interface is not working properly | Ariadne-codegen version: 0.11.0
I'm trying to define a fragment on the "Error" interface. In the actual code we have multiple different error classes implementing this same interface, here I have included a minimal example. I'm not allowed to share the actual code.
The query below works from the graphql UI but ariadne-codegen doesn't generate the right code.
## Inputs that don't work
pyproject.toml configuration:
```
[tool.ariadne-codegen]
schema_path = "schema.graphql"
queries_path = "queries.graphql"
target_package_name = "gql_client"
async_client = false
```
Contents of schema.graphql:
```
interface Error {
type: String
title: String
status: Int
detail: String
instance: String
}
type DeletionError implements Error {
type: String
title: String
status: Int
detail: String
instance: String
}
type DeleteThingPayload {
thing: Thing
errors: [DeletionError!]
}
type Thing {
id: ID!
name: String
}
input DeleteThingInput {
id: ID!
}
type Mutation {
deleteThing(input: DeleteThingInput!): DeleteThingPayload!
}
```
Contents of queries.graphql:
```
fragment DefaultError on Error {
... on Error {
type
title
status
detail
instance
}
}
mutation MyDeleteThing($id: ID!) {
deleteThing (input: {id: $id}) {
thing {
id
name
}
errors {
__typename
... DefaultError
}
}
}
```
When running ariadne-codegen, I expect to see all the error fields (type, title etc.) defined for the deleteThing query.
But instead only __typename is defined. Here's the relevant part of the generated code:
```
class MyDeleteThingDeleteThingErrors(BaseModel):
typename__: Literal["DeletionError"] = Field(alias="__typename")
```
## Inputs that work
Modifying the fragment to use the specific error type (DeletionError) works:
```
fragment DefaultError on DeletionError {
... on DeletionError {
type
title
status
detail
instance
}
}
```
Resulting code:
```
class MyDeleteThingDeleteThingErrors(BaseModel):
typename__: Literal["DeletionError"] = Field(alias="__typename")
type: Optional[str]
title: Optional[str]
status: Optional[int]
detail: Optional[str]
instance: Optional[str]
``` | closed | 2024-01-15T16:02:58Z | 2024-01-16T13:20:11Z | https://github.com/mirumee/ariadne-codegen/issues/263 | [] | akrejczinger | 3 |
kennethreitz/records | sqlalchemy | 20 | neutralize README text | A couple sentences toward the end start with "Of course", but the sentences could IMO be rewritten without those parts:
> Of course, all other features of Tablib are also available
>
> Of course, the recommended installation method is pip:
Removing "Of course" from those sentences still conveys the same amount of information. The README, as currently written, contains things that are obvious to you, but might not be to the casual reader.
As always, thanks for the great work
| closed | 2016-02-08T17:42:14Z | 2018-04-28T22:59:27Z | https://github.com/kennethreitz/records/issues/20 | [
"docs"
] | unbracketed | 1 |
tfranzel/drf-spectacular | rest-api | 943 | Model @property and SlugRelatedField not handled properly | In `v0.24.2` this works properly but `v0.25.0` and onward broke how SlugRelatedField and `@properties` are being handled.
**Describe the bug**
`SlugRelatedField` that reference a model `@property` are not being handled.
While stepping through the functions on how this field is being resolved, I tracked down how this field is being handled in `v0.25.0`.
First, this logic was updated in `v0.25.0` compared to `v0.24.2`.
https://github.com/tfranzel/drf-spectacular/blob/master/drf_spectacular/openapi.py#L663
The SlugRelatedField will get to this function. https://github.com/tfranzel/drf-spectacular/blob/7392ba99fc9017c21d571e354c87682b870d26c7/drf_spectacular/plumbing.py#L500
and will finally return this value at the end of traversal.
https://github.com/tfranzel/drf-spectacular/blob/7392ba99fc9017c21d571e354c87682b870d26c7/drf_spectacular/plumbing.py#L508-L509
Ultimately, this assertion is failing. https://github.com/tfranzel/drf-spectacular/blob/0.25.0/drf_spectacular/openapi.py#L544
Prior to this logic update, `SlugRelatedFields` used this logic to resolve. https://github.com/tfranzel/drf-spectacular/blob/0.24.2/drf_spectacular/openapi.py#L698-L699
**To Reproduce**
```python
class MyModel(models.Model):
label = models.CharField(
max_length=100,
unique=True,
)
class Meta:
verbose_name = _("MyModel")
verbose_name_plural = _("MyModels")
def __str__(self):
return self.label
@property
def property_field(self):
return "42"
# How the field is defined in a Serializer
forty_two = serializers.SlugRelatedField(
slug_field="property_field",
read_only=True,
)
```
API Response
```json
{
"forty_two": "42"
}
```
**Expected behavior**
SlugRelatedFields that use @properties don't fail assertion and are resolved the same way they are handled in `v0.24.2`
| closed | 2023-02-21T16:55:49Z | 2023-03-04T17:37:18Z | https://github.com/tfranzel/drf-spectacular/issues/943 | [
"bug",
"fix confirmation pending"
] | sparktx-adam-gleason | 4 |
davidsandberg/facenet | tensorflow | 402 | CUDA_ERROR_OUT_OF_MEMORY | @davidsandberg Even though I have set batch_size to 1, this error still occurs:
**I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0)
E tensorflow/stream_executor/cuda/cuda_driver.cc:924] failed to allocate 5.93G (6365773824 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY**
| closed | 2017-07-28T15:10:15Z | 2018-10-07T01:17:15Z | https://github.com/davidsandberg/facenet/issues/402 | [] | kuaikuaikim | 3 |
marimo-team/marimo | data-science | 3,588 | Allow edit mode only bookmarks or navigation headers | ### Description
For context, I am developing a web application that processes CSV files for statistical analysis. The application follows multiple decision paths depending on data characteristics. For example, if the data is normalized, specific tests are applied; if not, different tests are implemented or user input is requested before proceeding. Each test may generate multiple outputs and might require linked inputs requiring separate cells, which may also reference each other.
Following up on my request in #3586, I would like the ability to have cells that act as bookmarks or navigational headers in edit mode but remain hidden in app mode. For example, when setting up multiple helper functions or extraction functions, I want to be able to quickly navigate to specific sections. While the current bookmark system uses a table of contents, it remains visible to users. I'd prefer a way to create navigation points that are only accessible in edit mode.
### Suggested solution
A probable solution would be to add an argument like "edit only" to the markdown cell, indicating that it should only be rendered in edit mode or for debugging purposes. This way, the cell wouldn't appear when someone is using the application in app mode.
```
mo.md("Helper Function Set {editonly}")
```
### Alternative
_No response_
### Additional context
_No response_ | closed | 2025-01-27T20:09:27Z | 2025-01-27T20:48:11Z | https://github.com/marimo-team/marimo/issues/3588 | [
"enhancement"
] | mimansajaiswal | 4 |
streamlit/streamlit | machine-learning | 10,578 | Input events immediately terminate everything? | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
For a minimal reproducer, I have provided a distilled version of the "LLM Chat" demo app, with the "LLM" extracted out into a generator.
Everything works fine as long as you wait for a response before sending your next query. But if you send a query while the generator is still running (e.g. if it is still in the `sleep()` call), on the next rerun you get this exception:
```
ValueError: generator already executing
```
It seems input events immediately terminate the running instance of the script and do a next rerun, which puts the generator (and one would presume other state-machine-like constructs as well) into an inconsistent state.
Note that this is not specific to generators, as you can never be sure that some "critical section" of code for a state transition got completed. This, to me, sounds like undesirable behaviour, at least in some contexts.
Thanks for the help!
### Reproducible Code Example
```Python
import streamlit as st
import time
if "messages" not in st.session_state:
st.session_state.messages = []
if "prompt" not in st.session_state:
st.session_state.prompt = None
# Generator coroutine to simulate external API
# Returns true if data updated, false otherwise
def run_stream():
while True:
prompt = st.session_state.prompt
if prompt:
st.session_state.messages.append({"role": "user", "content": prompt})
yield True
time.sleep(0.5) # API delay
st.session_state.messages.append({"role": "assistant", "content": "Do you need help?"})
yield True
yield False
if "run_stream" not in st.session_state:
st.session_state.run_stream = run_stream()
for message in st.session_state.messages:
with st.chat_message(message["role"]):
st.markdown(message["content"])
st.session_state.prompt = st.chat_input("What is up?")
if next(st.session_state.run_stream):
st.rerun() # Rerun if stream updated something
```
### Steps To Reproduce
1. send a message
2. send another message before the coroutine has responded
### Expected Behavior
I would expect the input event to be queued for the next rerun, or some way to be able to configure input widgets to do that. Or perhaps ways to have the current run terminate somewhat gracefully, by marking sections of code to be run "atomically".
### Current Behavior
It throws the exception:
```
ValueError: generator already executing
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.2
- Python version: 3.13.2
- Operating System: MacOS 15.3.1
- Browser: Firefox
### Additional Information
_No response_ | open | 2025-03-01T15:18:51Z | 2025-03-03T21:27:11Z | https://github.com/streamlit/streamlit/issues/10578 | [
"type:bug",
"status:expected-behavior"
] | ConcurrentCrab | 6 |
gradio-app/gradio | machine-learning | 10,536 | Can gr.Dataframe support editable by column name or number | - [ ] I have searched to see if a similar issue already exists.
No similar issue
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
We can specify that the Dataframe is editable via an initial parameter. However, in most cases only some of the columns should be editable, and the others
are not. Can we specify the editable columns by name or number? That would save the logic when users override onclick or onchange events.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
I suggest adding an input parameter to the constructor, such as editable_column=[1, 2, 3]; then, when the user clicks column 1, 2, or 3, the onclick event will not be triggered.
**Additional context**
Add any other context or screenshots about the feature request here.
Thanks,
Tommy
| closed | 2025-02-07T03:40:35Z | 2025-02-07T04:02:01Z | https://github.com/gradio-app/gradio/issues/10536 | [] | Yb2S3Man | 2 |
pydata/xarray | numpy | 9,557 | Incorrect error raised for chunked Zarr region write | ### What happened?
Writing a chunk with `to_zarr()` on a specific region incorrectly fails if a variable is chunked with Dask, even if the variable's chunks are compatible with the Zarr store.
### What did you expect to happen?
This code path is used by [Xarray-Beam](https://github.com/google/xarray-beam/blob/86c8f5b4f6c99324ee134a0e5668daa6ff87f407/xarray_beam/_src/zarr.py#L314). In particular, [this test](https://github.com/google/xarray-beam/blob/86c8f5b4f6c99324ee134a0e5668daa6ff87f407/xarray_beam/_src/zarr_test.py#L348) in Xarray-Beam fails with the latest development version of Xarray.
### Minimal Complete Verifiable Example
```Python
import xarray
import numpy as np
data = np.random.RandomState(0).randn(2920, 25, 53)
ds = xarray.Dataset({'temperature': (('time', 'lat', 'lon'), data)})
chunks = {'time': 1000, 'lat': 25, 'lon': 53}
store = 'testing.zarr'
ds.chunk(chunks).to_zarr(store, compute=False)
region = {'time': slice(1000, 2000, 1)}
chunk = ds.isel(region)
chunk = chunk.chunk() # triggers error
chunk.chunk().to_zarr(store, region=region)
```
### MVCE confirmation
- [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [x] Complete example — the example is self-contained, including all data and the text of any traceback.
- [x] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [x] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [x] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
ValueError Traceback (most recent call last)
<ipython-input-18-ce32e8ce49e1> in <cell line: 1>()
----> 1 chunk.chunk().to_zarr(store, region=region)
/usr/local/lib/python3.10/dist-packages/xarray/core/dataset.py in to_zarr(self, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options, zarr_version, write_empty_chunks, chunkmanager_store_kwargs)
2570 from xarray.backends.api import to_zarr
2571
-> 2572 return to_zarr( # type: ignore[call-overload,misc]
2573 self,
2574 store=store,
/usr/local/lib/python3.10/dist-packages/xarray/backends/api.py in to_zarr(dataset, store, chunk_store, mode, synchronizer, group, encoding, compute, consolidated, append_dim, region, safe_chunks, storage_options, zarr_version, write_empty_chunks, chunkmanager_store_kwargs)
1782 writer = ArrayWriter()
1783 # TODO: figure out how to properly handle unlimited_dims
-> 1784 dump_to_store(dataset, zstore, writer, encoding=encoding)
1785 writes = writer.sync(
1786 compute=compute, chunkmanager_store_kwargs=chunkmanager_store_kwargs
/usr/local/lib/python3.10/dist-packages/xarray/backends/api.py in dump_to_store(dataset, store, writer, encoder, encoding, unlimited_dims)
1465 variables, attrs = encoder(variables, attrs)
1466
-> 1467 store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)
1468
1469
/usr/local/lib/python3.10/dist-packages/xarray/backends/zarr.py in store(self, variables, attributes, check_encoding_set, writer, unlimited_dims)
784 variables_to_set = variables_encoded
785
--> 786 self.set_variables(
787 variables_to_set, check_encoding_set, writer, unlimited_dims=unlimited_dims
788 )
/usr/local/lib/python3.10/dist-packages/xarray/backends/zarr.py in set_variables(self, variables, check_encoding_set, writer, unlimited_dims)
882 # Note: Ideally there should be two functions, one for validating the chunks and
883 # another one for extracting the encoding.
--> 884 encoding = extract_zarr_variable_encoding(
885 v,
886 raise_on_invalid=vn in check_encoding_set,
/usr/local/lib/python3.10/dist-packages/xarray/backends/zarr.py in extract_zarr_variable_encoding(variable, raise_on_invalid, name, safe_chunks, region, mode, shape)
341 del encoding[k]
342
--> 343 chunks = _determine_zarr_chunks(
344 enc_chunks=encoding.get("chunks"),
345 var_chunks=variable.chunks,
/usr/local/lib/python3.10/dist-packages/xarray/backends/zarr.py in _determine_zarr_chunks(enc_chunks, var_chunks, ndim, name, safe_chunks, region, mode, shape)
243 # is equal to the size of the last chunk
244 if dchunks[-1] % zchunk != size % zchunk:
--> 245 raise ValueError(base_error)
246 elif dchunks[-1] % zchunk:
247 raise ValueError(base_error)
ValueError: Specified zarr chunks encoding['chunks']=(1000, 25, 53) for variable named 'temperature' would overlap multiple dask chunks ((1000,), (25,), (53,)) on the region (slice(1000, 2000, 1), slice(None, None, None), slice(None, None, None)). Writing this array in parallel with dask could lead to corrupted data.Consider either rechunking using `chunk()`, deleting or modifying `encoding['chunks']`, or specify `safe_chunks=False`.
```
### Anything else we need to know?
These error messages were introduced by https://github.com/pydata/xarray/pull/9527
### Environment
<details>
</details>
| closed | 2024-09-30T18:45:50Z | 2024-09-30T21:32:51Z | https://github.com/pydata/xarray/issues/9557 | [
"bug",
"topic-zarr"
] | shoyer | 3 |
mlfoundations/open_clip | computer-vision | 869 | Handling Negative Pairs in Fine-Tuning of CLIP Models | Hello guys!
First, I'd like to clarify my understanding of Contrastive Loss for CLIP models: If I'm not mistaken, when training a CLIP model, contrastive loss is employed, involving typical triplets consisting of positive and negative pairs derived from the images. Put simply, during training, the aim is to maximize the cosine similarity between correct image-caption vector pairs while minimizing the similarity scores between all incorrect pairs. Am I right?
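For concreteness, the in-batch objective I'm describing can be sketched like this (toy numbers, illustrative only: the diagonal holds the positive pairs and every off-diagonal entry serves as a negative):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8)); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(4, 8)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)

logits = img @ txt.T            # cosine similarities (rows: images, cols: texts)
labels = np.arange(4)           # positive pair i <-> i sits on the diagonal

# Row-wise cross-entropy: pushes diagonal similarities up, off-diagonal down.
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_probs[labels, labels].mean()
print(logits.shape, labels.tolist())  # → (4, 4) [0, 1, 2, 3]
```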
So, I would like to be able to provide the negative pairs in my fine-tuning. In my experiment, the embeddings for both negative and positive samples must be well-distributed. However, following the guide, I noticed that it doesn't specify how to handle the negative samples. It seems that during training, the model utilizes the previous batch as the negative pairs. Am I missing something? Is this correct?
```python
# Now, ready to take gradients for the last accum_freq batches.
# Re-do the forward pass for those batches, and use the cached features from the other batches as negatives.
# Call backwards each time, but only step optimizer at the end.
optimizer.zero_grad()
for j in range(args.accum_freq):
    images = accum_images[j]
    texts = accum_texts[j]
    with autocast():
        model_out = model(images, texts)

        inputs_no_accum = {}
        inputs_no_accum["logit_scale"] = logit_scale = model_out.pop("logit_scale")
        if "logit_bias" in model_out:
            inputs_no_accum["logit_bias"] = model_out.pop("logit_bias")

        inputs = {}
        for key, val in accum_features.items():
            accumulated = accum_features[key]
            inputs[key] = torch.cat(accumulated[:j] + [model_out[key]] + accumulated[j + 1:])

        losses = loss(**inputs, **inputs_no_accum, output_dict=True)
        del inputs
        del inputs_no_accum

        total_loss = sum(losses.values())
        losses["loss"] = total_loss

    backward(total_loss, scaler)
```
However, this approach raises a question in my mind: what happens if the previous batch contains items that are quite similar to the items in the current batch? Wouldn't this affect the efficacy of the contrastive loss?
I would greatly appreciate any insights or clarifications on these points.
Thank you in advance!!
| closed | 2024-05-08T17:53:05Z | 2024-05-08T18:21:04Z | https://github.com/mlfoundations/open_clip/issues/869 | [] | doramasma | 1 |
huggingface/datasets | machine-learning | 7,178 | Support Python 3.11 | Support Python 3.11: https://peps.python.org/pep-0664/ | closed | 2024-09-27T08:50:47Z | 2024-10-08T16:21:04Z | https://github.com/huggingface/datasets/issues/7178 | [
"enhancement"
] | albertvillanova | 0 |
slackapi/python-slack-sdk | asyncio | 1,571 | `initial_value` for `RichTextInputElement` should also accept type `RichTextBlock` | While using the `RichTextInputElement`, I noticed from the type information that `initial_value` is expected to be an optional dictionary. There is seemingly no mention of `RichTextBlock`, so I had thought that I would need to go look at the API and manually create dictionaries to handle this.
This is not true. `initial_value` can be set to a RichTextBlock directly, without even needing to convert it to a dictionary. For example, this code works, without me even needing to transform any of these blocks to a `dict`:
```python
blocks.InputBlock(
    label="Description / Context",
    block_id="description",
    element=blocks.RichTextInputElement(
        action_id="contents",
        initial_value=blocks.RichTextBlock(
            elements=[
                blocks.RichTextSectionElement(
                    elements=[
                        blocks.RichTextElementParts.Text(text="Hey, "),
                        blocks.RichTextElementParts.Text(
                            text="this", style={"italic": True}
                        ),
                        blocks.RichTextElementParts.Text(
                            text="is what you should be looking at. "
                        ),
                        blocks.RichTextElementParts.Text(
                            text="Please", style={"bold": True}
                        ),
                    ]
                )
            ],
        ),
    ),
)
```
...but it gives me an error in my IDE from `mypy` (rightly, because `initial_value` doesn't have `RichTextBlock` in its type definition):

Can the `initial_value` for `RichTextInputElement` include `RichTextBlock`? Thanks!
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [x] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-10-09T17:13:17Z | 2024-10-10T04:14:15Z | https://github.com/slackapi/python-slack-sdk/issues/1571 | [
"bug",
"web-client",
"Version: 3x"
] | macintacos | 0 |
microsoft/nni | data-science | 5,771 | Using export_data() not working with DartsStrategy() | Hi,
I want to use export_data() after experiment.run() in the DARTS tutorial (https://nni.readthedocs.io/en/stable/tutorials/darts.html). However, I get a runtime error:
> RuntimeError: Experiment is not running
For the "Hello NAS!" (https://nni.readthedocs.io/en/stable/tutorials/hello_nas.html) example, this function works fine.
Do you have any workaround/idea on how to fix this?
Best regards,
Felix | open | 2024-04-21T15:07:52Z | 2024-04-21T15:07:52Z | https://github.com/microsoft/nni/issues/5771 | [] | felix011235 | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 767 | ValueError: num_samples should be a positive integeral value, but got num_samples=0 | Please help! I met this problem and can't solve it. | open | 2019-09-14T09:16:51Z | 2021-12-08T21:24:06Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/767 | [] | tpy9 | 5 |
unionai-oss/pandera | pandas | 1,028 | Extended from_format, to_format support | Is there any interest in, or possibility of, supporting distributed versions of the automated I/O functionality using the from(to)_format syntax in the Schema models?
Something like an additional argument in the Config class within the Schema model that would change the pd.read_csv function from being a pandas function to a modin.pandas function, for example. My use case is wanting to be able to run a pipeline without distributed functionality during testing, and then run the pipeline on a Ray cluster as the project solidifies.
Thanks! | open | 2022-11-21T17:19:41Z | 2022-11-21T17:50:34Z | https://github.com/unionai-oss/pandera/issues/1028 | [
"enhancement"
] | aboomer07 | 2 |
allenai/allennlp | nlp | 4,931 | Investigate plotext as alternative to matplotlib for find-lr command output | See https://pypi.org/project/plotext/.
Plots right in terminal with ASCII. Would be nice to see the results right in the terminal, especially when you're working on a remote server. | open | 2021-01-26T15:19:55Z | 2021-04-22T17:03:23Z | https://github.com/allenai/allennlp/issues/4931 | [] | epwalsh | 6 |
aio-libs/aiomysql | sqlalchemy | 624 | Does not work on python3.10 | Aiomysql does not work on Python 3.10 because Python's asyncio API changed.
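For context, a minimal sketch of the change: on 3.10+, the explicit `loop=` argument that older libraries passed to asyncio's high-level APIs raises `TypeError`, and the running loop is obtained implicitly instead.

```python
import asyncio

async def main():
    # Pre-3.10 style: asyncio.sleep(1, loop=loop)  -> TypeError on 3.10+
    loop = asyncio.get_running_loop()  # the implicit replacement
    await asyncio.sleep(0)
    return loop is asyncio.get_running_loop()

print(asyncio.run(main()))  # → True
```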
# Screenshot

# Reference
https://docs.python.org/ko/3/whatsnew/3.10.html#whatsnew310-removed
> The loop parameter has been removed from most of asyncio's high-level API following deprecation in Python 3.8. The motivation behind this change is multifold:
> 1. This simplifies the high-level API.
> 2. The functions in the high-level API have been implicitly getting the current thread’s running event loop since Python 3.7. There isn’t a need to pass the event loop to the API in most normal use cases.
> 3. Event loop passing is error-prone especially when dealing with loops running in different threads. | closed | 2021-10-14T23:49:58Z | 2021-10-18T10:04:28Z | https://github.com/aio-libs/aiomysql/issues/624 | [] | ehdgua01 | 1 |
InstaPy/InstaPy | automation | 6,098 | Modify quickstart.py, run request before session | ## Expected Behavior
Note: I'm a JavaScript developer and don't have much experience with Python.
In the quickstart.py script, I added a request to pull a profile from an API.
```python
import requests
from instapy import InstaPy, smart_run, set_workspace

response = requests.get("http://127.0.0.1:3333/api/v1/internal/profiles/1")
profile = response.text
print(profile)

set_workspace(path="fullpath" + insta_username)
session = InstaPy(username=insta_username, password=insta_password, headless_browser=True)

with smart_run(session):
    session.follow_user_followers(['xxxx'], amount=10, randomize=False)
```
I expected the request to execute first, and then the InstaPy session to run.
## Current Behavior
The current behavior is that the InstaPy script runs first (with INFO rows) and follows the profile's followers; only after that does it print the response and set the workspace.
log me with this
## Possible Solution (optional)
## InstaPy configuration
| closed | 2021-02-28T21:38:06Z | 2021-07-21T04:18:44Z | https://github.com/InstaPy/InstaPy/issues/6098 | [
"wontfix"
] | makiBaraba | 2 |
wger-project/wger | django | 879 | Trying to get in touch regarding a security issue | Hey there!
I belong to an open source security research community, and a member (@asura-n) has found an issue, but doesn’t know the best way to disclose it.
If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.
Thank you for your consideration, and I look forward to hearing from you!
(cc @huntr-helper) | closed | 2021-11-09T18:48:31Z | 2022-01-27T07:27:42Z | https://github.com/wger-project/wger/issues/879 | [] | zidingz | 7 |
plotly/dash | flask | 2,736 | [Feature Request] Python 3.12 support | **Is your feature request related to a problem? Please describe.**
Currently CI only tests on 3.9 and 3.
**Describe the solution you'd like**
I'd like to see CI run against 3.8 and 3.12
**Describe alternatives you've considered**
n/a
**Additional context**
n/a
| closed | 2024-01-31T11:39:12Z | 2024-07-23T23:10:44Z | https://github.com/plotly/dash/issues/2736 | [] | graingert-coef | 2 |
plotly/dash | jupyter | 2,523 | Enable simple Boolean expressions and comparisons in DataTable style_data_conditional filter_query | I am trying to highlight rows in a DataTable if a column is `False`. I can’t seem to get it to work. Here’s what I’ve tried. Several failed attempts are left as comments.
```python
dash_table.DataTable(
    ...
    style_data_conditional={
        "if": {
            # "filter_query": "{my_boolean_column}",
            "filter_query": "{my_boolean_column} = True",
            # "filter_query": "{my_boolean_column} = true",
            # "filter_query": "{my_boolean_column} = 'true'",
        },
        "backgroundColor": "#FF4136",
    },
)
```
I thought I was doing something wrong, but I made a [post](https://community.plotly.com/t/truthiness-in-style-data-conditional-filter-query-expression-for-dash-table-datatable/75003) on the community forums, and another experienced user was also not able to make this work. Ideally, the first and second `filter_query` in my snippet above should work as written.
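A pattern that does evaluate correctly today is comparing against 0/1 integers instead of booleans; a minimal sketch (the helper key name is made up for illustration, and this is the same idea as the workaround described next):

```python
rows = [{"my_boolean_column": True}, {"my_boolean_column": False}]
# Add a 0/1 helper key so the filter can use a numeric comparison:
for r in rows:
    r["my_boolean_int"] = int(r["my_boolean_column"])

style_data_conditional = [{
    "if": {"filter_query": "{my_boolean_int} = 1"},
    "backgroundColor": "#FF4136",
}]
print([r["my_boolean_int"] for r in rows])  # → [1, 0]
```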
I was able to find a janky workaround, by adding a new column to the DataFrame I'm using to generate the DataTable that has values of either `0` or `1`, and comparing to those instead of `True` or `False`. But it's very unergonomic. | open | 2023-05-05T00:20:42Z | 2024-08-13T19:32:15Z | https://github.com/plotly/dash/issues/2523 | [
"feature",
"P3"
] | tom-kaufman | 0 |
xuebinqin/U-2-Net | computer-vision | 48 | index error | I tried to train on my image set. But got the following error.
My folder setup
train_images - has 2 folders - 'images ' and another folder 'mask'
When I ran the script, it showed the correct number of images, but then I got the following error:
> Traceback (most recent call last):
> File "u2net_train.py", line 143, in <module>
> loss2, loss = muti_bce_loss_fusion(d0, d1, d2, d3, d4, d5, d6, labels_v)
> File "u2net_train.py", line 42, in muti_bce_loss_fusion
> print("l0: %3f, l1: %3f, l2: %3f, l3: %3f, l4: %3f, l5: %3f, l6: %3f\n"%(loss0.data[0],loss1.data[0],loss2.data[0],loss3.data[0],loss4.data[0],loss5.data[0],loss6.data[0]))
> IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
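The error message already names the fix; a hedged sketch of the change to line 42 (the tensor value is a stand-in):

```python
import torch

loss0 = torch.tensor(0.5)   # stand-in for one of the 0-dim loss tensors
# Old indexing style that raises on modern PyTorch:
#   l0 = loss0.data[0]      # IndexError for 0-dim tensors
l0 = loss0.item()           # replacement suggested by the error message
print("l0: %3f" % l0)       # → l0: 0.500000
```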
> | closed | 2020-07-17T04:25:51Z | 2020-07-17T06:08:27Z | https://github.com/xuebinqin/U-2-Net/issues/48 | [] | johnyquest7 | 3 |
horovod/horovod | tensorflow | 3,011 | System env variables are not captured when using Spark as backend. | **Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) TensorFlow
2. Framework version: 2.5.0
3. Horovod version: 0.22.1
4. MPI version: 4.0.2
5. CUDA version: 11.2
6. NCCL version: 2.9.9
7. Python version: 3.8
8. Spark / PySpark version: 3.1.2
9. Ray version:
10. OS and version: Ubuntu20.04
11. GCC version: 9.3
12. CMake version: 3.19
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
Y
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
Y
**Bug report:**
When using the `horovod.spark.run` API, if the `env` parameter is not set, the following error can be seen:
```
Was unable to run mpirun --version:
/bin/sh: 1: mpirun: not found
```
my environment:
```
(base) allxu@allxu-home:~/github/e2e-train$ which mpirun
/home/allxu/miniconda3/bin/mpirun
```
The whole demo for this error could be found here: https://github.com/wjxiz1992/e2e-train
I've seen a similar issue: https://github.com/horovod/horovod/issues/2002
but it's not the spark case.
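A hedged workaround sketch: the `env` parameter mentioned above lets the driver forward its own environment to the executors so that `mpirun` resolves there too (the `run()` call itself is left commented, since it needs a live cluster):

```python
import os

# Forward the variables mpirun resolution depends on; fallbacks are illustrative.
env = {
    "PATH": os.environ.get("PATH", "/usr/bin"),
    "LD_LIBRARY_PATH": os.environ.get("LD_LIBRARY_PATH", ""),
}

# import horovod.spark
# horovod.spark.run(train_fn, num_proc=2, env=env)  # train_fn defined elsewhere
print(sorted(env))  # → ['LD_LIBRARY_PATH', 'PATH']
```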
| open | 2021-06-30T07:36:11Z | 2021-07-09T16:59:22Z | https://github.com/horovod/horovod/issues/3011 | [
"bug"
] | wjxiz1992 | 1 |
syrupy-project/syrupy | pytest | 332 | Incorrect indent for multiline string within list | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Input:
```python
def test_snapshot(snapshot):
    data = {
        "key": [
            "line 1\nline 2"
        ]
    }
    assert data == snapshot
```
```
pytest --snapshot-update
```
Output:
```ambr
# name: test_snapshot
  <class 'dict'> {
    'key': <class 'list'> [
      '
  line 1
  line 2
  ',
    ],
  }
---
```
**Expected behavior**
The multiline string `'` should be indented:
```ambr
# name: test_snapshot
  <class 'dict'> {
    'key': <class 'list'> [
      '
        line 1
        line 2
      ',
    ],
  }
---
```
syrupy==0.6.1
| closed | 2020-08-24T20:38:42Z | 2020-08-24T21:30:45Z | https://github.com/syrupy-project/syrupy/issues/332 | [
"bug",
"good first issue",
"released"
] | noahnu | 1 |
erdewit/ib_insync | asyncio | 623 | orders stuck in PendingSubmit | Anyone run into this issue and know how to resolve? I'm trying to build some Flask endpoints for ease of trading on my end, using the ib-insync Python package, but my orders never get submitted, even though I'm fairly sure the parameters are correct. I can place orders just fine on the GUI,
but even copying over the exact same parameters, the orders go through but always end up with PendingSubmit status. They are then stuck in open orders until I restart the app, never reaching Submitted status.
Here's one example, for buying some Google stock:
```
@app.route('/test-trade', methods=['GET'])
async def test_trade():
    contract = Stock(conId=208813719, symbol='GOOGL', exchange='NASDAQ', currency='USD', localSymbol='GOOGL', tradingClass='NMS')
    order = MarketOrder('BUY', 2)
    trade = ib.placeOrder(contract, order)
    print(trade.log)
    print('---------------------------------------------------------------')
    print(trade.orderStatus)
    print('---------------------------------------------------------------')
    print(ib.trades())
    if trade.orderStatus.status == 'Submitted':
        print(f"Order submitted: {trade}")
    else:
        print(f"Failed to submit order: {trade}")
    return jsonify({"trade_log": trade.log}), 200

if __name__ == '__main__':
    import asyncio
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    app.run(host='0.0.0.0', port=5000)
```
If I run this, I get this in console:
```
[TradeLogEntry(time=datetime.datetime(2023, 8, 9, 17, 7, 57, 976911, tzinfo=datetime.timezone.utc), status='PendingSubmit', message='', errorCode=0)]
---------------------------------------------------------------
OrderStatus(orderId=896049, status='PendingSubmit', filled=0.0, remaining=0.0, avgFillPrice=0.0, permId=0, parentId=0, lastFillPrice=0.0, clientId=0, whyHeld='', mktCapPrice=0.0)
---------------------------------------------------------------
[Trade(contract=Stock(conId=208813719, symbol='GOOGL', exchange='NASDAQ', currency='USD', localSymbol='GOOGL', tradingClass='NMS'), order=MarketOrder(orderId=896049, clientId=998, action='BUY', totalQuantity=2), orderStatus=OrderStatus(orderId=896049, status='PendingSubmit', filled=0.0, remaining=0.0, avgFillPrice=0.0, permId=0, parentId=0, lastFillPrice=0.0, clientId=0, whyHeld='', mktCapPrice=0.0), fills=[], log=[TradeLogEntry(time=datetime.datetime(2023, 8, 9, 17, 7, 57, 976911, tzinfo=datetime.timezone.utc), status='PendingSubmit', message='', errorCode=0)], advancedError='')]
```
Anyone know what this is because of? I can't seem to figure it out or find any similar solutions. Would really appreciate any suggestions.
| closed | 2023-08-09T17:01:23Z | 2023-08-25T09:07:07Z | https://github.com/erdewit/ib_insync/issues/623 | [] | 430scud | 2 |
google-research/bert | nlp | 994 | Averaging attention head weights | I was wondering if attention heads are weighted? I mean, do they contribute to the decision equally or are some heads accorded more weight overall than others? I don't mean the internal weights of each head, I mean a single weight for each head.
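For concreteness, the plain unweighted head-averaging under discussion can be sketched like this (shapes and values are illustrative, not taken from BERT):

```python
import numpy as np

# attn: (num_heads, seq_len, seq_len); each head's map is row-stochastic,
# i.e. every query position's attention over keys sums to 1.
rng = np.random.default_rng(0)
attn = rng.random((12, 8, 8))
attn /= attn.sum(axis=-1, keepdims=True)

avg_map = attn.mean(axis=0)  # simple unweighted average across heads
# The average of row-stochastic maps is still row-stochastic:
print(bool(np.allclose(avg_map.sum(axis=-1), 1.0)))  # → True
```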
The reason I'm asking is because I'm visualizing attention for each head of the final layer, and it made sense in the end to combine the visualizations into just one averaged map. My one concern was that if some heads got more weight than others then averaging might not be representative. | open | 2020-01-23T22:12:05Z | 2020-01-23T22:12:05Z | https://github.com/google-research/bert/issues/994 | [] | melwazir | 0 |
xlwings/xlwings | automation | 1,611 | Transpose with formatting | #### OS (e.g. Windows 10 or macOS Sierra)
#### Python 3.8, XLwings 0.23.0
#### Currently trying to transpose a data range, which is easily done with the code below; however, it just gives the raw values and does not keep the formatting of the original vertical range. Is it possible to transpose data and include the original formatting?
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
# port_stats.range('B2').value = port_stats.range('A6:D53').options(transpose=True).value
``` | closed | 2021-06-07T12:47:25Z | 2021-07-09T10:11:08Z | https://github.com/xlwings/xlwings/issues/1611 | [] | Cbarrow98 | 3 |
opengeos/streamlit-geospatial | streamlit | 55 | URL https://geospatial.streamlitapp.com/ also leading to "Error running app" | Hi Qiusheng,
Thanks again for building this amazing tool. You pointed me to the newer URL, but I'm now again encountering the same error message. Interestingly, the .com URL did work for me initially (!?!).

I noticed in the latest closed issues that the geospatial streamlit app is working on your end so this may be hard to track down. I have tried all the following and am landing at "Error running app" every time:
- Mac OS High Sierra 10.13.6, both Chrome and Safari (I cleared my cache and this still happened)
- Windows 10 Enterprise, Chrome, Microsoft Edge and Firefox
I wonder whether, if you check it on a machine where you've not done development of the application, you'll run into the same error?
Cheers and thanks,
Diane
| closed | 2022-07-06T22:19:59Z | 2022-07-07T00:26:13Z | https://github.com/opengeos/streamlit-geospatial/issues/55 | [] | frizatch | 1 |
sloria/TextBlob | nlp | 471 | textblob | closed | 2024-08-29T06:30:52Z | 2024-08-29T13:16:32Z | https://github.com/sloria/TextBlob/issues/471 | [] | Byambatsogt481 | 0 | |
thp/urlwatch | automation | 612 | Example of GitHub Release is not working anymore | I just realized that the [example](https://urlwatch.readthedocs.io/en/latest/filters.html?highlight=github#watching-github-releases) is no longer working. What works for me (without really knowing XPath) is: `(//div[contains(@class,"release-header")]//a)[1]` | closed | 2021-01-11T08:10:02Z | 2021-01-11T08:12:42Z | https://github.com/thp/urlwatch/issues/612 | [] | oxivanisher | 1 |
airtai/faststream | asyncio | 1,486 | Feature: Support for disabling automatic topic creation. | **Is your feature request related to a problem? Please describe.**
We're using the Confluent Kafka backend, so this is specific to that; I haven't looked at aiokafka.
- FastStream will auto-create topics using confluent's AdminClient interface prior to instantiating the confluent Consumer object.
- FastStream hardcodes "allow.auto.create.topics" to True in the config it passes to the confluent Consumer. If this is also set to True on the brokers, this allows the confluent_kafka library itself to auto create topics as well at subscription time.
- FastStream does not set "allow.auto.create.topics" in the confluent Kafka Producer config. Its default value is True, however. Similarly, if also set to True on brokers, this allows auto-creation of topics at message production time as well.
- broker.subscriber() and broker.producer() provide no facilities to modify the above behavior.
We generally don't want to allow automatic topic creation in our production environment, or at least, we may want to disallow it and have knobs to control these things.
Moreover, I am curious why FastStream doesn't do admin-client-based auto topic creation prior to making a confluent Producer object. Is there a reason? It's not something I care about directly.
**Describe the solution you'd like**
I'd like additional broker.subscriber() and broker.producer() kwargs that provide knobs to these settings where currently none exist.
**Feature code example**
Something like this:
```python
from faststream import FastStream
from faststream.confluent import KafkaBroker
from faststream import Logger

broker = KafkaBroker()

@broker.subscriber(
    'my-topic',
    # If True, allow FastStream to create topics using the admin client if they don't already exist
    admin_create_topics=False,
    # if True, allow confluent_kafka to auto create topics if the broker also allows it
    allow_auto_create_topics=False,
)
@broker.producer(
    'my-other-topic',
    # if True, allow confluent_kafka to auto create topics if the broker also allows it
    allow_auto_create_topics=False
)
async def hello(body: str, logger: Logger):
    logger.info('hello')
    return 'output'

app = FastStream(broker)
```
It would be a relatively simple modification, just passing around some additional parameters and doing some conditional branching.
**Describe alternatives you've considered**
I have patched such things manually to get the desired effect in lieu of direct support:
```python
import os

from faststream.confluent.client import create_topics
from confluent_kafka import Consumer, Producer
import faststream.confluent.client

from observing.environment import BUILD_ENV, BuildEnv
from observing.observing import logger as module_logger


def create_topics_patch(*args, **kwargs):
    if BUILD_ENV == BuildEnv.PRODUCTION or "IS_TEST" in os.environ:
        module_logger.warning(
            f"faststream.confluent.client.create_topics():"
            f" Automatic topic creation is disabled."
        )
    else:
        create_topics(*args, **kwargs)


class PatchedConfluentConsumer(Consumer):
    def __init__(self, config, logger=None):
        config = config.copy()
        if BUILD_ENV == BuildEnv.PRODUCTION or "IS_TEST" in os.environ:
            if config.get("allow.auto.create.topics"):
                module_logger.warning(
                    f"confluent_kafka.Consumer.__init__():"
                    f" Automatic topic creation is disabled."
                )
            config["allow.auto.create.topics"] = False
        super().__init__(config, logger=logger)


class PatchedConfluentProducer(Producer):
    def __init__(self, config, logger=None):
        config = config.copy()
        if BUILD_ENV == BuildEnv.PRODUCTION or "IS_TEST" in os.environ:
            if config.get("allow.auto.create.topics"):
                module_logger.warning(
                    f"confluent_kafka.Producer.__init__():"
                    f" Automatic topic creation is disabled."
                )
            config["allow.auto.create.topics"] = False
        super().__init__(config, logger=logger)


def patch_conditionally_disabled_topic_creation():
    """Makes it so FastStream will only automatically create topics when Consumers are
    running in a non-production environment and when not running tests.

    This is useful for preventing accidental topic creation in production environments
    and during tests.

    FastStream, when creating a confluent Consumer, will call create_topics() which
    allows for the broker.subscriber() call to pass some topic creation configuration
    settings. This is disabled in production environments by this patch.

    FastStream will also set the allow.auto.create.topics configuration setting to
    True when creating a Confluent Kafka Consumer, and it does not touch the default
    value of True for Confluent Kafka Producers. These settings require the broker to
    be configured to allow for automatic topic creation as well to have any effect.

    This patch will set the allow.auto.create.topics to False when in a production
    environment, no matter what, for both Consumer and Producer. Topics must only be
    created manually in production environments.
    """
    faststream.confluent.client.create_topics = create_topics_patch
    faststream.confluent.client.Consumer = PatchedConfluentConsumer
    faststream.confluent.client.Producer = PatchedConfluentProducer
```
**Additional context**
None.
| closed | 2024-05-31T18:28:28Z | 2024-06-26T07:00:22Z | https://github.com/airtai/faststream/issues/1486 | [
"enhancement",
"Confluent"
] | andreaimprovised | 3 |
tartiflette/tartiflette | graphql | 378 | Want a directive to run on INPUT_FIELD_DEFINITION with access to query argument values | ## Want a directive to run on input field definition and be able to access `args`
I have this case
```
# the sdl
schema {
  query(someInput: inputType): RootQuery
}

directive @exclusive(dependant: String!) on INPUT_FIELD_DEFINITION

type RootQuery {
  ...
}

input inputType {
  inputField1: String @exclusive("inputField2")
  inputField2: String @exclusive("inputField1")
}
```
```
# python
@Directive("exclusive")
class Exclusive:
    async def on_field_definition(directive_args, next, parent, args, ctx, info):
        ###################################################################################
        # if this could run when inputField1 or inputField2 are specified that'd be grrreat
        ###################################################################################
        if directive_args["dependant"] in args:
            raise Error
```
If there's a way to make the above scenario work, without mucking up the resolver, I'd gladly try it.
I've also tried with FIELD_DEFINITION, but that's illegal, as far as I can tell.
EDIT: I screwed up the sdl the first time around.
| closed | 2020-04-01T15:30:27Z | 2020-04-24T09:01:34Z | https://github.com/tartiflette/tartiflette/issues/378 | [] | jalelegenda | 3 |
exaloop/codon | numpy | 494 | frozen sets | The data type `frozenset` seems to be missing in codon. Is there a good substitute?
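One possible stand-in, sketched in plain Python terms (whether it compiles under Codon depends on its current stdlib coverage):

```python
# When the main need is a hashable, immutable set (e.g. usable as a dict key),
# a sorted tuple of unique elements behaves like a frozenset for that purpose.
def freeze(items):
    return tuple(sorted(set(items)))

counts = {freeze([2, 1, 2]): "a"}
print(freeze([1, 2]) in counts)  # → True
```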
-- Peter Mueller | closed | 2023-10-30T18:50:25Z | 2024-11-10T19:39:23Z | https://github.com/exaloop/codon/issues/494 | [
"stdlib"
] | ypfmde | 4 |
adbar/trafilatura | web-scraping | 15 | Keep output file name same as input file name | First of all, let me thank you for working on this amazing tool!
One quick feature request: is it possible to keep the output file name the same as the input file name when running on the command line? Also, I see the error **# ERROR: file too small** on the command line, but when running the API programmatically it doesn't provide any meaningful message when extraction is not successful. Can we fix it?
CorentinJ/Real-Time-Voice-Cloning | python | 1,216 | When I open this program my speakers break, and if I try to play any sound I get a fan-like noise | I need to shut down the PC, wait some time, and turn it on again; only that solves it. | open | 2023-05-13T12:46:33Z | 2023-05-13T12:46:33Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1216 | [] | ygz23 | 0 |
man-group/arctic | pandas | 1,010 | Migrating existing tickstore to ArcticDB | #### Arctic Version
```
1.80.0
```
#### Arctic Store
```
TickStore
```
#### Platform and version
RHEL 7
#### Description of problem and/or code sample that reproduces the issue
Hello,
I have a collection of a few TB of tick data in an arctic tickstore that I want to migrate to the new ArcticDB.
I believe the only publicly available way to do this is to read all the data out from tickstore and write it to ArcticDB, is this correct?
If so, I was wondering if there is a recommended approach for that. The only way I could think of was to read it in time chunks, say 1 hour at a time, and then write it to arcticDB. Is there a way to instead iterate over the underlying mongodb documents, read 1 at a time, and write the resulting dataframe to arcticDB? I looked through tickstore.py and couldn't see any methods that would support that but maybe I missed something or maybe one of the existing methods could be modified to accomplish this?
My reason for preferring a documents approach vs a time chunks approach would just be to:
A - have deterministic data sizes in the read/write process (no risk of running out of memory during the job)
B - seems cleaner to me, I worry about ticks at the very edge of the time window getting read twice, written twice and thus duplicated.
Thanks in advance for any help you can provide.
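To make the time-chunk idea concrete, here is a store-agnostic sketch. The `read_chunk`/`write_chunk` callables are placeholders, not real arctic APIs — in practice they would wrap a tickstore read with a `DateRange` and an ArcticDB library append — and it uses half-open windows to avoid the edge-tick duplication mentioned above.

```python
from datetime import datetime, timedelta

def migrate_in_chunks(read_chunk, write_chunk, start, end, step=timedelta(hours=1)):
    # walk half-open windows [t, t_next) so edge ticks are never read twice;
    # read_chunk/write_chunk are placeholders for the real store calls, e.g.
    #   read_chunk  = lambda t0, t1: tickstore.read(sym, date_range=DateRange(t0, t1))
    #   write_chunk = lambda df: arcticdb_lib.append(sym, df)
    # (exact DateRange inclusivity should be checked against the arctic docs)
    t = start
    while t < end:
        t_next = min(t + step, end)
        chunk = read_chunk(t, t_next)
        if chunk is not None and len(chunk):
            write_chunk(chunk)
        t = t_next

# toy run with in-memory stand-ins for the two stores
src = {datetime(2020, 1, 1, h): [h] for h in range(5)}
dst = []
migrate_in_chunks(
    lambda t0, t1: [v for ts, v in src.items() if t0 <= ts < t1],
    dst.extend,
    datetime(2020, 1, 1), datetime(2020, 1, 1, 5),
)
print(dst)  # → [[0], [1], [2], [3], [4]]
```

The step size bounds memory use deterministically (concern A), and the half-open windows address the double-read concern (B).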
| closed | 2023-05-18T11:32:04Z | 2023-06-05T15:42:49Z | https://github.com/man-group/arctic/issues/1010 | [] | markeasec | 3 |
art049/odmantic | pydantic | 504 | Dump a model excluding `None` values | # Feature request
### Context
I don't want to waste database space writing empty data.
### Solution
Adding an `exclude_none` parameter to `model_dump_doc` would solve the problem.
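As a stopgap, the dumped document can be filtered by hand before writing. This is a plain-dict helper, not an odmantic API — the recursive `drop_none` below is hypothetical:

```python
def drop_none(doc: dict) -> dict:
    # strip keys whose value is None so they take no space in the database;
    # nested documents are filtered recursively
    return {
        k: drop_none(v) if isinstance(v, dict) else v
        for k, v in doc.items()
        if v is not None
    }

print(drop_none({"a": 1, "b": None, "c": {"d": None, "e": 2}}))
# → {'a': 1, 'c': {'e': 2}}
```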
| open | 2024-10-27T20:00:31Z | 2024-10-29T18:44:07Z | https://github.com/art049/odmantic/issues/504 | [
"enhancement"
] | d3cryptofc | 0 |
scikit-learn/scikit-learn | python | 30,353 | Hang when fitting `SVC` to a specific dataset | ### Describe the bug
I am trying to fit an [`SVC`](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) to a specific dataset. The training process gets stuck, never finishing.
scikit-learn uses a fork of LIBSVM [version 3.10.0](https://github.com/scikit-learn/scikit-learn/blame/caaa1f52a0632294bf951a9283d015f7b5dd5dd5/sklearn/svm/src/libsvm/svm.h#L4) from [2011](https://github.com/cjlin1/libsvm/releases/tag/v310). The equivalent code using a newer version of LIBSVM succeeds, suggesting that there is an upstream bug fix that scikit-learn could merge in.
### Steps/Code to Reproduce
[libsvm_problematic_dataset.csv](https://github.com/user-attachments/files/17927924/libsvm_problematic_dataset.csv)
```python
import logging
from polars import read_csv
from sklearn.svm import SVC
_logger = logging.getLogger(__name__)
def main():
dataset = read_csv(
source='libsvm_problematic_dataset.csv'
)
x = dataset.select('feature').to_numpy()
y = dataset['label'].to_numpy()
_logger.info("Attempting to reproduce issue. If reproduced, the program will not exit.")
SVC(
C=100,
kernel='poly',
degree=4,
gamma=0.9597420397825849,
tol=0.01,
cache_size=1000,
class_weight={
0: 1.04884106,
1: 0.95550528
},
verbose=True
).fit(X=x, y=y)
_logger.error("The issue was not reproduced.")
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
main()
```
### Expected Results
```
INFO:__main__:Attempting to reproduce issue. If reproduced, the program will not exit.
.................................................................................................
WARNING: using -h 0 may be faster
*..............................
WARNING: using -h 0 may be faster
*.............
WARNING: using -h 0 may be faster
*..................................................................
WARNING: using -h 0 may be faster
*..........................
WARNING: using -h 0 may be faster
*..........
WARNING: using -h 0 may be faster
*..............
WARNING: using -h 0 may be faster
*................
WARNING: using -h 0 may be faster
*................
WARNING: using -h 0 may be faster
*...........................
WARNING: using -h 0 may be faster
*..............
WARNING: using -h 0 may be faster
*.............
WARNING: using -h 0 may be faster
*.............
WARNING: using -h 0 may be faster
*.............................
WARNING: using -h 0 may be faster
*............................................
WARNING: using -h 0 may be faster
*.......
WARNING: using -h 0 may be faster
*......................
WARNING: using -h 0 may be faster
*.............
WARNING: using -h 0 may be faster
*.
WARNING: using -h 0 may be faster
*
optimization finished, #iter = 460766
obj = -245114.248664, rho = 1.000020
nSV = 2452, nBSV = 2450
Total nSV = 2452
ERROR:__main__:The issue was not reproduced.
```
This expected result was generated using LIBSVM [version 3.30.0](https://pypi.org/project/libsvm-official/3.30.0/) with the following code:
```python
import logging
from libsvm.svmutil import svm_train
from polars import read_csv
_logger = logging.getLogger(__name__)
def main():
dataset = read_csv(
source='libsvm_problematic_dataset.csv'
)
x = dataset.select('feature').to_numpy()
y = dataset['label'].to_numpy()
_logger.info("Attempting to reproduce issue. If reproduced, the program will not exit.")
svm_train(
y, x,
[
'-s', 0,
'-t', 1,
'-d', 4,
'-g', 0.9597420397825849,
'-c', 100,
'-m', 1000,
'-e', 0.01,
'-w0', 1.04884106,
'-w1', 0.95550528
]
)
_logger.error("The issue was not reproduced.")
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG)
main()
```
### Actual Results
```
INFO:__main__:Attempting to reproduce issue. If reproduced, the program will not exit.
[LibSVM].....................................................
Warning: using -h 0 may be faster
*............
Warning: using -h 0 may be faster
*.....
Warning: using -h 0 may be faster
*.................
Warning: using -h 0 may be faster
*...............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
```
The program never exits.
### Versions
```shell
System:
python: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:26:25) [Clang 17.0.6 ]
executable: /opt/homebrew/Caskroom/miniforge/base/envs/libsvm-debugging/bin/python
machine: macOS-14.7.1-arm64-arm-64bit
Python dependencies:
sklearn: 1.5.2
pip: 24.3.1
setuptools: 75.6.0
numpy: 2.1.3
scipy: 1.14.1
Cython: None
pandas: None
matplotlib: None
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 10
prefix: libopenblas
filepath: /opt/homebrew/Caskroom/miniforge/base/envs/libsvm-debugging/lib/libopenblas.0.dylib
version: 0.3.28
threading_layer: openmp
architecture: VORTEX
user_api: openmp
internal_api: openmp
num_threads: 10
prefix: libomp
filepath: /opt/homebrew/Caskroom/miniforge/base/envs/libsvm-debugging/lib/libomp.dylib
version: None
```
| open | 2024-11-27T02:41:11Z | 2024-12-04T01:14:11Z | https://github.com/scikit-learn/scikit-learn/issues/30353 | [
"Bug",
"Needs Investigation"
] | fumoboy007 | 4 |
benbusby/whoogle-search | flask | 1,099 | [FEATURE] Fallback to custom search provider | <!--
DO NOT REQUEST UI/THEME/GUI/APPEARANCE IMPROVEMENTS HERE
THESE SHOULD GO IN ISSUE #60
REQUESTING A NEW FEATURE SHOULD BE STRICTLY RELATED TO NEW FUNCTIONALITY
-->
**Fallback to custom search provider if whoogle breaks**
When whoogle fails to pull the results from google, it would be good to have the option to be forwarded to an alternative search engine with the search query that whoogle received.
https://whoogle.example.com/search?q=fail -> "It seems like whoogle is broken at the moment, would you like to continue on ecosia?" -> click "Yes" -> https://www.ecosia.org/search?q=fail
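Building the redirect target needs nothing beyond the standard library — a sketch, using the Ecosia URL from the example above:

```python
from urllib.parse import urlencode

FALLBACK = "https://www.ecosia.org/search"

def fallback_url(query: str) -> str:
    # build the alternative engine's URL carrying the original whoogle query
    return f"{FALLBACK}?{urlencode({'q': query})}"

print(fallback_url("fail"))  # → https://www.ecosia.org/search?q=fail
```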
**Additional context**
I have whoogle set up as the default search engine for all devices in our household and at work.
While this has only happened to me once so far, whoogle can and will eventually break when Google introduces incompatible changes. In that case, the user has to open another search engine and re-enter the search text. Or even worse, reconfigure the default search engine and perhaps not switch it back afterwards ;) | closed | 2023-11-18T18:48:09Z | 2023-12-20T19:17:27Z | https://github.com/benbusby/whoogle-search/issues/1099 | [
"enhancement"
] | moritzfl | 2 |
polakowo/vectorbt | data-visualization | 398 | vbt.BinanceData.download does not accept klines_type parameter | Hi,
I am trying to download Binance futures klines, but I get the error:
TypeError: download_symbol() got an unexpected keyword argument 'klines_type'
[https://github.com/sammchardy/python-binance/blob/8cac5eac94b71af6532aef128a8c815d7fc0c617/binance/client.py#L870](https://github.com/sammchardy/python-binance/blob/8cac5eac94b71af6532aef128a8c815d7fc0c617/binance/client.py#L870)
```python
import vectorbt as vbt
from binance.enums import HistoricalKlinesType
symbols = ["BTCUSDT"]
startDate = "2020-12-01"
endtDate = "now UTC"
timeInterval = "1h"
binance_data = vbt.BinanceData.download(
symbols=symbols,
start=startDate,
# end=endtDate,
interval=timeInterval,
klines_type = HistoricalKlinesType.FUTURES
)
binance_data = binance_data.update()
binance_data.get()
``` | open | 2022-03-01T01:04:44Z | 2022-12-22T10:33:23Z | https://github.com/polakowo/vectorbt/issues/398 | [] | keiser1080 | 8 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,140 | × Encountered error while trying to install package. ╰─> simple_knn | Hello tried installing and getting these errors. Please help if you can thanks in advanced :)
----------------------------------------------------------------------------------------------
Running setup.py install for simple_knn: finished with status 'error'
Pip subprocess error:
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [21 lines of output]
running bdist_wheel
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: At.
warnings.warn(msg.format('we could not find ninja.'))
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-37
creating build\lib.win-amd64-cpython-37\diff_gaussian_rasterization
copying diff_gaussian_rasterization\__init__.py -> build\lib.win-amd64-cpython-37\diff_gaussian_rasterization
running build_ext
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:346: UserWarning: Erd
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: Th.
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'diff_gaussian_rasterization._C' extension
creating build\temp.win-amd64-cpython-37
creating build\temp.win-amd64-cpython-37\Release
creating build\temp.win-amd64-cpython-37\Release\cuda_rasterizer
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\nvcc" -c cuda_rasterizer/backward.cu -o build\temp.v
nvcc fatal : Cannot find compiler 'cl.exe' in PATH
error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.8\\bin\\nvcc.exe' failed with exit cod1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for diff_gaussian_rasterization
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [12 lines of output]
running bdist_wheel
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: At.
warnings.warn(msg.format('we could not find ninja.'))
running build
running build_ext
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:346: UserWarning: Erd
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: Th.
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'simple_knn._C' extension
cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torc0
error: command 'cl.exe' failed: None
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for simple_knn
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [19 lines of output]
running bdist_wheel
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: At.
warnings.warn(msg.format('we could not find ninja.'))
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-37
creating build\lib.win-amd64-cpython-37\fused_ssim
copying fused_ssim\__init__.py -> build\lib.win-amd64-cpython-37\fused_ssim
running build_ext
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:346: UserWarning: Erd
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: Th.
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'fused_ssim_cuda' extension
creating build\temp.win-amd64-cpython-37
creating build\temp.win-amd64-cpython-37\Release
cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torc0
error: command 'cl.exe' failed: None
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for fused_ssim
error: subprocess-exited-with-error
× Running setup.py install for simple_knn did not run successfully.
│ exit code: 1
╰─> [28 lines of output]
running install
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\setuptools\_distutils\cmd.py:66: SetuptoolsDeprec.
!!
********************************************************************************
Please avoid running ``setup.py`` directly.
Instead, use pypa/build, pypa/installer or other
standards-based tools.
See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
********************************************************************************
!!
self.initialize_options()
running build
running build_ext
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: At.
warnings.warn(msg.format('we could not find ninja.'))
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:346: UserWarning: Erd
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
C:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: Th.
warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'simple_knn._C' extension
creating build
creating build\temp.win-amd64-cpython-37
creating build\temp.win-amd64-cpython-37\Release
cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\Ed\anaconda3\envs\gaussian_splatting\lib\site-packages\torc0
error: command 'cl.exe' failed: None
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> simple_knn
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
failed
CondaEnvException: Pip failed | open | 2025-01-09T14:39:38Z | 2025-01-27T16:46:15Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1140 | [] | EddieV91 | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,117 | 2FA appears not to be enforced on first login |
We have 2FA enforced on all secondary tenants (via advanced tenant settings).
When a new user registers, they are not prompted for 2FA registration at all.
This used to work before; the current version is 4.5.6.
| closed | 2021-11-22T12:28:10Z | 2021-11-23T19:13:52Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3117 | [
"T: Bug",
"C: Client"
] | aetdr | 4 |
microsoft/nni | data-science | 4,798 | Support Output Padding for ConvTranspose2d in Model Speedup | **Describe the issue**:
Hi, I am trying pruning with nni. I found that currently nni doesn't take the output_padding argument of ConvTranspose2d into consideration, which might lead to a shape mismatch.
**Environment**:
- NNI version: 2.6.1
- Training service (local|remote|pai|aml|etc): remote
- Server OS (for remote mode only): Linux
- Python version: 3.8
- PyTorch/TensorFlow version: 1.11.0
- Is conda/virtualenv/venv used?: Yes
- Is running in Docker?: No
**Configuration**:

**Log message**:

BTW, I fixed the problem by modifying the replace_convtranspose2d function in nni/compression/pytorch/speedup/compress_modules.py

Is there anything else that I should modify? Thank you! | closed | 2022-04-24T07:19:07Z | 2022-09-09T09:53:35Z | https://github.com/microsoft/nni/issues/4798 | [] | haoshuai-orka | 3 |
statsmodels/statsmodels | data-science | 9,440 | DOC: cross-validation examples, supporting code ? | parking an example for cross-validation
https://stats.stackexchange.com/questions/657805/error-in-fitting-zero-inflated-negative-binomial-in-python-using-cross-validatio
I don't have an overview of what we have.
GAM is the only case, AFAIR, where we have kfold cross-validation built in.
Related issue is whether we support more approximate jackknife or leave one observation out, with computational shortcuts.
Also related: Do we have better eval measures for the mean of asymmetric distributions, e.g. Poisson, zero-inflated count models, than symmetric mean squared prediction error?
e.g. out of sample loglike
it would be good to have a notebook that illustrates cross-validation for variable selection, either with sklearn or without.
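A minimal numpy-only sketch of such a notebook cell (hedged — a hand-rolled Newton fit stands in for a statsmodels GLM fit, and the out-of-sample Poisson log-likelihood is the eval measure mentioned above):

```python
import numpy as np

def fit_poisson(X, y, iters=30):
    # Newton-Raphson for a Poisson GLM with log link; the log-likelihood is
    # globally concave, so this converges from beta = 0 on well-behaved data
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    return beta

def cv_loglike(X, y, k=5, seed=0):
    # mean out-of-sample Poisson log-likelihood per fold; the lgamma(y+1)
    # term is dropped because it is identical across candidate models
    folds = np.array_split(np.random.default_rng(seed).permutation(len(y)), k)
    scores = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
        beta = fit_poisson(X[train_idx], y[train_idx])
        eta = X[test_idx] @ beta
        scores.append(np.sum(y[test_idx] * eta - np.exp(eta)))
    return float(np.mean(scores))

# simulated variable-selection check: the full model should win out of sample
rng = np.random.default_rng(1)
n = 600
X = np.column_stack([np.ones(n), rng.standard_normal(n), rng.standard_normal(n)])
y = rng.poisson(np.exp(X @ np.array([0.2, 0.5, 0.8])))
print(cv_loglike(X, y) > cv_loglike(X[:, :2], y))
```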
Second, provide overview of what we have or don't have.
One example would be how to reuse start_params, with the caveat that it has some look-ahead bias if the problem is not nice and does not have unique (local) MLE.
(GLM and most basic discrete models are globally convex.)
There are a few related issues for cross-validation and jackknife/looo
| open | 2024-11-30T20:55:50Z | 2024-11-30T20:55:51Z | https://github.com/statsmodels/statsmodels/issues/9440 | [
"type-enh",
"comp-docs",
"comp-base"
] | josef-pkt | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,351 | getting ValueError: read of closed file when sending a file with paramiko and flask socketio | When I try to send files using paramiko SFTP to an SSH server, I get `ValueError: read of closed file`.
Note: this works perfectly when I do the same thing in an HTTP view.
This is how I connect to the server and send the file:
```python
@socketio.on('client-connect')
def client_connected(message):
global data
HOST = data["host"]
HOSTUSERNAME = data['username']
PASSWORD = data['password']
PATH = data['path']
FILE = data['file']
if data:
for (host, hostusername, password, path) in zip(HOST, HOSTUSERNAME, PASSWORD, PATH):
#connect to the ssh server
emit('server_try_connect', {'data' : str('trying to connect to server '+host)})
client = Connection(host=host, username=hostusername, password=password)
emit('server-connected', {'data' : 'client connected'})
emit('server-connected-establish',{'data' : str('establishing connection to '+host)})
client.connect()
emit('server-connected-success', {'data' : 'client connected'})
#sending files.....
emit('sending-files', {'data' : "sending file"})
##########
#this is the line that cause the error
client.send_filestorage(FILE, FILE.filename, path)
############
emit('files-sent', {'data' : "file sent successfully"})
emit('files-extract',{'data' : "extrcting files"})
emit('files-extract-success', {'data' : str(client.extract(path, file_name=FILE.filename))})
#check if the file existe / and command lines
client.excute_command("ls")
emit('list-dir', {'data' : str(client.check_command())})
client.close_connection()
emit('connection-closed', {'data' : ('connection closed')})
```
This is the send-file function:
```python
# send a storage file over SFTP
def send_filestorage(self, file_storage, file_in_server, file_path_inserver):
    sftp = self.client.open_sftp()
    # create the remote path if the user provided one
    if file_path_inserver:
        self.create_and_navigate_topath(sftp, file_path_inserver)
    sftp.putfo(file_storage, file_in_server)
    sftp.close()
```
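side note: while debugging I reproduced the failure mode with a stdlib-only sketch — the upload's underlying temp file gets closed once the request context ends, but a copy made into an in-memory buffer survives (everything here is a stand-in, not my real code):

```python
import io
import tempfile

# stand-in for the temp file that werkzeug's FileStorage wraps
upload = tempfile.SpooledTemporaryFile()
upload.write(b"fake archive bytes")
upload.seek(0)

# copy the payload while the request is still alive
buffered = io.BytesIO(upload.read())
upload.close()  # what effectively happens before the background handler reads it

try:
    upload.read()
    closed_error = None
except ValueError as exc:  # "read of closed file"
    closed_error = exc

buffered.seek(0)
data = buffered.read()  # the copy is still readable, e.g. for sftp.putfo
```

so maybe passing a buffered copy instead of the original FileStorage to `sftp.putfo` would avoid this?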
this is the error
```
Exception in thread Thread-7:
Traceback (most recent call last):
File "c:\users\icom\appdata\local\programs\python\python37\Lib\threading.py", line 926, in _bootstrap_inner
self.run()
File "c:\users\icom\appdata\local\programs\python\python37\Lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\icom\Envs\wisevenv\lib\site-packages\socketio\server.py", line 682, in _handle_event_internal
r = server._trigger_event(data[0], namespace, sid, *data[1:])
File "C:\Users\icom\Envs\wisevenv\lib\site-packages\socketio\server.py", line 711, in _trigger_event
return self.handlers[namespace][event](*args)
File "C:\Users\icom\Envs\wisevenv\lib\site-packages\flask_socketio\__init__.py", line 283, in _handler
*args)
File "C:\Users\icom\Envs\wisevenv\lib\site-packages\flask_socketio\__init__.py", line 713, in _handle_event
ret = handler(*args)
File "C:\Users\icom\myProjects\share-wise\app.py", line 51, in test
client.send_filestorage(FILE, FILE.filename, path)
File "C:\Users\icom\myProjects\share-wise\utils.py", line 51, in send_filestorage
sftp.putfo(file_storage, file_in_server)
File "C:\Users\icom\Envs\wisevenv\lib\site-packages\paramiko\sftp_client.py", line 717, in putfo
reader=fl, writer=fr, file_size=file_size, callback=callback
File "C:\Users\icom\Envs\wisevenv\lib\site-packages\paramiko\sftp_client.py", line 678, in _transfer_with_callback
data = reader.read(32768)
File "C:\Users\icom\Envs\wisevenv\lib\tempfile.py", line 736, in read
return self._file.read(*args)
File "C:\Users\icom\Envs\wisevenv\lib\tempfile.py", line 481, in func_wrapper
return func(*args, **kwargs)
ValueError: read of closed file
``` | closed | 2020-08-13T11:29:16Z | 2020-08-20T10:32:23Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1351 | [
"question"
] | abderrahmaneMustapha | 4 |
dynaconf/dynaconf | flask | 602 | json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 12 column 5 (char 313) | I've tried to run Flask with Dynaconf using a .toml file and got this error.
I checked the file base.py, in the function get_source_data, and changed
`content = self.file_reader(open_file)` to a hard-coded dict, like `{'default': {'DEBUG': True, 'SQLALCHEMY_TRACK_MODIFICATIONS': False}, ...}`, and it worked. I still don't understand why.
I got the same dict when I printed the variable content using ipdb to debug.
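For what it's worth, the message looks like the JSON parser choking on something such as a trailing comma; a stdlib-only repro of the exact error text (the settings content here is made up):

```python
import json

# made-up settings snippet with a trailing comma after the last property
bad_settings = """{
    "default": {
        "DEBUG": true,
    }
}"""

try:
    json.loads(bad_settings)
    error_message = None
except json.JSONDecodeError as exc:
    error_message = str(exc)  # "Expecting property name enclosed in double quotes: ..."
```

so the loader is apparently reading my settings file with the JSON reader and tripping on its contents, rather than parsing it as TOML.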
```
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
    worker.init_process()
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 134, in init_process
    self.load_wsgi()
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
    return self.load_wsgiapp()
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/gunicorn/util.py", line 359, in import_app
    mod = importlib.import_module(module)
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/perrella/workspace/iga-bbsurvey-api/iga_bbsurvey_api/wsgi.py", line 3, in <module>
    app = create_app()
  File "/home/perrella/workspace/iga-bbsurvey-api/iga_bbsurvey_api/app.py", line 17, in create_app
    configuration.init_app(app)
  File "/home/perrella/workspace/iga-bbsurvey-api/iga_bbsurvey_api/ext/configuration.py", line 5, in init_app
    FlaskDynaconf(app)
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/contrib/flask_dynaconf.py", line 107, in __init__
    self.init_app(app, **kwargs)
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/contrib/flask_dynaconf.py", line 116, in init_app
    app.config = self.make_config(app)
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/contrib/flask_dynaconf.py", line 134, in make_config
    _app=app,
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/contrib/flask_dynaconf.py", line 148, in __init__
    Config.update(self, _settings.store)
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/base.py", line 145, in __getattr__
    self._setup()
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/base.py", line 196, in _setup
    settings_module=settings_module, **self._kwargs
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/base.py", line 259, in __init__
    self.execute_loaders()
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/base.py", line 991, in execute_loaders
    self, env=env, silent=silent, key=key, filename=filename
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/loaders/__init__.py", line 127, in settings_loader
    obj, filename=mod_file, env=env, silent=silent, key=key
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/loaders/json_loader.py", line 45, in load
    loader.load(filename=filename, key=key, silent=silent)
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/loaders/base.py", line 65, in load
    source_data = self.get_source_data(files)
  File "/home/perrella/workspace/iga-bbsurvey-api/env/lib/python3.7/site-packages/dynaconf/loaders/base.py", line 86, in get_source_data
    content = self.file_reader(open_file)
  File "/home/perrella/.pyenv/versions/3.7.0/lib/python3.7/json/__init__.py", line 296, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/home/perrella/.pyenv/versions/3.7.0/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/home/perrella/.pyenv/versions/3.7.0/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/perrella/.pyenv/versions/3.7.0/lib/python3.7/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 12 column 5 (char 313)
``` | closed | 2021-06-21T21:52:40Z | 2023-04-06T14:04:41Z | https://github.com/dynaconf/dynaconf/issues/602 | [
"question"
] | Leonardoperrella | 0 |
plotly/dash | data-visualization | 2,681 | background callback with MATCH is cancelled by future callbacks with different component ids | **Describe your context**
```
dash 2.14.1
dash-bootstrap-components 1.5.0
dash-core-components 2.0.0
dash-extensions 1.0.3
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-mantine-components 0.12.1
dash-table 5.0.0
dash-uploader 0.7.0a1
```
**Describe the bug**
If a background callback using pattern matching with `MATCH` is triggered twice by two different objects (and therefore different values for `MATCH`), the first callback will be cancelled.
This only applies to background callbacks. "Normal" callbacks work fine.
**Expected behavior**
Both callbacks should finish execution and return their outputs, just like non-background callbacks. (At least if their IDs are different)
**MWE**
Here's a small example to reproduce the problem:
```python
import os
import time
import diskcache
import dash
from dash import Dash, html, Output, Input, MATCH, DiskcacheManager
from dash.exceptions import PreventUpdate
def layout():
return html.Div(
[
*[html.Button(f"Update {i}", id=dict(type="button", index=i)) for i in range(5)],
*[html.P(id=dict(type="text", index=i)) for i in range(5)],
]
)
@app.callback(
Output(dict(type="text", index=MATCH), "children"),
Input(dict(type="button", index=MATCH), "n_clicks"),
background=True,
prevent_initial_call=True,
)
def show_text(n_clicks: int):
if not n_clicks:
raise PreventUpdate
index = dash.ctx.triggered_id["index"]
print(f"started {index}")
time.sleep(3)
print(f"stopped {index}")
return str(index)
if __name__ == "__main__":
cache = diskcache.Cache(os.path.join(os.getcwd(), ".cache"))
app = Dash(background_callback_manager=DiskcacheManager(cache))
app.layout = layout()
app.run(host="0.0.0.0", port=8010)
```
If you click on multiple buttons (within 3 seconds after the last click), the previous execution of the callback will be canceled and only the id of the last button will be shown.
Sample output:
```
started 0
started 1
started 2
started 3
started 4
stopped 4
``` | open | 2023-10-31T12:03:08Z | 2024-08-13T19:42:10Z | https://github.com/plotly/dash/issues/2681 | [
"bug",
"sev-2",
"P3"
] | Jonas1302 | 5 |
ivy-llc/ivy | pytorch | 28,634 | Fix Frontend Failing Test: torch - activations.tensorflow.keras.activations.get | To-do List: https://github.com/unifyai/ivy/issues/27498 | closed | 2024-03-18T17:28:51Z | 2024-03-25T13:23:13Z | https://github.com/ivy-llc/ivy/issues/28634 | [
"Sub Task"
] | ZJay07 | 0 |
tflearn/tflearn | data-science | 1,036 | Calculating Metric other than accuracy on validation data | I am using the RMSprop optimizer and the metric R2 as shown below:-
`network = regression(network, optimizer='RMSprop',loss='mean_square',metric='R2',learning_rate=0.001)`
Then I am calling fit as follows:-
`model.fit(X, Y, n_epoch=10, validation_set=(X_test,Y_test), shuffle=True, show_metric=True, batch_size=64, snapshot_step=None, snapshot_epoch=True, run_id='gal_net_r0')`
This results in output lines like this:-
`Training Step: 60 | total loss: 8.47987 | time: 1.726ss
RMSProp | epoch: 004 | loss: 8.47987 - R2: 0.6595 | val_loss: 7.42683 - val_acc: 0.6743 -- iter: 900/900`
I am not sure what is happening here. Is R2 being calculated on the validation data and printed as val_acc? Or is val_acc actually the metric 'accuracy'? If that is the case, then how is one supposed to calculate metrics other than accuracy on the validation data? | open | 2018-04-17T00:00:53Z | 2018-04-17T00:00:53Z | https://github.com/tflearn/tflearn/issues/1036 | [] | aritraghsh09 | 0 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 793 | Confusion: What is a model? | Hello, sorry for being a newbie with this repo. I am trying to understand how it works and I have run into a serious comprehension problem. Before starting, the software loads 3 elements: encoder, synthesizer and vocoder.
All three represent a model.
As far as I understand, those are the building blocks that allow the user to recreate a voice. What, then, is a voice model? By "voice model" I mean a model trained on a specific voice. What is this instance called, and how do I save and retrieve it?
Thanks for any help | closed | 2021-07-10T12:33:44Z | 2021-08-25T09:41:59Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/793 | [] | RelatApp | 4 |
Yorko/mlcourse.ai | plotly | 330 | Yandex & MIPT, Coursera, Final project - User identification | Hello! Could you please clarify the expected answer format for the Week 2 assignment, question 2:
"Is the number of unique sites in a session normally distributed?" The form gives no clear guidance on how to word the answer; the variants "Нет", "No", and the Shapiro–Wilk test statistic and p-value are all rejected...
Maybe I computed it incorrectly, but there is no way to tell :) | closed | 2018-04-28T16:12:15Z | 2018-08-04T16:07:50Z | https://github.com/Yorko/mlcourse.ai/issues/330 | [
"invalid"
] | levbed | 1 |
yinkaisheng/Python-UIAutomation-for-Windows | automation | 213 | Can the read-only attribute of EditControl be changed to editable? | open | 2022-07-15T09:35:36Z | 2022-07-15T09:35:36Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/213 | [] | rejii0042 | 0 |
QingdaoU/OnlineJudge | django | 380 | Could an SQL programming environment be added? | Before submitting an issue, please
- read the documentation carefully: http://docs.onlinejudge.me/#/
- search and review past issues
- do not disclose security issues on GitHub; email `admin@qduoj.com` instead, and a red-envelope reward will be sent according to the severity of the vulnerability.
Then, when submitting the issue, please state the following clearly:
- what operation you were performing when the problem occurred, ideally with steps to reproduce
- what the error message is; if none is visible, check the corresponding log file in the data folder. Wrap long error output in code-block markers.
- what you tried in order to fix the problem
- for page issues, state your browser version and include screenshots if possible
| closed | 2021-08-05T00:23:55Z | 2021-08-07T05:56:05Z | https://github.com/QingdaoU/OnlineJudge/issues/380 | [] | youngsforever | 1 |
deepinsight/insightface | pytorch | 2486 | Can buffalo_* models be used in production applications? | I'm developing a face recognition application for login purposes (internal applications only), using the buffalo_l model.
Can I use this model in production? | closed | 2023-12-03T05:12:59Z | 2024-06-06T05:11:43Z | https://github.com/deepinsight/insightface/issues/2486 | [] | Manideep0425 | 3 |
unit8co/darts | data-science | 2,140 | [Question] Best way to convert stock price data as Darts TimeSeries with non-conventional frequency | I have a stock time series dataframe that looks like:

When I tried to convert to time series:
```python
series = TimeSeries.from_dataframe(stock_normalized_df)
```
I get the following error:
```
ERROR:darts.timeseries:ValueError: The time index of the provided DataArray is missing the freq attribute, and the frequency could not be directly inferred. This probably comes from inconsistent date frequencies with missing dates. If you know the actual frequency, try setting `fill_missing_dates=True, freq=actual_frequency`. If not, try setting `fill_missing_dates=True, freq=None` to see if a frequency can be inferred.
```
As you know, stock trading on an exchange has its own frequency that's not listed in https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases
I tried to use a TimeSeries with a RangeIndex, but the TFT model I use complains that it only accepts time series with a DatetimeIndex.
Any suggestion on how I can use the TFT model with this TimeSeries? I can make it work by setting the frequency to something like 'B' or 'D' and then filling the missing dates with values, but that obviously doesn't make much sense, because stock data only occurs on market trading days.
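To illustrate the frequency problem: trading days skip weekends and holidays, so consecutive timestamps have mixed gaps, and pandas can only infer a frequency from a constant step. A stdlib sketch with made-up dates:

```python
from datetime import date

# Tue..Fri, then the following Monday (the weekend is skipped)
trading_days = [date(2024, 1, 2), date(2024, 1, 3), date(2024, 1, 4),
                date(2024, 1, 5), date(2024, 1, 8)]
gaps = {(b - a).days for a, b in zip(trading_days, trading_days[1:])}
# mixed 1-day and 3-day gaps -> no single frequency can describe the index
```

which is presumably why the error message suggests `fill_missing_dates=True, freq='B'`: it restores a constant business-day step, at the cost of inserting rows for non-trading days.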
| closed | 2024-01-02T05:25:16Z | 2024-04-17T07:11:40Z | https://github.com/unit8co/darts/issues/2140 | [
"question"
] | sophia-kwon | 4 |
jina-ai/serve | machine-learning | 6125 | Will the Pydantic dependency be upgraded in the future? Currently, due to version issues, some third-party packages are incompatible. | **Describe the feature**
<!-- A clear and concise description of what the feature is. -->
**Your proposal**
<!-- copy past your code/pull request link -->
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
Currently, due to version issues, some third-party packages are incompatible.
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. --> | closed | 2023-12-07T03:43:42Z | 2025-03-24T13:46:48Z | https://github.com/jina-ai/serve/issues/6125 | [] | wangqn1 | 19 |
sherlock-project/sherlock | python | 1,435 | Google cloud shell | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [x] I'm requesting support for a new site
- [x] I've checked for similar site support requests including closed ones
- [x] I've checked that the site I am requesting has not been removed in the past and is not documented in [removed_sites.md](https://github.com/sherlock-project/sherlock/blob/master/removed_sites.md)
- [x] The site I am requesting support for is not a pornographic website
- [x] I'm only requesting support of **one** website (create a separate issue for each site)
## Description
<!--
Provide the url to the website and the name of the website.
If there is anything else you want to mention regarding the site support request include that in this section.
-->
URL: | closed | 2022-08-12T00:58:23Z | 2023-02-04T17:30:43Z | https://github.com/sherlock-project/sherlock/issues/1435 | [
"site support request"
] | FnDee | 1 |
deepspeedai/DeepSpeed | machine-learning | 6,796 | Question about using Autotuner with ZeRO and tensor parallelism | I’m reading through the Autotuner code and found this function:
https://github.com/microsoft/DeepSpeed/blob/f743feca033515fdded50a98093da5a48eb41e74/deepspeed/autotuning/autotuner.py#L278-L302
It computes
`total_gpus = self.exp_num_nodes * self.exp_num_gpus`
based on the autotuning config. If ZeRO is enabled, then based on which stages are enabled, `optimizer_mem`, `gradients_mem`, and/or `params_mem` get sharded across the GPUs. But then if `self.mp_size()` (for tensor parallelism, right?) is greater than 1, then the total memory usage is divided _again_ by the amount of tensor parallelism. So if ZeRO and tensor parallelism are both enabled, this is double-dipping, right? With N GPUs, we can’t get the per-GPU memory usage any smaller than 1/N. I’m not sure if
1. there’s a bug here
2. if the value of the `num_gpus` flag supposed to be reduced by the amount of tensor parallelism, or
3. if I’m not understanding this correctly. | open | 2024-11-27T17:03:40Z | 2024-11-27T17:03:40Z | https://github.com/deepspeedai/DeepSpeed/issues/6796 | [] | rlanday | 0 |
LAION-AI/Open-Assistant | machine-learning | 3703 | Can't open a new/old chat | It's been 3-4 days since my previous chats disappeared, and I am unable to open any chat in OA. What's the solution? Anyone?

| closed | 2023-09-27T12:35:07Z | 2023-11-28T07:19:57Z | https://github.com/LAION-AI/Open-Assistant/issues/3703 | [] | mirzajawadbaig100 | 1 |
BeanieODM/beanie | asyncio | 184 | Can't replace a dict attribute with save_changes | Hi,
In my project, I need to completely replace a dict attribute, but when using the state management and `save_changes`, the dicts are merged instead of replaced. This is due to the logic in the following block: https://github.com/roman-right/beanie/blob/513865aadcf61f57c4ba921ce58bb0e7671036ee/beanie/odm/documents.py#L958-L968
How can I continue to use the state management, yet be able to override a whole dict attribute ?
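To illustrate with plain dicts (no Beanie involved) the difference between the merge that the linked block seems to do and the wholesale replacement I'm after:

```python
stored = {"settings": {"a": 1, "b": 2}}
incoming = {"c": 3}

# merge: keys from the old dict survive (what I observe with save_changes)
merged = {**stored["settings"], **incoming}

# wholesale replacement: only the new keys remain (what I want)
stored["settings"] = dict(incoming)
```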
Thanks for the help, | closed | 2022-01-08T18:25:32Z | 2022-02-10T16:19:39Z | https://github.com/BeanieODM/beanie/issues/184 | [] | paul-finary | 2 |
snarfed/granary | rest-api | 126 | "Z" added to datetime with explicit offset | On my [notes stream](https://gregorlove.com/notes/) I author datetimes like "2018-01-15 13:03-0800". Running that through Granary for an Atom feed is adding a "Z" at the end. [Example](https://granary.io/url?input=html&output=atom&url=http://gregorlove.com/notes/). I believe my datetime format is valid ISO 8601.
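For reference, Python's stdlib parses my format and round-trips it with the offset preserved — no "Z" appears (sketch):

```python
from datetime import datetime

# "-0800" is a valid ISO 8601 style offset
dt = datetime.strptime("2018-01-15 13:03-0800", "%Y-%m-%d %H:%M%z")
iso = dt.isoformat()  # keeps the -08:00 offset
```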
It appears to be doing [the same with my homepage](https://granary.io/url?input=html&output=atom&url=https://gregorlove.com/), which currently has slightly different ISO 8601 date formats ("T" separator, seconds, colon in offset) | closed | 2018-01-16T00:18:41Z | 2018-01-17T06:04:22Z | https://github.com/snarfed/granary/issues/126 | [] | gRegorLove | 2 |
JoeanAmier/XHS-Downloader | api | 226 | [Feature request] Support richer custom text and nested directories in the post filename format | At the moment the post filename format combines "keywords separated by spaces" into the final filename. It is not possible to insert your own text and symbols into the filename (for example: `作品id - 作品名称`, i.e. "post id - post title"), nor to create directories by inserting path separators such as "\\" (for example: `作者id - 作者昵称\作品id - 作品名称`, i.e. "author id - author nickname\post id - post title"). I hope this feature can be supported. | open | 2025-02-07T07:04:38Z | 2025-02-11T11:59:06Z | https://github.com/JoeanAmier/XHS-Downloader/issues/226 | [] | PYUDNG | 1 |
kizniche/Mycodo | automation | 915 | Phidget USB relays? | Has anyone used a USB Phidget relay similar to https://www.phidgets.com/?prodid=1020 ? This is a simple USB-controlled 4x relay. From reading around I see a number of other relays and SDRs that are supported via the GPIO pins. I'm curious to see if I can get the USB I/O working, as it would open up more suitable controllers. If not, I will look at setting up the custom I/O as mentioned in the docs. Thanks. | closed | 2020-12-29T04:24:34Z | 2021-09-20T19:15:05Z | https://github.com/kizniche/Mycodo/issues/915 | [] | ebo | 2 |
deepset-ai/haystack | machine-learning | 8,089 | Mermaid Crashes If trying to draw a large pipeline | Thanks in advance for your help :)
**Describe the bug**
I was building a huge pipeline, 30 components and 35 connections, and for debugging purposes I wanted to display the diagram, but both the .draw() and .show() methods failed. It still works with small pipelines, by the way.
**Error message**
```
Failed to draw the pipeline: https://mermaid.ink/img/ returned status 400
No pipeline diagram will be saved.
Failed to draw the pipeline: could not connect to https://mermaid.ink/img/ (400 Client Error: Bad Request for url: https://mermaid.ink/img/{place holder for 2km long data}
No pipeline diagram will be saved.
Traceback (most recent call last):
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/.venv/lib/python3.10/site-packages/haystack/core/pipeline/draw.py", line 87, in _to_mermaid_image
resp.raise_for_status()
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/.venv/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://mermaid.ink/img/{another placeholder}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/babyagi.py", line 188, in <module>
pipe.draw(path=Path("pipe"))
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/.venv/lib/python3.10/site-packages/haystack/core/pipeline/base.py", line 649, in draw
image_data = _to_mermaid_image(self.graph)
File "/Users/carlosfernandezloran/Desktop/babyagi-classic-haystack/.venv/lib/python3.10/site-packages/haystack/core/pipeline/draw.py", line 95, in _to_mermaid_image
raise PipelineDrawingError(
haystack.core.errors.PipelineDrawingError: There was an issue with https://mermaid.ink/, see the stacktrace for details.
```
**Expected behavior**
I expect the .show() and .draw() methods to work for all pipelines, no matter the size.
This might be a Mermaid problem and not strictly Haystack's, but we would need to implement a local diagram generator, as discussed in #7896
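For a sense of scale, a rough stdlib sketch of how the encoded URL grows with pipeline size (the graph shape and plain base64 encoding are stand-ins for whatever mermaid.ink actually uses):

```python
import base64
import json

# toy stand-in for a 30-component / 35-connection pipeline definition
graph = {
    "nodes": [f"component_{i}" for i in range(30)],
    "edges": [[i, i + 1] for i in range(29)] + [[0, j] for j in range(2, 8)],
}
payload = json.dumps(graph).encode()
url = "https://mermaid.ink/img/" + base64.urlsafe_b64encode(payload).decode()
url_length = len(url)  # already over a thousand characters for bare names
```

and a real diagram payload also carries labels and styling, so it gets far longer still — plausibly past what the server accepts, hence the 400.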
**To Reproduce**
I will not add all the 200 lines of add_component, connect statements, but you can imagine how it goes.
**System:**
- OS: macOS
- GPU/CPU: M1
- Haystack version (commit or version number): 2.3.0
| closed | 2024-07-25T22:08:43Z | 2025-01-28T11:18:55Z | https://github.com/deepset-ai/haystack/issues/8089 | [
"P3"
] | CarlosFerLo | 10 |
BeanieODM/beanie | asyncio | 237 | question: how to avoid insert null value? | hey, roman
What can I do in this case? username is required and unique;
email and phone are not required, but if they exist they must be unique.
So I want to define a unique and sparse index,
but every time I insert a document the value None gets inserted into MongoDB.
My question is: how do I avoid inserting None?
Hoping for your reply, thanks!
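One direction I'm considering instead of sparse: MongoDB partial indexes that only index documents where the field is actually a string. Shown below as raw spec dicts only (in practice I'd pass the same options to pymongo's IndexModel; the field names are from my model):

```python
# -1 corresponds to DESCENDING; only string values get indexed,
# so documents where the field is missing or None are ignored
email_index = {
    "key": [("email", -1)],
    "unique": True,
    "partialFilterExpression": {"email": {"$type": "string"}},
}
phone_index = {
    "key": [("phone", -1)],
    "unique": True,
    "partialFilterExpression": {"phone": {"$type": "string"}},
}
```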
```python
class User(BaseEntity):
username: str
phone: Optional[str] # nullable, unique and sparse index
email: Optional[EmailStr] # nullable, unique and sparse index
password: bytes
class Collection:
name = "user"
indexes = [
IndexModel([("username", DESCENDING)], unique=True),
IndexModel([("email", DESCENDING)], unique=True, sparse=True), # nullable, unique and sparse index
IndexModel([("phone", DESCENDING)], unique=True, sparse=True), # nullable, unique and sparse index
]
``` | closed | 2022-04-11T08:49:41Z | 2023-03-17T02:27:20Z | https://github.com/BeanieODM/beanie/issues/237 | [
"Stale"
] | hd10180 | 6 |
tiangolo/full-stack | sqlalchemy | 26 | backend app won't start | Following the instructions in the [generated readme](https://github.com/tiangolo/full-stack/blob/master/%7B%7Bcookiecutter.project_slug%7D%7D/README.md), when I `docker-compose up -d`, the backend fails to start with this error:
```
backend_1 | Traceback (most recent call last):
backend_1 | File "./app/main.py", line 8, in <module>
backend_1 | from .core import app_setup # noqa
backend_1 | File "./app/core/app_setup.py", line 18, in <module>
backend_1 | from ..api.api_v1 import api as api_v1 # noqa
backend_1 | File "./app/api/api_v1/api.py", line 8, in <module>
backend_1 | from .api_docs import docs
backend_1 | File "./app/api/api_v1/api_docs.py", line 3, in <module>
backend_1 | from flask_apispec import FlaskApiSpec
backend_1 | File "/usr/local/lib/python3.6/site-packages/flask_apispec/__init__.py", line 2, in <module>
backend_1 | from flask_apispec.views import ResourceMeta, MethodResource
backend_1 | File "/usr/local/lib/python3.6/site-packages/flask_apispec/views.py", line 6, in <module>
backend_1 | from flask_apispec.annotations import activate
backend_1 | File "/usr/local/lib/python3.6/site-packages/flask_apispec/annotations.py", line 6, in <module>
backend_1 | from flask_apispec.wrapper import Wrapper
backend_1 | File "/usr/local/lib/python3.6/site-packages/flask_apispec/wrapper.py", line 8, in <module>
backend_1 | from webargs import flaskparser
backend_1 | File "/usr/local/lib/python3.6/site-packages/webargs/__init__.py", line 7, in <module>
backend_1 | from webargs.core import ValidationError
backend_1 | File "/usr/local/lib/python3.6/site-packages/webargs/core.py", line 11, in <module>
backend_1 | from webargs.fields import DelimitedList
backend_1 | File "/usr/local/lib/python3.6/site-packages/webargs/fields.py", line 97, in <module>
backend_1 | class DelimitedTuple(DelimitedFieldMixin, ma.fields.Tuple):
backend_1 | AttributeError: module 'marshmallow.fields' has no attribute 'Tuple'
backend_1 | unable to load app 0 (mountpoint='') (callable not found or import error)
backend_1 | *** no app loaded. GAME OVER ***
backend_1 | 2020-12-27 22:28:17,753 INFO exited: uwsgi (exit status 22; not expected)
```
Maybe it's the wrong version of a related package? | closed | 2020-12-27T23:08:09Z | 2022-11-09T21:37:15Z | https://github.com/tiangolo/full-stack/issues/26 | [] | RobinClowers | 1 |
reloadware/reloadium | django | 56 | Error occurs when Python file path contains Non-ASCII characters | **Describe the bug**
Error occurs when Python file path contains Non-ASCII characters.
Hot Reload doesn't work
**To Reproduce**
Steps to reproduce the behavior:
1. Create a Python file named "😊.py" in PyCharm
2. Write some code
3. Run
4. Change some code and save
5. See error
**Expected behavior**
Works
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: Windows
- OS version: 11
- Reloadium package version: 0.9.4
- PyCharm plugin version: 0.8.8
- Editor: PyCharm
- Run mode: Run and Debug
**Additional context**
Add any other context about the problem here. | closed | 2022-11-01T01:13:33Z | 2022-11-21T22:43:24Z | https://github.com/reloadware/reloadium/issues/56 | [] | ghost | 1 |
BayesWitnesses/m2cgen | scikit-learn | 460 | Convert from VBA function to SAS | Hello all,
I have tried to use the m2cgen package in order to translate 3 specific VBA functions to SAS or R scripts.
I'm sending as an attached file an Excel spreadsheet that illustrates what I've tried to accomplish. There are 3 different functions available in this Excel spreadsheet, for instance:
a) COHORT: Function used to calculated a transition matrix through Cohort approach
b) GENERATOR: Function used to calculated the generator matrix through Aalen-Johansen Estimator
c) MEXPGENERATOR: Function used to translate the generator matrix in probabilities
I'd be so grateful if you are able to help me on that.
Thanks in advance.
Best Regards,
Raphael
[Transition Matrix_VBA Functions.zip](https://github.com/BayesWitnesses/m2cgen/files/7193046/Transition.Matrix_VBA.Functions.zip)
| open | 2021-09-19T22:36:38Z | 2021-09-19T22:36:38Z | https://github.com/BayesWitnesses/m2cgen/issues/460 | [] | raphaelchaves | 0 |
recommenders-team/recommenders | machine-learning | 1,497 | [FEATURE] O16n DNN models | ### Description
O16n DNN models (export, register, import, retraining, etc.)
### Expected behavior with the suggested feature
O16n notebook for DNN models
### Other Comments
| closed | 2021-08-15T07:42:39Z | 2022-10-19T08:09:18Z | https://github.com/recommenders-team/recommenders/issues/1497 | [
"enhancement"
] | loomlike | 3 |
OpenInterpreter/open-interpreter | python | 870 | Death spiral, ending in openAI rate limit ERROR | ### Is your feature request related to a problem? Please describe.
```
litellm.exceptions.RateLimitError: OpenAIException - Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4 in organization
org-lgQ******************** on tokens_usage_based per min: Limit 10000, Used 6353, Requested 4265. Please try again in 3.708s. Visit
https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens_usage_based', 'param': None, 'code': 'rate_limit_exceeded'}}
...
...
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
```
### Describe the solution you'd like
when OI encounters this error, do something.
1. intercept the error
2. wait the prescribed number of seconds
3. issue error report to user screen
4. play elevator music
5. proceed without dying
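a minimal sketch of steps 1-5 (stdlib only; the exception class, message format, and function names are stand-ins, not open-interpreter's real API):

```python
import re
import time

def call_with_backoff(fn, attempts=3, default_wait=1.0):
    """Intercept a rate-limit error, report it, wait the suggested time, retry."""
    for _ in range(attempts):
        try:
            return fn()
        except RuntimeError as exc:  # stand-in for litellm.exceptions.RateLimitError
            match = re.search(r"try again in ([0-9.]+)s", str(exc))
            wait = float(match.group(1)) if match else default_wait
            print(f"rate limited, retrying in {wait}s")  # elevator music goes here
            time.sleep(wait)
    raise RuntimeError("still rate limited after retries")

calls = {"count": 0}

def fake_completion():  # fails once with a rate-limit message, then succeeds
    calls["count"] += 1
    if calls["count"] == 1:
        raise RuntimeError("Rate limit reached ... Please try again in 0.01s.")
    return "response"

result = call_with_backoff(fake_completion)
```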
### Describe alternatives you've considered
I asked OI to handle rate limiting (to 10000 tokens per second), and it agreed to, but it decided to also shut down other functions, like web scraping and os functions.
### Additional context
_No response_ | closed | 2024-01-04T22:36:45Z | 2024-03-19T18:56:24Z | https://github.com/OpenInterpreter/open-interpreter/issues/870 | [
"Enhancement"
] | cfortune | 5 |
ipython/ipython | data-science | 14116 | IPython seems to crash | It had only been running for 2 hours when it displayed the following:
In [66]: Traceback (most recent call last):
File "<string>", line 1, in <module>
File "start.py", line 106, in start.attach_ipython
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\site-packages\IPython\__init__.py", line 130, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\site-packages\traitlets\config\application.py", line 992, in launch_instance
app.start()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\site-packages\IPython\terminal\ipapp.py", line 356, in start
self.shell.mainloop()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\site-packages\IPython\terminal\interactiveshell.py", line 566, in mainloop
self.interact()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\site-packages\IPython\terminal\interactiveshell.py", line 549, in interact
code = self.prompt_for_code()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\site-packages\IPython\terminal\interactiveshell.py", line 474, in prompt_for_code
text = self.pt_app.prompt(
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\site-packages\prompt_toolkit\shortcuts\prompt.py", line 1034, in prompt
return self.app.run(
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\site-packages\prompt_toolkit\application\application.py", line 978, in run
return loop.run_until_complete(
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\asyncio\base_events.py", line 603, in run_until_complete
self.run_forever()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\asyncio\base_events.py", line 570, in run_forever
self._run_once()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\asyncio\base_events.py", line 1823, in _run_once
event_list = self._selector.select(timeout)
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\site-packages\prompt_toolkit\eventloop\inputhook.py", line 120, in select
th.start()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\threading.py", line 852, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
If you suspect this is an IPython 7.31.1 bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@python.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True
Process Process-21:
Traceback (most recent call last):
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\connection.py", line 312, in _recv_bytes
nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109] The pipe has been ended
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "<string>", line 2, in run
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\managers.py", line 835, in _callmethod
kind, result = conn.recv()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\connection.py", line 250, in recv
buf = self._recv_bytes()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\connection.py", line 321, in _recv_bytes
raise EOFError
EOFError
Process Process-20:466:2:
Traceback (most recent call last):
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "<string>", line 2, in save_file
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\managers.py", line 850, in _callmethod
raise convert_to_error(kind, result)
BrokenPipeError: [WinError 232] The pipe is being closed
Process Process-20:468:
Traceback (most recent call last):
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "<string>", line 2, in execute
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\managers.py", line 850, in _callmethod
raise convert_to_error(kind, result)
MemoryError
Process Process-20:467:
Traceback (most recent call last):
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\process.py", line 315, in _bootstrap
self.run()
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "<string>", line 2, in monitor
File "C:\app\TESTUS\python\python-3.8.10.amd64\lib\multiprocessing\managers.py", line 850, in _callmethod
raise convert_to_error(kind, result)
BrokenPipeError: [WinError 232] The pipe is being closed | open | 2023-07-14T03:04:30Z | 2023-07-14T03:04:30Z | https://github.com/ipython/ipython/issues/14116 | [] | 10office | 0 |
NullArray/AutoSploit | automation | 378 | Unhandled Exception (c0a4cc041) | Autosploit version: `3.0`
OS information: `Linux-4.12.0-parrot6-amd64-x86_64-with-Parrot-3.8-JollyRoger`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/thedevisterel/AutoSploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/thedevisterel/AutoSploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
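The root cause in the traceback is a typo in `lib/jsonize.py` — `Except` is not a Python name; the handler presumably meant the built-in `Exception`. A hedged sketch of the corrected pattern (the surrounding loader logic is assumed, not AutoSploit's exact code):

```python
import json
import os

def load_exploits(path):
    """Load exploit JSON files, skipping unreadable ones instead of crashing."""
    loaded = {}
    for name in os.listdir(path):
        full = os.path.join(path, name)
        try:
            with open(full) as fh:
                loaded[name] = json.load(fh)
        except Exception as err:  # was `except Except:`, which raises NameError
            print(f"skipping {name}: {err}")
    return loaded
```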
| closed | 2019-01-18T17:28:19Z | 2019-04-18T17:32:01Z | https://github.com/NullArray/AutoSploit/issues/378 | [] | AutosploitReporter | 0 |
blb-ventures/strawberry-django-plus | graphql | 68 | Using `pk` field in `delete_mutation` raises ValueError | I ran into an issue where using the `pk` field in `delete_mutation` raises a `ValueError`,
and I believe the following
https://github.com/blb-ventures/strawberry-django-plus/blob/13b7706cf3427c6327302f0af40d2d2df8fc1374/strawberry_django_plus/mutations/fields.py#L295
should be as in
https://github.com/blb-ventures/strawberry-django-plus/blob/13b7706cf3427c6327302f0af40d2d2df8fc1374/strawberry_django_plus/mutations/fields.py#L262 | closed | 2022-06-17T19:15:04Z | 2022-06-18T18:19:01Z | https://github.com/blb-ventures/strawberry-django-plus/issues/68 | [] | ammar-faifi | 0 |
mljar/mercury | jupyter | 48 | Add `demo` in `run` command | Please add the `demo` option in the `run` command that will create a demo notebook and add it to Mercury. | closed | 2022-02-18T13:21:59Z | 2022-02-18T13:52:07Z | https://github.com/mljar/mercury/issues/48 | [
"enhancement"
] | pplonski | 1 |
tableau/server-client-python | rest-api | 1,149 | Unable to get the Flow subscription output details from server. | While creating a Flow subscription on the cloud, the payload request involves some additional details like
1. includeOutputData
2. outputId
3. includeLinkToOutputData
4. showRowsInEmailBody
5. attachOutputData
6. attachedOutputDataFormat
But I am unable to retrieve these fields from the server to populate the payload request.
The response I received from the **getSubscription** call on the server doesn't include these specific output details.
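For reference, the output-related portion of the create-subscription payload can be assembled as a plain dict from the six fields listed above. The field *names* come from this report; the values and the overall shape are assumptions — and the point of the issue is that the server's get-subscription response does not return them:

```python
def build_flow_subscription_content(flow_id: str) -> dict:
    """Assemble the content portion of a flow-subscription create request (sketch)."""
    return {
        "id": flow_id,
        "type": "flowRun",
        # Output-related details listed above -- not returned by the
        # server's getSubscription response, hence this issue:
        "includeOutputData": True,
        "outputId": "output-1",              # assumed placeholder
        "includeLinkToOutputData": True,
        "showRowsInEmailBody": False,
        "attachOutputData": True,
        "attachedOutputDataFormat": "csv",   # assumed value
    }
```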
**Versions**
Tableau Server version - 3.15
Python version - 3.9.12
I've added the documentation containing the request payload for cloud to create subscription and response payload from server.
Any help would be appreciated.
[Payload(request and response).docx](https://github.com/tableau/server-client-python/files/10162933/Payload.request.and.response.docx)
| closed | 2022-12-06T05:49:29Z | 2022-12-22T23:56:07Z | https://github.com/tableau/server-client-python/issues/1149 | [
"help wanted"
] | JayavarshiniJJ | 2 |
HumanSignal/labelImg | deep-learning | 440 | Cannot start binary on Ubuntu | I cannot start the binary on Ubuntu 18.04.
Here is the error:
```
$ labelImg
Traceback (most recent call last):
File "/usr/local/bin/labelImg", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.6/dist-packages/labelImg/labelImg.py", line 1473, in main
app, _win = get_main_app(sys.argv)
File "/usr/local/lib/python3.6/dist-packages/labelImg/labelImg.py", line 1466, in get_main_app
argv[3] if len(argv) >= 4 else None)
File "/usr/local/lib/python3.6/dist-packages/labelImg/labelImg.py", line 130, in __init__
self.useDefaultLabelCheckbox = QCheckBox(getStr('useDefaultLabel'))
File "/usr/local/lib/python3.6/dist-packages/labelImg/labelImg.py", line 95, in <lambda>
getStr = lambda strId: self.stringBundle.getString(strId)
File "/usr/local/lib/python3.6/dist-packages/libs/stringBundle.py", line 36, in getString
assert(stringId in self.idToMessage), "Missing string id : " + stringId
AssertionError: Missing string id : useDefaultLabel
```
- **OS:** Ubuntu 18.04 64 bits
- **PyQt version:** 5.11.3
labelImg was installed from pypi repo.
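The assertion in `libs/stringBundle.py` hard-crashes on any string id missing from the bundled `.properties` resources (here `useDefaultLabel`, which suggests the pypi package shipped a stale resource file). A hedged sketch of a more forgiving lookup that falls back to the raw id instead of asserting — illustrative only, not labelImg's actual class:

```python
class StringBundle:
    """Minimal stand-in for labelImg's string bundle with a safe lookup."""
    def __init__(self, id_to_message: dict):
        self.id_to_message = id_to_message

    def get_string(self, string_id: str) -> str:
        # Fall back to the raw id instead of `assert`-ing, so a missing
        # translation degrades the UI text rather than aborting startup.
        return self.id_to_message.get(string_id, string_id)
```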
| open | 2019-01-25T14:37:37Z | 2019-03-08T08:44:05Z | https://github.com/HumanSignal/labelImg/issues/440 | [] | swiss-knight | 5 |
python-restx/flask-restx | api | 143 | How do I manually enforce Swagger UI to include a model's definition? | It seems that using a list response with a nested model, as mentioned in #65, doesn't automatically add the model's definition. As a result, I get an error saying "Could not resolve reference: Could not resolve pointer: /definitions/MyModel does not exist in document" on the Swagger UI page.
`@api.response(200, '', fields.List(fields.Nested(MyModel)))`
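The Swagger UI error means the generated spec contains a `$ref` to `#/definitions/MyModel` without a matching entry under `definitions` — in flask-restx this typically happens when the model isn't registered on the api/namespace (e.g. it wasn't created via `api.model(...)`). The underlying mechanics, sketched with plain dicts so no flask-restx install is needed:

```python
def ref_resolves(spec: dict, ref: str) -> bool:
    """Check that a local JSON reference like '#/definitions/MyModel' exists."""
    node = spec
    for part in ref.lstrip("#/").split("/"):
        if part not in node:
            return False
        node = node[part]
    return True

# Broken: the list response points at a definition that was never emitted.
broken = {
    "definitions": {},
    "response_schema": {"type": "array",
                        "items": {"$ref": "#/definitions/MyModel"}},
}
# Fixed: registering the model adds it to `definitions`, so the ref resolves.
fixed = {
    "definitions": {"MyModel": {"type": "object",
                                "properties": {"name": {"type": "string"}}}},
    "response_schema": {"type": "array",
                        "items": {"$ref": "#/definitions/MyModel"}},
}
```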
Can someone help me with this? | open | 2020-05-28T01:18:16Z | 2023-12-12T10:26:08Z | https://github.com/python-restx/flask-restx/issues/143 | [
"question"
] | pinyiw | 6 |
Gozargah/Marzban | api | 1,188 | Marzban doesn't set headerType to none in KCP inbound | Hi
When I use "VLESS/VMess KCP NoTLS", Marzban leaves `headerType` empty on the client side, while it should be `headerType=none`.
inbound:
```
***
"header": {
"type": "none"
}
***
``` | closed | 2024-07-25T08:30:56Z | 2024-07-25T10:36:55Z | https://github.com/Gozargah/Marzban/issues/1188 | [
"Bug"
] | Kiya6955 | 2 |
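For the Marzban report above (#1188): when the inbound's kcp `header.type` is `none`, the generated share link should still carry it explicitly. A hedged sketch of that mapping — Marzban's real link builder differs; this only illustrates the expected query parameter:

```python
from urllib.parse import urlencode

def kcp_query_params(stream_settings: dict) -> str:
    """Build the transport query fragment for a kcp share link (sketch)."""
    header_type = (stream_settings.get("header") or {}).get("type") or "none"
    params = {
        "type": "kcp",
        # Emit headerType even when it is the default "none", instead of
        # leaving it empty -- the behaviour this issue asks for.
        "headerType": header_type,
    }
    return urlencode(params)
```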
schemathesis/schemathesis | graphql | 2,669 | [BUG] 428 should be an allowed negative status | ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
Negative tests fail with
```
E 1. Accepted negative data
E
E Allowed statuses: 400, 401, 403, 404, 422, 5xx
E
E [428] Precondition Required:
E
E `{"error":{"code":1428,"message":"If-Match header not provided."}}`
E
E Reproduce with:
E
E curl -X PATCH -H 'host: mockserver:1234' -H 'authorization: ***"is_admin": true, "id": "admin", "name": "Admin Doe", "first_name": "Admin", "last_name": "Doe", "email": "admin.doe@gmail.com", "full_name": "Admin Doe"}' -H 'content-type: application/json' -d '[]' http://localhost/api/data/platform/config
E
E ====================
```
but 428 is a valid status here, since the apispec [expects an If-Match header](https://github.com/SwissDataScienceCenter/renku-data-services/blob/main/components/renku_data_services/platform/api.spec.yaml#L31)
### To Reproduce
🚨 **Mandatory** 🚨: Steps to reproduce the behavior:
Apispec:
```
openapi: 3.0.2
paths:
/platform/config:
patch:
parameters:
- $ref: "#/components/parameters/If-Match"
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/PlatformConfigPatch"
responses:
"200":
content:
application/json:
schema:
$ref: "#/components/schemas/PlatformConfig"
default:
$ref: "#/components/responses/Error"
tags:
- platform
components:
schemas:
PlatformConfig:
type: object
properties:
etag:
$ref: "#/components/schemas/ETag"
incident_banner:
$ref: "#/components/schemas/IncidentBanner"
required:
- etag
- incident_banner
additionalProperties: false
PlatformConfigPatch:
type: object
properties:
incident_banner:
$ref: "#/components/schemas/IncidentBanner"
additionalProperties: false
ETag:
type: string
example: "9EE498F9D565D0C41E511377425F32F3"
IncidentBanner:
type: string
ErrorResponse:
type: object
properties:
error:
type: object
properties:
message:
type: string
required:
- "message"
required:
- "error"
responses:
Error:
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
parameters:
If-Match:
in: header
name: If-Match
required: true
schema:
$ref: "#/components/schemas/ETag"
```
### Expected behavior
428 should be accepted as a valid failure status.
Alternatively, it should be possible to configure the allowed statuses when using `schemathesis.from_pytest_fixture`.
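As a library-agnostic illustration of the requested behaviour, the acceptance predicate behind the "Accepted negative data" check could treat 428 as a rejection too, since it signals a missing required precondition header rather than acceptance of the payload. The names below are illustrative, not Schemathesis's API:

```python
def is_negative_data_rejected(status: int,
                              extra_allowed: frozenset = frozenset({428})) -> bool:
    """Decide whether a response status counts as rejecting invalid input."""
    if 500 <= status < 600:                   # the existing 5xx allowance
        return True
    if status in {400, 401, 403, 404, 422}:   # the currently allowed statuses
        return True
    return status in extra_allowed            # e.g. 428 Precondition Required
```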
### Environment
```
- OS: Linux
- Python version: 3.12
- Schemathesis version: 3.39.5
- Spec version: Open API 3.0.2
```
### Additional context
[Relevant PR](https://github.com/SwissDataScienceCenter/renku-data-services/pull/599)
[Relevant test](https://github.com/SwissDataScienceCenter/renku-data-services/blob/main/test/bases/renku_data_services/data_api/test_schemathesis.py)
| closed | 2025-01-08T15:47:26Z | 2025-01-12T19:52:43Z | https://github.com/schemathesis/schemathesis/issues/2669 | [
"Type: Bug",
"Status: Needs Triage"
] | Panaetius | 2 |
deezer/spleeter | deep-learning | 486 | [Question] Please help me use Spleeter with FFmpeg | <!-- Please respect the title [Discussion] tag. -->
Hi everybody,
I always use ffmpeg to edit video and audio. I wonder whether I can insert the spleeter command into an ffmpeg .bat file, so that I can create one complete script without running Spleeter separately.
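The separation itself can't happen inside a single ffmpeg invocation — Spleeter is its own program — but you can chain the two commands in one .bat file, one line per tool. A hedged sketch that just assembles the two command lines (the output file names and ffmpeg options here are assumptions, and Spleeter's CLI flags vary slightly between versions):

```python
from pathlib import Path

def build_pipeline(song: str, out_dir: str = "separated"):
    """Return the spleeter and ffmpeg command lists for one input file."""
    stem = Path(song).stem
    spleeter_cmd = ["spleeter", "separate", "-p", "spleeter:2stems",
                    "-o", out_dir, song]
    # Re-encode the extracted accompaniment to mp3 (illustrative options).
    ffmpeg_cmd = ["ffmpeg", "-y", "-i", f"{out_dir}/{stem}/accompaniment.wav",
                  "-b:a", "192k", f"{stem}_karaoke.mp3"]
    return spleeter_cmd, ffmpeg_cmd
```

In a .bat file this is simply the two lines `spleeter separate ...` followed by `ffmpeg -i ...`; from Python, `subprocess.run(cmd)` would execute each list in turn.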
Thank you so much. | closed | 2020-08-29T10:52:24Z | 2020-08-30T20:26:50Z | https://github.com/deezer/spleeter/issues/486 | [
"question"
] | Thanhcaro | 1 |